WorldWideScience

Sample records for sound localization abilities

  1. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

    Full Text Available OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and at different frequencies among normal-hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in the horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between the speakers' axes in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p < 0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  2. Auditory disorders and acquisition of the ability to localize sound in children born to HIV-positive mothers

    Directory of Open Access Journals (Sweden)

    Carla Gentile Matas

    Full Text Available The objective of the present study was to evaluate children born to HIV-infected mothers and to determine whether such children present auditory disorders or poor acquisition of the ability to localize sound. The population studied included 143 children (82 males and 61 females), ranging in age from one month to 30 months. The children were divided into three groups according to the classification system devised in 1994 by the Centers for Disease Control and Prevention: infected, seroreverted, and exposed. The children were then submitted to audiological evaluation, including behavioral audiometry, visual reinforcement audiometry and measurement of acoustic immittance. Statistical analysis showed that the incidence of auditory disorders was significantly higher in the infected group. In the seroreverted and exposed groups, there was a marked absence of auditory disorders. In the infected group as a whole, the findings were suggestive of central auditory disorders. Evolution of the ability to localize sound was found to be poorer among the children in the infected group than among those in the seroreverted and exposed groups.

  3. The effect of brain lesions on sound localization in complex acoustic environments.

    Science.gov (United States)

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments, as in the 'cocktail-party' situation, is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field was directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources, rather than in localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  4. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants.

    Directory of Open Access Journals (Sweden)

    Yi Zheng

    Full Text Available Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when only a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify the patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

  5. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. The Delta sound (click), in turn, is generated using the Adobe Audition software at a sampling rate of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound heard through the headphones, using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either for the nature of the sounds (wood, bongo, Delta) or for their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
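The binaural rendering pipeline used in this study (convolving a dry source with a pair of head-related transfer functions) can be sketched as follows. This is a minimal illustration with synthetic placeholder impulse responses modeling only a delay and an attenuation at the far ear; real HRTFs, as in the study, are measured and also encode spectral cues:

```python
import numpy as np

fs = 44100  # sampling rate (Hz), as in the study

# A short click ("Delta" sound): a single unit impulse.
click = np.zeros(256)
click[0] = 1.0

# Placeholder head-related impulse responses (HRIRs) for a source on the
# right: the left ear receives the sound slightly later and quieter.
# These stand-ins only model ITD and ILD, not spectral shaping.
itd_samples = 29               # ~0.66 ms interaural delay at 44.1 kHz
hrir_right = np.array([1.0])
hrir_left = np.zeros(itd_samples + 1)
hrir_left[itd_samples] = 0.5   # delayed and attenuated at the far ear

# Binaural rendering: convolve the mono source with each ear's HRIR.
left = np.convolve(click, hrir_left)
right = np.convolve(click, hrir_right)

# The rendered pair carries the interaural cues used for localization.
print(np.argmax(np.abs(left)) - np.argmax(np.abs(right)))  # → 29
```

Playing `left`/`right` over headphones yields a lateralized percept; swapping in measured HRIR pairs per source direction gives the virtual free-field presentation the study describes.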

  6. The influence of signal parameters on the sound source localization ability of a harbor porpoise (Phocoena phocoena)

    NARCIS (Netherlands)

    Kastelein, R.A.; Haan, D.de; Verboom, W.C.

    2007-01-01

    It is unclear how well harbor porpoises can locate sound sources, and thus how well they can locate acoustic alarms on gillnets. Therefore the ability of a porpoise to determine the location of a sound source was determined. The animal was trained to indicate the active one of 16 transducers in a 16-m-diam

  7. Sound Localization in Patients With Congenital Unilateral Conductive Hearing Loss With a Transcutaneous Bone Conduction Implant.

    Science.gov (United States)

    Vyskocil, Erich; Liepins, Rudolfs; Kaider, Alexandra; Blineder, Michaela; Hamzavi, Sasan

    2017-03-01

    There is no consensus regarding the benefit of implantable hearing aids in congenital unilateral conductive hearing loss (UCHL). This study aimed to measure sound source localization performance in patients with congenital UCHL and contralateral normal hearing who received a new bone conduction implant. Evaluation of within-subject performance differences for sound source localization in a horizontal plane. Tertiary referral center. Five patients with atresia of the external auditory canal and contralateral normal hearing, implanted with a transcutaneous bone conduction implant at the Medical University of Vienna, were tested. Activated/deactivated implant. Sound source localization test; localization performance quantified using the root mean square (RMS) error. Sound source localization ability was highly variable among individual subjects, with RMS errors ranging from 21 to 40 degrees in the unaided condition. Horizontal plane localization performance in aided conditions showed statistically significant improvement compared with the unaided conditions, with RMS errors ranging from 17 to 27 degrees. The mean RMS error decreased by a factor of 0.71 with the bone conduction implant activated. Some patients with congenital UCHL might be capable of developing improved horizontal plane localization abilities with the binaural cues provided by this device.
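The RMS error used here to quantify localization performance is simply the root mean square of the differences between response and target azimuths across trials. A minimal sketch (the angles below are illustrative, not the study's data):

```python
import math

def rms_error(targets_deg, responses_deg):
    """Root mean square localization error across trials, in degrees."""
    diffs = [r - t for t, r in zip(targets_deg, responses_deg)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical trials: target azimuths and a listener's responses.
targets = [-60, -30, 0, 30, 60]
responses = [-45, -20, 5, 45, 40]
print(round(rms_error(targets, responses), 1))  # → 14.0
```

A single summary number like this is convenient for within-subject comparisons (aided vs. unaided), though it conflates bias and scatter; studies sometimes report those separately.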

  8. Horizontal sound localization in cochlear implant users with a contralateral hearing aid.

    Science.gov (United States)

    Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A

    2016-06-01

    Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Contribution of monaural and binaural cues to sound localization in listeners with acquired unilateral conductive hearing loss: improved directional hearing with a bone-conduction device.

    Science.gov (United States)

    Agterberg, Martijn J H; Snik, Ad F M; Hol, Myrthe K S; Van Wanrooij, Marc M; Van Opstal, A John

    2012-04-01

    Sound localization in the horizontal (azimuth) plane relies mainly on interaural time differences (ITDs) and interaural level differences (ILDs). Both are distorted in listeners with acquired unilateral conductive hearing loss (UCHL), reducing their ability to localize sound. Several studies demonstrated that UCHL listeners had some ability to localize sound in azimuth. To test whether listeners with acquired UCHL use strongly perturbed binaural difference cues, we measured localization while they listened with a sound-attenuating earmuff over their impaired ear. We also tested the potential use of monaural pinna-induced spectral-shape cues for localization in azimuth and elevation, by filling the cavities of the pinna of their better-hearing ear with a mould. These conditions were tested while a bone-conduction device (BCD), fitted to all UCHL listeners in order to provide hearing from the impaired side, was turned off. We varied stimulus presentation levels to investigate whether UCHL listeners were using sound level as an azimuth cue. Furthermore, we examined whether horizontal sound-localization abilities improved when listeners used their BCD. Ten control listeners without hearing loss demonstrated a significant decrease in their localization abilities when they listened with a monaural plug and muff. In 4/13 UCHL listeners we observed good horizontal localization of 65 dB SPL broadband noises with their BCD turned off. Localization was strongly impaired when the impaired ear was covered with the muff. The mould in the good ear of listeners with UCHL deteriorated the localization of broadband sounds presented at 45 dB SPL. This demonstrates that they used pinna cues to localize sounds presented at low levels. Our data demonstrate that UCHL listeners have learned to adapt their localization strategies under a wide variety of hearing conditions and that sound-localization abilities improved with their BCD turned on.

  10. A functional neuroimaging study of sound localization: visual cortex activity predicts performance in early-blind individuals.

    Directory of Open Access Journals (Sweden)

    Frédéric Gougoux

    2005-02-01

    Full Text Available Blind individuals often demonstrate enhanced nonvisual perceptual abilities. However, the neural substrate that underlies this improved performance remains to be fully understood. An earlier behavioral study demonstrated that some early-blind people localize sounds more accurately than sighted controls using monaural cues. In order to investigate the neural basis of these behavioral differences in humans, we carried out functional imaging studies using positron emission tomography and a speaker array that permitted pseudo-free-field presentations within the scanner. During binaural sound localization, a sighted control group showed decreased cerebral blood flow in the occipital lobe, which was not seen in early-blind individuals. During monaural sound localization (one ear plugged), the subgroup of early-blind subjects who were behaviorally superior at sound localization displayed two activation foci in the occipital cortex. This effect was not seen in blind persons who did not have superior monaural sound localization abilities, nor in sighted individuals. The degree of activation of one of these foci was strongly correlated with sound localization accuracy across the entire group of blind subjects. The results show that those blind persons who perform better than sighted persons recruit occipital areas to carry out auditory localization under monaural conditions. We therefore conclude that computations carried out in the occipital cortex specifically underlie the enhanced capacity to use monaural cues. Our findings shed light not only on intermodal compensatory mechanisms, but also on individual differences in these mechanisms and on inhibitory patterns that differ between sighted individuals and those deprived of vision early in life.

  11. Recurrent otitis media and sound localization ability in preschool children

    Directory of Open Access Journals (Sweden)

    Aveliny Mantovan Lima-Gregio

    2010-12-01

    Full Text Available PURPOSE: to compare the performance of 40 preschool children on a sound localization test with their parents' answers to a questionnaire investigating the occurrence of otitis media (OM) episodes and symptoms indicative of audiological and auditory processing disorders. METHODS: after the questionnaire answers were analyzed, two groups were formed: OG, with a history of OM, and CG, a control group without such history. Each group, comprising 20 preschool children of both genders, was submitted to a sound localization test in five directions (Pereira, 1993). RESULTS: the comparison between OG and CG showed no statistically significant difference (p = 1.0000). CONCLUSION: recurrent otitis media in early childhood did not influence the sound localization ability of the preschool children in this study. Although both evaluation instruments (the questionnaire and the sound localization test) are inexpensive and easy to apply, they were not sufficient to differentiate the two groups tested.

  12. Global warming alters sound transmission: differential impact on the prey detection ability of echolocating bats

    Science.gov (United States)

    Luo, Jinhong; Koselj, Klemen; Zsebők, Sándor; Siemers, Björn M.; Goerlitz, Holger R.

    2014-01-01

    Climate change impacts the biogeography and phenology of plants and animals, yet the underlying mechanisms are little known. Here, we present a functional link between rising temperature and the prey detection ability of echolocating bats. The maximum distance for echo-based prey detection is physically determined by sound attenuation. Attenuation is more pronounced for high-frequency sound, such as echolocation, and is a nonlinear function of both call frequency and ambient temperature. Hence, the prey detection ability, and thus possibly the foraging efficiency, of echolocating bats is susceptible to rising temperatures through climate change. Using present-day climate data and projected temperature rises, we modelled this effect for the entire range of bat call frequencies and climate zones around the globe. We show that depending on call frequency, the prey detection volume of bats will either decrease or increase: species calling above a crossover frequency will lose and species emitting lower frequencies will gain prey detection volume, with crossover frequency and magnitude depending on the local climatic conditions. Within local species assemblages, this may cause a change in community composition. Global warming can thus directly affect the prey detection ability of individual bats and indirectly their interspecific interactions with competitors and prey. PMID:24335559
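The frequency- and temperature-dependent attenuation underlying this argument is atmospheric absorption. A sketch of the pure-tone absorption coefficient following the standard ISO 9613-1 formulation is below; the coefficients come from the standard, but treat this as an illustration (sea-level pressure assumed), not a calibrated model of the paper's analysis:

```python
import math

def absorption_db_per_m(f_hz, temp_c, rel_humidity=50.0):
    """Atmospheric absorption coefficient (dB/m), ISO 9613-1 style,
    at standard sea-level pressure."""
    T = temp_c + 273.15
    T0 = 293.15   # reference temperature (20 degC)
    T01 = 273.16  # triple point of water
    # Molar concentration of water vapour (%), from relative humidity.
    psat = 10.0 ** (-6.8346 * (T01 / T) ** 1.261 + 4.6151)
    h = rel_humidity * psat
    # Relaxation frequencies of oxygen and nitrogen (Hz).
    fr_o = 24.0 + 4.04e4 * h * (0.02 + h) / (0.391 + h)
    fr_n = (T / T0) ** -0.5 * (9.0 + 280.0 * h *
            math.exp(-4.170 * ((T / T0) ** (-1.0 / 3.0) - 1.0)))
    f2 = f_hz * f_hz
    return 8.686 * f2 * (
        1.84e-11 * (T / T0) ** 0.5
        + (T / T0) ** -2.5 * (
            0.01275 * math.exp(-2239.1 / T) / (fr_o + f2 / fr_o)
            + 0.1068 * math.exp(-3352.0 / T) / (fr_n + f2 / fr_n)))

# Higher echolocation frequencies are attenuated far more strongly,
# and the coefficient shifts nonlinearly with temperature.
for f in (20_000, 40_000, 80_000):
    print(f, round(absorption_db_per_m(f, 20.0), 3))
```

Because the coefficient depends nonlinearly on both frequency and temperature, a temperature rise lowers attenuation for some call frequencies and raises it for others, which is the crossover effect the paper models.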

  14. Sound source localization and segregation with internally coupled ears

    DEFF Research Database (Denmark)

    Bee, Mark A; Christensen-Dalsgaard, Jakob

    2016-01-01

    ...to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating...

  15. Intercepting a sound without vision

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a smaller bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  16. Performances of Student Activism: Sound, Silence, Gender, and Dis/ability

    Science.gov (United States)

    Pasque, Penny A.; Vargas, Juanita Gamez

    2014-01-01

    This chapter explores the various performances of activism by students through sound, silence, gender, and dis/ability and how these performances connect to social change efforts around issues such as human trafficking, homeless children, hunger, and children with varying abilities.

  17. Mice Lacking the Alpha9 Subunit of the Nicotinic Acetylcholine Receptor Exhibit Deficits in Frequency Difference Limens and Sound Localization

    Directory of Open Access Journals (Sweden)

    Amanda Clause

    2017-06-01

    Full Text Available Sound processing in the cochlea is modulated by cholinergic efferent axons arising from medial olivocochlear neurons in the brainstem. These axons contact outer hair cells in the mature cochlea and inner hair cells during development, and activate nicotinic acetylcholine receptors composed of α9 and α10 subunits. The α9 subunit is necessary for mediating the effects of acetylcholine on hair cells, as genetic deletion of the α9 subunit results in functional cholinergic de-efferentation of the cochlea. Cholinergic modulation of spontaneous cochlear activity before hearing onset is important for the maturation of central auditory circuits. In α9KO mice, the developmental refinement of inhibitory afferents to the lateral superior olive is disturbed, resulting in decreased tonotopic organization of this sound localization nucleus. In this study, we used behavioral tests to investigate whether the circuit anomalies in α9KO mice correlate with deficits in sound localization or sound frequency processing. Using a conditioned lick suppression task to measure sound localization, we found that three out of four α9KO mice showed impaired minimum audible angles. Using a prepulse inhibition of the acoustic startle response paradigm, we found that the ability of α9KO mice to detect sound frequency changes was impaired, whereas their ability to detect sound intensity changes was not. These results demonstrate that cholinergic transmission mediated by the nicotinic α9 subunit in the developing cochlea plays an important role in the maturation of hearing.

  18. The effect of multimicrophone noise reduction systems on sound source localization by users of binaural hearing aids.

    Science.gov (United States)

    Van den Bogaert, Tim; Doclo, Simon; Wouters, Jan; Moonen, Marc

    2008-07-01

    This paper evaluates the influence of three multimicrophone noise reduction algorithms on the ability to localize sound sources. Two recently developed noise reduction techniques for binaural hearing aids were evaluated, namely, the binaural multichannel Wiener filter (MWF) and the binaural multichannel Wiener filter with partial noise estimate (MWF-N), together with a dual-monaural adaptive directional microphone (ADM), which is a widely used noise reduction approach in commercial hearing aids. The influence of the different algorithms on perceived sound source localization and their noise reduction performance was evaluated. It is shown that noise reduction algorithms can have a large influence on localization and that (a) the ADM only preserves localization in the forward direction over azimuths where limited or no noise reduction is obtained; (b) the MWF preserves localization of the target speech component but may distort localization of the noise component. The latter is dependent on signal-to-noise ratio and masking effects; (c) the MWF-N enables correct localization of both the speech and the noise components; (d) the statistical Wiener filter approach introduces a better combination of sound source localization and noise reduction performance than the ADM approach.
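The Wiener-filter principle underlying the MWF discussed above can be illustrated in its simplest, single-channel form: attenuate each frequency bin by the gain G = SNR / (1 + SNR) = S / (S + N). This toy sketch uses oracle speech and noise spectra and a tone as stand-in "speech"; the binaural multichannel versions in the paper additionally use microphone cross-statistics and are far more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-channel Wiener-filter demonstration.
fs, n = 16000, 1024
t = np.arange(n) / fs
speech = np.sin(2 * np.pi * 440 * t)         # stand-in "speech": a tone
noise = 0.5 * rng.standard_normal(n)         # additive white noise
noisy = speech + noise

S = np.abs(np.fft.rfft(speech)) ** 2         # speech power spectrum (oracle)
N = np.abs(np.fft.rfft(noise)) ** 2          # noise power spectrum (oracle)
G = S / (S + N)                              # Wiener gain per frequency bin
enhanced = np.fft.irfft(G * np.fft.rfft(noisy), n)

def snr_db(clean, est):
    """Output SNR of an estimate relative to the clean reference."""
    err = est - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))

print(round(snr_db(speech, noisy), 1), round(snr_db(speech, enhanced), 1))
```

Note the trade-off the paper measures behaviorally: a gain applied identically to both ears preserves interaural cues for the target, but applying independently optimized gains per ear (as an unconstrained MWF may do) can distort the interaural cues of the residual noise, which is what the MWF-N variant is designed to avoid.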

  19. Sound Localization Strategies in Three Predators

    DEFF Research Database (Denmark)

    Carr, Catherine E; Christensen-Dalsgaard, Jakob

    2015-01-01

    In this paper, we compare some of the neural strategies for sound localization and encoding interaural time differences (ITDs) in three predatory species of Reptilia: alligators, barn owls and geckos. Birds and crocodilians are sister groups among the extant archosaurs, while geckos are lepidosaurs. Despite the similar organization of their auditory systems, archosaurs and lizards use different strategies for encoding the ITDs that underlie localization of sound in azimuth. Barn owls encode ITD information using a place map, which is composed of neurons serving as labeled lines tuned for preferred spatial locations, while geckos may use a meter strategy or population code composed of broadly sensitive neurons that represent ITD via changes in the firing rate.

  20. Sound localization in common vampire bats: Acuity and use of the binaural time cue by a small mammal

    Science.gov (United States)

    Heffner, Rickye S.; Koay, Gimseong; Heffner, Henry E.

    2015-01-01

    Passive sound-localization acuity and the ability to use binaural time and intensity cues were determined for the common vampire bat (Desmodus rotundus). The bats were tested using a conditioned suppression/avoidance procedure in which they drank defibrinated blood from a spout in the presence of sounds from their right, but stopped drinking (i.e., broke contact with the spout) whenever a sound came from their left, thereby avoiding a mild shock. The mean minimum audible angle for three bats for a 100-ms noise burst was 13.1°, within the range of thresholds for other bats and near the mean for mammals. Common vampire bats readily localized pure tones of 20 kHz and higher, indicating that they could use interaural intensity differences. They could also localize pure tones of 5 kHz and lower, thereby demonstrating the use of interaural time differences, despite their very small maximum interaural distance of 60 μs. A comparison of the use of locus cues among mammals suggests several implications for the evolution of sound localization and its underlying anatomical and physiological mechanisms. PMID:25618037
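The binaural time cue discussed here scales with head size. A common first-order approximation is the spherical-head (Woodworth) model, ITD = (r/c)(θ + sin θ). The sketch below uses illustrative radii, not measurements from the study; the bat radius is simply chosen to reproduce the ~60 μs ceiling cited above:

```python
import math

def woodworth_itd_us(head_radius_m, azimuth_deg, c=343.0):
    """Interaural time difference (microseconds) for a spherical head,
    Woodworth model: ITD = (r/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return 1e6 * (head_radius_m / c) * (theta + math.sin(theta))

# Maximum ITD (source at 90 degrees off the midline) for a rough human
# head radius vs. a small bat-sized head (both radii are assumptions).
print(round(woodworth_itd_us(0.0875, 90)))  # → 656 (microseconds)
print(round(woodworth_itd_us(0.008, 90)))   # → 60 (microseconds)
```

The roughly tenfold difference in available ITD range is what makes the vampire bat's demonstrated use of interaural time differences notable.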

  1. Improvements of sound localization abilities by the facial ruff of the barn owl (Tyto alba) as demonstrated by virtual ruff removal.

    Directory of Open Access Journals (Sweden)

    Laura Hausmann

    Full Text Available BACKGROUND: When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs, which are characteristic for the spatial position of a sound source and for the individual ear. HRTFs in the barn owl (Tyto alba are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD, interaural intensity differences (ILD, and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests. METHODOLOGY/PRINCIPAL FINDINGS: HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were (slightly smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes. CONCLUSIONS/SIGNIFICANCE: The facial ruff a improves azimuthal sound localization by increasing the ITD range and b improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. 
The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the
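    The binaural cues discussed in this record, ITD and ILD, can be estimated from a pair of head-related impulse responses (the time-domain counterparts of HRTFs). A minimal sketch of that computation, using synthetic impulse responses in place of real owl measurements (all values and names here are illustrative, not from the study):

```python
import numpy as np

FS = 48_000  # sample rate in Hz (assumed)

def itd_ild(hrir_left, hrir_right, fs=FS):
    """Estimate ITD (s) via cross-correlation peak and ILD (dB) via energy ratio."""
    xcorr = np.correlate(hrir_right, hrir_left, mode="full")
    lag = np.argmax(np.abs(xcorr)) - (len(hrir_left) - 1)  # samples right lags left
    itd = lag / fs
    ild = 10 * np.log10(np.sum(hrir_left**2) / np.sum(hrir_right**2))
    return itd, ild

# Synthetic example: the right ear receives the same impulse 10 samples
# later and attenuated by half, i.e. the source is on the left.
h_left = np.zeros(256); h_left[20] = 1.0
h_right = np.zeros(256); h_right[30] = 0.5
itd, ild = itd_ild(h_left, h_right)
print(itd * 1e6, "us")  # 10 samples at 48 kHz ~ 208 us
print(ild, "dB")        # 20*log10(1/0.5) ~ 6 dB
```

Virtual ruff removal in the study amounts to swapping one such HRIR set for another before filtering the noise stimuli.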

  2. Sound localization in the presence of one or two distracters

    NARCIS (Netherlands)

    Langendijk, E.H.A.; Kistler, D.J.; Wightman, F.L.

    2001-01-01

    Localizing a target sound can be a challenge when one or more distracter sounds are present at the same time. This study measured the effect of distracter position on target localization for one distracter (17 positions) and two distracters (21 combinations of 17 positions). Listeners were

  3. Ambient Sound-Based Collaborative Localization of Indeterministic Devices

    NARCIS (Netherlands)

    Kamminga, Jacob Wilhelm; Le Viet Duc, L Duc; Havinga, Paul J.M.

    2016-01-01

    Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and

  4. The influence of ski helmets on sound perception and sound localisation on the ski slope

    Directory of Open Access Journals (Sweden)

    Lana Ružić

    2015-04-01

    Full Text Available Objectives: The aim of the study was to investigate whether a ski helmet interferes with sound localization and the time of sound perception in the frontal plane. Material and Methods: Twenty-three participants (age 30.7±10.2 years) were tested on the slope under 2 conditions, with and without a ski helmet, using 6 spatially distributed sound stimuli per condition. Each subject had to react as soon as possible upon hearing a sound and to indicate the side from which it arrived. Results: The results showed a significant difference in the ability to localize the specific ski sounds: 72.5±15.6% correct answers without a helmet vs. 61.3±16.2% with a helmet (p < 0.01). However, performance on this test did not depend on whether the subjects were used to wearing a helmet (p = 0.89). In identifying the moment at which the sound was first perceived, the results were also in favor of the subjects not wearing a helmet. The subjects reported hearing the ski sound cues at 73.4±5.56 m without a helmet vs. 60.29±6.34 m with a helmet (p < 0.001). In that case the results did depend on previous helmet use (p < 0.05), meaning that regular use of helmets might help to diminish the attenuation of sound identification that the helmet causes. Conclusions: Ski helmets might limit a skier's ability to localize the direction of sounds of danger and might delay the moment at which a sound is first heard.

  5. Sound localization under perturbed binaural hearing.

    NARCIS (Netherlands)

    Wanrooij, M.M. van; Opstal, A.J. van

    2007-01-01

    This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband

  6. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals.

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-Jia; Li, Jian-Jun; Ting, Kin-Hung; Lu, Zhong-Lin; Whitfield-Gabrieli, Susan; Wang, Jun; Lee, Tatia M C

    2017-05-01

    Cross-modal learning requires the use of information from different sensory modalities. This study investigated how the prior visual experience of late blind individuals could modulate neural processes associated with learning of sound localization. Learning was realized by standardized training on sound localization processing, and experience was investigated by comparing brain activations elicited by a sound localization task in individuals with (late blind, LB) and without (early blind, EB) prior visual experience. After the training, EB showed decreased activation in the precuneus, which was functionally connected to a limbic-multisensory network. In contrast, LB showed increased activation of the precuneus. A subgroup of LB participants who demonstrated higher visuospatial working memory capabilities (LB-HVM) exhibited an enhanced precuneus-lingual gyrus network. This differential connectivity suggests that the visuospatial working memory afforded by prior visual experience enhanced learning of sound localization in LB-HVM. Active visuospatial navigation processes could have occurred in LB-HVM, whereas EB may have relied on retrieval of previously bound information from long-term memory. The precuneus appears to play a crucial role in learning of sound localization regardless of prior visual experience. Prior visual experience, however, could enhance cross-modal learning by extending binding to the integration of unprocessed information, mediated by the cognitive functions that such experience develops.

  7. Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea

    Science.gov (United States)

    Oshinsky, Michael Lee

    A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system, and specifically of sound localization, were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency-oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ: the two ears are contained in one air sac and are connected by a cuticular bridge with a flexible spring-like structure at its center. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents. I present this physiology in the context of sound localization. In chapter 3, I describe the direction-dependent physiology of the thoracic local and ascending acoustic interneurons.
In chapter 4, I quantify the threshold and I detail the kinematics of the phonotactic

  8. Learning to Localize Sound with a Lizard Ear Model

    DEFF Research Database (Denmark)

    Shaikh, Danish; Hallam, John; Christensen-Dalsgaard, Jakob

    The peripheral auditory system of a lizard is strongly directional in the azimuth plane due to the acoustical coupling of the animal's two eardrums. This feature by itself is insufficient to accurately localize sound as the extracted directional information cannot be directly mapped to the sound...

  9. Spherical loudspeaker array for local active control of sound.

    Science.gov (United States)

    Rafaely, Boaz

    2009-05-01

    Active control of sound has been employed to reduce noise levels around listeners' heads using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources, capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source are shell-shaped. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents that are significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.
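    The "tenth of a wavelength" limit cited above translates directly into a frequency-dependent quiet-zone size, which is why local active control is hard at high frequencies. A quick back-of-the-envelope sketch (the speed of sound and the 0.1 fraction are the textbook values the abstract refers to, not results from this paper):

```python
# Classical quiet-zone extent for a monopole secondary source: roughly one
# tenth of the acoustic wavelength (the limit the paper seeks to exceed).
C = 343.0  # speed of sound in air, m/s (assumed, ~20 degrees C)

def quiet_zone_extent(freq_hz, fraction=0.1):
    """Approximate quiet-zone extent in meters: fraction * wavelength."""
    wavelength = C / freq_hz
    return fraction * wavelength

for f in (100, 500, 1000):
    print(f, "Hz ->", round(quiet_zone_extent(f) * 100, 1), "cm")
# 100 Hz -> 34.3 cm, 500 Hz -> 6.9 cm, 1000 Hz -> 3.4 cm
```

The rapid shrinkage with frequency motivates the paper's use of a multi-channel spherical array to shape larger quiet zones than a single monopole allows.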

  10. The natural history of sound localization in mammals--a story of neuronal inhibition.

    Science.gov (United States)

    Grothe, Benedikt; Pecka, Michael

    2014-01-01

    Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.

  11. Sound induced activity in voice sensitive cortex predicts voice memory ability

    Directory of Open Access Journals (Sweden)

    Rebecca eWatson

    2012-04-01

    Full Text Available The ‘temporal voice areas’ (TVAs; Belin et al., 2000) of the human brain show greater neuronal activity in response to human voices than to other categories of nonvocal sounds. However, a direct link between TVA activity and voice perception behaviour has not yet been established. Here we show that a functional magnetic resonance imaging (fMRI) measure of activity in the TVAs predicts individual performance at a separately administered voice memory test. This relation holds when general sound memory ability is taken into account. These findings provide the first evidence that the TVAs are specifically involved in voice cognition.

  12. Improvement of directionality and sound-localization by internal ear coupling in barn owls

    DEFF Research Database (Denmark)

    Wagner, Hermann; Christensen-Dalsgaard, Jakob; Kettler, Lutz

    Mark Konishi was one of the first to quantify sound-localization capabilities in barn owls. He showed that frequencies between 3 and 10 kHz underlie precise sound localization in these birds, and that they derive spatial information from processing interaural time and interaural level differences....... However, despite intensive research during the last 40 years it is still unclear whether and how internal ear coupling contributes to sound localization in the barn owl. Here we investigated ear directionality in anesthetized birds with the help of laser vibrometry. Care was taken that anesthesia...... time difference in the low-frequency range, barn owls hesitate to approach prey or turn their heads when only low-frequency auditory information is present in a stimulus they receive. Thus, the barn-owl's sound localization system seems to be adapted to work best in frequency ranges where interaural...

  13. The natural history of sound localization in mammals – a story of neuronal inhibition

    Directory of Open Access Journals (Sweden)

    Benedikt eGrothe

    2014-10-01

    Full Text Available Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems.Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.

  14. Sound localization with head movement: implications for 3-d audio displays.

    Directory of Open Access Journals (Sweden)

    Ken Ian McAnally

    2014-08-01

    Full Text Available Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows 2°, 4°, 8°, 16°, 32°, or 64° of azimuth in width. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease as azimuth window width increased. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: the utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy.

  15. Hybrid local piezoelectric and conductive functions for high performance airborne sound absorption

    Science.gov (United States)

    Rahimabady, Mojtaba; Statharas, Eleftherios Christos; Yao, Kui; Sharifzadeh Mirshekarloo, Meysam; Chen, Shuting; Tay, Francis Eng Hock

    2017-12-01

    A concept of hybrid local piezoelectric and electrical conductive functions for improving airborne sound absorption is proposed and demonstrated in composite foam made of porous polar polyvinylidene fluoride (PVDF) mixed with conductive single-walled carbon nanotube (SWCNT). According to our hybrid material function design, the local piezoelectric effect in the polar-structured PVDF matrix and the electrical resistive loss of the SWCNT enhanced the conversion of sound energy to electrical energy and subsequently to thermal energy, respectively, in addition to the other known sound absorption mechanisms in a porous material. It is found that the overall energy conversion, and hence the sound absorption performance, is maximized when the concentration of the SWCNT is around the conductivity percolation threshold. For the optimal composition of PVDF/5 wt. % SWCNT, a sound reduction coefficient larger than 0.58 has been obtained, with a sound absorption coefficient higher than 50% at 600 Hz, showing great value for passive noise mitigation even at low frequencies.

  16. Musical ability and non-native speech-sound processing are linked through sensitivity to pitch and spectral information.

    Science.gov (United States)

    Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J

    2015-05-01

    Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms. © 2014 The British Psychological Society.

  17. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn. For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
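    The partial shift reported above (1.3-1.7 degrees, or 22-28% of a 6-degree mismatch) is what a simple error-driven "guess and check" update would produce after only a few feedback exposures. A toy sketch of that idea; the learning rate and trial count are invented to illustrate partial adaptation, not fitted to the study's data:

```python
# Toy error-driven recalibration: after each auditory-guided saccade, the
# internal auditory map shifts a fixed fraction of the remaining
# audio-visual mismatch toward the visual feedback location.
def recalibrate(mismatch_deg=6.0, learning_rate=0.05, trials=6):
    shift = 0.0
    for _ in range(trials):
        error = mismatch_deg - shift  # residual mismatch seen as feedback
        shift += learning_rate * error
    return shift

shift = recalibrate()
print(round(shift, 2))  # 1.59, i.e. ~26% of the 6-degree mismatch
```

With these hypothetical parameters the cumulative shift lands inside the observed 1.3-1.7 degree range; with many more trials the same rule would converge on the full mismatch, so the observed partial shift is consistent with limited feedback exposure.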

  18. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang [Korea Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Dai Jin [Pohang University of Science and Technology, Pohang (Korea, Republic of)

    2011-07-15

    Service robots are equipped with various sensors such as vision camera, sonar sensor, laser scanner, and microphones. Although these sensors have their own functions, some of them can be made to work together and perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one is on enhancing the performance of sound localization, and the other is on improving robot attention through sound localization and face detection.

  19. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    International Nuclear Information System (INIS)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang; Kim, Dai Jin

    2011-01-01

    Service robots are equipped with various sensors such as vision camera, sonar sensor, laser scanner, and microphones. Although these sensors have their own functions, some of them can be made to work together and perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one is on enhancing the performance of sound localization, and the other is on improving robot attention through sound localization and face detection.

  20. Effects of user training with electronically-modulated sound transmission hearing protectors and the open ear on horizontal localization ability.

    Science.gov (United States)

    Casali, John G; Robinette, Martin B

    2015-02-01

    To determine if training with electronically-modulated hearing protection (EMHP) and the open ear results in auditory learning on a horizontal localization task. Baseline localization testing was conducted in three listening conditions (open-ear, in-the-ear (ITE) EMHP, and over-the-ear (OTE) EMHP). Participants then wore either an ITE or OTE EMHP for 12 almost-daily, one-hour training sessions. After training was complete, participants again underwent localization testing in all three listening conditions. A computer with a custom software and hardware interface presented localization sounds and collected participant responses. Twelve participants were recruited from the student population at Virginia Tech. Audiometric requirements were 35 dB HL at 500, 1000, and 2000 Hz bilaterally, and 55 dB HL at 4000 Hz in at least one ear. Pre-training localization performance with an ITE or OTE EMHP was worse than open-ear performance. After training with any given listening condition, including open-ear, performance in that listening condition improved, in part from a practice effect. However, post-training localization performance showed near-equal performance between the open-ear and training EMHP. Auditory learning occurred for the training EMHP, but not for the non-training EMHP; that is, there was no significant training crossover effect between the ITE and the OTE devices. It is evident from this study that auditory learning (improved horizontal localization performance) occurred with the EMHP for which training was performed. However, performance improvements found with the training EMHP were not realized in the non-training EMHP. Furthermore, localization performance in the open-ear condition also benefitted from training on the task.

  1. Numerical value biases sound localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R

    2017-12-08

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1-9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.

  2. Spatial resolution limits for the localization of noise sources using direct sound mapping

    DEFF Research Database (Denmark)

    Comesana, D. Fernandez; Holland, K. R.; Fernandez Grande, Efren

    2016-01-01

    One of the main challenges arising from noise and vibration problems is how to identify the areas of a device, machine or structure that produce significant acoustic excitation, i.e. the localization of main noise sources. The direct visualization of sound, in particular sound intensity, has extensively been used for many years to locate sound sources. However, it is not yet well defined when two sources should be regarded as resolved by means of direct sound mapping. This paper derives the limits of the direct representation of sound pressure, particle velocity and sound intensity by exploring the relationship between spatial resolution, noise level and geometry. The proposed expressions are validated via simulations and experiments. It is shown that particle velocity mapping yields better results for identifying closely spaced sound sources than sound pressure or sound intensity, especially...

  3. Sound lateralization test in adolescent blind individuals.

    Science.gov (United States)

    Yabe, Takao; Kaga, Kimitaka

    2005-06-21

    Blind individuals need to compensate for the lack of visual information with other sensory inputs. In particular, auditory inputs are crucial to such individuals. To investigate whether blind individuals localize sound in space better than sighted individuals, we tested the auditory ability of adolescent blind individuals using a sound lateralization method. The interaural time difference discrimination thresholds of blind individuals were statistically significantly smaller than those of blind individuals with residual vision and of controls. These findings suggest that blind individuals have better auditory spatial ability than individuals with visual cues; therefore, some perceptual compensation occurred in the former.

  4. How to generate a sound-localization map in fish

    Science.gov (United States)

    van Hemmen, J. Leo

    2015-03-01

    How sound localization is represented in the fish brain is a research field largely untouched by theoretical analysis and computational modeling. Yet, there is experimental evidence that the axes of particle acceleration due to underwater sound are represented through a map in the midbrain of fish, e.g., in the torus semicircularis of the rainbow trout (Wubbels et al. 1997). How does such a map arise? Fish perceive pressure gradients by their three otolithic organs, each of which comprises a dense, calcareous stone that is bathed in endolymph and attached to a sensory epithelium. In rainbow trout, the sensory epithelia of the left and right utricle lie in the horizontal plane and consist of hair cells with equally distributed preferred orientations. We model the neuronal response of this system on the basis of Schuijf's vector detection hypothesis (Schuijf et al. 1975) and introduce a temporal spike code of sound direction, where the optimality of hair-cell orientation θj with respect to the acceleration direction θs is mapped onto spike phases via a von Mises distribution. By learning to tune in to the earliest synchronized activity, nerve cells in the midbrain generate a map under the supervision of a locally excitatory, yet globally inhibitory visual teacher. Work done in collaboration with Daniel Begovic. Partially supported by BCCN - Munich.
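    The phase-coding idea in this record, that hair cells aligned with the acceleration axis phase-lock more tightly, can be sketched with a von Mises phase distribution whose concentration grows with alignment. A toy illustration (the cosine tuning and all parameter values are simplifying assumptions, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_phases(theta_hair, theta_source, kappa_max=8.0, n=1000):
    """Sample spike phases (radians) for a hair cell with preferred
    orientation theta_hair responding to particle acceleration along
    theta_source. Phase-locking concentration grows with the alignment
    of the two axes (cosine tuning); parameter values are illustrative."""
    alignment = abs(np.cos(theta_hair - theta_source))  # 1 = optimal
    kappa = kappa_max * alignment  # tighter phase locking when aligned
    return rng.vonmises(mu=0.0, kappa=kappa, size=n)

# A well-aligned hair cell fires in a narrower phase window than a poorly
# aligned one, so midbrain neurons can "tune in" to the earliest
# synchronized activity and thereby read out the acceleration axis.
aligned = spike_phases(0.0, 0.0)
misaligned = spike_phases(0.0, np.pi / 2.5)
print(np.std(aligned) < np.std(misaligned))  # True
```

Mapping each midbrain neuron to the hair-cell orientation with the tightest synchrony then yields a map of acceleration axes of the kind reported by Wubbels et al.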

  5. Effects of temperature on sound production and auditory abilities in the Striped Raphael catfish Platydoras armatulus (Family Doradidae).

    Directory of Open Access Journals (Sweden)

    Sandra Papes

    Full Text Available Sound production and hearing sensitivity of ectothermic animals are affected by the ambient temperature. This is the first study investigating the influence of temperature on both sound production and hearing abilities in a fish species, namely the neotropical Striped Raphael catfish Platydoras armatulus. Doradid catfishes produce stridulation sounds by rubbing the pectoral spines in the shoulder girdle and drumming sounds by an elastic spring mechanism which vibrates the swimbladder. Eight fish were acclimated for at least three weeks to 22°C, then to 30°C and again to 22°C. Sounds were recorded in distress situations when fish were hand-held. The stridulation sounds became shorter at the higher temperature, whereas pulse number, maximum pulse period and sound pressure level did not change with temperature. The dominant frequency increased when the temperature was raised to 30°C, and the minimum pulse period became longer when the temperature decreased again. The fundamental frequency of drumming sounds increased at the higher temperature. Using the auditory evoked potential (AEP) recording technique, hearing thresholds were tested at six frequencies from 0.1 to 4 kHz. Temporal resolution was determined by analyzing the minimum resolvable click period (0.3-5 ms). Hearing sensitivity was higher at the higher temperature, and differences were more pronounced at higher frequencies. In general, latencies of AEPs in response to single clicks became shorter at the higher temperature, whereas temporal resolution in response to double clicks did not change. These data indicate that sound characteristics as well as hearing abilities are affected by temperature in fishes. Constraints imposed on hearing sensitivity at different temperatures cannot be compensated for even by longer acclimation periods. These changes in sound production and detection suggest that acoustic orientation and communication are affected by temperature changes in

  6. Sound localization and speech identification in the frontal median plane with a hear-through headset

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Møller, Anders Kalsgaard; Christensen, Flemming

    2014-01-01

    signals can be superimposed via earphone reproduction. An important aspect of the hear-through headset is its transparency, i.e. how close to real life the electronically amplified sounds can be perceived. Here we report experiments conducted to evaluate the auditory transparency of a hear-through headset...... prototype by comparing human performance in natural, hear-through, and fully occluded conditions for two spatial tasks: frontal vertical-plane sound localization and speech-on-speech spatial release from masking. Results showed that localization performance was impaired by the hear-through headset relative...... to the natural condition, though not as much as in the fully occluded condition. Localization was affected the least when the sound source was in front of the listeners. Different from the vertical localization performance, results from the speech task suggest that normal speech-on-speech spatial release from...

  7. Three-year experience with the Sophono in children with congenital conductive unilateral hearing loss: tolerability, audiometry, and sound localization compared to a bone-anchored hearing aid.

    Science.gov (United States)

    Nelissen, Rik C; Agterberg, Martijn J H; Hol, Myrthe K S; Snik, Ad F M

    2016-10-01

    Bone conduction devices (BCDs) are advocated as an amplification option for patients with congenital conductive unilateral hearing loss (UHL), while other treatment options could also be considered. The current study compared a transcutaneous BCD (Sophono) with a percutaneous BCD (bone-anchored hearing aid, BAHA) in 12 children with congenital conductive UHL. Tolerability, audiometry, and sound localization abilities with both types of BCD were studied retrospectively. The mean follow-up was 3.6 years for the Sophono users (n = 6) and 4.7 years for the BAHA users (n = 6). In each group, two patients had stopped using their BCD. Tolerability was favorable for the Sophono. Aided thresholds with the Sophono were unsatisfactory, as the aided mean pure-tone average did not reach below 30 dB HL. Sound localization generally improved with both the Sophono and the BAHA, although localization abilities did not reach the level of normal-hearing children. These findings, together with previously reported outcomes, are important to take into account when counseling patients and their caretakers. The selection of a suitable amplification option should always be made deliberately and on an individual basis for each patient in this diverse group of children with congenital conductive UHL.

  8. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    Science.gov (United States)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution
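The random, dynamic down-sampling step lends itself to a short sketch: each inversion iteration draws a fresh random subset of soundings, and the subset grows as the regularization weight beta is cooled. The linear size schedule and all names below are assumptions for illustration, not the authors' code.

```python
import random

def select_soundings(all_soundings, beta, beta_max, n_min, n_max, seed=None):
    """Draw a fresh random subset of soundings for one inversion
    iteration; the subset grows as the regularization weight beta
    decreases (the linear schedule here is an assumed heuristic)."""
    rng = random.Random(seed)
    frac = 1.0 - beta / beta_max            # more soundings as beta cools
    n = int(n_min + frac * (n_max - n_min))
    n = max(n_min, min(n_max, n))
    return rng.sample(all_soundings, n)

# 1000 soundings, halfway through the beta cooling schedule
soundings = list(range(1000))
subset = select_soundings(soundings, beta=50.0, beta_max=100.0,
                          n_min=100, n_max=1000, seed=1)
```

Because the subset is re-drawn every iteration, redundant soundings are still visited over the course of the inversion without being recomputed every time.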

  9. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    Science.gov (United States)

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  10. Evaluation of swallowing ability using swallowing sounds in maxillectomy patients.

    Science.gov (United States)

    Kamiyanagi, A; Sumita, Y; Ino, S; Chikai, M; Nakane, A; Tohara, H; Minakuchi, S; Seki, Y; Endo, H; Taniguchi, H

    2018-02-01

    Maxillectomy for oral tumours often results in debilitating oral hypofunction, which markedly decreases quality of life. Dysphagia, in particular, is one of the most serious problems following maxillectomy. This study used swallowing sounds as a simple method to evaluate swallowing ability in maxillectomy patients with and without their obturator prosthesis placed. Twenty-seven maxillectomy patients (15 men, 12 women; mean age 66.0 ± 12.1 years) and 30 healthy controls (14 men, 16 women; mean age 44.9 ± 21.3 years) were recruited for this study. Participants were asked to swallow 4 mL of water, and swallowing sounds were recorded using a throat microphone. Duration of the acoustic signal and duration of peak intensity (DPI) were measured. DPI was significantly longer in maxillectomy patients without their obturator than with it, significantly longer in maxillectomy patients without their obturator than in healthy controls, and significantly longer in maxillectomy patients who had undergone soft palate resection than in those who had not. These results suggest that the swallowing ability of maxillectomy patients can be improved by wearing an obturator prosthesis, particularly during the oral stage. However, it is difficult to improve the oral stage of swallowing in patients who have undergone soft palate resection even with obturator placement. © 2017 John Wiley & Sons Ltd.

  11. Localization of self-generated synthetic footstep sounds on different walked-upon materials through headphones

    DEFF Research Database (Denmark)

    Turchet, Luca; Spagnol, Simone; Geronazzo, Michele

    2016-01-01

    typologies of surface materials: solid (e.g., wood) and aggregate (e.g., gravel). Different sound delivery methods (mono, stereo, binaural) as well as several surface materials, in presence or absence of concurrent contextual auditory information provided as soundscapes, were evaluated in a vertical...... localization task. Results showed that solid surfaces were localized significantly farther from the walker's feet than the aggregate ones. This effect was independent of the used rendering technique, of the presence of soundscapes, and of merely temporal or spectral attributes of sound. The effect...

  12. Localization of Simultaneous Moving Sound Sources for Mobile Robot Using a Frequency-Domain Steered Beamformer Approach

    OpenAIRE

    Valin, Jean-Marc; Michaud, François; Hadjou, Brahim; Rouat, Jean

    2016-01-01

    Mobile robots in real-life settings would benefit from being able to localize sound sources. Such a capability can nicely complement vision to help localize a person or an interesting event in the environment, and also to provide enhanced processing for other capabilities such as speech recognition. In this paper we present a robust sound source localization method in three-dimensional space using an array of 8 microphones. The method is based on a frequency-domain implementation of a steered...

  13. Experimental analysis of considering the sound pressure distribution pattern at the ear canal entrance as an unrevealed head-related localization clue

    Institute of Scientific and Technical Information of China (English)

    TONG Xin; QI Na; MENG Zihou

    2018-01-01

    By analyzing the differences between binaural recording and real listening, it was deduced that some unrevealed auditory localization clues exist, and that the sound pressure distribution pattern at the entrance of the ear canal is probably one such clue. A listening test based on proof by contradiction confirmed that the unrevealed auditory localization clues really exist, and their effective frequency bands were identified and summarized. Finite-element simulations showed that the pressure distribution at the entrance of the ear canal is non-uniform and that its pattern depends on the direction of the sound source. This demonstrates that the sound pressure distribution pattern at the entrance of the ear canal carries sound-source direction information and can be used as an unrevealed localization clue. The frequency bands in which the sound pressure distribution patterns differed significantly between front and back source directions roughly matched the effective frequency bands of the unrevealed localization clues obtained from the listening tests. To some extent, this supports the hypothesis that the sound pressure distribution pattern is a kind of unrevealed auditory localization clue.

  14. Smartphone-Based Real-time Assessment of Swallowing Ability From the Swallowing Sound

    Science.gov (United States)

    Ueno, Tomoyuki; Teramoto, Yohei; Nakai, Kei; Hidaka, Kikue; Ayuzawa, Satoshi; Eguchi, Kiyoshi; Matsumura, Akira; Suzuki, Kenji

    2015-01-01

    Dysphagia can cause serious challenges to both physical and mental health. Aspiration due to dysphagia is a major health risk that can cause pneumonia and even death. The videofluoroscopic swallow study (VFSS), which is considered the gold standard for the diagnosis of dysphagia, is not widely available, is expensive and causes exposure to radiation. The screening tests used for dysphagia need to be carried out by trained staff, and the evaluations are usually non-quantifiable. This paper investigates the development of the Swallowscope, a smartphone-based device and a feasible real-time swallowing sound-processing algorithm for the automatic screening, quantitative evaluation, and visualisation of swallowing ability. The device can be used during activities of daily life with minimal intervention, making it potentially more capable of capturing aspirations and risky swallow patterns through continuous monitoring. It also includes a cloud-based system for server-side analysis and automatic sharing of the swallowing sound. The real-time algorithm we developed for the detection of dry and water swallows is based on a template-matching approach. We analyzed the wavelet-transform-based spectral characteristics and the temporal characteristics of simultaneous, synchronised VFSS and swallowing-sound recordings of 3-ml swallows of water mixed with 25% barium from 70 subjects, and the dry (saliva) swallowing sounds of 15 healthy subjects, to establish the parameters of the template. With this algorithm, we achieved an overall detection accuracy of 79.3% (standard error: 4.2%) for the 92 water swallows; and a precision of 83.7% (range: 66.6%–100%) and a recall of 93.9% (range: 72.7%–100%) for the 71 episodes of dry swallows. PMID:27170905

  15. Smartphone-Based Real-time Assessment of Swallowing Ability From the Swallowing Sound.

    Science.gov (United States)

    Jayatilake, Dushyantha; Ueno, Tomoyuki; Teramoto, Yohei; Nakai, Kei; Hidaka, Kikue; Ayuzawa, Satoshi; Eguchi, Kiyoshi; Matsumura, Akira; Suzuki, Kenji

    2015-01-01

    Dysphagia can cause serious challenges to both physical and mental health. Aspiration due to dysphagia is a major health risk that can cause pneumonia and even death. The videofluoroscopic swallow study (VFSS), which is considered the gold standard for the diagnosis of dysphagia, is not widely available, is expensive and causes exposure to radiation. The screening tests used for dysphagia need to be carried out by trained staff, and the evaluations are usually non-quantifiable. This paper investigates the development of the Swallowscope, a smartphone-based device and a feasible real-time swallowing sound-processing algorithm for the automatic screening, quantitative evaluation, and visualisation of swallowing ability. The device can be used during activities of daily life with minimal intervention, making it potentially more capable of capturing aspirations and risky swallow patterns through continuous monitoring. It also includes a cloud-based system for server-side analysis and automatic sharing of the swallowing sound. The real-time algorithm we developed for the detection of dry and water swallows is based on a template-matching approach. We analyzed the wavelet-transform-based spectral characteristics and the temporal characteristics of simultaneous, synchronised VFSS and swallowing-sound recordings of 3-ml swallows of water mixed with 25% barium from 70 subjects, and the dry (saliva) swallowing sounds of 15 healthy subjects, to establish the parameters of the template. With this algorithm, we achieved an overall detection accuracy of 79.3% (standard error: 4.2%) for the 92 water swallows; and a precision of 83.7% (range: 66.6%-100%) and a recall of 93.9% (range: 72.7%-100%) for the 71 episodes of dry swallows.
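At its core, a template-matching detector slides a stored swallow template along the recorded sound envelope and reports where the normalized cross-correlation peaks. The sketch below illustrates only that generic mechanism; the Swallowscope's actual wavelet-based features, template, and decision thresholds are not reproduced here.

```python
import math

def normalized_xcorr(signal, template):
    """Slide the template over the signal and return the best
    normalized cross-correlation score and its sample offset."""
    m = len(template)
    t_mean = sum(template) / m
    t_dev = [t - t_mean for t in template]
    t_norm = math.sqrt(sum(d * d for d in t_dev))
    best, best_i = 0.0, 0
    for i in range(len(signal) - m + 1):
        win = signal[i:i + m]
        w_mean = sum(win) / m
        w_dev = [w - w_mean for w in win]
        w_norm = math.sqrt(sum(d * d for d in w_dev))
        if w_norm == 0.0:
            continue                      # skip silent windows
        r = sum(a * b for a, b in zip(w_dev, t_dev)) / (w_norm * t_norm)
        if r > best:
            best, best_i = r, i
    return best, best_i

# a toy "envelope" with one swallow-like bump starting at sample 10
env = [0.0] * 10 + [0.0, 1.0, 2.0, 1.0, 0.0] + [0.0] * 10
score, offset = normalized_xcorr(env, [0.0, 1.0, 2.0, 1.0, 0.0])
```

A detection would then be declared wherever the score exceeds a calibrated threshold.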

  16. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed" about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise......-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...

  17. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.

    Science.gov (United States)

    Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T

    2013-02-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.

  18. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    Science.gov (United States)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds which they make. Thus, developing a technique to localize sound sources amidst loud noise will support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique which searches for imperceptible sounds in loud noise environments. In an outdoor space where cicadas were making noise, two speakers simultaneously played the noise of a generator and a voice attenuated by 20 dB (= 1/100 of the power) relative to the generator noise. The sound signal was received by a horizontally set linear microphone array, 1.05 m in length and consisting of 15 microphones. The direction and the distance of the voice were computed, and the sound of the voice was extracted and played back as an audible sound by array signal processing.
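The array processing described above can be illustrated with delay-and-sum beamforming: each microphone channel is delayed according to a candidate direction and the channels are averaged, so sound from that direction adds coherently while noise from elsewhere does not. This sketch uses a far-field, integer-sample-delay simplification and is not the authors' actual processing.

```python
import math
import random

def delay_and_sum(mic_signals, mic_positions, angle_deg, fs, c=343.0):
    """Delay each channel of a linear array for a far-field source at
    angle_deg (0 = broadside) and average; integer-sample delays keep
    the sketch simple."""
    theta = math.radians(angle_deg)
    delays = [int(round(fs * p * math.sin(theta) / c)) for p in mic_positions]
    shift = min(delays)
    delays = [d - shift for d in delays]     # make all delays non-negative
    n = min(len(s) - d for s, d in zip(mic_signals, delays))
    out = [0.0] * n
    for sig, d in zip(mic_signals, delays):
        for i in range(n):
            out[i] += sig[i + d]
    return [v / len(mic_signals) for v in out]

def steered_power(mic_signals, mic_positions, fs, angles):
    """Return the candidate angle whose steered output has the most
    power -- the basic steered-response-power localization idea."""
    def power(x):
        return sum(v * v for v in x) / len(x)
    return max(angles, key=lambda a: power(
        delay_and_sum(mic_signals, mic_positions, a, fs)))

# two microphones 0.5 m apart; a broadside source reaches both at once
rng = random.Random(0)
noise = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
best_angle = steered_power([noise, noise[:]], [0.0, 0.5], 16000,
                           angles=[-60, -30, 0, 30, 60])
```

The same steered output that localizes the source is also the enhanced signal that could be played back, which is how beamforming both finds and extracts a buried voice.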

  19. Sound localization in noise in hearing-impaired listeners.

    Science.gov (United States)

    Lorenzi, C; Gatehouse, S; Lever, C

    1999-06-01

    The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

  20. Crowing Sound Analysis of Gaga' Chicken; Local Chicken from South Sulawesi Indonesia

    OpenAIRE

    Aprilita Bugiwati, Sri Rachma; Ashari, Fachri

    2008-01-01

    Gaga' chicken is known as a local chicken of South Sulawesi, Indonesia, with a unique and specific crowing sound that differs from other types of singing chicken in the world, especially at the ending of the crow, which resembles the sound of human laughter. 287 Gaga' chickens at 3 districts in the centre of the Gaga' chicken habitat were separated into 2 groups (163 birds of Dangdut type and 124 birds of Slow type) based on the speed...

  1. The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users.

    Science.gov (United States)

    Jones, Heath G; Kan, Alan; Litovsky, Ruth Y

    2016-01-01

    This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.
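The ILD analysis in this study reduces, at its simplest, to comparing the level captured at the two ears. A broadband version can be sketched as follows; the study computed ILDs from measured acoustic transfer functions per source location, so this RMS-based toy function is only an illustration.

```python
import math

def ild_db(left, right):
    """Broadband interaural level difference in dB; positive values
    mean the left channel carries more energy."""
    def rms(x):
        return math.sqrt(sum(v * v for v in x) / len(x))
    return 20.0 * math.log10(rms(left) / rms(right))

# a left channel at twice the amplitude of the right gives about +6 dB
ild = ild_db([2.0] * 100, [1.0] * 100)
```

Repeating such a comparison for ITE, BTE, and shoulder microphone recordings of the same source shows directly how placement changes the magnitude of the available ILD cue.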

  2. Near-Field Sound Localization Based on the Small Profile Monaural Structure

    Directory of Open Access Journals (Sweden)

    Youngwoong Kim

    2015-11-01

    Full Text Available The acoustic wave around a sound source in the near-field area presents unconventional properties in the temporal, spectral, and spatial domains due to the propagation mechanism. This paper investigates a near-field sound localizer in a small profile structure with a single microphone. The asymmetric structure around the microphone provides a distinctive spectral variation that can be recognized by the dedicated algorithm for directional localization. The physical structure consists of ten pipes of different lengths in a vertical fashion and rectangular wings positioned between the pipes in radial directions. The sound from an individual direction travels through the nearest open pipe, which generates the particular fundamental frequency according to the acoustic resonance. The Cepstral parameter is modified to evaluate the fundamental frequency. Once the system estimates the fundamental frequency of the received signal, the length of arrival and angle of arrival (AoA are derived by the designed model. From an azimuthal distance of 3–15 cm from the outer body of the pipes, the extensive acoustic experiments with a 3D-printed structure show that the direct and side directions deliver average hit rates of 89% and 73%, respectively. The closer positions to the system demonstrate higher accuracy, and the overall hit rate performance is 78% up to 15 cm away from the structure body.
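The cepstral evaluation of the fundamental frequency can be sketched with the textbook real cepstrum: transform the log-magnitude spectrum of the signal and take the quefrency of the largest peak in a plausible pitch range as the period. The paper's modified Cepstral parameter is not reproduced here; the version below is a generic stand-in using a naive O(n²) DFT for self-containment.

```python
import cmath
import math

def dft(x):
    # naive O(n^2) DFT, used for self-containment; use an FFT in practice
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def cepstral_f0(signal, fs, fmin=150.0, fmax=500.0):
    """Return the fundamental frequency at the quefrency of the largest
    cepstral peak within [fs/fmax, fs/fmin] samples."""
    n = len(signal)
    log_mag = [math.log(abs(v) + 1e-12) for v in dft(signal)]
    # the log-magnitude spectrum is real and symmetric, so a forward
    # DFT locates the same quefrency peaks as the inverse transform
    cep = dft(log_mag)
    qmin = max(1, int(fs / fmax))
    qmax = min(int(fs / fmin), n // 2)
    q = max(range(qmin, qmax + 1), key=lambda i: abs(cep[i]))
    return fs / q

# a 200 Hz harmonic complex (8 exact-bin harmonics) sampled at 8 kHz
fs, f0, n = 8000, 200.0, 400
sig = [sum(math.sin(2.0 * math.pi * f0 * h * i / fs) for h in range(1, 9))
       for i in range(n)]
est = cepstral_f0(sig, fs)
```

In the pipe-based localizer, each estimated fundamental frequency would then be mapped to the resonant pipe, and hence the direction, that produced it.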

  3. Does ability to establish sound-symbol pairings mediate the RAN reading relationship?

    DEFF Research Database (Denmark)

    Poulsen, Mads; Juul, Holger; Elbro, Carsten

    for animals in a paired associate learning task. These animals were then used in a rapid naming task. Results Preliminary results show that reading correlated with the amount of training required for learning the animal names (r=-.19, p=.06). RAN speed with the same animals did not correlate with reading......Performance on tests to rapidly name letters and digits has been shown to correlate with reading. One possible reason is that these tests probe the ability to learn and automatise symbol-sound associations. However, most studies have not controlled for the amount of experience with the RAN......-items, so it is unclear whether it is the experience or the ability to take advantage of the experience that is responsible for the correlation between RAN and reading. Paired associate learning tasks have been shown to differentiate dyslexics from controls, and to correlate with reading in unselected...

  4. Sound localization and word discrimination in reverberant environment in children with developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Wendy Castro-Camacho

    2015-04-01

    Full Text Available Objective To compare whether localization of sounds and word discrimination in a reverberant environment differ between children with dyslexia and controls. Method We studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles from the left to the right auditory field (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions; correct answers were compared. Results Spatial location of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial location in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test showed poor performance at left angles in children with dyslexia. In the reverberant test, children with dyslexia exhibited deficiencies at the -45°, -90°, and +45° angles. Conclusion Children with dyslexia may have problems locating sounds and discriminating words at extreme locations of the horizontal plane in classrooms with reverberation.

  5. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    Science.gov (United States)

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to determine amplitudes the size of an atom and locate the acoustic stimuli with an accuracy of within 13° based on their neuronal anatomy. We present here a prototype sound source localization system, inspired from this impressive performance. The system presented utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization error than those observed in nature.

  6. ICE on the road to auditory sensitivity reduction and sound localization in the frog.

    Science.gov (United States)

    Narins, Peter M

    2016-10-01

    Frogs and toads are capable of producing calls at potentially damaging levels that exceed 110 dB SPL at 50 cm. Most frog species have internally coupled ears (ICE) in which the tympanic membranes (TyMs) communicate directly via the large, permanently open Eustachian tubes, resulting in an inherently directional asymmetrical pressure-difference receiver. One active mechanism for auditory sensitivity reduction involves the pressure increase during vocalization that distends the TyM, reducing its low-frequency airborne sound sensitivity. Moreover, if sounds generated by the vocal folds arrive at both surfaces of the TyM with nearly equal amplitudes and phases, the net motion of the eardrum would be greatly attenuated. Both of these processes appear to reduce the motion of the frog's TyM during vocalizations. The implications of ICE in amphibians with respect to sound localization are discussed, and the particularly interesting case of frogs that use ultrasound for communication yet exhibit exquisitely small localization jump errors is brought to light.

  7. Analysis, Design and Implementation of an Embedded Realtime Sound Source Localization System Based on Beamforming Theory

    Directory of Open Access Journals (Sweden)

    Arko Djajadi

    2009-12-01

    Full Text Available This project is intended to analyze, design and implement a realtime sound source localization system by using a mobile robot as the media. The implemented system uses 2 microphones as the sensors, an Arduino Duemilanove microcontroller system with ATMega328p as the microprocessor, two permanent magnet DC motors as the actuators for the mobile robot, a servo motor as the actuator to rotate the webcam toward the location of the sound source, and a laptop/PC as the simulation and display media. In order to achieve the objective of finding the position of a specific sound source, beamforming theory is applied to the system. Once the location of the sound source is detected and determined, either the mobile robot adjusts its position according to the direction of the sound source, or only the webcam rotates in the direction of the incoming sound, simulating the use of this system in a video conference. The integrated system has been tested and the results show that the system could localize, in realtime, a sound source placed randomly on a half-circle area (0° - 180°) with a radius of 0.3 m - 3 m, assuming the system is the center point of the circle. Due to low ADC and processor speed, the achievable best angular resolution is still limited to 25°.
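With only two microphones, the direction of a source is commonly recovered from the time difference of arrival (TDOA) between the channels. The sketch below finds the inter-channel lag by cross-correlation and converts it to an angle under a far-field assumption; it illustrates the geometry involved, not the authors' Arduino implementation, and the sign convention is chosen for this example.

```python
import math
import random

def tdoa_angle(sig_l, sig_r, fs, mic_distance, c=343.0):
    """Estimate a far-field direction of arrival from the lag that
    maximizes the cross-correlation between two microphone channels.
    Positive angles mean the sound reaches the left microphone first."""
    n = len(sig_l)
    max_lag = int(fs * mic_distance / c)   # physically possible lags only
    best_lag, best_val = 0, float('-inf')
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += sig_l[i] * sig_r[j]
        if s > best_val:
            best_val, best_lag = s, lag
    tau = best_lag / fs                    # delay of the right channel
    sin_theta = max(-1.0, min(1.0, tau * c / mic_distance))
    return math.degrees(math.asin(sin_theta))

# synthetic check: the right channel lags the left by 5 samples
rng = random.Random(1)
base = [rng.uniform(-1.0, 1.0) for _ in range(3000)]
delay = 5
left = base
right = [0.0] * delay + base[:-delay]
angle = tdoa_angle(left, right, fs=16000, mic_distance=0.2)
```

At 16 kHz sampling and a 0.2 m baseline, a 5-sample lag corresponds to an angle of roughly 32°, which also shows why a slow ADC limits angular resolution: fewer distinguishable lags mean coarser angle steps.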

  8. Do you hear where I hear?: Isolating the individualized sound localization cues.

    Directory of Open Access Journals (Sweden)

    Griffin David Romigh

    2014-12-01

    Full Text Available It is widely acknowledged that individualized head-related transfer function (HRTF measurements are needed to adequately capture all of the 3D spatial hearing cues. However, many perceptual studies have shown that localization accuracy in the lateral dimension is only minimally decreased by the use of non-individualized head-related transfer functions. This evidence supports the idea that the individualized components of an HRTF could be isolated from those that are more general in nature. In the present study we decomposed the HRTF at each location into average, lateral and intraconic spectral components, along with an ITD in an effort to isolate the sound localization cues that are responsible for the inter-individual differences in localization performance. HRTFs for a given listener were then reconstructed systematically with components that were both individualized and non-individualized in nature, and the effect of each modification was analyzed via a virtual localization test where brief 250-ms noise bursts were rendered with the modified HRTFs. Results indicate that the cues important for individualization of HRTFs are contained almost exclusively in the intraconic portion of the HRTF spectra and localization is only minimally affected by introducing non-individualized cues into the other HRTF components. These results provide new insights into what specific inter-individual differences in head-related acoustical features are most relevant to sound localization, and provide a framework for how future human-machine interfaces might be more effectively generalized and/or individualized.
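The decomposition idea can be illustrated with a toy version: split each direction's log-magnitude HRTF into a grand average, a per-lateral-angle component, and a direction-specific residual that stands in for the intraconic part. The grouping and the dB-domain averaging below are simplifying assumptions, not the paper's exact procedure.

```python
def decompose_hrtfs(hrtfs_db, lateral_of):
    """Split per-direction log-magnitude HRTFs (lists of dB values) so
    that average + lateral[lateral_of[d]] + residual[d] reconstructs
    hrtfs_db[d] exactly for every direction d."""
    dirs = list(hrtfs_db)
    nbins = len(hrtfs_db[dirs[0]])
    average = [sum(hrtfs_db[d][k] for d in dirs) / len(dirs)
               for k in range(nbins)]
    lateral = {}
    for lat in set(lateral_of.values()):
        group = [d for d in dirs if lateral_of[d] == lat]
        lateral[lat] = [sum(hrtfs_db[d][k] - average[k] for d in group)
                        / len(group) for k in range(nbins)]
    residual = {d: [hrtfs_db[d][k] - average[k] - lateral[lateral_of[d]][k]
                    for k in range(nbins)] for d in dirs}
    return average, lateral, residual

# toy set: two frequency bins, three directions, two lateral angles
hrtfs = {'front': [0.0, 2.0], 'left': [4.0, 6.0], 'up': [2.0, 2.0]}
lateral_of = {'front': 0, 'left': 90, 'up': 0}
average, lateral, residual = decompose_hrtfs(hrtfs, lateral_of)
```

Because the decomposition is exact, components from different listeners can be swapped in and out before re-summing, which is the manipulation the study used to test which component carries the individualized cues.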

  9. Sound Source Localization Through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network

    Directory of Open Access Journals (Sweden)

    Christoph Beck

    2016-10-01

    Full Text Available Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes on the order of the size of an atom and to locate acoustic stimuli with an accuracy of within 13°, based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature.

  10. Second Sound for Heat Source Localization

    CERN Document Server

    Vennekate, Hannes; Uhrmacher, Michael; Quadt, Arnulf; Grosse-Knetter, Joern

    2011-01-01

    Defects on the surface of superconducting cavities can limit their accelerating gradient by localized heating, which results in a phase transition to the normal-conducting state: a quench. A new application involving Oscillating Superleak Transducers (OSTs) to locate such quench-inducing heat spots on the surface of the cavities was developed by D. Hartill et al. at Cornell University in 2008. The OSTs enable the detection of heat transfer via second sound in superfluid helium. This thesis presents new results on the analysis of their signal. Its behavior has been studied under different circumstances at setups at the University of Göttingen and at CERN. New approaches for automated signal processing have been developed. Furthermore, a first test setup for a single-cell Superconducting Proton Linac (SPL) cavity has been prepared. Recommendations for better signal retrieval during its operation are presented.

  11. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    .... This thesis presents an integrated auditory system for a humanoid robot, currently under development, that will, among other things, learn to localize normal, everyday sounds in a realistic environment...

  12. Hearing abilities and sound reception of broadband sounds in an adult Risso's dolphin (Grampus griseus).

    Science.gov (United States)

    Mooney, T Aran; Yang, Wei-Cheng; Yu, Hsin-Yi; Ketten, Darlene R; Jen, I-Fan

    2015-08-01

    While odontocetes do not have an external pinna that guides sound to the middle ear, they are considered to receive sound through specialized regions of the head and lower jaw. Yet odontocetes differ in the shape of the lower jaw suggesting that hearing pathways may vary between species, potentially influencing hearing directionality and noise impacts. This work measured the audiogram and received sensitivity of a Risso's dolphin (Grampus griseus) in an effort to comparatively examine how this species receives sound. Jaw hearing thresholds were lowest (most sensitive) at two locations along the anterior, midline region of the lower jaw (the lower jaw tip and anterior part of the throat). Responses were similarly low along a more posterior region of the lower mandible, considered the area of best hearing in bottlenose dolphins. Left- and right-side differences were also noted suggesting possible left-right asymmetries in sound reception or differences in ear sensitivities. The results indicate best hearing pathways may vary between the Risso's dolphin and other odontocetes measured. This animal received sound well, supporting a proposed throat pathway. For Risso's dolphins in particular, good ventral hearing would support their acoustic ecology by facilitating echo-detection from their proposed downward oriented echolocation beam.

  13. Development of the sound localization cues in cats

    Science.gov (United States)

    Tollin, Daniel J.

    2004-05-01

    Cats are a common model for developmental studies of the psychophysical and physiological mechanisms of sound localization. Yet, there are few studies on the development of the acoustical cues to location in cats. The magnitudes of the three main cues, interaural differences in time (ITDs) and level (ILDs), and monaural spectral shape cues, vary with location in adults. However, the increasing interaural distance associated with a growing head and pinnae during development will result in cues that change continuously until maturation is complete. Here, we report measurements, in cats aged 1 week to adulthood, of the physical dimensions of the head and pinnae and of the localization cues, computed from measurements of directional transfer functions. At 1 week, ILD depended little on azimuth at low frequencies. With age, the frequencies at which ILDs become substantial (>10 dB) shift to lower frequencies, and the maximum ITD increases to nearly 370 μs. Changes in the cues are correlated with the increasing size of the head and pinnae. [Work supported by NIDCD DC05122.]
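The growth of the maximum ITD with head size, as described in the record above, can be illustrated with the classic spherical-head (Woodworth) approximation. The ~5 cm effective radius below is an assumed value chosen to land near the ~370 μs figure quoted in the abstract, not a measurement from the study.

```python
import numpy as np

def woodworth_itd(head_radius_m, azimuth_deg, c=343.0):
    """Frequency-independent spherical-head (Woodworth) ITD approximation:
    ITD = a * (theta + sin(theta)) / c, with theta measured from the midline."""
    theta = np.radians(azimuth_deg)
    return head_radius_m * (theta + np.sin(theta)) / c

# With an assumed effective head radius of ~5 cm, the maximum ITD
# (source at 90 degrees) comes out near the ~370 us reported for adult cats.
itd_us = woodworth_itd(0.05, 90.0) * 1e6
print(round(itd_us))  # -> 375
```

Since the ITD scales linearly with the radius in this model, a kitten with half the effective head radius would have roughly half the maximum ITD, which is why the cue changes continuously during growth.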

  14. Evolution of Sound Source Localization Circuits in the Nonmammalian Vertebrate Brainstem

    DEFF Research Database (Denmark)

    Walton, Peggy L; Christensen-Dalsgaard, Jakob; Carr, Catherine E

    2017-01-01

    The earliest vertebrate ears likely subserved a gravistatic function for orientation in the aquatic environment. However, in addition to detecting acceleration created by the animal's own movements, the otolithic end organs that detect linear acceleration would have responded to particle movement...... to increased sensitivity to a broader frequency range and to modification of the preexisting circuitry for sound source localization....

  15. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    Localizing sounds with different frequency and time domain characteristics in a dynamic listening environment is a challenging task that has not been explored in the field of robotics as much as other perceptual tasks...

  16. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Kotaro Hoshiba

    2017-11-01

    Full Text Available In search and rescue activities, unmanned aerial vehicles (UAV should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  17. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments.

    Science.gov (United States)

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G

    2017-11-03

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  18. Digital sound de-localisation as a game mechanic for novel bodily play

    DEFF Research Database (Denmark)

    Tiab, John; Rantakari, Juho; Halse, Mads Laurberg

    2016-01-01

    This paper describes an exertion gameplay mechanic involving player's partial control of their opponent's sound localization abilities. We developed this concept through designing and testing "The Boy and The Wolf" game. In this game, we combined deprivation of sight with a positional disparity...... between player bodily movement and sound. This facilitated intense gameplay supporting player creativity and spectator engagement. We use our observations and analysis of our game to offer a set of lessons learnt for designing engaging bodily play using disparity between sound and movement. Moreover, we...... describe our intended future explorations of this area....

  19. Heart Sound Localization and Reduction in Tracheal Sounds by Gabor Time-Frequency Masking

    OpenAIRE

    SAATCI, Esra; Akan, Aydın

    2018-01-01

    Background and aim: Respiratory sounds, i.e. tracheal and lung sounds, have been of great interest due to their diagnostic value as well as the potential of their use in the estimation of respiratory dynamics (mainly airflow). Thus the aim of the study is to present a new method to filter the heart sound interference from the tracheal sounds. Materials and methods: Tracheal sounds and airflow signals were collected by using an accelerometer from 10 healthy subjects. Tracheal sounds were then pr...

  20. Towards a Synesthesia Laboratory: Real-time Localization and Visualization of a Sound Source for Virtual Reality Applications

    OpenAIRE

    Kose, Ahmet; Tepljakov, Aleksei; Astapov, Sergei; Draheim, Dirk; Petlenkov, Eduard; Vassiljeva, Kristina

    2018-01-01

    In this paper, we present our findings related to the problem of localization and visualization of a sound source placed in the same room as the listener. The particular effect that we aim to investigate is called synesthesia—the act of experiencing one sense modality as another, e.g., a person may vividly experience flashes of colors when listening to a series of sounds. Towards that end, we apply a series of recently developed methods for detecting sound source in a three-dimensional space ...

  1. Mutation in the Kv3.3 voltage-gated potassium channel causing spinocerebellar ataxia 13 disrupts sound-localization mechanisms.

    Directory of Open Access Journals (Sweden)

    John C Middlebrooks

    Full Text Available Normal sound localization requires precise comparisons of sound timing and pressure levels between the two ears. The primary localization cues are interaural time differences (ITDs) and interaural level differences (ILDs). Voltage-gated potassium channels, including Kv3.3, are highly expressed in the auditory brainstem and are thought to underlie the exquisite temporal precision and rapid spike rates that characterize brainstem binaural pathways. An autosomal dominant mutation in the gene encoding Kv3.3 has been demonstrated in a large Filipino kindred manifesting as spinocerebellar ataxia type 13 (SCA13). This kindred provides a rare opportunity to test in vivo the importance of a specific channel subunit for human hearing. Here, we demonstrate psychophysically that individuals with the mutant allele exhibit profound deficits in both ITD and ILD sensitivity, despite showing no obvious impairment in pure-tone sensitivity with either ear. Surprisingly, several individuals exhibited the auditory deficits even though they were pre-symptomatic for SCA13. We would expect that impairments of binaural processing as great as those observed in this family would result in prominent deficits in localization of sound sources and in loss of the "spatial release from masking" that aids in understanding speech in the presence of competing sounds.

  2. Lung sound analysis helps localize airway inflammation in patients with bronchial asthma

    Directory of Open Access Journals (Sweden)

    Shimoda T

    2017-03-01

    sound recordings could be used to identify sites of local airway inflammation. Keywords: airway obstruction, expiration sound pressure level, inspiration sound pressure level, expiration-to-inspiration sound pressure ratio, 7-point analysis

  3. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments †

    Science.gov (United States)

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.

    2017-01-01

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790

  4. Sound & The Society

    DEFF Research Database (Denmark)

    Schulze, Holger

    2014-01-01

    How are those sounds you hear right now socially constructed and evaluated, how are they architecturally conceptualized, and how dependent on urban planning, industrial developments and political decisions are they really? How is your ability to hear intertwined with social interactions and their professional design? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Nina Backmann, Jochen Bonz, Stefan Krebs, Esther Schelander & Holger Schulze.

  5. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    Science.gov (United States)

    Wolf, Gail Marie

    2016-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…

  6. Hoeren unter Wasser: Absolute Reizschwellen und Richtungswahrnehnumg (Underwater Hearing: Absolute Thresholds and Sound Localization),

    Science.gov (United States)

    The article deals first with the theoretical foundations of underwater hearing, and the effects of the acoustical characteristics of water on hearing...lead to the conclusion that, in water, man can locate the direction of sound at low and at very high tonal frequencies of the audio range, but this ability is probably vanishing in the middle range of frequencies. (Author)

  7. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
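A much-simplified, Jeffress-style coincidence sketch (not the spectro-temporal synchrony model described in the record above) shows how relative spike timing alone can recover an interaural delay: an array of internal delay lines compensates candidate ITDs, and the delay that maximizes coincidences wins. All parameters are illustrative assumptions.

```python
import numpy as np

fs = 100_000  # 10 us resolution for the binary spike trains
rng = np.random.default_rng(1)

def delayed_trains(itd_samples, n=5000, rate=0.02):
    """A random binary spike train and a copy delayed by the ITD."""
    left = (rng.random(n) < rate).astype(int)
    right = np.roll(left, itd_samples)  # the far ear hears the train later
    return left, right

def best_delay(left, right, max_delay=50):
    """Internal delay (in samples) that maximizes spike coincidences."""
    delays = np.arange(-max_delay, max_delay + 1)
    coincidences = [np.sum(left * np.roll(right, -d)) for d in delays]
    return delays[int(np.argmax(coincidences))]

left, right = delayed_trains(itd_samples=23)
print(best_delay(left, right))  # -> 23, the imposed ITD
```

The coincidence count at the matching delay equals the full spike count, while mismatched delays only collect chance coincidences, so the readout is robust even for sparse trains.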

  8. A novel method for direct localized sound speed measurement using the virtual source paradigm

    DEFF Research Database (Denmark)

    Byram, Brett; Trahey, Gregg E.; Jensen, Jørgen Arendt

    2007-01-01

    ) mediums. The inhomogeneous mediums were arranged as an oil layer, one 6 mm thick and the other 11 mm thick, on top of a water layer. To complement the phantom studies, sources of error for spatial registration of virtual detectors were simulated. The sources of error presented here are multiple sound...... registered virtual detector. Between a pair of registered virtual detectors a spherical wave is propagated. By beamforming the received data the time of flight between the two virtual sources can be calculated. From this information the local sound speed can be estimated. Validation of the estimator used...... both phantom and simulation results. The phantom consisted of two wire targets located near the transducer's axis at depths of 17 and 28 mm. Using this phantom the sound speed between the wires was measured for a homogeneous (water) medium and for two inhomogeneous (DB-grade castor oil and water...

  9. Narrative Ability of Children With Speech Sound Disorders and the Prediction of Later Literacy Skills

    Science.gov (United States)

    Wellman, Rachel L.; Lewis, Barbara A.; Freebairn, Lisa A.; Avrich, Allison A.; Hansen, Amy J.; Stein, Catherine M.

    2012-01-01

    Purpose The main purpose of this study was to examine how children with isolated speech sound disorders (SSDs; n = 20), children with combined SSDs and language impairment (LI; n = 20), and typically developing children (n = 20), ages 3;3 (years;months) to 6;6, differ in narrative ability. The second purpose was to determine if early narrative ability predicts school-age (8–12 years) literacy skills. Method This study employed a longitudinal cohort design. The children completed a narrative retelling task before their formal literacy instruction began. The narratives were analyzed and compared for group differences. Performance on these early narratives was then used to predict the children’s reading decoding, reading comprehension, and written language ability at school age. Results Significant group differences were found in children’s (a) ability to answer questions about the story, (b) use of story grammars, and (c) number of correct and irrelevant utterances. Regression analysis demonstrated that measures of story structure and accuracy were the best predictors of the decoding of real words, reading comprehension, and written language. Measures of syntax and lexical diversity were the best predictors of the decoding of nonsense words. Conclusion Combined SSDs and LI, and not isolated SSDs, impact a child’s narrative abilities. Narrative retelling is a useful task for predicting which children may be at risk for later literacy problems. PMID:21969531

  10. Perception of environmental sounds by experienced cochlear implant patients

    Science.gov (United States)

    Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan

    2011-01-01

    Objectives Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well being. Perception of environmental sounds as acoustically and semantically complex stimuli, may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Design Seventeen experienced postlingually-deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception, and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern and temporal order for tones tests) and a backward digit recall test. Results The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants and r = 0.48 for vowels. 
HINT and …

  11. Ormia ochracea as a Model Organism in Sound Localization Experiments and in Inventing Hearing Aids.

    Directory of Open Access Journals (Sweden)

    - -

    1998-09-01

    Full Text Available Hearing aid prescription for patients suffering from hearing loss has always been one of the main concerns of audiologists. Technology has provided hearing aids with digital and computerized systems, which have improved the quality of the sound they deliver. Yet we can also learn from nature in inventing such instruments, as in the current article, which is devoted to a small fly. Ormia ochracea is a small yellow nocturnal fly, a parasitoid of crickets. It is notable for its exceptionally acute directional hearing. In the current article we discuss how it has become a model organism in sound localization experiments and in inventing hearing aids.

  12. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  13. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.
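The "four-channel vocoder" degradation mentioned in the record above can be sketched as a minimal noise-excited channel vocoder: split the signal into bands, extract each band's envelope, and use it to modulate band-limited noise. This is an illustrative sketch only; the band edges, smoothing window, and all other parameters are assumptions, not the stimulus-processing settings used in the study.

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=8000.0):
    """Minimal noise-excited channel vocoder sketch (assumed parameters).
    Bands are spaced log-uniformly; each band's envelope (rectify + smooth)
    modulates noise filtered to the same band."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(n)
    out = np.zeros(n)
    win = max(1, int(0.01 * fs))           # ~10 ms moving-average smoother
    kernel = np.ones(win) / win
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.fft.rfft(signal) * mask, n)     # analysis band
        env = np.convolve(np.abs(band), kernel, mode="same")   # envelope
        carrier = np.fft.irfft(np.fft.rfft(noise) * mask, n)   # band noise
        out += env * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
# A crude amplitude-modulated tone standing in for a speech-like signal.
speechlike = np.sin(2 * np.pi * 440 * t) * (1 + np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(speechlike, fs)
print(vocoded.shape)
```

With only four channels, spectral detail is largely destroyed while temporal envelopes survive, which is the degradation the listeners in these studies must learn to interpret.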

  14. Effects of Active and Passive Hearing Protection Devices on Sound Source Localization, Speech Recognition, and Tone Detection.

    Directory of Open Access Journals (Sweden)

    Andrew D Brown

    Full Text Available Hearing protection devices (HPDs) such as earplugs offer to mitigate noise exposure and reduce the incidence of hearing loss among persons frequently exposed to intense sound. However, distortions of spatial acoustic information and reduced audibility of low-intensity sounds caused by many existing HPDs can make their use untenable in high-risk (e.g., military or law enforcement) environments where auditory situational awareness is imperative. Here we assessed (1) sound source localization accuracy using a head-turning paradigm, (2) speech-in-noise recognition using a modified version of the QuickSIN test, and (3) tone detection thresholds using a two-alternative forced-choice task. Subjects were 10 young normal-hearing males. Four different HPDs were tested (two active, two passive), including two new and previously untested devices. Relative to unoccluded (control) performance, all tested HPDs significantly degraded performance across tasks, although one active HPD slightly improved high-frequency tone detection thresholds and did not degrade speech recognition. Behavioral data were examined with respect to head-related transfer functions measured using a binaural manikin with and without tested HPDs in place. Data reinforce previous reports that HPDs significantly compromise a variety of auditory perceptual facilities, particularly sound localization due to distortions of high-frequency spectral cues that are important for the avoidance of front-back confusions.

  15. An intelligent artificial throat with sound-sensing ability based on laser induced graphene

    Science.gov (United States)

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-02-01

    Traditional sound sources and sound detectors are usually independent and discrete in the human hearing range. To minimize the device size and integrate it with wearable electronics, there is an urgent requirement of realizing the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist the disabled, because simple throat vibrations such as hums, coughs and screams of different intensity or frequency from a mute person can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantages of one-step fabrication, high efficiency, excellent flexibility and low cost, and it will open practical applications in voice control, wearable electronics and many other areas.

  16. On the influence of microphone array geometry on HRTF-based Sound Source Localization

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    The direction dependence of Head Related Transfer Functions (HRTFs) forms the basis for HRTF-based Sound Source Localization (SSL) algorithms. In this paper, we show how spectral similarities of the HRTFs of different directions in the horizontal plane influence performance of HRTF-based SSL...... algorithms; the more similar the HRTFs of different angles to the HRTF of the target angle, the worse the performance. However, we also show how the microphone array geometry can assist in differentiating between the HRTFs of the different angles, thereby improving performance of HRTF-based SSL algorithms....... Furthermore, to demonstrate the analysis results, we show the impact of HRTF similarities and microphone array geometry on an exemplary HRTF-based SSL algorithm, called MLSSL. This algorithm is well-suited for this purpose as it allows estimation of the Direction-of-Arrival (DoA) of the target sound using any...

  17. Differences in phonetic discrimination stem from differences in psychoacoustic abilities in learning the sounds of a second language: Evidence from ERP research.

    Science.gov (United States)

    Lin, Yi; Fan, Ruolin; Mo, Lei

    2017-01-01

    The scientific community has been divided as to the origin of individual differences in perceiving the sounds of a second language (L2). There are two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. A previous study showed that such individual variability is linked to the perceivers' speech-specific capabilities, rather than the perceivers' psychoacoustic abilities. However, we suspected that the selection of participants and the parameters of the sound stimuli might not have been appropriate. Therefore, we adjusted the sound stimuli and recorded event-related potentials (ERPs) from two groups of early, proficient Cantonese (L1)-Mandarin (L2) bilinguals who differed in their mastery of the Mandarin (L2) phonetic contrast /in-ing/, to explore whether the individual differences in perceiving L2 stem from participants' ability to discriminate various pure tones (frequency, duration and pattern). To precisely measure the participants' acoustic discrimination, mismatch negativity (MMN) elicited by the oddball paradigm was recorded in the experiment. The results showed that significant differences between good perceivers (GPs) and poor perceivers (PPs) were found in the three general acoustic conditions (frequency, duration and pattern), and the MMN amplitude for GPs was significantly larger than for PPs. Therefore, our results support a general psychoacoustic origin of individual variability in L2 phonetic mastery.

  18. Detecting change in stochastic sound sequences.

    Directory of Open Access Journals (Sweden)

    Benjamin Skerritt-Davis

    2018-05-01

    Full Text Available Our ability to parse our acoustic environment relies on the brain's capacity to extract statistical regularities from surrounding sounds. Previous work in regularity extraction has predominantly focused on the brain's sensitivity to predictable patterns in sound sequences. However, natural sound environments are rarely completely predictable, often containing some level of randomness, yet the brain is able to effectively interpret its surroundings by extracting useful information from stochastic sounds. It has been previously shown that the brain is sensitive to the marginal lower-order statistics of sound sequences (i.e., mean and variance). In this work, we investigate the brain's sensitivity to higher-order statistics describing temporal dependencies between sound events through a series of change detection experiments, where listeners are asked to detect changes in randomness in the pitch of tone sequences. Behavioral data indicate listeners collect statistical estimates to process incoming sounds, and a perceptual model based on Bayesian inference shows a capacity in the brain to track higher-order statistics. Further analysis of individual subjects' behavior indicates an important role of perceptual constraints in listeners' ability to track these sensory statistics with high fidelity. In addition, the inference model facilitates analysis of neural electroencephalography (EEG) responses, anchoring the analysis relative to the statistics of each stochastic stimulus. This reveals both a deviance response and a change-related disruption in phase of the stimulus-locked response that follow the higher-order statistics. These results shed light on the brain's ability to process stochastic sound sequences.
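    The marginal lower-order statistics mentioned above can be tracked online as a sequence unfolds. As a rough illustration (not the Bayesian inference model used in the study), Welford's algorithm maintains a running mean and variance over a pitch sequence; the pitch values below are invented for the example:

```python
# Online tracking of the marginal mean and variance of a tone sequence
# using Welford's algorithm -- an illustrative sketch only, not the
# perceptual model described in the abstract.
class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Unbiased sample variance; zero until two samples have arrived.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for pitch_hz in [440.0, 466.2, 440.0, 493.9, 440.0]:  # hypothetical tones
    stats.update(pitch_hz)
```

    After the loop, `stats.mean` and `stats.variance` hold the estimates a listener-like tracker would have accumulated so far; a change detector could compare incoming tones against them.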

  19. Early visual deprivation prompts the use of body-centered frames of reference for auditory localization.

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2018-01-01

    The effects of early visual deprivation on auditory spatial processing are controversial. Results from recent psychophysical studies show that people who were born blind have a spatial impairment in localizing sound sources within specific auditory settings, while previous psychophysical studies revealed enhanced auditory spatial abilities in early blind compared to sighted individuals. Why an auditory spatial deficit is sometimes observed within blind populations, and its task-dependency, remains to be clarified. We investigated auditory spatial perception in early blind adults and demonstrated that the deficit derives from blind individuals' reduced ability to remap sound locations using an external frame of reference. We found that performance in the blind population was severely impaired when they were required to localize brief auditory stimuli with respect to external acoustic landmarks (external reference frame) or when they had to reproduce the spatial distance between two sounds. However, they performed similarly to sighted controls when they had to localize sounds with respect to their own hand (body-centered reference frame), or to judge the distances of sounds from their finger. These results suggest that early visual deprivation and the lack of visual contextual cues during the critical period induce a preference for body-centered over external spatial auditory representations. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. A "looming bias" in spatial hearing? Effects of acoustic intensity and spectrum on categorical sound source localization.

    Science.gov (United States)

    McCarthy, Lisa; Olsen, Kirk N

    2017-01-01

    Continuous increases of acoustic intensity (up-ramps) can indicate a looming (approaching) sound source in the environment, whereas continuous decreases of intensity (down-ramps) can indicate a receding sound source. From psychoacoustic experiments, an "adaptive perceptual bias" for up-ramp looming tonal stimuli has been proposed (Neuhoff, 1998). This theory postulates that (1) up-ramps are perceptually salient because of their association with looming and potentially threatening stimuli in the environment; (2) tonal stimuli are perceptually salient because of an association with single and potentially threatening biological sound sources in the environment, relative to white noise, which is more likely to arise from dispersed signals and nonthreatening/nonbiological sources (wind/ocean). In the present study, we extrapolated the "adaptive perceptual bias" theory and investigated its assumptions by measuring sound source localization in response to acoustic stimuli presented in azimuth to imply looming, stationary, and receding motion in depth. Participants (N = 26) heard three directions of intensity change (up-ramps, down-ramps, and steady state, associated with looming, receding, and stationary motion, respectively) and three levels of acoustic spectrum (a 1-kHz pure tone, the tonal vowel /ә/, and white noise) in a within-subjects design. We first hypothesized that if up-ramps are "perceptually salient" and capable of eliciting adaptive responses, then they would be localized faster and more accurately than down-ramps. This hypothesis was supported. However, the results did not support the second hypothesis. Rather, the white-noise and vowel conditions were localized faster and more accurately than the pure-tone conditions. These results are discussed in the context of auditory and visual theories of motion perception, auditory attentional capture, and the spectral causes of spatial ambiguity.

  1. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    Science.gov (United States)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones that detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. The GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.

  2. Numerical value biases sound localization

    OpenAIRE

    Golob, Edward J.; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R.

    2017-01-01

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perce...

  3. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds

    Science.gov (United States)

    Dietz, Mathias; Marquardt, Torsten; Salminen, Nelli H.; McAlpine, David

    2013-01-01

    The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues, characteristic to reverberant speech. This stimulus, named amplitude modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of “glimpsing” low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments. PMID:23980161
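    The stimulus described above pairs a slowly sweeping interaural phase difference with a shared amplitude-modulation envelope. The sketch below generates a crude amplitude-modulated binaural beat in that spirit: a carrier at `fc` in the left ear, `fc + beat_hz` in the right, under a shared raised-cosine envelope. All function names and parameter values are illustrative assumptions, not those of the published stimuli:

```python
import math

def am_binaural_beat(fc=500.0, beat_hz=2.0, mod_hz=8.0, fs=44100, dur=1.0):
    """Generate left/right channels of an amplitude-modulated binaural beat.

    The left ear receives a carrier at fc, the right ear fc + beat_hz, so
    the interaural phase difference sweeps continuously through each beat
    cycle; both channels share a raised-cosine amplitude envelope at mod_hz.
    Illustrative sketch only -- parameters are not from the cited study.
    """
    n = int(fs * dur)
    left, right = [], []
    for i in range(n):
        t = i / fs
        env = 0.5 * (1.0 - math.cos(2.0 * math.pi * mod_hz * t))  # 0..1
        left.append(env * math.sin(2.0 * math.pi * fc * t))
        right.append(env * math.sin(2.0 * math.pi * (fc + beat_hz) * t))
    return left, right

left, right = am_binaural_beat(dur=0.1)
```

    Because the envelope starts at zero on every modulation cycle, the binaural cue carried in the fine structure can be probed specifically during each rising portion, which is the manipulation the study exploits.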

  4. Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System.

    Science.gov (United States)

    Cazettes, Fanny; Fischer, Brian J; Peña, Jose L

    2016-02-17

    Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics. In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior. Copyright © 2016 the authors.

  5. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

    Few studies have dealt with students’ preconceptions of sound. The current research employs the Reiner et al. (2000) substance schema to reveal new insights about students’ difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students’ thinking, but also examines how students use the schema’s properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more globally coherent consistencies among the properties that the students use to explain sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students’ conceptions of sound do not seem to fit Reiner et al.’s schema in all respects. Our results also indicate that students’ conceptualization of sound lacks internal consistency. Analyzing our results with respect to local and global coherence, we found that students’ conception of sound is close to diSessa’s “loosely connected, fragmented collection of ideas.” The notion that sound is perceived only as a “sort of a material,” we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion concerning the implications of the results for instruction.

  6. Affording and Constraining Local Moral Orders in Teacher-Led Ability-Based Mathematics Groups

    Science.gov (United States)

    Tait-McCutcheon, Sandi; Shuker, Mary Jane; Higgins, Joanna; Loveridge, Judith

    2015-01-01

    How teachers position themselves and their students can influence the development of afforded or constrained local moral orders in ability-based teacher-led mathematics lessons. Local moral orders are the negotiated discursive practices and interactions of participants in the group. In this article, the developing local moral orders of 12 teachers…

  7. Panels with low-Q-factor resonators with theoretically infinite sound-proofing ability at a single frequency

    Science.gov (United States)

    Lazarev, L. A.

    2015-07-01

    An infinite panel with two types of resonators regularly installed on it is theoretically considered. Each resonator is an air-filled cavity hermetically closed by a plate, which executes piston vibrations. The plate and the air inside the cavity play the roles of mass and elasticity, respectively. Every other resonator is reversed. At a certain ratio between the parameters of the resonators at the tuning frequency of the entire system, the acoustic-pressure force that directly affects the panel can be fully compensated by the action forces of the resonators. In this case, the sound-proofing ability (transmission loss) tends to infinity. The presented calculations show that a complete transmission-loss effect can be achieved even with low-Q resonators.

  8. A Survey of Sound Source Localization Methods in Wireless Acoustic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Maximo Cobos

    2017-01-01

    Full Text Available Wireless acoustic sensor networks (WASNs) are formed by a distributed group of acoustic-sensing devices featuring audio playing and recording capabilities. Current mobile computing platforms offer great possibilities for the design of audio-related applications involving acoustic-sensing nodes. In this context, acoustic source localization is one of the application domains that have attracted the most attention of the research community over the last decades. In general terms, the localization of acoustic sources can be achieved by studying energy and temporal and/or directional features from the incoming sound at different microphones and using a suitable model that relates those features with the spatial location of the source (or sources) of interest. This paper reviews common approaches for source localization in WASNs that are focused on different types of acoustic features, namely, the energy of the incoming signals, their time of arrival (TOA) or time difference of arrival (TDOA), the direction of arrival (DOA), and the steered response power (SRP) resulting from combining multiple microphone signals. Additionally, we discuss methods not only aimed at localizing acoustic sources but also designed to locate the nodes themselves in the network. Finally, we discuss current challenges and frontiers in this field.
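    Of the features surveyed, the time difference of arrival is perhaps the simplest to illustrate. The sketch below estimates TDOA by brute-force cross-correlation of two synthetic microphone signals; practical WASN systems typically compute GCC-PHAT in the frequency domain instead, and the signals here are invented for the example:

```python
# Estimating the time difference of arrival (TDOA) between two microphone
# signals by brute-force time-domain cross-correlation -- a minimal,
# stdlib-only sketch of the simplest feature type surveyed above.

def tdoa_samples(sig_a, sig_b, max_lag):
    """Return the lag (in samples) that best aligns sig_b with sig_a."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, a in enumerate(sig_a):
            j = i + lag
            if 0 <= j < len(sig_b):
                score += a * sig_b[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A pulse arriving 3 samples later at microphone B:
mic_a = [0.0] * 10 + [1.0, 0.8, 0.3] + [0.0] * 10
mic_b = [0.0] * 13 + [1.0, 0.8, 0.3] + [0.0] * 7
print(tdoa_samples(mic_a, mic_b, max_lag=5))  # → 3
```

    Given the lag in samples, the sampling rate and the speed of sound convert it to a path-length difference, from which a bearing to the source can be triangulated across node pairs.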

  9. Comparison between bilateral cochlear implants and Neurelec Digisonic(®) SP Binaural cochlear implant: speech perception, sound localization and patient self-assessment.

    Science.gov (United States)

    Bonnard, Damien; Lautissier, Sylvie; Bosset-Audoit, Amélie; Coriat, Géraldine; Beraha, Max; Maunoury, Antoine; Martel, Jacques; Darrouzet, Vincent; Bébéar, Jean-Pierre; Dauman, René

    2013-01-01

    An alternative to bilateral cochlear implantation is offered by the Neurelec Digisonic(®) SP Binaural cochlear implant, which allows stimulation of both cochleae within a single device. The purpose of this prospective study was to compare a group of Neurelec Digisonic(®) SP Binaural implant users (denoted BINAURAL group, n = 7) with a group of bilateral adult cochlear implant users (denoted BILATERAL group, n = 6) in terms of speech perception, sound localization, and self-assessment of health status and hearing disability. Speech perception was assessed using word recognition at 60 dB SPL in quiet and in a 'cocktail party' noise delivered through five loudspeakers in the hemi-sound field facing the patient (signal-to-noise ratio = +10 dB). The sound localization task was to determine the source of a sound stimulus among five speakers positioned between -90° and +90° from midline. Change in health status was assessed using the Glasgow Benefit Inventory and hearing disability was evaluated with the Abbreviated Profile of Hearing Aid Benefit. Speech perception was not statistically different between the two groups, even though there was a trend in favor of the BINAURAL group (mean percent word recognition in the BINAURAL and BILATERAL groups: 70 vs. 56.7% in quiet, 55.7 vs. 43.3% in noise). There was also no significant difference with regard to performance in sound localization and self-assessment of health status and hearing disability. On the basis of the BINAURAL group's performance in hearing tasks involving the detection of interaural differences, implantation with the Neurelec Digisonic(®) SP Binaural implant may be considered to restore effective binaural hearing. Based on these first comparative results, this device seems to provide benefits similar to those of traditional bilateral cochlear implantation, with a new approach to stimulate both auditory nerves. Copyright © 2013 S. Karger AG, Basel.

  10. Sound & The Senses

    DEFF Research Database (Denmark)

    Schulze, Holger

    2012-01-01

    How are those sounds you hear right now technically generated and post-produced, how are they aesthetically conceptualized and how culturally dependent are they really? How is your ability to hear intertwined with all the other senses and their cultural, biographical and technological construction over time? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Jonathan Sterne, AGF a.k.a. Antye Greie, Jens Gerrit Papenburg & Holger Schulze.

  11. PULSAR.MAKING VISIBLE THE SOUND OF STARS

    OpenAIRE

    Lega, Ferran

    2015-01-01

    [EN] Pulsar, making visible the sound of stars is a communication based on a sound installation conceived as a site-specific project to show the hidden abilities of sound to generate images and patterns on matter, using the acoustic science of cymatics. The objective of this communication is to show how, through abstract and intangible sounds from the celestial orbs of the cosmos (radio waves generated by electromagnetic pulses from the rotation of neutron stars), we can create ar...

  12. Enhanced Soundings for Local Coupling Studies Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, Craig R [University at Albany, State University of New York; Santanello, Joseph A [NASA Goddard Space Flight Center (GSFC), Greenbelt, MD (United States); Gentine, Pierre [Columbia Univ., New York, NY (United States)

    2016-04-01

    This document presents initial analyses of the enhanced radiosonde observations obtained during the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility Enhanced Soundings for Local Coupling Studies Field Campaign (ESLCS), which took place at the ARM Southern Great Plains (SGP) Central Facility (CF) from June 15 to August 31, 2015. During ESLCS, routine 4-times-daily radiosonde measurements at the ARM-SGP CF were augmented on 12 days (June 18 and 29; July 11, 14, 19, and 26; August 15, 16, 21, 25, 26, and 27) with daytime 1-hourly radiosondes and 10-minute ‘trailer’ radiosondes every 3 hours. These 12 intensive operational period (IOP) days were selected on the basis of prior-day qualitative forecasts of potential land-atmosphere coupling strength. The campaign captured 2 dry soil convection advantage days (June 29 and July 14) and 10 atmospherically controlled days. Other noteworthy IOP events include: 2 soil dry-down sequences (July 11-14-19 and August 21-25-26), a 2-day clear-sky case (August 15-16), and the passing of Tropical Storm Bill (June 18). To date, the ESLCS data set constitutes the highest-temporal-resolution sampling of the evolution of the daytime planetary boundary layer (PBL) using radiosondes at the ARM-SGP. The data set is expected to contribute to: 1) improved understanding and modeling of the diurnal evolution of the PBL, particularly with regard to the role of local soil wetness, and (2) new insights into the appropriateness of current ARM-SGP CF thermodynamic sampling strategies.

  13. Hearing in alpacas (Vicugna pacos): audiogram, localization acuity, and use of binaural locus cues.

    Science.gov (United States)

    Heffner, Rickye S; Koay, Gimseong; Heffner, Henry E

    2014-02-01

    Behavioral audiograms and sound localization abilities were determined for three alpacas (Vicugna pacos). Their hearing at a level of 60 dB sound pressure level (SPL) (re 20 μPa) extended from 40 Hz to 32.8 kHz, a range of 9.7 octaves. They were most sensitive at 8 kHz, with an average threshold of -0.5 dB SPL. The minimum audible angle around the midline for 100-ms broadband noise was 23°, indicating relatively poor localization acuity and potentially supporting the finding that animals with broad areas of best vision have poorer sound localization acuity. The alpacas were able to localize low-frequency pure tones, indicating that they can use the binaural phase cue, but they were unable to localize pure tones above the frequency of phase ambiguity, thus indicating complete inability to use the binaural intensity-difference cue. In contrast, the alpacas relied on their high-frequency hearing for pinna cues; they could discriminate front-back sound sources using 3-kHz high-pass noise, but not 3-kHz low-pass noise. These results are compared to those of other hoofed mammals and to mammals more generally.

  14. The relevance of visual information on learning sounds in infancy

    NARCIS (Netherlands)

    ter Schure, S.M.M.

    2016-01-01

    Newborn infants are sensitive to combinations of visual and auditory speech. Does this ability to match sounds and sights affect how infants learn the sounds of their native language? And are visual articulations the only type of visual information that can influence sound learning? This

  15. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can

  16. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    Full Text Available Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distributions of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties such as the spread of IPD distributions as well as the overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much more weakly across frequency channels and that IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
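    As a minimal illustration of one of the cues analyzed, a broadband interaural level difference can be estimated from channel RMS energies. The study computes such statistics per frequency channel from real recordings; this stdlib-only sketch omits the filterbank and uses synthetic signals:

```python
import math

def ild_db(left, right, eps=1e-12):
    """Interaural level difference in dB (positive = louder at the left ear).

    A simple broadband estimate from RMS energies -- an illustrative
    simplification of the per-channel cue statistics in the abstract.
    """
    rms_l = math.sqrt(sum(x * x for x in left) / len(left))
    rms_r = math.sqrt(sum(x * x for x in right) / len(right))
    return 20.0 * math.log10((rms_l + eps) / (rms_r + eps))

# Right channel attenuated by half -> roughly +6 dB ILD:
left = [math.sin(0.01 * i) for i in range(1000)]
right = [0.5 * x for x in left]
print(round(ild_db(left, right), 2))  # → 6.02
```

    Pooling such per-frame estimates over a recording yields the empirical ILD distributions whose shape the study compares across auditory scenes.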

  18. Local Mechanisms for Loud Sound-Enhanced Aminoglycoside Entry into Outer Hair Cells

    Directory of Open Access Journals (Sweden)

    Hongzhe eLi

    2015-04-01

    Full Text Available Loud sound exposure exacerbates aminoglycoside ototoxicity, increasing the risk of permanent hearing loss and degrading the quality of life in affected individuals. We previously reported that loud sound exposure induces temporary threshold shifts (TTS) and enhances uptake of aminoglycosides, like gentamicin, by cochlear outer hair cells (OHCs). Here, we explore mechanisms by which loud sound exposure and TTS could increase aminoglycoside uptake by OHCs that may underlie this form of ototoxic synergy. Mice were exposed to loud sound levels to induce TTS, and received fluorescently-tagged gentamicin (GTTR) for 30 minutes prior to fixation. The degree of TTS was assessed by comparing auditory brainstem responses before and after loud sound exposure. The number of tip links, which gate the GTTR-permeant mechanoelectrical transducer (MET) channels, was determined in OHC bundles, with or without exposure to loud sound, using scanning electron microscopy. We found that wide-band noise (WBN) levels that induce TTS also enhance OHC uptake of GTTR compared to OHCs in control cochleae. In cochlear regions with TTS, the increase in OHC uptake of GTTR was significantly greater than in adjacent pillar cells. In control mice, we identified stereociliary tip links at ~50% of potential positions in OHC bundles. However, the number of OHC tip links was significantly reduced in mice that received WBN at levels capable of inducing TTS. These data suggest that GTTR uptake by OHCs during TTS occurs by increased permeation of surviving, mechanically-gated MET channels, and/or non-MET aminoglycoside-permeant channels activated following loud sound exposure. Loss of tip links would hyperpolarize hair cells and potentially increase drug uptake via aminoglycoside-permeant channels expressed by hair cells. The effect of TTS on aminoglycoside-permeant channel kinetics will shed new light on the mechanisms of loud sound-enhanced aminoglycoside uptake, and consequently on ototoxic

  19. The natural horn as an efficient sound radiating system ...

    African Journals Online (AJOL)

    Results obtained showed that the locally made horns are efficient sound-radiating systems and are therefore excellent for sound production in local musical renditions. These findings, in addition to the portability and low cost of the horns, qualify them to be highly recommended for use in music making and for other purposes ...

  20. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals), and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...

  1. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    Science.gov (United States)

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., BARKING dog) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., DRUMMING drum). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, both effects were additive; that is, the semantic interference effects were of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.

  2. Electromagnetic sounding of the Earth's interior

    CERN Document Server

    Spichak, Viacheslav V

    2015-01-01

Electromagnetic Sounding of the Earth's Interior, 2nd edition, provides a comprehensive, up-to-date collection of contributions covering methodological, computational and practical aspects of electromagnetic sounding of the Earth by different techniques at global, regional and local scales. Moreover, it contains new developments such as the concept of self-consistent tasks of geophysics and 3-D interpretation of TEM sounding which, so far, have not all been covered by one book. Electromagnetic Sounding of the Earth's Interior, 2nd edition, consists of three parts: I - EM sounding methods, II - Forward modelling and inversion techniques, and III - Data processing, analysis, modelling and interpretation. The new edition includes brand new chapters on pulse and frequency electromagnetic sounding for hydrocarbon offshore exploration. Additionally, all other chapters have been extensively updated to include new developments. Presents recently developed methodological findings of the earth's study, including seism...

  3. Hearing illusory sounds in noise: sensory-perceptual transformations in primary auditory cortex.

    NARCIS (Netherlands)

    Riecke, L.; Opstal, A.J. van; Goebel, R.; Formisano, E.

    2007-01-01

A sound that is interrupted by silence is perceived as discontinuous. However, when the silence is replaced by noise, the target sound may be heard as uninterrupted. Understanding the neural basis of this continuity illusion may elucidate the ability to track sounds of interest in noisy auditory ...

  4. Acoustic metamaterials capable of both sound insulation and energy harvesting

    Science.gov (United States)

    Li, Junfei; Zhou, Xiaoming; Huang, Guoliang; Hu, Gengkai

    2016-04-01

    Membrane-type acoustic metamaterials are well known for low-frequency sound insulation. In this work, by introducing a flexible piezoelectric patch, we propose sound-insulation metamaterials with the ability of energy harvesting from sound waves. The dual functionality of the metamaterial device has been verified by experimental results, which show an over 20 dB sound transmission loss and a maximum energy conversion efficiency up to 15.3% simultaneously. This novel property makes the metamaterial device more suitable for noise control applications.
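To put the reported figures in perspective, transmission loss in decibels maps directly to the fraction of incident sound power that passes through the panel. The quick check below is a generic sketch of that relationship, not a calculation from the paper:

```python
import math

def transmitted_fraction(tl_db):
    """Fraction of incident sound power transmitted through a barrier
    with the given transmission loss in dB: TL = 10*log10(1/tau)."""
    return 10 ** (-tl_db / 10)

# The abstract reports over 20 dB of sound transmission loss.
tau = transmitted_fraction(20.0)
print(tau)  # 0.01: only 1% of incident power is transmitted
```

So a 20 dB loss already blocks 99% of the incident sound power; the 15.3% figure quoted in the abstract is the separate maximum energy conversion efficiency of the piezoelectric patch.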

  5. Acoustic metamaterials capable of both sound insulation and energy harvesting

    International Nuclear Information System (INIS)

    Li, Junfei; Zhou, Xiaoming; Hu, Gengkai; Huang, Guoliang

    2016-01-01

    Membrane-type acoustic metamaterials are well known for low-frequency sound insulation. In this work, by introducing a flexible piezoelectric patch, we propose sound-insulation metamaterials with the ability of energy harvesting from sound waves. The dual functionality of the metamaterial device has been verified by experimental results, which show an over 20 dB sound transmission loss and a maximum energy conversion efficiency up to 15.3% simultaneously. This novel property makes the metamaterial device more suitable for noise control applications. (paper)

  6. Item bias in self-reported functional ability among 75-year-old men and women in three Nordic localities

    DEFF Research Database (Denmark)

    Avlund, K; Era, P; Davidsen, M

    1996-01-01

The purpose of this article is to analyse item bias in a measure of self-reported functional ability among 75-year-old people in three Nordic localities. The present item bias analysis examines whether the construction of a functional ability index from several variables results in bias in relation to geographical locality and gender. Information about self-reported functional ability was gathered from surveys on 75-year-old men and women in Glostrup (Denmark), Göteborg (Sweden) and Jyväskylä (Finland). The data were collected by structured home interviews about mobility and Physical Activities of Daily Living (PADL) in relation to tiredness, reduced speed and dependency, and combined into three tiredness scales, three reduced-speed scales and two dependency scales. The analysis revealed item bias regarding geographical locality in seven out of eight of the functional ability scales, but nearly no bias ...

  7. Developmental Changes in Locating Voice and Sound in Space

    Science.gov (United States)

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment, the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless, even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult, and this ability was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (present only in the 7-month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  8. Sparse representation of Gravitational Sound

    Science.gov (United States)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local measure of sparsity is shown to render meaningful information about the variation of a signal along time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to divulge Gravitational Sound in the form of a ring tone.
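The central claim, that a signal can be approximated with far fewer elementary components than an orthogonal basis would need, can be illustrated with greedy sparse approximation. The sketch below is plain matching pursuit over a hypothetical random dictionary and a synthetic 2-sparse signal; it is not the authors' actual procedure or data:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter):
    """Greedily approximate `signal` as a sparse combination of the
    unit-norm columns of `dictionary` (plain matching pursuit)."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual          # correlate every atom with the residual
        k = int(np.argmax(np.abs(corr)))        # pick the best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]  # subtract its contribution
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))                  # overcomplete dictionary, 128 atoms in 64 dims
D /= np.linalg.norm(D, axis=0)
x = 3.0 * D[:, 5] - 2.0 * D[:, 17]              # signal built from just 2 atoms
coeffs, residual = matching_pursuit(x, D, n_iter=30)
ratio = np.linalg.norm(residual) / np.linalg.norm(x)
print(round(ratio, 3), np.count_nonzero(coeffs))
```

The residual shrinks to a small fraction of the signal norm while touching far fewer atoms than the 64 coefficients an orthogonal basis expansion would require.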

  9. Genetics and genomics of musical abilities

    OpenAIRE

    Oikkonen, Jaana

    2016-01-01

Most people have the capacity for music perception and production, but the degree of music competency varies between individuals. In this thesis, I studied abilities to identify pitch, tone duration and sound patterns with Karma's test for auditory structuring (KMT), and Seashore's tests for time (ST) and pitch (SP). These abilities can be considered as basic components of musicality. Additionally, I studied self-reported musical activities, especially composing and arranging. Musical ability...

  10. Propagation of Sound in a Bose-Einstein Condensate

    International Nuclear Information System (INIS)

    Andrews, M.R.; Kurn, D.M.; Miesner, H.; Durfee, D.S.; Townsend, C.G.; Inouye, S.; Ketterle, W.

    1997-01-01

    Sound propagation has been studied in a magnetically trapped dilute Bose-Einstein condensate. Localized excitations were induced by suddenly modifying the trapping potential using the optical dipole force of a focused laser beam. The resulting propagation of sound was observed using a novel technique, rapid sequencing of nondestructive phase-contrast images. The speed of sound was determined as a function of density and found to be consistent with Bogoliubov theory. This method may generally be used to observe high-lying modes and perhaps second sound. copyright 1997 The American Physical Society
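The Bogoliubov result referred to here predicts a density-dependent speed of sound, c = sqrt(g*n/m), with interaction strength g = 4*pi*hbar^2*a/m, so c scales as the square root of density. The numbers below are illustrative textbook values for a sodium condensate, not the figures measured in this experiment:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 3.8175e-26           # mass of a 23Na atom, kg
a = 2.75e-9              # s-wave scattering length of 23Na, m (approximate)
n = 1.0e20               # condensate density, m^-3 (illustrative)

g = 4 * math.pi * hbar**2 * a / m   # interaction strength
c = math.sqrt(g * n / m)            # Bogoliubov speed of sound

print(round(c * 1000, 2), "mm/s")   # of order a few mm/s at this density
```

Doubling the density multiplies c by sqrt(2), which is the density dependence the experiment tested against Bogoliubov theory.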

  11. managerial ability and farming success : an analysis of small ...

    African Journals Online (AJOL)

    This research and analysis drew from the field of Industrial Psychology to determine and ... capital and sound financial structure within farmers' business, market access, ... managerial ability and strong entrepreneurial instinct; ability to handle.

  12. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    Science.gov (United States)

    Młynarski, Wiktor

    2015-01-01

In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
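Interaural time differences of the kind encoded by the model's second layer can be estimated directly from a binaural pair by cross-correlation: the lag that maximizes the correlation between the two ear signals is the ITD estimate. A minimal sketch with a synthetic broadband source and an assumed 12-sample interaural delay:

```python
import numpy as np

fs = 48000                                   # assumed sample rate, Hz
rng = np.random.default_rng(1)
source = rng.normal(size=2048)               # broadband source signal

delay = 12                                   # right ear lags the left by 12 samples
left = source
right = np.concatenate([np.zeros(delay), source[:-delay]])

# Cross-correlate the ear signals; the peak lag is the ITD in samples.
corr = np.correlate(right, left, mode="full")
lag = int(np.argmax(corr)) - (len(left) - 1)
itd_seconds = lag / fs

print(lag, itd_seconds)  # 12 0.00025
```

At 48 kHz a 12-sample lag corresponds to a 0.25 ms ITD, within the physiological range for a lateral source.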

  13. What and Where in auditory sensory processing: A high-density electrical mapping study of distinct neural processes underlying sound object recognition and sound localization

    Directory of Open Access Journals (Sweden)

    Victoria M Leavitt

    2011-06-01

    Full Text Available Functionally distinct dorsal and ventral auditory pathways for sound localization (where and sound object recognition (what have been described in non-human primates. A handful of studies have explored differential processing within these streams in humans, with highly inconsistent findings. Stimuli employed have included simple tones, noise bursts and speech sounds, with simulated left-right spatial manipulations, and in some cases participants were not required to actively discriminate the stimuli. Our contention is that these paradigms were not well suited to dissociating processing within the two streams. Our aim here was to determine how early in processing we could find evidence for dissociable pathways using better titrated what and where task conditions. The use of more compelling tasks should allow us to amplify differential processing within the dorsal and ventral pathways. We employed high-density electrical mapping using a relatively large and environmentally realistic stimulus set (seven animal calls delivered from seven free-field spatial locations; with stimulus configuration identical across the where and what tasks. Topographic analysis revealed distinct dorsal and ventral auditory processing networks during the where and what tasks with the earliest point of divergence seen during the N1 component of the auditory evoked response, beginning at approximately 100 ms. While this difference occurred during the N1 timeframe, it was not a simple modulation of N1 amplitude as it displayed a wholly different topographic distribution to that of the N1. Global dissimilarity measures using topographic modulation analysis confirmed that this difference between tasks was driven by a shift in the underlying generator configuration. Minimum norm source reconstruction revealed distinct activations that corresponded well with activity within putative dorsal and ventral auditory structures.

  14. Suppressive competition: how sounds may cheat sight.

    Science.gov (United States)

    Kayser, Christoph; Remedios, Ryan

    2012-02-23

In this issue of Neuron, Iurilli et al. (2012) demonstrate that auditory cortex activation directly engages local GABAergic circuits in V1 to induce sound-driven hyperpolarizations in layer 2/3 and layer 6 pyramidal neurons. Thereby, sounds can directly suppress V1 activity and visually driven behavior. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Effects of musical training on sound pattern processing in high-school students.

    Science.gov (United States)

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited under different stimulus onset asynchrony (SOA) conditions for musicians and non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than age-matched peers. Musical training facilitates detection of auditory patterns, allowing sequential sound patterns to be recognized automatically over longer time periods than in non-musical counterparts.

  16. Measuring Young Children's Alphabet Knowledge: Development and Validation of Brief Letter-Sound Knowledge Assessments

    Science.gov (United States)

    Piasta, Shayne B.; Phillips, Beth M.; Williams, Jeffrey M.; Bowles, Ryan P.; Anthony, Jason L.

    2016-01-01

    Early childhood teachers are increasingly encouraged to support children's development of letter-sound abilities. Assessment of letter-sound knowledge is key in planning for effective instruction, yet the letter-sound knowledge assessments currently available and suitable for preschool-age children demonstrate significant limitations. The purpose…

  17. Localização sonora em usuários de aparelhos de amplificação sonora individual Sound localization by hearing aid users

    Directory of Open Access Journals (Sweden)

    Paula Cristina Rodrigues

    2010-06-01

Full Text Available PURPOSE: to compare the performance of users of behind-the-ear and in-the-canal hearing aids on a sound-source localization test with that of normal-hearing listeners, in the horizontal and midsagittal spatial planes, at frequencies of 500, 2,000 and 4,500 Hz; and to correlate correct responses on the localization test with duration of hearing aid use. METHODS: eight normal-hearing listeners and 20 hearing aid users were tested, the latter divided into two groups: one of 10 users of in-the-canal hearing aids and the other of 10 users of behind-the-ear hearing aids. All were submitted to a sound-source localization test in which three types of square waves, with fundamental frequencies of 0.5 kHz, 2 kHz and 4.5 kHz, were presented randomly at an intensity of 70 dBA. RESULTS: mean correct-response rates were 78.4%, 72.2% and 72.9% for the normal-hearing listeners at 0.5 kHz, 2 kHz and 4.5 kHz, respectively, versus 40.1%, 39.4% and 41.7% for the hearing aid users. Regarding hearing aid type, users of the in-the-canal model identified the origin of the sound source correctly 47.2% of the time, and users of the behind-the-ear model 37.4% of the time. No correlation was observed between correct responses on the localization test and duration of hearing aid use. CONCLUSION: normal-hearing listeners localize sound sources more efficiently than hearing aid users and, among the latter, users of the in-the-canal model performed better. Moreover, duration of use did not affect performance in localizing sound sources.

  18. Speech Abilities in Preschool Children with Speech Sound Disorder with and without Co-Occurring Language Impairment

    Science.gov (United States)

    Macrae, Toby; Tyler, Ann A.

    2014-01-01

    Purpose: The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. Method: In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different…

  19. L-type calcium channels refine the neural population code of sound level

    Science.gov (United States)

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  20. Binaural Processing of Multiple Sound Sources

    Science.gov (United States)

    2016-08-18

AFRL-AFOSR-VA-TR-2016-0298. Binaural Processing of Multiple Sound Sources. William Yost, Arizona State University, 660 S Mill Ave Ste 312, Tempe, AZ 85281. Report type: Final Performance. Dates covered: 15 Jul 2012 to 14 Jul 2016. The three topics cited above are entirely within the scope of the AFOSR grant. Subject terms: binaural hearing, sound localization, interaural signal.

  1. Sound production in Onuxodon fowleri (Carapidae) and its amplification by the host shell.

    Science.gov (United States)

    Kéver, Loïc; Colleye, Orphal; Lugli, Marco; Lecchini, David; Lerouvreur, Franck; Herrel, Anthony; Parmentier, Eric

    2014-12-15

    Onuxodon species are well known for living inside pearl oysters. As in other carapids, their anatomy highlights their ability to make sounds but sound production has never been documented in Onuxodon. This paper describes sound production in Onuxodon fowleri as well as the anatomy of the sound production apparatus. Single-pulsed sounds and multiple-pulsed sounds that sometimes last more than 3 s were recorded in the field and in captivity (Makemo Island, French Polynesia). These pulses are characterized by a broadband frequency spectrum from 100 to 1000 Hz. Onuxodon fowleri is mainly characterized by its ability to modulate the pulse period, meaning that this species can produce pulsed sounds and tonal-like sounds using the same mechanism. In addition, the sound can be remarkably amplified by the shell cavity (peak gain can exceed 10 dB for some frequencies). The sonic apparatus of O. fowleri is characterized by a rocker bone in front of the swimbladder, modified vertebrae and epineurals, and two pairs of sonic muscles, one of which (primary sonic muscle) inserts on the rocker bone. The latter structure, which is absent in other carapid genera, appears to be sexually dimorphic suggesting differences in sound production in males and females. Sound production in O. fowleri could be an example of adaptation where an animal exploits features of its environment to enhance communication. © 2014. Published by The Company of Biologists Ltd.

  2. Tool-use-associated sound in the evolution of language.

    Science.gov (United States)

    Larsson, Matz

    2015-09-01

    Proponents of the motor theory of language evolution have primarily focused on the visual domain and communication through observation of movements. In the present paper, it is hypothesized that the production and perception of sound, particularly of incidental sound of locomotion (ISOL) and tool-use sound (TUS), also contributed. Human bipedalism resulted in rhythmic and more predictable ISOL. It has been proposed that this stimulated the evolution of musical abilities, auditory working memory, and abilities to produce complex vocalizations and to mimic natural sounds. Since the human brain proficiently extracts information about objects and events from the sounds they produce, TUS, and mimicry of TUS, might have achieved an iconic function. The prevalence of sound symbolism in many extant languages supports this idea. Self-produced TUS activates multimodal brain processing (motor neurons, hearing, proprioception, touch, vision), and TUS stimulates primate audiovisual mirror neurons, which is likely to stimulate the development of association chains. Tool use and auditory gestures involve motor processing of the forelimbs, which is associated with the evolution of vertebrate vocal communication. The production, perception, and mimicry of TUS may have resulted in a limited number of vocalizations or protowords that were associated with tool use. A new way to communicate about tools, especially when out of sight, would have had selective advantage. A gradual change in acoustic properties and/or meaning could have resulted in arbitrariness and an expanded repertoire of words. Humans have been increasingly exposed to TUS over millions of years, coinciding with the period during which spoken language evolved. ISOL and tool-use-related sound are worth further exploration.

  3. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Throughout several audio and audio-video tests we have compared both Foley and real sounds originated by an identical action. The main purpose was to evaluate if sound effects ...

  4. Blast noise classification with common sound level meter metrics.

    Science.gov (United States)

    Cvengros, Robert M; Valente, Dan; Nykaza, Edward T; Vipperman, Jeffrey S

    2012-08-01

A common set of signal features measurable by a basic sound level meter is analyzed, and the quality of the information carried in subsets of these features is examined for its ability to discriminate military blast and non-blast sounds. The analysis is based on over 120,000 human-classified signals compiled from seven different datasets. The study implements linear and Gaussian radial basis function (RBF) support vector machines (SVM) to classify blast sounds. Using the orthogonal centroid dimension reduction technique, intuition is developed about the distribution of blast and non-blast feature vectors in high-dimensional space. Recursive feature elimination (SVM-RFE) is then used to eliminate features containing redundant information and rank features according to their ability to separate blasts from non-blasts. Finally, the accuracy of the linear and RBF SVM classifiers is listed for each of the experiments in the dataset, and the weights are given for the linear SVM classifier.
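The classification pipeline described can be sketched in miniature. The snippet below trains a linear SVM by sub-gradient descent on the regularized hinge loss, using made-up two-feature "blast"/"non-blast" clusters; it illustrates the SVM idea only and is not the paper's classifiers, features, or its 120,000-signal dataset:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=300):
    """Linear SVM trained by sub-gradient descent on
    (lam/2)*||w||^2 + mean hinge loss; labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                        # margin violators drive the update
        grad_w = lam * w - (y[viol] @ X[viol]) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(0)
# Hypothetical two-feature clusters (e.g. peak level, duration) for each class.
blasts = rng.normal(loc=[2.0, 2.0], scale=0.4, size=(100, 2))
others = rng.normal(loc=[-2.0, -2.0], scale=0.4, size=(100, 2))
X = np.vstack([blasts, others])
y = np.concatenate([np.ones(100), -np.ones(100)])

w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
print(acc)
```

On well-separated clusters like these the learned hyperplane classifies the training set essentially perfectly; the magnitudes of the entries of `w` are the per-feature weights that SVM-RFE-style ranking would inspect.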

  5. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  6. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    Science.gov (United States)

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method and blind source separation method is proposed. During a diesel engine noise and vibration test, because a diesel engine has many complex noise sources, a lead covering method was carried out on a diesel engine to isolate other interference noise from the No. 1-5 cylinders. Only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were utilized to measure the radiated noise signals 1 m away from the diesel engine. First, a binaural sound localization method was adopted to separate the noise sources that are in different places. Then, for noise sources that are in the same place, a blind source separation method is utilized to further separate and identify the noise sources. Finally, a coherence function method, continuous wavelet time-frequency analysis method, and prior knowledge of the diesel engine are combined to further identify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine. The frequency of the combustion noise and the piston slap noise are respectively concentrated at 4350 Hz and 1988 Hz. Compared with the blind source separation method, the proposed method has superior separation and identification effects, and the separation results have fewer interference components from other noise.
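The blind-source-separation stage can be illustrated with a standard independent component analysis demo: two synthetic sources (a tone and a square wave, stand-ins for combustion-like and slap-like components) are mixed into two simulated microphone channels and then unmixed. This uses scikit-learn's FastICA as a generic stand-in; the paper does not specify this particular algorithm, and the mixing matrix is invented:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * t)              # tonal source (combustion-like)
s2 = np.sign(np.sin(3 * np.pi * t))     # impulsive square source (slap-like)
S = np.c_[s1, s2] + 0.02 * rng.normal(size=(2000, 2))

A = np.array([[1.0, 0.5],               # hypothetical mixing matrix:
              [0.5, 1.0]])              # each microphone hears both sources
X = S @ A.T                             # two simulated microphone channels

S_est = FastICA(n_components=2, random_state=0).fit_transform(X)

# Each recovered component should match one true source up to sign and scale.
C = np.corrcoef(S.T, S_est.T)[:2, 2:]
match = np.max(np.abs(C), axis=1)
print(np.round(match, 2))
```

ICA recovers the sources only up to permutation, sign, and scale, which is why the abstract's final identification step (coherence functions, wavelet analysis, and prior knowledge of the engine) is still needed to label which separated component is which.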

  7. Interactive physically-based sound simulation

    Science.gov (United States)

    Raghuvanshi, Nikunj

The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual, properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation ...
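Synthesis of sounds from elastic surface vibrations is commonly modeled as a bank of exponentially damped sinusoids (modal synthesis). The toy sketch below uses invented modal parameters, not values from the dissertation:

```python
import numpy as np

fs = 44100                      # sample rate, Hz
dur = 0.5                       # clip length, seconds
t = np.arange(int(fs * dur)) / fs

# Hypothetical modes of a struck object: (frequency Hz, damping 1/s, amplitude).
modes = [(440.0, 8.0, 1.0), (1172.0, 18.0, 0.5), (2616.0, 40.0, 0.25)]

# An impact excites every mode at once; each then rings down independently.
impact = np.zeros_like(t)
for f, d, a in modes:
    impact += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)

# Higher modes decay faster, so the attack is bright and the tail is pure.
early = np.max(np.abs(impact[: fs // 100]))   # peak in the first 10 ms
late = np.max(np.abs(impact[-fs // 100:]))    # peak in the last 10 ms
print(early > 10 * late)  # True
```

Perceptual techniques like those described in the abstract scale this idea up by pruning modes that are masked or inaudible, which is what makes hundreds of simultaneously sounding objects feasible.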

  8. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques, we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  9. Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time

    OpenAIRE

    Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation ...

  10. Recognition and characterization of unstructured environmental sounds

    Science.gov (United States)

    Chu, Selina

    2011-12-01

    Environmental sounds are what we hear every day, or more generally the ambient or background audio that surrounds us. Humans utilize both vision and hearing to respond to their surroundings, a capability still quite limited in machine processing. The first step toward achieving multimodal input applications is the ability to process unstructured audio and recognize audio scenes (or environments). Such an ability would have applications in content analysis and mining of multimedia data, and in improving robustness in context-aware applications through multi-modality, such as in assistive robotics, surveillance, or mobile device-based services. The goal of this thesis is the characterization of unstructured environmental sounds for understanding and predicting the context surrounding an agent or device. Most research on audio recognition has focused primarily on speech and music; less attention has been paid to the challenges and opportunities of using audio to characterize unstructured environments. My research focuses on investigating challenging issues in characterizing unstructured environmental audio and on developing novel algorithms for modeling the variations of the environment. The first step in building a recognition system for unstructured auditory environments was to investigate techniques and audio features for working with such audio data. We begin by performing a study that explores suitable features and the feasibility of designing an automatic environment recognition system using audio information.
In my initial investigation into that feasibility, I found that traditional recognition and feature-extraction techniques were not suitable for environmental sounds, which lack the formantic and harmonic structures of speech and music, thus dispelling the notion that traditional speech and music recognition techniques can simply

  11. Spatial aspects of sound quality - subjective assessment of sound reproduced by stereo and by multichannel systems

    DEFF Research Database (Denmark)

    Choisel, Sylvain

    the fidelity with which sound reproduction systems can re-create the desired stereo image, a laser pointing technique was developed to accurately collect subjects' responses in a localization task. This method is subsequently applied in an investigation of the effects of loudspeaker directivity...... on the perceived direction of panned sources. The second part of the thesis addresses the identification of auditory attributes which play a role in the perception of sound reproduced by multichannel systems. Short musical excerpts were presented in mono, stereo and several multichannel formats to evoke various...

  12. The influence of (central) auditory processing disorder in speech sound disorders.

    Science.gov (United States)

    Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein

    2016-01-01

    Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and targeting of speech therapy in children with speech sound disorders. To study phonological measures and (central) auditory processing of children with speech sound disorder. Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to their (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. The comparison among the tests evaluated between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective to indicate the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  13. The sound symbolism bootstrapping hypothesis for language acquisition and language evolution.

    Science.gov (United States)

    Imai, Mutsumi; Kita, Sotaro

    2014-09-19

    Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  14. [Effect of early scream sound stress on learning and memory in female rats].

    Science.gov (United States)

    Hu, Lili; Han, Bo; Zhao, Xiaoge; Mi, Lihua; Song, Qiang; Huang, Chen

    2015-12-01

    To investigate the effect of early scream sound stress on the ability of spatial learning and memory, the levels of norepinephrine (NE) and corticosterone (CORT) in serum, and the morphology of adrenal gland.
 Female Sprague-Dawley (SD) rats were treated daily with scream sound from postnatal day 1(P1) for 21 d. Morris water maze was used to measure the spatial learning and memory ability. The levels of serum NE and CORT were determined by radioimmunoassay. Adrenal gland of SD rats was collected and fixed in formalin, and then embedded with paraffin. The morphology of adrenal gland was observed by HE staining.
 Exposure to early scream sound decreased escape latency and increased the number of platform crossings in the Morris water maze test. These results suggest that early scream sound stress can enhance spatial learning and memory ability in adulthood, which is related to activation of the hypothalamo-pituitary-adrenal axis and sympathetic nervous system.

  15. Improving Robustness against Environmental Sounds for Directing Attention of Social Robots

    DEFF Research Database (Denmark)

    Thomsen, Nicolai Bæk; Tan, Zheng-Hua; Lindberg, Børge

    2015-01-01

    This paper presents a multi-modal system for finding out where to direct the attention of a social robot in a dialog scenario, which is robust against environmental sounds (door slamming, phone ringing, etc.) and short speech segments. The method is based on combining voice activity detection (VAD) and sound source localization (SSL), and furthermore applies post-processing to SSL to filter out short sounds. The system is tested against a baseline system in four different real-world experiments, where different sounds are used as interfering sounds. The results are promising and show a clear improvement....

  16. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the scientific background needed to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! Series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  17. A Study of Relationship between the Acoustic Sensitivity of Vestibular System and the Ability to Trigger Sound-Evoked Muscle Reflex of the Middle Ear in Adults with Normal Hearing

    Directory of Open Access Journals (Sweden)

    S.F. Emami

    2014-07-01

    Full Text Available Introduction & Objective: The vestibular system is sound sensitive, and this sensitivity is related to the saccule. The vestibular afferents project to the middle ear muscles (such as the stapedius). The goal of this research was to study the relationship between vestibular hearing and the sound-evoked muscle reflex of the middle ear at 500 Hz. Materials & Methods: This cross-sectional comparison study was done in the audiology department of the Sheikholreis Clinic (Hamadan, Iran). The study groups consisted of thirty healthy people and thirty patients with benign paroxysmal positional vertigo. Inclusion criteria were normal hearing on pure-tone audiometry, acoustic reflex, and speech discrimination scores. Based on the ipsilateral acoustic reflex test at 500 Hz, they were divided into normal and abnormal groups. They were then evaluated by cervical vestibular evoked myogenic potentials (cVEMPs) and finally classified into three groups: (N) normal ear; (CVUA) contralateral vertiginous ear with unaffected saccular sensitivity to sound; (IVA) ipsilateral vertiginous ear with affected saccular sensitivity to sound. Results: Thirty affected ears (IVA) with decreased vestibular excitability, as detected by abnormal cVEMPs, revealed abnormal findings of the acoustic reflex at 500 Hz, whereas both unaffected (CVUA) and normal ears (N) had normal results. Multiple comparisons of mean cVEMP values (p13, n23) and the acoustic reflex at 500 Hz among the three groups were significant. The correlation between the acoustic reflex at 500 Hz and p13 latencies was significant, and the n23 latencies also showed a significant correlation with the acoustic reflex at 500 Hz. Conclusion: The vestibular sensitivity to sound retains the ability to trigger the sound-evoked reflex of the middle ear at 500 Hz. (Sci J Hamadan Univ Med Sci 2014; 21(2): 99-104)

  18. Locating and classification of structure-borne sound occurrence using wavelet transformation

    International Nuclear Information System (INIS)

    Winterstein, Martin; Thurnreiter, Martina

    2011-01-01

    For the surveillance of nuclear facilities with respect to detached or loose parts within the pressure boundary, structure-borne sound detector systems are used. The impact of a loose part on the wall transfers energy to the wall, which is measured as a so-called singular sound event. The run-time differences of the sound signals allow a rough localization of the loose part. The authors performed a finite-element-based simulation of structure-borne sound measurements using real geometries. New knowledge on sound wave propagation, signal analysis and processing, neural networks, and hidden Markov models was considered. Using the wavelet transformation, it is possible to improve the localization of structure-borne sound events.
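The run-time differences mentioned above are typically estimated by cross-correlating the signals from two sensors, the peak lag giving the time difference of arrival (TDOA). A hedged numpy sketch of that step, with a synthetic burst standing in for a real loose-part impact:

```python
import numpy as np

def tdoa(sig_a, sig_b, sr):
    """Estimate the delay of sig_b relative to sig_a via cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)  # positive lag: sig_b arrives later
    return lag / sr

# Synthetic burst arriving 2 ms later at sensor B (200 samples at 100 kHz)
sr = 100_000
burst = np.hanning(64)
a = np.zeros(4096); a[1000:1064] = burst
b = np.zeros(4096); b[1200:1264] = burst
delay = tdoa(a, b, sr)  # 0.002 s
```

With delays from several sensor pairs and a known wave speed in the wall, the impact location can then be triangulated.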

  19. Sound analysis of a cup drum

    International Nuclear Information System (INIS)

    Kim, Kun ho

    2012-01-01

    The International Young Physicists’ Tournament (IYPT) is a worldwide tournament that evaluates a high-school student's ability to solve various physics conundrums that have not been fully resolved in the past. The research presented here is my solution to the cup drum problem. The physics behind a cup drum has never been explored or modelled. A cup drum is a musical instrument that can generate different frequencies and amplitudes depending on the location of a cup held upside-down over, on or under a water surface. The tapping sound of a cup drum can be divided into two components: standing waves and plate vibration. By individually researching the nature of these two sounds, I arrived at conclusions that could accurately predict the frequencies in most cases. When the drum is very close to the surface, qualitative explanations are given. In addition, I examined the trend of the tapping sound amplitude at various distances and qualitatively explained the experimental results. (paper)

  20. Objective function analysis for electric soundings (VES), transient electromagnetic soundings (TEM) and joint inversion VES/TEM

    Science.gov (United States)

    Bortolozo, Cassiano Antonio; Bokhonok, Oleg; Porsani, Jorge Luís; Monteiro dos Santos, Fernando Acácio; Diogo, Liliana Alcazar; Slob, Evert

    2017-11-01

    Ambiguities in geophysical inversion results are always present; how they appear is in most cases open to interpretation. It is interesting to investigate ambiguities with regard to the parameters of the models under study. The Residual Function Dispersion Map (RFDM) can be used to differentiate between global ambiguities and local minima in the objective function. We apply RFDM to Vertical Electrical Sounding (VES) and TEM sounding inversion results. Through topographic analysis of the objective function, we evaluate the advantages and limitations of electrical sounding data compared with TEM sounding data, and the benefits of joint inversion in comparison with the individual methods. The RFDM analysis proved to be a very interesting tool for understanding the joint VES/TEM inversion method. The applicability of RFDM to real data is also explored, demonstrating both how the objective function of real data behaves and how the approach performs in real cases. With the analysis of the results, it is possible to understand how joint inversion can reduce the ambiguity of the methods.

  1. Audibility of individual reflections in a complete sound field, III

    DEFF Research Database (Denmark)

    Bech, Søren

    1996-01-01

    This paper reports on the influence of individual reflections on the auditory localization of a loudspeaker in a small room. The sound field produced by a single loudspeaker positioned in a normal listening room has been simulated using an electroacoustic setup. The setup models the direct sound......-independent absorption coefficients of the room surfaces, and (2) a loudspeaker with directivity according to a standard two-way system and absorption coefficients according to real materials. The results have shown that subjects can distinguish reliably between timbre and localization, that the spectrum level above 2 k...

  2. Sound Exposure During Outdoor Music Festivals

    Science.gov (United States)

    Tronstad, Tron V.; Gelderblom, Femke B.

    2016-01-01

    Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing-loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals where only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure. PMID:27569410
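Dose-meter exposures of the kind reported above are conventionally summarized as an equivalent continuous level, Leq: the energy (not arithmetic) average of the measured levels. A small sketch of that averaging; the sample levels are made up:

```python
import math

def leq(levels_db):
    """Energy-average a list of sound levels (dB) into an equivalent level Leq."""
    mean_power = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_power)

# Made-up one-minute levels during a concert; the loud intervals dominate
print(round(leq([98.0, 101.0, 95.0, 103.0]), 1))  # about 100.2 dB
```

Guidelines then compare this Leq, over a stated duration, against a limit value.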

  3. Sound exposure during outdoor music festivals

    Directory of Open Access Journals (Sweden)

    Tron V Tronstad

    2016-01-01

    Full Text Available Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival’s duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing-loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals where only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization’s recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization’s recommendations. The results also show that front-of-house measurements reliably predict participant exposure.

  4. High frequency source localization in a shallow ocean sound channel using frequency difference matched field processing.

    Science.gov (United States)

    Worthmann, Brian M; Song, H C; Dowling, David R

    2015-12-01

    Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
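The frequency downshift at the core of frequency difference beamforming/MFP is built from the "autoproduct" of the received field at two in-band frequencies, P(f1)·P*(f2), which behaves approximately like a field at the difference frequency f1 − f2. A hedged sketch of forming that quantity from receiver spectra (the array size, sample rate, and tone frequencies are illustrative, not the KAM11 values):

```python
import numpy as np

def frequency_difference_product(signals, sr, f1, f2):
    """Form P(f1) * conj(P(f2)) per receiver; acts like a field at f1 - f2."""
    spectra = np.fft.rfft(signals, axis=-1)          # (n_receivers, n_bins)
    freqs = np.fft.rfftfreq(signals.shape[-1], 1 / sr)
    i1 = int(np.argmin(np.abs(freqs - f1)))
    i2 = int(np.argmin(np.abs(freqs - f2)))
    return spectra[:, i1] * np.conj(spectra[:, i2])  # one complex value per receiver

# 16-element array; in-band tones at 12 kHz and 11.5 kHz give a 500 Hz autoproduct
sr, n = 64_000, 4096
t = np.arange(n) / sr
rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, 16)
signals = np.array([np.sin(2*np.pi*12_000*t + p) + np.sin(2*np.pi*11_500*t + p)
                    for p in phases])
ap = frequency_difference_product(signals, sr, 12_000.0, 11_500.0)
```

The autoproduct vector is then fed to a conventional Bartlett MFP processor with replicas computed at the (much lower, mismatch-tolerant) difference frequency.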

  5. The perceptual basis of spatial sound perception

    NARCIS (Netherlands)

    Kohlrausch, A.G.

    2003-01-01

    Our ability to derive spatial impressions from a sound field is based on the facts that we have two sensors which are spatially separated by typically 18 cm and that the space in between these sensors is filled by acoustically nontransparant material. The first fact leads to a time difference at the
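The time difference the abstract refers to follows directly from the ~18 cm sensor separation: a far-field source at azimuth θ reaches one ear roughly d·sin θ / c later than the other. A quick sketch of that geometric interaural time difference (spherical-head refinements such as Woodworth's formula are deliberately ignored):

```python
import math

def itd_seconds(azimuth_deg, head_width_m=0.18, c=343.0):
    """Interaural time difference for a far-field source, simple two-point model."""
    return head_width_m * math.sin(math.radians(azimuth_deg)) / c

side = itd_seconds(90.0)   # source directly to one side: about 525 microseconds
front = itd_seconds(0.0)   # source straight ahead: zero delay
```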

  6. Decoding the neural signatures of emotions expressed through sound.

    Science.gov (United States)

    Sachs, Matthew E; Habibi, Assal; Damasio, Antonio; Kaplan, Jonas T

    2018-03-01

    Effective social functioning relies in part on the ability to identify emotions from auditory stimuli and respond appropriately. Previous studies have uncovered brain regions engaged by the affective information conveyed by sound. But some of the acoustical properties of sounds that express certain emotions vary remarkably with the instrument used to produce them, for example the human voice or a violin. Do these brain regions respond in the same way to different emotions regardless of the sound source? To address this question, we had participants (N = 38, 20 females) listen to brief audio excerpts produced by the violin, clarinet, and human voice, each conveying one of three target emotions-happiness, sadness, and fear-while brain activity was measured with fMRI. We used multivoxel pattern analysis to test whether emotion-specific neural responses to the voice could predict emotion-specific neural responses to musical instruments and vice versa. A whole-brain searchlight analysis revealed that patterns of activity within the primary and secondary auditory cortex, posterior insula, and parietal operculum were predictive of the affective content of sound both within and across instruments. Furthermore, classification accuracy within the anterior insula was correlated with behavioral measures of empathy. The findings suggest that these brain regions carry emotion-specific patterns that generalize across sounds with different acoustical properties. Also, individuals with greater empathic ability have more distinct neural patterns related to perceiving emotions. These results extend previous knowledge regarding how the human brain extracts emotional meaning from auditory stimuli and enables us to understand and connect with others effectively. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Applying cybernetic technology to diagnose human pulmonary sounds.

    Science.gov (United States)

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) are greater than 120 Hz and the human ear is not sensitive to low frequencies, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, and the PS signals were decomposed into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vectors of a neural network. We proposed a 2-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy over a haploid neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To expand traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds; various PS waves, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
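The feature pipeline described above (wavelet decomposition into subbands, then statistics per subband as the classifier's input vector) can be sketched with a plain Haar transform. The paper's actual wavelet family, decomposition depth, and 17-feature set are not specified here, so the choices below are illustrative:

```python
import numpy as np

def haar_step(x):
    """One Haar DWT level: returns (approximation, detail) coefficients."""
    x = x[: len(x) // 2 * 2]                 # drop an odd trailing sample
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def subband_features(signal, levels=4):
    """Mean, std, and energy of each detail subband plus the final approximation."""
    feats, a = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        feats += [d.mean(), d.std(), float(np.sum(d ** 2))]
    feats += [a.mean(), a.std(), float(np.sum(a ** 2))]
    return np.array(feats)

# (levels + 1) subbands x 3 statistics = 15 features for levels=4
x = np.sin(2 * np.pi * 5 * np.arange(1024) / 1024)
fv = subband_features(x, levels=4)
```

Because the Haar transform is orthonormal, the subband energies sum to the signal energy, which makes them well-behaved inputs for a downstream BP or LVQ classifier.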

  8. Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device.

    Science.gov (United States)

    Hamilton-Fletcher, Giles; Wright, Thomas D; Ward, Jamie

    Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have avoided colour and when they do encode colour, have assigned sounds to colour in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users who had their device either coded in line with or opposite to sound–colour correspondences. Improved colour memory and reduced colour-errors were made by users who had the correspondence-based mappings. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that also featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.
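A correspondence-based colour-to-sound code of the kind tested above can be sketched as mapping luminance to pitch, lighter meaning higher, which is one of the best-attested cross-modal correspondences. The mapping constants and function name below are invented for illustration, not the Creole's actual algorithm:

```python
def luminance_to_pitch_hz(r, g, b, lo_hz=200.0, hi_hz=2000.0):
    """Map an sRGB colour's relative luminance (0-1) to a pitch, lighter = higher."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 luma weights, r/g/b in 0..1
    return lo_hz + y * (hi_hz - lo_hz)

# Lighter colours map to higher pitches: white near 2000 Hz, black at 200 Hz
white = luminance_to_pitch_hz(1.0, 1.0, 1.0)
black = luminance_to_pitch_hz(0.0, 0.0, 0.0)
```

A reversed mapping (lighter = lower) would correspond to the "opposite to correspondences" condition of the study.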

  9. Speech abilities in preschool children with speech sound disorder with and without co-occurring language impairment.

    Science.gov (United States)

    Macrae, Toby; Tyler, Ann A

    2014-10-01

    The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different tests of articulation/phonology, percent consonants correct, and the number of omission, substitution, distortion, typical, and atypical error patterns used in the production of different wordlists that had similar levels of phonetic and structural complexity. In comparison with children with SSD only, children with SSD and LI used similar numbers but different types of errors, including more omission patterns ( p < .001, d = 1.55) and fewer distortion patterns ( p = .022, d = 1.03). There were no significant differences in substitution, typical, and atypical error pattern use. Frequent omission error pattern use may reflect a more compromised linguistic system characterized by absent phonological representations for target sounds (see Shriberg et al., 2005). Research is required to examine the diagnostic potential of early frequent omission error pattern use in predicting later diagnoses of co-occurring SSD and LI and/or reading problems.

  10. Effect of Sound Waves on Decarburization Rate of Fe-C Melt

    Science.gov (United States)

    Komarov, Sergey V.; Sano, Masamichi

    2018-02-01

    Sound waves have the ability to propagate through a gas phase and, thus, to supply the acoustic energy from a sound generator to materials being processed. This offers an attractive tool, for example, for controlling the rates of interfacial reactions in steelmaking processes. This study investigates the kinetics of decarburization in molten Fe-C alloys, the surface of which was exposed to sound waves and Ar-O2 gas blown onto the melt surface. The main emphasis is placed on clarifying effects of sound frequency, sound pressure, and gas flow rate. A series of water model experiments and numerical simulations are also performed to explain the results of high-temperature experiments and to elucidate the mechanism of sound wave application. This is explained by two phenomena that occur simultaneously: (1) turbulization of Ar-O2 gas flow by sound wave above the melt surface and (2) motion and agitation of the melt surface when exposed to sound wave. It is found that sound waves can both accelerate and inhibit the decarburization rate depending on the Ar-O2 gas flow rate and the presence of oxide film on the melt surface. The effect of sound waves is clearly observed only at higher sound pressures on resonance frequencies, which are defined by geometrical features of the experimental setup. The resonance phenomenon makes it difficult to separate the effect of sound frequency from that of sound pressure under the present experimental conditions.

  11. Effects of interaural level differences on the externalization of sound

    DEFF Research Database (Denmark)

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2012-01-01

    Distant sound sources in our environment are perceived as externalized and are thus properly localized in both direction and distance. This is due to the acoustic filtering by the head, torso, and external ears, which provides frequency-dependent shaping of binaural cues such as interaural level...... differences (ILDs) and interaural time differences (ITDs). In rooms, the sound reaching the two ears is further modified by reverberant energy, which leads to increased fluctuations in short-term ILDs and ITDs. In the present study, the effect of ILD fluctuations on the externalization of sound......, for sounds that contain frequencies above about 1 kHz the ILD fluctuations were found to be an essential cue for externalization....

  12. Assessment of the health effects of low-frequency sounds and infra-sounds from wind farms. ANSES Opinion. Collective expertise report

    International Nuclear Information System (INIS)

    Lepoutre, Philippe; Avan, Paul; Cheveigne, Alain de; Ecotiere, David; Evrard, Anne-Sophie; Hours, Martine; Lelong, Joel; Moati, Frederique; Michaud, David; Toppila, Esko; Beugnet, Laurent; Bounouh, Alexandre; Feltin, Nicolas; Campo, Pierre; Dore, Jean-Francois; Ducimetiere, Pierre; Douki, Thierry; Flahaut, Emmanuel; Gaffet, Eric; Lafaye, Murielle; Martinsons, Christophe; Mouneyrac, Catherine; Ndagijimana, Fabien; Soyez, Alain; Yardin, Catherine; Cadene, Anthony; Merckel, Olivier; Niaudet, Aurelie; Cadene, Anthony; Saddoki, Sophia; Debuire, Brigitte; Genet, Roger

    2017-03-01

    a health effect has not been documented. In this context, ANSES recommends: Concerning studies and research: - verifying whether or not there is a possible mechanism modulating the perception of audible sound at intensities of infra-sound similar to those measured from local residents; - studying the effects of the amplitude modulation of the acoustic signal on the noise-related disturbance felt; - studying the assumption that cochleo-vestibular effects may be responsible for pathophysiological effects; - undertaking a survey of residents living near wind farms enabling the identification of an objective signature of a physiological effect. Concerning information for local residents and the monitoring of noise levels: - enhancing information for local residents during the construction of wind farms and participation in public inquiries undertaken in rural areas; - systematically measuring the noise emissions of wind turbines before and after they are brought into service; - setting up, especially in the event of controversy, continuous noise measurement systems around wind farms (based on experience at airports, for example). Lastly, the Agency reiterates that the current regulations state that the distance between a wind turbine and the first home should be evaluated on a case-by-case basis, taking the conditions of wind farms into account. This distance, of at least 500 metres, may be increased based on the results of an impact assessment, in order to comply with the limit values for noise exposure. Current knowledge of the potential health effects of exposure to infra-sounds and low-frequency noise provides no justification for changing the current limit values or for extending the spectrum of noise currently taken into consideration.

  13. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  14. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  15. SOUND-SPEED INVERSION OF THE SUN USING A NONLOCAL STATISTICAL CONVECTION THEORY

    International Nuclear Information System (INIS)

    Zhang Chunguang; Deng Licai; Xiong Darun; Christensen-Dalsgaard, Jørgen

    2012-01-01

    Helioseismic inversions reveal a major discrepancy in sound speed between the Sun and the standard solar model just below the base of the solar convection zone. We demonstrate that this discrepancy is caused by the inherent shortcomings of the local mixing-length theory adopted in the standard solar model. Using a self-consistent nonlocal convection theory, we construct an envelope model of the Sun for sound-speed inversion. Our solar model has a very smooth transition from the convective envelope to the radiative interior, and the convective energy flux changes sign crossing the boundaries of the convection zone. It shows evident improvement over the standard solar model, with a significant reduction in the discrepancy in sound speed between the Sun and local convection models.

  16. March 1964 Prince William Sound, USA Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Prince William Sound magnitude 9.2 Mw earthquake on March 28, 1964 at 03:36 GMT (March 27 at 5:36 pm local time), was the largest U.S. earthquake ever recorded...

  17. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners with Bilateral and with Hearing-Preservation Cochlear Implants

    Science.gov (United States)

    Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.

    2016-01-01

    Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…

  18. Active control of sound transmission through partitions composed of discretely controlled modules

    Science.gov (United States)

    Leishman, Timothy W.

    This thesis provides a detailed theoretical and experimental investigation of active segmented partitions (ASPs) for the control of sound transmission. ASPs are physically segmented arrays of interconnected acoustically and structurally small modules that are discretely controlled using electronic controllers. Theoretical analyses of the thesis first address physical principles fundamental to ASP modeling and experimental measurement techniques. Next, they explore specific module configurations, primarily using equivalent circuits. Measured normal-incidence transmission losses and related properties of experimental ASPs are determined using plane wave tubes and the two-microphone transfer function technique. A scanning laser vibrometer is also used to evaluate distributed transmitting surface vibrations. ASPs have the inherent potential to provide excellent active sound transmission control (ASTC) through lightweight structures, using very practical control strategies. The thesis analyzes several unique ASP configurations and evaluates their abilities to produce high transmission losses via global minimization of normal transmitting surface vibrations. A novel dual diaphragm configuration is shown to employ this strategy particularly well. It uses an important combination of acoustical actuation and mechano-acoustical segmentation to produce exceptionally high transmission loss (e.g., 50 to 80 dB) over a broad frequency range, including lower audible frequencies. Such performance is shown to be comparable to that produced by much more massive partitions composed of thick layers of steel or concrete and sand. The configuration uses only simple localized error sensors and actuators, permitting effective use of independent single-channel controllers in a decentralized format. This work counteracts the commonly accepted notion that active vibration control of partitions is an ineffective means of controlling sound transmission.
With appropriate construction, actuation

  19. Broadband sound blocking in phononic crystals with rotationally symmetric inclusions.

    Science.gov (United States)

    Lee, Joong Seok; Yoo, Sungmin; Ahn, Young Kwan; Kim, Yoon Young

    2015-09-01

    This paper investigates the feasibility of broadband sound blocking with rotationally symmetric extensible inclusions introduced in phononic crystals. By varying the size of four equally shaped inclusions gradually, the phononic crystal experiences remarkable changes in its band-stop properties, such as shifting/widening of multiple Bragg bandgaps and evolution to resonance gaps. Necessary extensions of the inclusions to block sound effectively can be determined for given incident frequencies by evaluating power transmission characteristics. By arraying finite dissimilar unit cells, the resulting phononic crystal exhibits broadband sound blocking from combinational effects of multiple Bragg scattering and local resonances, even with a small number of cells.
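The multiple Bragg bandgaps behind this band-stop behavior can be illustrated with a standard one-dimensional transfer-matrix calculation. The sketch below is generic, not the paper's configuration: the normalized units, two fluid layers per unit cell, and 4:1 impedance contrast are all illustrative assumptions.

```python
import numpy as np

def layer_matrix(f, d, c, Z):
    """Normal-incidence pressure/velocity transfer matrix of one fluid layer."""
    k = 2 * np.pi * f / c
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

def transmission(f, cells=8, Z0=1.0):
    """Power transmission through `cells` two-layer unit cells placed
    between matched half-spaces (normalized units: c = 1, layer
    thickness 0.25, so each layer is a quarter wave at f = 1)."""
    M = np.eye(2, dtype=complex)
    for _ in range(cells):
        M = M @ layer_matrix(f, 0.25, 1.0, 1.0)  # layer A, impedance 1
        M = M @ layer_matrix(f, 0.25, 1.0, 4.0)  # layer B, impedance 4
    A, B, C, D = M.ravel()
    t = 2.0 / (A + B / Z0 + C * Z0 + D)
    return abs(t) ** 2

# Transmission collapses inside the Bragg gap centered at f = 1 but
# stays high at low frequency, outside the gap.
print(transmission(0.2), transmission(1.0))
```

Shifting or widening the gap by resizing inclusions, as the paper does, corresponds in this toy model to changing the layer thicknesses and impedance contrast.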

  20. On the relevance of source effects in geomagnetic pulsations for induction soundings

    Science.gov (United States)

    Neska, Anne; Tadeusz Reda, Jan; Leszek Neska, Mariusz; Petrovich Sumaruk, Yuri

    2018-03-01

    This study is an attempt to close a gap between recent research on geomagnetic pulsations and their usage as source signals in electromagnetic induction soundings (i.e., magnetotellurics, geomagnetic depth sounding, and magnetovariational sounding). The plane-wave assumption as a precondition for the proper performance of these methods is partly violated by the local nature of field line resonances which cause a considerable portion of pulsations at mid latitudes. It is demonstrated that and explained why in spite of this, the application of remote reference stations in quasi-global distances for the suppression of local correlated-noise effects in induction arrows is possible in the geomagnetic pulsation range. The important role of upstream waves and of the magnetic equatorial region for such applications is emphasized. Furthermore, the principal difference between application of reference stations for local transfer functions (which result in sounding curves and induction arrows) and for inter-station transfer functions is considered. The preconditions for the latter are much stricter than for the former. Hence a failure to estimate an inter-station transfer function to be interpreted in terms of electromagnetic induction, e.g., because of field line resonances, does not necessarily prohibit use of the station pair for a remote reference estimation of the impedance tensor.

  1. On the relevance of source effects in geomagnetic pulsations for induction soundings

    Directory of Open Access Journals (Sweden)

    A. Neska

    2018-03-01

    Full Text Available This study is an attempt to close a gap between recent research on geomagnetic pulsations and their usage as source signals in electromagnetic induction soundings (i.e., magnetotellurics, geomagnetic depth sounding, and magnetovariational sounding. The plane-wave assumption as a precondition for the proper performance of these methods is partly violated by the local nature of field line resonances which cause a considerable portion of pulsations at mid latitudes. It is demonstrated that and explained why in spite of this, the application of remote reference stations in quasi-global distances for the suppression of local correlated-noise effects in induction arrows is possible in the geomagnetic pulsation range. The important role of upstream waves and of the magnetic equatorial region for such applications is emphasized. Furthermore, the principal difference between application of reference stations for local transfer functions (which result in sounding curves and induction arrows and for inter-station transfer functions is considered. The preconditions for the latter are much stricter than for the former. Hence a failure to estimate an inter-station transfer function to be interpreted in terms of electromagnetic induction, e.g., because of field line resonances, does not necessarily prohibit use of the station pair for a remote reference estimation of the impedance tensor.
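The remote-reference scheme discussed in this record can be sketched with a scalar toy model. Real magnetotelluric processing estimates complex, frequency-dependent tensor transfer functions; the real-valued "impedance", white signals, and noise amplitudes below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
b_src = rng.standard_normal(n)              # natural source-field variation
z_true = 2.5                                # scalar stand-in for the impedance
noise = rng.standard_normal(n)              # local noise seen by BOTH channels

b_local = b_src + 0.5 * noise               # local magnetic channel
e_local = z_true * b_src + 0.5 * noise      # local electric channel
b_remote = b_src + 0.3 * rng.standard_normal(n)  # remote site: independent noise

# Ordinary least squares is biased because the noise is correlated
# between the electric and magnetic channels ...
z_ls = np.dot(e_local, b_local) / np.dot(b_local, b_local)
# ... whereas cross-products with the remote reference cancel it, since
# the remote channel is uncorrelated with the local noise.
z_rr = np.dot(e_local, b_remote) / np.dot(b_local, b_remote)
print(z_ls, z_rr)   # z_rr lands near 2.5; z_ls is pulled away by the noise
```

This is the sense in which quasi-globally distant reference stations suppress local correlated-noise effects in the impedance estimate.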

  2. Design and Calibration Tests of an Active Sound Intensity Probe

    Directory of Open Access Journals (Sweden)

    Thomas Kletschkowski

    2008-01-01

    Full Text Available The paper presents an active sound intensity probe that can be used for sound source localization in standing wave fields. The probe consists of a sound hard tube that is terminated by a loudspeaker and an integrated pair of microphones. The microphones are used to decompose the standing wave field inside the tube into its incident and reflected part. The latter is cancelled by an adaptive controller that calculates proper driving signals for the loudspeaker. If the open end of the actively controlled tube is placed close to a vibrating surface, the radiated sound intensity can be determined by measuring the cross spectral density between the two microphones. A one-dimensional free field can be realized effectively, as first experiments performed on a simplified test bed have shown. Further tests proved that a prototype of the novel sound intensity probe can be calibrated.
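The intensity measurement via the cross-spectral density between two microphones can be sketched for a generic p-p probe. The microphone spacing, air density, and sign convention below are illustrative assumptions, not the probe's actual parameters.

```python
import numpy as np
from scipy.signal import csd

def active_intensity(p1, p2, fs, spacing, rho=1.21):
    """Estimate active sound intensity from two closely spaced microphone
    signals via the cross-spectral (p-p) method. Sign convention here:
    positive intensity means energy flows from mic 1 towards mic 2."""
    f, G12 = csd(p1, p2, fs=fs, nperseg=4096)
    omega = 2 * np.pi * f
    omega[0] = np.inf                 # avoid division by zero at DC
    return f, -np.imag(G12) / (rho * omega * spacing)

# Synthetic plane wave travelling past the probe: mic 2 hears the
# 1 kHz tone slightly later than mic 1.
fs, f0, d, c = 48000, 1000.0, 0.012, 343.0
t = np.arange(fs) / fs
p1 = np.sin(2 * np.pi * f0 * t)
p2 = np.sin(2 * np.pi * f0 * (t - d / c))
f, I = active_intensity(p1, p2, fs, d)
k = int(np.argmin(np.abs(f - f0)))
print(I[k] > 0)   # positive: energy flows from mic 1 towards mic 2
```

Swapping the two inputs flips the sign of the estimate, which is how such a probe distinguishes incident from reflected energy.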

  3. Comparison of RASS temperature profiles with other tropospheric soundings

    International Nuclear Information System (INIS)

    Bonino, G.; Lombardini, P.P.; Trivero, P.

    1980-01-01

    The vertical temperature profile of the lower troposphere can be measured with a radio-acoustic sounding system (RASS). A comparison of the thermal profiles measured with the RASS and with traditional methods shows: (a) the ability of RASS to produce vertical thermal profiles over an altitude range of 170 to 1000 m, with temperature accuracy and height discrimination comparable to conventional soundings; (b) the advantages of remote sensing offered by the new sounder; and (c) the applicability of RASS both in assessing the evolution of thermodynamic conditions in the PBL and in sensing conditions conducive to high concentrations of air pollutants at ground level. (author)

  4. Reach on sound: a key to object permanence in visually impaired children.

    Science.gov (United States)

    Fazzi, Elisa; Signorini, Sabrina Giovanna; Bomba, Monica; Luparia, Antonella; Lanners, Josée; Balottin, Umberto

    2011-04-01

    The capacity to reach for an object presented through a sound cue indicates, in the blind child, the acquisition of object permanence and gives information about his/her cognitive development. To assess cognitive development in congenitally blind children with or without multiple disabilities. Cohort study. Thirty-seven congenitally blind subjects (17 with associated multiple disabilities, 20 mainly blind) were enrolled. We used Bigelow's protocol to evaluate "reach on sound" capacity over time (at 6, 12, 18, 24, and 36 months), and a battery of clinical, neurophysiological and cognitive instruments to assess clinical features. Tasks 1 to 5 were acquired by most of the mainly blind children by 12 months of age. Task 6 coincided with a drop in performance, and the acquisition of the subsequent tasks showed a less age-homogeneous pattern. In blind children with multiple disabilities, task acquisition rates were lower, with the curves dipping in relation to the more complex tasks. The mainly blind subjects managed to overcome Fraiberg's "conceptual problem"--i.e., they acquired the ability to attribute an external object with identity and substance even when it manifested its presence through sound only--and thus developed the ability to reach for an object presented through sound. Instead, most of the blind children with multiple disabilities presented poor performances on the "reach on sound" protocol and were unable, before 36 months of age, to develop the strategies needed to resolve Fraiberg's "conceptual problem". Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Reduction of sound transmission through fuselage walls by alternate resonance tuning (A.R.T.)

    Science.gov (United States)

    Bliss, Donald B.; Gottwald, James A.

    1989-01-01

    The ability of alternate resonance tuning (ART) to block sound transmission through light-weight flexible paneled walls by controlling the dynamics of the wall panels is considered. Analytical results for sound transmission through an idealized panel wall illustrate the effect of varying system parameters and show that one or more harmonics of the incident sound field can be cancelled by the present method. Experimental results demonstrate that very large transmission losses with reasonable bandwidths can be achieved by a simple ART panel barrier in a duct.

  6. Sound speeds, cracking and the stability of self-gravitating anisotropic compact objects

    International Nuclear Information System (INIS)

    Abreu, H; Hernandez, H; Nunez, L A

    2007-01-01

    Using the concept of cracking we explore the influence that density fluctuations and local anisotropy have on the stability of local and non-local anisotropic matter configurations in general relativity. This concept, conceived to describe the behavior of a fluid distribution just after its departure from equilibrium, provides an alternative approach to consider the stability of self-gravitating compact objects. We show that potentially unstable regions within a configuration can be identified as a function of the difference between the tangential and radial speeds of sound. In fact, it is found that these regions could occur when, at a particular point within the distribution, the tangential speed of sound is greater than the radial one.

  7. Distraction by novel and pitch-deviant sounds in children

    Directory of Open Access Journals (Sweden)

    Nicole Wetzel

    2016-12-01

    Full Text Available The control of attention is an important part of our executive functions and enables us to focus on relevant information and to ignore irrelevant information. The ability to shield against distraction by task-irrelevant sounds is suggested to mature during school age. The present study investigated the developmental time course of distraction in three groups of children aged 7–10 years. Two different types of distractor sounds that have been frequently used in auditory attention research – novel environmental and pitch-deviant sounds – were presented within an oddball paradigm while children performed a visual categorization task. Reaction time measurements revealed decreasing distractor-related impairment with age. Novel environmental sounds impaired performance in the categorization task more than pitch-deviant sounds. The youngest children showed a pronounced decline of novel-related distraction effects throughout the experimental session. Such a significant decline as a result of practice was not observed in the pitch-deviant condition, nor in older children. We observed no correlation between cross-modal distraction effects and performance in standardized tests of concentration and visual distraction. Results of the cross-modal distraction paradigm indicate that separate mechanisms underlying the processing of novel environmental and pitch-deviant sounds develop with different time courses and that these mechanisms develop considerably within a few years in middle childhood.

  8. Temporal Organization of Sound Information in Auditory Memory

    Directory of Open Access Journals (Sweden)

    Kun Song

    2017-06-01

    Full Text Available Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporal reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation.

  9. Temporal Organization of Sound Information in Auditory Memory.

    Science.gov (United States)

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporal reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation.
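The locally time-reversed manipulation at the heart of these experiments is simple to state in code. The sketch below assumes a mono waveform array; the sampling rate and white-noise input are placeholders:

```python
import numpy as np

def locally_reverse(x, fs, scale_s=0.2):
    """Reverse the waveform within consecutive chunks of `scale_s`
    seconds while keeping the chunk order intact. 200 ms is the scale
    at which the reported memory transfer was weakest."""
    n = max(1, int(round(scale_s * fs)))
    out = x.copy()
    for start in range(0, len(x), n):
        out[start:start + n] = out[start:start + n][::-1]
    return out

fs = 8000
noise = np.random.default_rng(0).standard_normal(fs)   # 1 s of white noise
flipped = locally_reverse(noise, fs, scale_s=0.2)
# Applying the manipulation twice at the same scale restores the original.
print(np.allclose(locally_reverse(flipped, fs, 0.2), noise))  # True
```

Sweeping `scale_s` from a few milliseconds up to the full stimulus duration reproduces the family of manipulations whose memory transfer was measured.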

  10. Sound stream segregation: a neuromorphic approach to solve the "cocktail party problem" in real-time.

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). 
This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and
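The temporal-coherence principle, that strongly positively correlated channel responses belong to the same stream, can be sketched with plain envelope correlations. The toy filterbank envelopes, anchor channel, and 0.5 threshold below are illustrative assumptions, not the FPGA pipeline:

```python
import numpy as np

def coherence_mask(envelopes, anchor, threshold=0.5):
    """Assign channels to the foreground stream when their envelope
    correlates strongly and positively with the attended anchor channel
    (a bare-bones sketch of grouping by temporal coherence)."""
    r = np.array([np.corrcoef(env, envelopes[anchor])[0, 1]
                  for env in envelopes])
    return r > threshold

# Two sources with incoherent amplitude modulations (4 Hz vs. 7 Hz),
# each driving half the channels of a toy 8-channel filterbank.
t = np.linspace(0.0, 1.0, 4000)
m1 = 1 + 0.8 * np.sin(2 * np.pi * 4 * t)
m2 = 1 + 0.8 * np.sin(2 * np.pi * 7 * t)
rng = np.random.default_rng(1)
env = np.vstack([m1] * 4 + [m2] * 4) + 0.01 * rng.standard_normal((8, t.size))
mask = coherence_mask(env, anchor=0)
print(mask)   # channels sharing the anchor's modulation are grouped together
```

In the actual system the mask is applied per time-frequency bin to reconstruct only the attended stream; here it simply labels whole channels.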

  11. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time domain solution of the KZK equation.

  12. PREFACE: Aerodynamic sound Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated to write the papers by the jet noise emitted by the newly commercialized jet-engined airplanes of that time. The technology of aerodynamic sound is bound up with environmental problems. Therefore the theory should always be applied to newly emerging public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University first organized it. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays, and is expected to reform the theoretical model of generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research. They apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise.
The paper by T Sueki et al also reports new technology for the

  13. Interactive Sound Propagation using Precomputation and Statistical Approximations

    Science.gov (United States)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance, and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at different ends of a space of interactive sound propagation techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.

  14. 76 FR 39292 - Special Local Regulations & Safety Zones; Marine Events in Captain of the Port Long Island Sound...

    Science.gov (United States)

    2011-07-06

    ... Port Long Island Sound Zone AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast... and fireworks displays within the Captain of the Port (COTP) Long Island Sound Zone. This action is... Island Sound. DATES: This rule is effective in the CFR on July 6, 2011 through 6 p.m. on October 2, 2011...

  15. Oyster larvae settle in response to habitat-associated underwater sounds.

    Science.gov (United States)

    Lillis, Ashlee; Eggleston, David B; Bohnenstiehl, DelWayne R

    2013-01-01

    Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica). Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a role in driving

  16. Oyster larvae settle in response to habitat-associated underwater sounds.

    Directory of Open Access Journals (Sweden)

    Ashlee Lillis

    Full Text Available Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica. Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. 
Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a

  17. Heart sound segmentation of pediatric auscultations using wavelet analysis.

    Science.gov (United States)

    Castro, Ana; Vinhoza, Tiago T V; Mattos, Sandra S; Coimbra, Miguel T

    2013-01-01

    Auscultation is widely applied in clinical practice; nonetheless, sound interpretation depends on clinician training and experience. Heart sound features such as spatial loudness, relative amplitude, murmurs, and localization of each component may be indicative of pathology. In this study we propose a segmentation algorithm to extract heart sound components (S1 and S2) based on their time and frequency characteristics. This algorithm takes advantage of knowledge of the heart cycle times (systolic and diastolic periods) and of the spectral characteristics of each component, through wavelet analysis. Data collected in a clinical environment and annotated by a clinician were used to assess the algorithm's performance. Heart sound components were correctly identified in 99.5% of the annotated events. S1 and S2 detection rates were 90.9% and 93.3%, respectively. The median difference between annotated and detected events was 33.9 ms.
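The envelope-then-event-picking idea behind this kind of segmentation can be illustrated with a minimal sketch. This is not the authors' wavelet algorithm: it substitutes a band-pass filter plus Hilbert envelope for the wavelet analysis, and all parameter values (pass band, peak threshold, refractory gap) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks, hilbert

def detect_heart_sound_peaks(x, fs, band=(25.0, 120.0), min_gap_s=0.15):
    """Locate candidate S1/S2 events as peaks of a band-limited envelope.

    Band-pass filter, Hilbert envelope, then peak picking with a
    refractory gap; `band` and `min_gap_s` are illustrative choices.
    Returns event times in seconds.
    """
    b, a = butter(4, band, btype="band", fs=fs)
    env = np.abs(hilbert(filtfilt(b, a, x)))
    peaks, _ = find_peaks(env, height=0.3 * env.max(),
                          distance=int(min_gap_s * fs))
    return peaks / fs

# Synthetic one-second recording with two 50 Hz bursts mimicking S1
# (around 0.2 s) and S2 (around 0.5 s).
fs = 2000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.zeros_like(t)
for onset in (0.2, 0.5):
    burst = (t >= onset) & (t < onset + 0.05)
    x[burst] = np.sin(2 * np.pi * 50 * t[burst]) * np.hanning(burst.sum())
events = detect_heart_sound_peaks(x, fs)
```

On this toy signal the two envelope peaks fall near the centres of the two bursts; a real recording would additionally need the systolic/diastolic timing constraints the paper relies on to label which peak is S1 and which is S2.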

  18. Video and Sound Production: Flip out! Game on!

    Science.gov (United States)

    Hunt, Marc W.

    2013-01-01

    The author started teaching TV and sound production in a career and technical education (CTE) setting six years ago. The first couple months of teaching provided a steep learning curve for him. He is highly experienced in his industry, but teaching the content presented a new set of obstacles. His students had a broad range of abilities,…

  19. Sound segregation via embedded repetition is robust to inattention.

    Science.gov (United States)

    Masutomi, Keiko; Barascud, Nicolas; Kashino, Makio; McDermott, Josh H; Chait, Maria

    2016-03-01

    The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process is dependent on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual task design in which stimuli to be segregated were presented along with stimuli for a "decoy" task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention. (c) 2016 APA, all rights reserved).

  20. Influence of computerized sounding out on spelling performance for children who do and do not rely on AAC.

    Science.gov (United States)

    McCarthy, Jillian H; Hogan, Tiffany P; Beukelman, David R; Schwarz, Ilsa E

    2015-05-01

    Spelling is an important skill for individuals who rely on augmentative and alternative communication (AAC). The purpose of this study was to investigate how computerized sounding out influenced spelling accuracy of pseudo-words. Computerized sounding out was defined as a word elongated, thus providing an opportunity for a child to hear all the sounds in the word at a slower rate. Seven children with cerebral palsy, four who use AAC and three who do not, participated in a single subject AB design. The results of the study indicated that the use of computerized sounding out increased the phonologic accuracy of the pseudo-words produced by participants. The study provides preliminary evidence for the use of computerized sounding out during spelling tasks for children with cerebral palsy who do and do not use AAC. Future directions and clinical implications are discussed. We investigated how computerized sounding out influenced spelling accuracy of pseudowords for children with complex communication needs who did and did not use augmentative and alternative communication (AAC). Results indicated that the use of computerized sounding out increased the phonologic accuracy of the pseudo-words by participants, suggesting that computerized sounding out might assist in more accurate spelling for children who use AAC. Future research is needed to determine how language and reading abilities influence the use of computerized sounding out with children who have a range of speech intelligibility abilities and do and do not use AAC.

  1. Evaluation of the uncertainty in the azimuth calculation for the detection and localization of atmospheric nuclear explosions

    International Nuclear Information System (INIS)

    Schuff, J.A.

    2006-01-01

    Low-frequency acoustic signals below about 1 Hz can travel for hundreds or thousands of kilometers through the Earth's atmosphere. If a source produces infrasonic energy, it can be detected by a remote sensor. Strong atmospheric explosions such as nuclear detonations contain low-frequency components that can travel long distances with measurable signal levels. This fact can be useful for the detection and localization of clandestine events. The international nuclear non-proliferation regime requires the ability to detect, localize, and discriminate nuclear events on a global scale. Monitoring systems such as the International Monitoring System (I.M.S.) rely on several sensor technologies to perform these functions. The current I.M.S. infrasound system design includes a network of low-frequency atmospheric acoustic sensor arrays, which contribute primarily to the detection and localization of atmospheric nuclear events. Differences have been observed between azimuth measurements and the true directions of the sources of infrasound waves in artificial and natural events such as explosive eruptions of strong volcanoes. Infrasound waves are reflected in stratospheric and thermospheric layers near 50 km and 120 km in height, respectively. The azimuth deviation is affected by meteorological disturbances in the troposphere and stratosphere. This paper describes new elements to obtain the uncertainty in the azimuth calculation of an arriving plane wave passing across a non-planar array of infrasound sensors. It also presents a 3D computation of infrasound propagation and estimation of the azimuth deviation using the zonal horizontal wind model and the M.S.I.S.E.-90 model of the upper atmosphere to obtain temperature, density and concentration of the principal components of the air for altitudes of up to 120 km. 
Deviations of up to 12 degrees in the azimuth were obtained, depending on the location of the source of infrasound, the point of measurement and
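The core geometry of the azimuth estimate, before any atmospheric correction, is a plane-wave slowness fit across the array. A minimal 2-D sketch (ignoring the paper's 3-D propagation modelling, wind models and non-planar array correction; the array geometry and all numbers are made up):

```python
import numpy as np

def plane_wave_azimuth(positions, delays):
    """Estimate the back-azimuth of a plane wave crossing a 2-D array.

    positions: (N, 2) sensor coordinates in metres (x east, y north).
    delays: (N,) arrival times in seconds.
    Fits the horizontal slowness vector s in tau = P @ s by least squares,
    then returns the direction the wave arrives FROM, in degrees
    clockwise from north.
    """
    P = positions - positions[0]
    tau = delays - delays[0]
    s, *_ = np.linalg.lstsq(P, tau, rcond=None)
    # The wave propagates along +s, so it arrives from the -s direction.
    return float(np.degrees(np.arctan2(-s[0], -s[1])) % 360.0)

# Square array with 1 km sides; plane wave from due east (back-azimuth
# 90 degrees) travelling westwards at 340 m/s. Illustrative numbers only.
positions = np.array([[0.0, 0.0], [1000.0, 0.0],
                      [0.0, 1000.0], [1000.0, 1000.0]])
slowness = np.array([-1.0 / 340.0, 0.0])  # s/m, pointing west
delays = positions @ slowness
azimuth = plane_wave_azimuth(positions, delays)
```

The deviations the paper quantifies arise because the real wavefront is bent by winds and temperature gradients, so the slowness vector measured at the ground no longer points straight back at the source.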

  2. Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging periodicity-tagged segregation of competing speech in rooms

    Directory of Open Access Journals (Sweden)

    Mark eSayles

    2015-01-01

    Full Text Available The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once), in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into auditory objects. Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors, on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double vowels' spectral energy into two streams (corresponding to the two vowels), on the basis of temporal discharge patterns, is impaired by reverberation, specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights to perceptual organization of complex acoustic scenes under realistically challenging

  3. Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance

    Science.gov (United States)

    Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina

    2013-01-01

    Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…

  4. Novel Application of Glass Fibers Recovered From Waste Printed Circuit Boards as Sound and Thermal Insulation Material

    Science.gov (United States)

    Sun, Zhixing; Shen, Zhigang; Ma, Shulin; Zhang, Xiaojing

    2013-10-01

    The aim of this study is to investigate the feasibility of using glass fibers, a recycled material from waste printed circuit boards (WPCB), as sound absorption and thermal insulation material. Glass fibers were obtained through a fluidized-bed recycling process. Acoustic properties of the recovered glass fibers (RGF) were measured and compared with some commercial sound absorbing materials, such as expanded perlite (EP), expanded vermiculite (EV), and commercial glass fiber. Results show that RGF have good sound absorption ability over the whole tested frequency range (100-6400 Hz). The average sound absorption coefficient of RGF is 0.86, superior to those of EP (0.81) and EV (0.73). Noise reduction coefficient analysis indicates that the absorption ability of RGF meets the requirement of the II rating for sound absorbing material according to the national standard. The thermal insulation results show that RGF has a fairly low thermal conductivity (0.046 W/m K), which is comparable to those of some insulation materials (i.e., EV, EP, and rock wool). In addition, an empirical dependence of thermal conductivity on material temperature was determined for RGF. All the results showed that the reuse of RGF as sound and thermal insulation material provides a promising way of recycling WPCB into highly beneficial products.
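The noise reduction coefficient used in this comparison is, by the usual convention (e.g., ASTM C423), the mean of the sound absorption coefficients at 250, 500, 1000 and 2000 Hz rounded to the nearest 0.05. A minimal illustration with hypothetical coefficients (not the paper's measured data):

```python
def noise_reduction_coefficient(alpha):
    """NRC: mean of the sound absorption coefficients at 250, 500, 1000
    and 2000 Hz, rounded to the nearest 0.05."""
    mean = sum(alpha[f] for f in (250, 500, 1000, 2000)) / 4.0
    return round(mean / 0.05) * 0.05

# Hypothetical absorption coefficients for a fibrous absorber
# (illustrative values only, NOT the RGF data from the paper).
alpha = {250: 0.55, 500: 0.80, 1000: 0.95, 2000: 0.98}
nrc = noise_reduction_coefficient(alpha)
```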

  5. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds.

    Science.gov (United States)

    Shinn-Cunningham, Barbara

    2017-10-17

    This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. The results from neuroscience and psychoacoustics are reviewed. In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. http://cred.pubs.asha.org/article.aspx?articleid=2601617.

  6. Usefulness of bowel sound auscultation: a prospective evaluation.

    Science.gov (United States)

    Felder, Seth; Margel, David; Murrell, Zuri; Fleshner, Phillip

    2014-01-01

    Although the auscultation of bowel sounds is considered an essential component of an adequate physical examination, its clinical value remains largely unstudied and subjective. The aim of this study was to determine whether an accurate diagnosis of normal controls, mechanical small bowel obstruction (SBO), or postoperative ileus (POI) is possible based on bowel sound characteristics. Bowel sound recordings were prospectively collected from patients with normal gastrointestinal motility, patients with SBO diagnosed by computed tomography and confirmed at surgery, and patients with POI diagnosed by clinical symptoms and a computed tomography scan without a transition point. Using an electronic stethoscope, bowel sounds of healthy volunteers (n = 177), patients with SBO (n = 19), and patients with POI (n = 15) were recorded. A total of 10 recordings randomly selected from each category, with 15 of the recordings duplicated, were replayed through speakers to surgical and internal medicine clinicians (n = 41) blinded to the clinical scenario, who were instructed to categorize each recording as normal, obstructed, ileus, or not sure. The sensitivity, positive predictive value, and intra-rater variability were determined based on the clinicians' ability to properly categorize the bowel sound recordings when blinded to additional clinical information. Secondary outcomes were the clinicians' perceived level of expertise in interpreting bowel sounds. The overall sensitivity for normal, SBO, and POI recordings was 32%, 22%, and 22%, respectively. The positive predictive value for normal, SBO, and POI recordings was 23%, 28%, and 44%, respectively. Intra-rater reliability on duplicated recordings was 59%, 52%, and 53% for normal, SBO, and POI, respectively. No statistically significant differences were found between the surgical and internal medicine clinicians in sensitivity, positive predictive value, or intra-rater variability. 
Overall, 44% of clinicians reported that they rarely listened
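The sensitivity and positive predictive value figures quoted above follow the standard confusion-matrix definitions; a minimal illustration of the arithmetic (the counts below are hypothetical, chosen only to show the calculation, not the study's raw data):

```python
def sensitivity_and_ppv(tp, fn, fp):
    """Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)."""
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical rater counts for one recording category (illustrative only):
# 32 correct identifications, 68 misses, 107 false positives.
sens, ppv = sensitivity_and_ppv(tp=32, fn=68, fp=107)
```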

  7. Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation
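The temporal-coherence principle described above (strongly positively correlated channels belong to the same stream) can be sketched in a few lines. This toy version correlates channel envelopes and thresholds the correlation with an attended channel; it stands in for, and is far simpler than, the FPGA pipeline in the paper, and the 0.5 threshold is an arbitrary assumption.

```python
import numpy as np

def coherence_mask(envelopes, attend_idx, threshold=0.5):
    """Assign each channel to the attended stream (1.0) or background (0.0)
    by correlating its envelope with the attended channel's envelope."""
    r = np.corrcoef(envelopes)[attend_idx]
    return (r > threshold).astype(float)

# Four channels driven by two independent modulators: channels 0-1 follow
# a 4 Hz envelope, channels 2-3 a 7 Hz envelope (synthetic toy data).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
mod_a = 1.0 + np.sin(2 * np.pi * 4 * t)
mod_b = 1.0 + np.sin(2 * np.pi * 7 * t)
env = np.vstack([mod_a, 0.8 * mod_a, mod_b, 0.6 * mod_b])
env = env + 0.01 * rng.standard_normal(env.shape)
mask = coherence_mask(env, attend_idx=0)  # attend to channel 0
```

Attending to channel 0 groups channels 0 and 1 into the foreground and leaves channels 2 and 3 as background; in the paper this mask would then gate the reconstruction of the target sound.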

  8. Sound stream segregation: a neuromorphic approach to solve the ‘cocktail party problem’ in real-time

    Directory of Open Access Journals (Sweden)

    Chetan Singh Thakur

    2015-09-01

    Full Text Available The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the ‘cocktail party effect’. It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77 and 55 dB for simple tone, complex tone and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for

  9. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology.

  10. Resonant modal group theory of membrane-type acoustical metamaterials for low-frequency sound attenuation

    Science.gov (United States)

    Ma, Fuyin; Wu, Jiu Hui; Huang, Meng

    2015-09-01

    In order to overcome the influence of structural resonance on continuous structures and obtain a lightweight thin-layer structure which can effectively isolate low-frequency noise, an elastic membrane structure was proposed. In the low-frequency range below 500 Hz, the sound transmission loss (STL) of this membrane-type structure is much higher than that of EVA (ethylene-vinyl acetate copolymer), the current sound insulation material used in vehicles, so it is possible to replace EVA with the membrane-type metamaterial structure in practical engineering. Based on the band structure, modal shapes, and sound transmission simulation, the sound insulation mechanism of the designed membrane-type acoustic metamaterial was analyzed from a new perspective and validated experimentally. It is suggested that in the frequency range above 200 Hz for this membrane-mass type structure, the sound insulation effect is principally due not to the low-level locally resonant mode of the mass block, but to the continuous vertical resonant modes of the localized membrane. Based on this physical property, a resonant modal group theory is proposed in this paper. In addition, the sound insulation mechanisms of the membrane-type structure and the thin plate structure were unified through the membrane/plate resonant theory.

  11. Measurement of sound velocity profiles in fluids for process monitoring

    International Nuclear Information System (INIS)

    Wolf, M; Kühnicke, E; Lenz, M; Bock, M

    2012-01-01

    In ultrasonic measurements, the time of flight to the object interface is often the only information that is analysed. Conventionally, it is only possible to determine distance or sound velocity if the other value is known. The current paper deals with a novel method to measure the sound propagation path length and the sound velocity in media with moving scattering particles simultaneously. Since the focal position also depends on sound velocity, it can be used as a second parameter. Via calibration curves it is possible to determine the focal position and sound velocity from the measured time of flight to the focus, which correlates with the maximum of the averaged echo signal amplitude. To move the focal position along the acoustic axis, an annular array is used. This allows locally resolved measurement of sound velocity without any prior knowledge of the acoustic medium and without a reference reflector. In previous publications the functional efficiency of this method was shown for media with constant velocities. In this work the accuracy of these measurements is improved, and first measurements and simulations are introduced for non-homogeneous media. To this end, an experimental set-up was created to generate a linear temperature gradient, which also causes a gradient of sound velocity.
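The calibration-curve step can be sketched as a simple monotone lookup from the measured time of flight to the focus to the local sound velocity. The curve below is synthetic (illustrative numbers only); in practice it would be measured for the specific annular array.

```python
import numpy as np

def velocity_from_focus_tof(tof_us, cal_tof_us, cal_velocity):
    """Interpolate local sound velocity from the measured time of flight
    to the focus, using a monotone calibration curve."""
    return float(np.interp(tof_us, cal_tof_us, cal_velocity))

# Synthetic calibration curve (illustrative numbers only): in a slower
# medium the focus forms later, so time of flight rises as velocity falls.
cal_tof = np.array([40.0, 45.0, 50.0, 55.0])        # microseconds to focus
cal_c = np.array([1600.0, 1500.0, 1400.0, 1300.0])  # metres per second
c_est = velocity_from_focus_tof(47.5, cal_tof, cal_c)
```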

  12. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus) and Humans with Transformations in Amplitude, Duration and Frequency

    Science.gov (United States)

    Branstetter, Brian K.; DeLong, Caroline M.; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly

    2016-01-01

    Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin’s (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin’s ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin’s acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition. PMID:26863519

  13. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus and Humans with Transformations in Amplitude, Duration and Frequency.

    Directory of Open Access Journals (Sweden)

    Brian K Branstetter

    Full Text Available Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin's (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin's ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin's acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition.

  14. A Comparative Study of the Effect of Subliminal Messages on Public Speaking Ability.

    Science.gov (United States)

    Schnell, James A.

    A study investigated the effectiveness of subliminal techniques (such as tape recorded programs) for improving public speaking ability. It was hypothesized that students who used subliminal tapes to improve public speaking ability would perform no differently from classmates who listened to identical-sounding placebo tape programs containing no…

  15. Musical expertise and the ability to imagine loudness.

    Directory of Open Access Journals (Sweden)

    Laura Bishop

    Full Text Available Most perceived parameters of sound (e.g. pitch, duration, timbre) can also be imagined in the absence of sound. These parameters are imagined more veridically by expert musicians than non-experts. Evidence for whether loudness is imagined, however, is conflicting. In music, the question of whether loudness is imagined is particularly relevant due to its role as a principal parameter of performance expression. This study addressed the hypothesis that the veridicality of imagined loudness improves with increasing musical expertise. Experts, novices and non-musicians imagined short passages of well-known classical music under two counterbalanced conditions: 1) while adjusting a slider to indicate imagined loudness of the music and 2) while tapping out the rhythm to indicate imagined timing. Subtests assessed music listening abilities and working memory span to determine whether these factors, also hypothesised to improve with increasing musical expertise, could account for imagery task performance. Similarity between each participant's imagined and listening loudness profiles and reference recording intensity profiles was assessed using time series analysis and dynamic time warping. The results suggest a widespread ability to imagine the loudness of familiar music. The veridicality of imagined loudness tended to be greatest for the expert musicians, supporting the predicted relationship between musical expertise and musical imagery ability.

  16. Musical expertise and the ability to imagine loudness.

    Science.gov (United States)

    Bishop, Laura; Bailes, Freya; Dean, Roger T

    2013-01-01

    Most perceived parameters of sound (e.g. pitch, duration, timbre) can also be imagined in the absence of sound. These parameters are imagined more veridically by expert musicians than non-experts. Evidence for whether loudness is imagined, however, is conflicting. In music, the question of whether loudness is imagined is particularly relevant due to its role as a principal parameter of performance expression. This study addressed the hypothesis that the veridicality of imagined loudness improves with increasing musical expertise. Experts, novices and non-musicians imagined short passages of well-known classical music under two counterbalanced conditions: 1) while adjusting a slider to indicate imagined loudness of the music and 2) while tapping out the rhythm to indicate imagined timing. Subtests assessed music listening abilities and working memory span to determine whether these factors, also hypothesised to improve with increasing musical expertise, could account for imagery task performance. Similarity between each participant's imagined and listening loudness profiles and reference recording intensity profiles was assessed using time series analysis and dynamic time warping. The results suggest a widespread ability to imagine the loudness of familiar music. The veridicality of imagined loudness tended to be greatest for the expert musicians, supporting the predicted relationship between musical expertise and musical imagery ability.
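Dynamic time warping, used in this study to compare imagined and listened loudness profiles, can be sketched with the textbook recursion (a generic implementation, not the authors' exact analysis):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D profiles
    (textbook O(n*m) recursion with absolute-difference cost)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# A time-shifted loudness ramp aligns perfectly under warping, while a
# reversed ramp does not (toy profiles, not the study's data).
ramp = [0.0, 1.0, 2.0, 3.0, 3.0, 3.0]
shifted = [0.0, 0.0, 1.0, 2.0, 3.0, 3.0]
d_shift = dtw_distance(ramp, shifted)
d_rev = dtw_distance(ramp, ramp[::-1])
```

The warping step is what makes the comparison tolerant of timing differences between a participant's imagined profile and the reference recording, so that only loudness-shape differences contribute to the distance.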

  17. Cues for localization in the horizontal plane

    DEFF Research Database (Denmark)

    Jeppesen, Jakob; Møller, Henrik

    2005-01-01

    Spatial localization of sound is often described as unconscious evaluation of cues given by the interaural time difference (ITD) and the spectral information of the sound that reaches the two ears. Our present knowledge suggests the hypothesis that the ITD roughly determines the cone of the perce...... independently in HRTFs used for binaural synthesis. The ITD seems to be dominant for localization in the horizontal plane even when the spectral information is severely degraded....

  18. Improving auscultatory proficiency using computer simulated heart sounds

    Directory of Open Access Journals (Sweden)

    Hanan Salah EL-Deen Mohamed EL-Halawany

    2016-09-01

    Full Text Available This study aimed to examine the effects of 'Heart Sounds', a web-based program, on improving fifth-year medical students' auscultation skills in a medical school in Egypt. The program was designed for medical students to master cardiac auscultation skills in addition to their usual clinical medical courses. Pre- and post-tests were performed to assess improvement in students' auscultation skills. Upon completing the training, students completed a questionnaire reflecting on the learning experience they had developed through the 'Heart Sounds' program. Results from the pre- and post-tests revealed a significant improvement in students' auscultation skills. In examining male and female students' pre- and post-test results, we found that both male and female students achieved a remarkable improvement in their auscultation skills. Students also stated clearly that the learning experience they had with the 'Heart Sounds' program was different from traditional ways of teaching. They stressed that the program had significantly improved their auscultation skills and enhanced their self-confidence in their ability to practice those skills. It is recommended that the 'Heart Sounds' learning experience be extended by assessing students' practical improvement in real-life situations.

  19. Diversity of fish sound types in the Pearl River Estuary, China

    Directory of Open Access Journals (Sweden)

    Zhi-Tao Wang

    2017-10-01

    Full Text Available Background Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Methods Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. Results We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse train structure. The pulses were characterized by an approximate 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were mutually exclusive, suggesting that they might be produced by different species. According to the literature, the two-section signal types 1 + 1 and 1 + N10 might belong to the big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger's croaker (J. belangerii). Discussion Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. Additionally, prey and predator
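
The median inter-pulse peak interval statistic reported above reduces to the median of successive pulse-peak time differences. A minimal sketch, with hypothetical peak times standing in for the output of a pulse detector run on a hydrophone recording:

```python
# Median inter-pulse peak interval (IPPI) from detected pulse peak times.
# The peak times are invented for illustration.
from statistics import median

def median_ippi_ms(peak_times_ms):
    """Median interval between successive pulse peaks, in milliseconds."""
    intervals = [b - a for a, b in zip(peak_times_ms, peak_times_ms[1:])]
    return median(intervals)

# A six-pulse call whose peaks are roughly 9 ms apart:
peaks = [0.0, 9.1, 18.0, 27.2, 36.1, 45.0]
print(round(median_ippi_ms(peaks), 1))  # 8.9
```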

  20. Contralateral routing of signals disrupts monaural level and spectral cues to sound localisation on the horizontal plane.

    Science.gov (United States)

    Pedley, Adam J; Kitterick, Pádraig T

    2017-09-01

    Contra-lateral routing of signals (CROS) devices re-route sound between the deaf and hearing ears of unilaterally-deaf individuals. This rerouting would be expected to disrupt access to monaural level cues that can support monaural localisation in the horizontal plane. However, such a detrimental effect has not been confirmed by clinical studies of CROS use. The present study aimed to exercise strict experimental control over the availability of monaural cues to localisation in the horizontal plane and the fitting of the CROS device to assess whether signal routing can impair the ability to locate sources of sound and, if so, whether CROS selectively disrupts monaural level or spectral cues to horizontal location, or both. Unilateral deafness and CROS device use were simulated in twelve normal hearing participants. Monaural recordings of broadband white noise presented from three spatial locations (-60°, 0°, and +60°) were made in the ear canal of a model listener using a probe microphone with and without a CROS device. The recordings were presented to participants via an insert earphone placed in their right ear. The recordings were processed to disrupt either monaural level or spectral cues to horizontal sound location by roving presentation level or the energy across adjacent frequency bands, respectively. Localisation ability was assessed using a three-alternative forced-choice spatial discrimination task. Participants localised above chance levels in all conditions. Spatial discrimination accuracy was poorer when participants only had access to monaural spectral cues compared to when monaural level cues were available. CROS use impaired localisation significantly regardless of whether level or spectral cues were available. For both cues, signal re-routing had a detrimental effect on the ability to localise sounds originating from the side of the deaf ear (-60°). CROS use also impaired the ability to use level cues to localise sounds originating from
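
The level-roving manipulation described above can be sketched directly: each presentation is scaled by a random gain so that overall level no longer cues source location, while the within-stimulus spectral structure is untouched. The ±10 dB rove range and the sample values are assumptions for illustration; the abstract does not state the range actually used.

```python
# Level roving: scale each presentation by a random overall gain.
import random

def rove_level(samples, rove_db=10.0, rng=random):
    """Scale a recording by a random gain drawn uniformly from +/- rove_db dB."""
    gain_db = rng.uniform(-rove_db, rove_db)
    gain = 10.0 ** (gain_db / 20.0)
    return [s * gain for s in samples]

random.seed(0)
# Two presentations of the same recording now differ in overall level,
# so absolute level cannot identify the source location.
a = rove_level([0.1, -0.2, 0.3])
b = rove_level([0.1, -0.2, 0.3])
```

Disrupting the spectral cue instead would rove the energy across adjacent frequency bands rather than the overall gain, as the study did for its second condition.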

  1. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

    The method presented in this thesis combines ultrasound techniques with magnetic resonance tomography (MRT). In absorbing media, an ultrasonic wave generates a static force in the direction of sound propagation. At sound intensities of a few W/cm² and sound frequencies in the lower MHz range, this force produces a tissue displacement in the micrometer range. The displacement depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence from Siemens Healthcare AG was modified so that it measures the tissue displacement (indirectly), encodes it as grey values, and presents it as a 2D image. By means of the grey values, the course of the sound beam in the tissue can be visualized, so that sound obstacles (changes in sound impedance) can additionally be detected. From the acquired MRT images, spatial changes in the tissue parameters of sound absorption and elasticity can be detected. The thesis presents measurements that demonstrate the feasibility and future prospects of this method, particularly for breast cancer diagnostics. [de]

  2. Gefinex 400S (Sampo) EM-Soundings at Olkiluoto 2006

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2006-08-01

    In the beginning of summer 2006, the Geological Survey of Finland carried out electromagnetic frequency soundings with Gefinex 400S equipment (also called Sampo) at ONKALO, situated in the Olkiluoto nuclear power plant area. The same sounding sites were first measured and marked in 2004 and re-measured in 2005. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electrical conductivity of the ground at ONKALO and the repository area. The measurements form two 1400 m long broadside profiles with a 200 m mutual distance and a 200 m station separation. The profiles were measured using 200, 500, and 800 m coil separations. The total number of soundings was 48, but at 8 stations the measurement did not succeed because of strong electromagnetic noise. The numerous power lines and cables in the area generate local 3-D effects on the sounding curves, but the repeatability of the results is good. The sites without strong 3-D effects are, however, the most suitable for monitoring purposes. Comparison of the 2004-2006 results shows small differences at some sounding sites. (orig.)

  3. How do auditory cortex neurons represent communication sounds?

    Science.gov (United States)

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High-frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For a more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of laminated structures and handy ...

  5. An acoustic metamaterial composed of multi-layer membrane-coated perforated plates for low-frequency sound insulation

    Science.gov (United States)

    Fan, Li; Chen, Zhe; Zhang, Shu-yi; Ding, Jin; Li, Xiao-juan; Zhang, Hui

    2015-04-01

    Insulating against low-frequency sound (below 500 Hz) remains challenging despite the progress that has been achieved in sound insulation and absorption. In this work, an acoustic metamaterial based on membrane-coated perforated plates is presented for achieving sound insulation in a low-frequency range, even covering the lower audio frequency limit, 20 Hz. Theoretical analysis and finite element simulations demonstrate that this metamaterial can effectively block acoustic waves over a wide low-frequency band regardless of incident angles. Two mechanisms, non-resonance and monopolar resonance, operate in the metamaterial, resulting in a more powerful sound insulation ability than that achieved using periodically arranged multi-layer solid plates.

  6. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  7. Seismic and Biological Sources of Ambient Ocean Sound

    Science.gov (United States)

    Freeman, Simon Eric

    Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional 'image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that more seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single-hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional 'map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed.

  8. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  10. The Effect of Blindness on Long-Term Episodic Memory for Odors and Sounds

    Directory of Open Access Journals (Sweden)

    Stina Cornell Kärnekull

    2018-06-01

    Full Text Available We recently showed that, compared with sighted individuals, early blind individuals have better episodic memory for environmental sounds, but not odors, after a short retention interval (∼8-9 min). Few studies have investigated potential effects of blindness on memory across long time frames, such as months or years. Consequently, it was unclear whether compensatory effects may vary as a function of retention interval. In this study, we followed up participants (N = 57 out of 60) approximately 1 year after the initial testing and retested episodic recognition for environmental sounds and odors, and identification ability. In contrast to our previous findings, the early blind participants (n = 14) performed at a level similar to the late blind (n = 13) and sighted (n = 30) participants for sound recognition. Moreover, the groups had similar recognition performance for odors and identification ability for odors and sounds. These findings suggest that episodic odor memory is unaffected by blindness after both short and long retention intervals. However, the effect of blindness on episodic memory for sounds may vary as a function of retention interval, such that early blind individuals have an advantage over sighted individuals across short but not long time frames. We speculate that the finding of a differential effect of blindness on auditory episodic memory across retention intervals may be related to different memory strategies at the initial and follow-up assessments. In conclusion, this study suggests that blindness does not influence auditory or olfactory episodic memory as assessed after a long retention interval.

  11. Gefinex 400S (Sampo) EM-Soundings at Olkiluoto 2007

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2007-09-01

    In the beginning of June 2007, the Geological Survey of Finland carried out electromagnetic frequency soundings with Gefinex 400S equipment (Sampo) at ONKALO, situated in the Olkiluoto nuclear power plant area. The same sounding sites were first measured and marked in 2004 and have been re-measured yearly since then. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electrical conductivity of the ground at ONKALO and the repository area. The measurements form two 1400 m long broadside profiles with a 200 m mutual distance and a 200 m station separation. The profiles were measured using 200, 500, and 800 m coil separations. The total number of sounding stations is 48. In 2007, the transmitter and/or receiver sites were changed at 8 sounding stations, and line L11.400 was replaced by line L11.500. Some of these changes helped, but 6 stations still could not be measured because of strong electromagnetic noise. The numerous power lines and cables in the area generate local 3-D effects on the sounding curves, but the repeatability of the results is good. The sites without strong 3-D effects are, however, the most suitable for monitoring purposes. Comparison of the 2004-2007 results shows small differences at some sounding sites. (orig.)

  12. Asymmetries in global-local processing ability in elderly people with the apolipoprotein e-epsilon4 allele.

    Science.gov (United States)

    Jacobson, Mark W; Delis, Dean C; Lansing, Amy; Houston, Wes; Olsen, Ryan; Wetter, Spencer; Bondi, Mark W; Salmon, David P

    2005-11-01

    Previous studies have identified cognitive asymmetries in elderly people at increased risk for Alzheimer's disease (AD) by comparing standardized neuropsychological tests of verbal and spatial abilities in both preclinical AD and apolipoprotein epsilon4+ elderly groups. This prospective study investigated cognitive asymmetries within a single test by comparing cognitively intact elderly (with and without the epsilon4 allele) on a learning and memory measure that uses global and local visuospatial stimuli. Both groups demonstrated comparable overall learning and recall, but the epsilon4+ group had a significantly larger discrepancy between their global and local learning scores and a greater proportion of individuals with more than a one-standard-deviation difference between their immediate recall of the global and local elements, relative to the epsilon4- group. These findings build on previous studies identifying subgroups of elderly people at greater risk for AD who often demonstrate increased cognitive asymmetries relative to groups without significant risk factors. Copyright (c) 2005 APA, all rights reserved.

  13. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound, we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" A new area of research was thus born from this collaboration, highlighting the value of such interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with existing techniques to assess whether there was any improvement to the detection capability of the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation more difficult via satellite, but there have also been examples of local monitoring networks and telemetry being destroyed early in the eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and in calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  14. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption denotes the transformation of sound energy into heat. It is employed, for instance, to design the acoustics of rooms. The noise emitted by machinery and plants must be reduced before it arrives at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components on the walls with well-defined absorption characteristics, adjusted to the corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise-intensive environments into the neighbourhood.

  15. Letter Names, Letter Sounds and Phonological Awareness: An Examination of Kindergarten Children across Letters and of Letters across Children

    Science.gov (United States)

    Evans, Mary Ann; Bell, Michelle; Shaw, Deborah; Moretti, Shelley; Page, Jodi

    2006-01-01

    In this study 149 kindergarten children were assessed for knowledge of letter names and letter sounds, phonological awareness, and cognitive abilities. Through this it examined child and letter characteristics influencing the acquisition of alphabetic knowledge in a naturalistic context, the relationship between letter-sound knowledge and…

  16. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or another medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  17. Local Control of Audio Environment: A Review of Methods and Applications

    Directory of Open Access Journals (Sweden)

    Jussi Kuutti

    2014-02-01

    Full Text Available The concept of a local audio environment is to have sound playback locally restricted such that, ideally, adjacent regions of an indoor or outdoor space could exhibit their own individual audio content without interfering with each other. This would enable people to listen to their content of choice without disturbing others next to them, yet, without any headphones to block conversation. In practice, perfect sound containment in free air cannot be attained, but a local audio environment can still be satisfactorily approximated using directional speakers. Directional speakers may be based on regular audible frequencies or they may employ modulated ultrasound. Planar, parabolic, and array form factors are commonly used. The directivity of a speaker improves as its surface area and sound frequency increases, making these the main design factors for directional audio systems. Even directional speakers radiate some sound outside the main beam, and sound can also reflect from objects. Therefore, directional speaker systems perform best when there is enough ambient noise to mask the leaking sound. Possible areas of application for local audio include information and advertisement audio feed in commercial facilities, guiding and narration in museums and exhibitions, office space personalization, control room messaging, rehabilitation environments, and entertainment audio systems.
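
The claim above, that directivity improves with radiating surface area and frequency, can be illustrated with the textbook far-field pattern of a uniform line array of point sources. The element count, spacing, and frequencies below are illustrative assumptions, not values taken from the review.

```python
# Far-field response of a uniform line array, relative to its on-axis level.
import math

def array_gain_db(n, spacing_m, freq_hz, angle_deg, c=343.0):
    """Off-axis level in dB re on-axis for an n-element line array
    of point sources (broadside = 0 degrees, speed of sound c in m/s)."""
    k = 2 * math.pi * freq_hz / c          # wavenumber
    psi = k * spacing_m * math.sin(math.radians(angle_deg))
    if abs(psi) < 1e-12:
        return 0.0                         # on-axis reference
    mag = abs(math.sin(n * psi / 2) / (n * math.sin(psi / 2)))
    return 20 * math.log10(max(mag, 1e-12))

# At 30 degrees off-axis, the level drops as the array is lengthened
# (4 -> 8 elements) and as the frequency is raised (1 kHz -> 2 kHz):
for n, f in [(4, 1000), (8, 1000), (8, 2000)]:
    print(n, f, round(array_gain_db(n, 0.05, f, 30.0), 1))
```

The same trend motivates the ultrasound-based directional speakers mentioned above: modulating audio onto a short-wavelength carrier makes even a small panel acoustically "large".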

  18. Nonlocal nonlinear coupling of kinetic sound waves

    Directory of Open Access Journals (Sweden)

    O. Lyubchyk

    2014-11-01

    Full Text Available We study three-wave resonant interactions among kinetic-scale oblique sound waves in the low-frequency range below the ion cyclotron frequency. The nonlinear eigenmode equation is derived in the framework of a two-fluid plasma model. Because of dispersive modifications at small wavelengths perpendicular to the background magnetic field, these waves become a decay-type mode. We found two decay channels, one into co-propagating product waves (forward decay), and another into counter-propagating product waves (reverse decay). All wavenumbers in the forward decay are similar and hence this decay is local in wavenumber space. On the contrary, the reverse decay generates waves with wavenumbers that are much larger than in the original pump waves and is therefore intrinsically nonlocal. In general, the reverse decay is significantly faster than the forward one, suggesting a nonlocal spectral transport induced by oblique sound waves. Even with low-amplitude sound waves the nonlinear interaction rate is larger than the collisionless dissipation rate. Possible applications regarding acoustic waves observed in the solar corona, solar wind, and topside ionosphere are briefly discussed.

  19. Assessment of sound quality perception in cochlear implant users during music listening.

    Science.gov (United States)

    Roy, Alexis T; Jiradejvong, Patpong; Carver, Courtney; Limb, Charles J

    2012-04-01

    Although cochlear implant (CI) users frequently report deterioration of sound quality when listening to music, few methods exist to quantify these subjective claims. 1) To design a novel research method for quantifying sound quality perception in CI users during music listening; 2) To validate this method by assessing one attribute of music perception, bass frequency perception, which is hypothesized to be relevant to overall musical sound quality perception. Limitations in bass frequency perception contribute to CI-mediated sound quality deteriorations. The proposed method will quantify this deterioration by measuring CI users' impaired ability to make sound quality discriminations among musical stimuli with variable amounts of bass frequency removal. A method commonly used in the audio industry (multiple stimulus with hidden reference and anchor [MUSHRA]) was adapted for CI users, referred to as CI-MUSHRA. CI users and normal hearing controls were presented with 7 sound quality versions of a musical segment: 5 high pass filter cutoff versions (200-, 400-, 600-, 800-, 1000-Hz) with decreasing amounts of bass information, an unaltered version ("hidden reference"), and a highly altered version (1,000-1,200 Hz band pass filter; "anchor"). Participants provided sound quality ratings between 0 (very poor) and 100 (excellent) for each version; ratings reflected differences in perceived sound quality among stimuli. CI users had greater difficulty making overall sound quality discriminations as a function of bass frequency loss than normal hearing controls, as demonstrated by a significantly weaker correlation between bass frequency content and sound quality ratings. In particular, CI users could not perceive sound quality difference among stimuli missing up to 400 Hz of bass frequency information. Bass frequency impairments contribute to sound quality deteriorations during music listening for CI users. 
CI-MUSHRA provided a systematic and quantitative assessment of this
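
The CI-MUSHRA analysis above boils down to correlating each listener's quality ratings with the bass content of the stimuli: a weaker correlation indicates poorer discrimination of bass-frequency loss. The ratings below are invented for illustration; the abstract does not report individual rating data.

```python
# Per-listener correlation between high-pass cutoff (a proxy for removed
# bass content) and sound-quality rating.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# High-pass cutoff in Hz (0 = unaltered hidden reference); higher cutoff
# means more bass removed. Hypothetical listeners:
cutoffs = [0, 200, 400, 600, 800, 1000]
nh_ratings = [95, 85, 70, 55, 40, 25]   # normal hearing: steady decline
ci_ratings = [80, 78, 79, 60, 50, 45]   # CI user: flat up to ~400 Hz

print(round(pearson_r(cutoffs, nh_ratings), 3))
print(round(pearson_r(cutoffs, ci_ratings), 3))
```

A strongly negative coefficient (as for the hypothetical normal-hearing listener) means ratings track bass content closely; the flatter CI profile yields a weaker correlation, mirroring the group difference the study reports.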

  20. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of the pumps are sampled from the detected acoustic signals, and the presence or absence of an abnormality is judged from the magnitude of these synchronized components. The synchronized-component sampling means can remove resonance sounds and other acoustic sounds that are not generated synchronously with the rotation, based on the knowledge that the acoustic components generated in the normal state are a kind of resonance sound and are not precisely synchronized with the rotation speed. Abnormal sounds of a rotating body, on the other hand, are often driven by forces that accompany the rotation, so they can be detected by extracting only the rotation-synchronized components. Since the normal acoustic components currently being generated are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing can be avoided and, as a result, the abnormal-sound detection sensitivity is improved. Furthermore, since the occurrence of an abnormal sound is discriminated from the actually detected sounds, other frequency components that are predicted but not actually generated are not removed, which further improves the detection sensitivity. (N.H.)
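
One standard way to realise rotation-synchronized sampling is time-synchronous averaging: averaging the signal revolution by revolution retains components locked to the rotation and cancels everything else. A minimal sketch with synthetic signals (the record does not specify the device's actual extraction method):

```python
# Time-synchronous averaging over many shaft revolutions.
import math

def synchronous_average(signal, samples_per_rev):
    """Average the signal revolution-by-revolution (one value per shaft angle)."""
    revs = len(signal) // samples_per_rev
    avg = [0.0] * samples_per_rev
    for r in range(revs):
        for k in range(samples_per_rev):
            avg[k] += signal[r * samples_per_rev + k]
    return [v / revs for v in avg]

# Synthetic test: a rotation-locked tone (one cycle per revolution) plus an
# unsynchronized tone. After averaging over 200 revolutions, the
# unsynchronized part is strongly attenuated.
N, revs = 64, 200
locked = [math.sin(2 * math.pi * k / N) for k in range(N)] * revs
noise = [0.5 * math.sin(2 * math.pi * 0.37 * i) for i in range(N * revs)]
sig = [a + b for a, b in zip(locked, noise)]

avg = synchronous_average(sig, N)
residual = max(abs(a - l) for a, l in zip(avg, locked[:N]))
print(residual)  # small compared with the 0.5 noise amplitude
```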

  1. Associations between autistic traits and emotion recognition ability in non-clinical young adults

    OpenAIRE

    Lindahl, Christina

    2013-01-01

    This study investigated the associations between emotion recognition ability and autistic traits in a sample of non-clinical young adults. Two hundred and forty-nine individuals took part in an emotion recognition test, which assessed recognition of 12 emotions portrayed by actors. Emotion portrayals were presented as short video clips, both with and without sound, and as sound only. Autistic traits were assessed using the Autism Spectrum Quotient (ASQ) questionnaire. Results showed that men ...

  2. Puget Sound area electric reliability plan. Draft environmental impact statement

    Energy Technology Data Exchange (ETDEWEB)

    1991-09-01

    The Puget Sound Area Electric Reliability Plan Draft Environmental Impact Statement (DEIS) identifies the alternatives for solving a power system problem in the Puget Sound area. This Plan is undertaken by Bonneville Power Administration (BPA), Puget Sound Power & Light, Seattle City Light, Snohomish Public Utility District No. 1 (PUD), and Tacoma Public Utilities. The Plan consists of potential actions in Puget Sound and other areas in the State of Washington. A specific need exists in the Puget Sound area for balance between east-west transmission capacity and the increasing demand to import power generated east of the Cascades. At certain times of the year, there is more demand for power than the electric system can supply in the Puget Sound area. This high demand, called peak demand, occurs during the winter months when unusually cold weather increases electricity use for heating. The existing power system can supply enough power if no emergencies occur. However, during emergencies, the system will not operate properly. As demand grows, the system becomes more strained. To meet demand, the rate of growth of demand must be reduced or the ability to serve the demand must be increased, or both. The plan to balance Puget Sound's power demand and supply has these purposes: The plan should define a set of actions that would accommodate ten years of load growth (1994–2003). Federal and State environmental quality requirements should be met. The plan should be consistent with the plans of the Northwest Power Planning Council. The plan should serve as a consensus guideline for coordinated utility action. The plan should be flexible to accommodate uncertainties and differing utility needs. The plan should balance environmental impacts and economic costs. The plan should provide electric system reliability consistent with customer expectations. 29 figs., 24 tabs.

  3. The challenge of localizing vehicle backup alarms: Effects of passive and electronic hearing protectors, ambient noise level, and backup alarm spectral content

    Directory of Open Access Journals (Sweden)

    Khaled A Alali

    2011-01-01

    Full Text Available A human factors experiment employed a hemi-anechoic sound field in which listeners were required to localize a vehicular backup alarm warning signal (both a standard and a frequency-augmented alarm) in 360-degree azimuth in pink noise of 60 dBA and 90 dBA. Measures of localization performance included: (1) percentage correct localization, (2) percentage of right-left localization errors, (3) percentage of front-rear localization errors, and (4) localization absolute deviation in degrees from the alarm's actual location. In summary, the data demonstrated that, with some exceptions, normal-hearing listeners' ability to localize the backup alarm in 360-degree azimuth did not improve when wearing augmented hearing protectors (including dichotic sound transmission earmuffs, flat-attenuation earplugs, and level-dependent earplugs) as compared to when wearing conventional passive earmuffs or earplugs of the foam or flanged types. Exceptions were that in the 90 dBA pink noise, the flat-attenuation earplug yielded significantly better accuracy than the polyurethane foam earplug and both the dichotic and the custom-made diotic electronic sound transmission earmuffs. However, the flat-attenuation earplug showed no benefit over the standard pre-molded earplug, the arc earplug, and the passive earmuff. Confusions of front-rear alarm directions were most significant in the 90 dBA noise condition, wherein two types of triple-flanged earplugs exhibited significantly fewer front-rear confusions than either of the electronic muffs. On all measures, the diotic sound transmission earmuff resulted in the poorest localization of any of the protectors due to the fact that its single-microphone design did not enable interaural cues to be heard. Localization was consistently more degraded in the 90 dBA pink noise as compared with the relatively quiet condition of the 60 dBA pink noise. A frequency-augmented backup alarm, which incorporated 400 Hz and 4000 Hz components

  4. Do top predators cue on sound production by mesopelagic prey?

    Science.gov (United States)

    Baumann-Pickering, S.; Checkley, D. M., Jr.; Demer, D. A.

    2016-02-01

    Deep-scattering layer (DSL) organisms, comprising a variety of mesopelagic fishes, and squids, siphonophores, crustaceans, and other invertebrates, are preferred prey for numerous large marine predators, e.g. cetaceans, seabirds, and fishes. Some of the DSL species migrate from depth during daylight to feed near the surface at night, transitioning during dusk and dawn. We investigated if any DSL organisms create sound, particularly during the crepuscular periods. Over several nights in summer 2015, underwater sound was recorded in the San Diego Trough using a high-frequency acoustic recording package (HARP, 10 Hz to 100 kHz), suspended from a drifting surface float. Acoustic backscatter from the DSL was monitored nearby using a calibrated multiple-frequency (38, 70, 120, and 200 kHz) split-beam echosounder (Simrad EK60) on a small boat. DSL organisms produced sound, between 300 and 1000 Hz, and the received levels were highest when the animals migrated past the recorder during ascent and descent. The DSL are globally present, so the observed acoustic phenomenon, if also ubiquitous, has wide-reaching implications. Sound travels farther than light or chemicals and thus can be sensed at greater distances by predators, prey, and mates. If sound is a characteristic feature of pelagic ecosystems, it likely plays a role in predator-prey relationships and overall ecosystem dynamics. Our new finding inspires numerous questions such as: Which, how, and why have DSL organisms evolved to create sound, for what do they use it and under what circumstances? Is sound production by DSL organisms truly ubiquitous, or does it depend on the local environment and species composition? How may sound production and perception be adapted to a changing environment? Do predators react to changes in sound? Can sound be used to quantify the composition of mixed-species assemblages, component densities and abundances, and hence be used in stock assessment or predictive modeling?

  5. The effects of intervening interference on working memory for sound location as a function of inter-comparison interval.

    Science.gov (United States)

    Ries, Dennis T; Hamilton, Traci R; Grossmann, Aurora J

    2010-09-01

    This study examined the effects of inter-comparison interval duration and intervening interference on auditory working memory (AWM) for auditory location. Interaural phase differences were used to produce localization cues for tonal stimuli and the difference limen for interaural phase difference (DL-IPD) specified as the equivalent angle of incidence between two sound sources was measured in five different conditions. These conditions consisted of three different inter-comparison intervals [300 ms (short), 5000 ms (medium), and 15,000 ms (long)], the medium and long of which were presented both in the presence and absence of intervening tones. The presence of intervening stimuli within the medium and long inter-comparison intervals produced a significant increase in the DL-IPD compared to the medium and long inter-comparison intervals condition without intervening tones. The result obtained in the condition with a short inter-comparison interval was roughly equivalent to that obtained for the medium inter-comparison interval without intervening tones. These results suggest that the ability to retain information about the location of a sound within AWM decays slowly; however, the presence of intervening sounds readily disrupts the retention process. Overall, the results suggest that the temporal decay of information within AWM regarding the location of a sound from a listener's environment is so gradual that it can be maintained in trace memory for tens of seconds in the absence of intervening acoustic signals. Conversely, the presence of intervening sounds within the retention interval may facilitate the use of context memory, even for shorter retention intervals, resulting in a less detailed, but relevant representation of the location that is resistant to further degradation. Copyright (c) 2010 Elsevier B.V. All rights reserved.
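The study above expresses interaural phase differences as an equivalent angle of incidence between two sound sources. A minimal sketch of that conversion, assuming the simple sine law ITD = (d/c)·sin(θ) with an illustrative ear separation of 0.18 m and sound speed of 343 m/s (the study's exact head model is not specified here):

```python
import math

def ipd_to_azimuth(ipd_rad, freq_hz, ear_distance_m=0.18, c=343.0):
    """Convert an interaural phase difference (radians) at a given frequency
    to the equivalent source azimuth (radians), via the corresponding ITD.
    Head size and sound speed are illustrative assumptions."""
    itd = ipd_rad / (2.0 * math.pi * freq_hz)  # phase difference -> time difference
    s = itd * c / ear_distance_m
    if abs(s) > 1.0:
        raise ValueError("IPD exceeds the physically possible ITD range")
    return math.asin(s)

# e.g. a just-noticeable IPD of 0.05 rad at 500 Hz maps to a small angle
angle_deg = math.degrees(ipd_to_azimuth(0.05, 500.0))
```

This is why a DL-IPD can be reported in degrees: each phase threshold corresponds, through the assumed head geometry, to a minimum audible angular separation.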

  6. In Search of the Golden Age Hip-Hop Sound (1986–1996)

    Directory of Open Access Journals (Sweden)

    Ben Duinker

    2017-09-01

    Full Text Available The notion of a musical repertoire's "sound" is frequently evoked in journalism and scholarship, but what parameters comprise such a sound? This question is addressed through a statistically-driven corpus analysis of hip-hop music released during the genre's Golden Age era. The first part of the paper presents a methodology for developing, transcribing, and analyzing a corpus of 100 hip-hop tracks released during the Golden Age. Eight categories of aurally salient musical and production parameters are analyzed: tempo, orchestration and texture, harmony, form, vocal and lyric profiles, global and local production effects, vocal doubling and backing, and loudness and compression. The second part of the paper organizes the analysis data into three trend categories: trends of change (parameters that change over time), trends of prevalence (parameters that remain generally constant across the corpus), and trends of similarity (parameters that are similar from song to song). These trends form a generalized model of the Golden Age hip-hop sound which considers both global (the whole corpus) and local (unique songs within the corpus) contexts. By operationalizing "sound" as the sum of musical and production parameters, aspects of popular music that are resistant to traditional music-analytical methods can be considered.

  7. Observations of the sound producing organs in achelate lobster larvae

    Directory of Open Access Journals (Sweden)

    John A. Fornshell

    2017-06-01

    Full Text Available The Achelata, lobsters lacking claws and having a phyllosoma larva, are divided into two families, the Palinuridae or spiny lobsters and the Scyllaridae or slipper lobsters. Within the Palinuridae, adults of two groups were identified by Parker (1884): the Stridentes, which are capable of producing sounds, and the Silentes, which are not known to produce sounds. The Stridentes employ a file-like structure on the dorsal surface of the cephalon and a plectrum consisting of a series of ridges on the proximal segment of the second antenna to produce their sounds. All species of Achelata hatch as an unpigmented thin phyllosoma larva. The phyllosoma larvae of the Stridentes have a presumptive file-like structure on the dorsal cephalon. A similar file-like structure is found on the cephalon of one species of Silentes, Palinurellus wienckki, and some but not all of the phyllosoma larvae of the Scyllaridae. No presumptive plectrum is found on the second antenna of any of the phyllosoma larvae. The presence of a presumptive file-like structure on phyllosoma larvae of Silentes and Scyllaridae suggests that the ability to produce sounds may have been lost secondarily in the Silentes and Scyllaridae.

  8. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...

  9. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  10. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  11. The role of auditory abilities in basic mechanisms of cognition in older adults

    Directory of Open Access Journals (Sweden)

    Massimo Grassi

    2013-10-01

    Full Text Available The aim of this study was to assess age-related differences between young and older adults in auditory abilities and to investigate the relationship between auditory abilities and basic mechanisms of cognition in older adults. Although there is a certain consensus that the participant’s sensitivity to the absolute intensity of sounds (such as that measured via pure tone audiometry explains his/her cognitive performance, there is not yet much evidence that the participant’s auditory ability (i.e., the whole supra-threshold processing of sounds explains his/her cognitive performance. Twenty-eight young adults (age < 35, 26 young-old adults (65 ≤ age ≤75 and 28 old-old adults (age > 75 were presented with a set of tasks estimating several auditory abilities (i.e., frequency discrimination, intensity discrimination, duration discrimination, timbre discrimination, gap detection, amplitude modulation detection, and the absolute threshold for a 1 kHz pure tone and the participant’s working memory, cognitive inhibition, and processing speed. Results showed an age-related decline in both auditory and cognitive performance. Moreover, regression analyses showed that a subset of the auditory abilities (i.e., the ability to discriminate frequency, duration, timbre, and the ability to detect amplitude modulation explained a significant part of the variance observed in processing speed in older adults. Overall, the present results highlight the relationship between auditory abilities and basic mechanisms of cognition.

  12. Sound-based assistive technology support to hearing, speaking and seeing

    CERN Document Server

    Ifukube, Tohru

    2017-01-01

    This book "Sound-based Assistive Technology" explains a technology to help speech-, hearing- and sight-impaired people. They might benefit in some way from an enhancement in their ability to recognize and produce speech or to detect sounds in their surroundings. Additionally, it is considered how sound-based assistive technology might be applied to the areas of speech recognition, speech synthesis, environmental recognition, virtual reality and robots. It is the primary focus of this book to provide an understanding of both the methodology and basic concepts of assistive technology rather than listing the variety of assistive devices developed in Japan or other countries. Although this book presents a number of different topics, they are sufficiently independent from one another that the reader may begin at any chapter without experiencing confusion. It should be acknowledged that much of the research quoted in this book was conducted in the author's laboratories both at Hokkaido University and the University...

  13. Environmentally sound management of hazardous waste and hazardous recyclable materials

    International Nuclear Information System (INIS)

    Smyth, T.

    2002-01-01

    Environmentally sound management or ESM has been defined under the Basel Convention as 'taking all practicable steps to ensure that hazardous wastes and other wastes are managed in a manner which will protect human health and the environment against the adverse effects which may result from such wastes'. An initiative is underway to develop and implement a Canadian Environmentally Sound Management (ESM) regime for both hazardous wastes and hazardous recyclable materials. This ESM regime aims to assure equivalent minimum environmental protection across Canada while respecting regional differences. Cooperation and coordination between the federal government, provinces and territories is essential to the development and implementation of ESM systems since waste management is a shared jurisdiction in Canada. Federally, CEPA 1999 provides an opportunity to improve Environment Canada's ability to ensure that all exports and imports are managed in an environmentally sound manner. CEPA 1999 enabled Environment Canada to establish criteria for environmentally sound management (ESM) that can be applied by importers and exporters in seeking to ensure that wastes and recyclable materials they import or export will be treated in an environmentally sound manner. The ESM regime would include the development of ESM principles, criteria and guidelines relevant to Canada and a procedure for evaluating ESM. It would be developed in full consultation with stakeholders. The timeline for the development and implementation of the ESM regime is anticipated by about 2006. (author)

  14. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Full Text Available Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  15. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

    This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situated-ness of sound. Existing discourses on “spatial sound” privile...

  16. Topological acoustic polaritons: robust sound manipulation at the subwavelength scale

    International Nuclear Information System (INIS)

    Yves, Simon; Fleury, Romain; Lemoult, Fabrice; Fink, Mathias; Lerosey, Geoffroy

    2017-01-01

    Topological insulators, a hallmark of condensed matter physics, have recently reached the classical realm of acoustic waves. A remarkable property of time-reversal invariant topological insulators is the presence of unidirectional spin-polarized propagation along their edges, a property that could lead to a wealth of new opportunities in the ability to guide and manipulate sound. Here, we demonstrate and study the possibility to induce topologically non-trivial acoustic states at the deep subwavelength scale, in a structured two-dimensional metamaterial composed of Helmholtz resonators. Radically different from previous designs based on non-resonant sonic crystals, our proposal enables robust sound manipulation on a surface along predefined, subwavelength pathways of arbitrary shapes. (paper)

  17. Topological acoustic polaritons: robust sound manipulation at the subwavelength scale

    Science.gov (United States)

    Yves, Simon; Fleury, Romain; Lemoult, Fabrice; Fink, Mathias; Lerosey, Geoffroy

    2017-07-01

    Topological insulators, a hallmark of condensed matter physics, have recently reached the classical realm of acoustic waves. A remarkable property of time-reversal invariant topological insulators is the presence of unidirectional spin-polarized propagation along their edges, a property that could lead to a wealth of new opportunities in the ability to guide and manipulate sound. Here, we demonstrate and study the possibility to induce topologically non-trivial acoustic states at the deep subwavelength scale, in a structured two-dimensional metamaterial composed of Helmholtz resonators. Radically different from previous designs based on non-resonant sonic crystals, our proposal enables robust sound manipulation on a surface along predefined, subwavelength pathways of arbitrary shapes.

  18. Optimal Prediction of Moving Sound Source Direction in the Owl.

    Directory of Open Access Journals (Sweden)

    Weston Cox

    2015-07-01

    Full Text Available Capturing nature's statistical structure in behavioral responses is at the core of the ability to function adaptively in the environment. Bayesian statistical inference describes how sensory and prior information can be combined optimally to guide behavior. An outstanding open question of how neural coding supports Bayesian inference includes how sensory cues are optimally integrated over time. Here we address what neural response properties allow a neural system to perform Bayesian prediction, i.e., predicting where a source will be in the near future given sensory information and prior assumptions. The work here shows that the population vector decoder will perform Bayesian prediction when the receptive fields of the neurons encode the target dynamics with shifting receptive fields. We test the model using the system that underlies sound localization in barn owls. Neurons in the owl's midbrain show shifting receptive fields for moving sources that are consistent with the predictions of the model. We predict that neural populations can be specialized to represent the statistics of dynamic stimuli to allow for a vector read-out of Bayes-optimal predictions.
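The population vector read-out mentioned above can be sketched as a firing-rate-weighted vector sum of the neurons' preferred directions. This minimal illustration uses an invented bank of direction-tuned neurons and omits the shifting receptive fields that, per the study, make the read-out Bayes-predictive for moving sources:

```python
import math

def population_vector(preferred_dirs_deg, rates):
    """Vector sum of preferred directions weighted by firing rates;
    returns the decoded direction in degrees."""
    x = sum(r * math.cos(math.radians(d)) for d, r in zip(preferred_dirs_deg, rates))
    y = sum(r * math.sin(math.radians(d)) for d, r in zip(preferred_dirs_deg, rates))
    return math.degrees(math.atan2(y, x))

# hypothetical neurons tuned every 30 degrees, with Gaussian-like
# population activity centered on a source at 60 degrees azimuth
dirs = list(range(0, 360, 30))
rates = [math.exp(-((min(abs(d - 60), 360 - abs(d - 60)) / 40.0) ** 2))
         for d in dirs]
decoded = population_vector(dirs, rates)
```

In the study's framework, making each neuron's receptive field shift with the target dynamics biases this same vector sum toward the source's predicted future position rather than its current one.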

  19. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  20. Oral and Hand Movement Speeds Are Associated with Expressive Language Ability in Children with Speech Sound Disorder

    Science.gov (United States)

    Peter, Beate

    2012-01-01

    This study tested the hypothesis that children with speech sound disorder have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation), in 11 children with moderate to severe SSD…

  1. Boundary stabilization of memory-type thermoelasticity with second sound

    Science.gov (United States)

    Mustafa, Muhammad I.

    2012-08-01

    In this paper, we consider an n-dimensional thermoelastic system of second sound with a viscoelastic damping localized on a part of the boundary. We establish an explicit and general decay rate result that allows a wider class of relaxation functions and generalizes previous results existing in the literature.

  2. Slow-wave metamaterial open panels for efficient reduction of low-frequency sound transmission

    Science.gov (United States)

    Yang, Jieun; Lee, Joong Seok; Lee, Hyeong Rae; Kang, Yeon June; Kim, Yoon Young

    2018-02-01

    Sound transmission reduction is typically governed by the mass law, requiring thicker panels to handle lower frequencies. When open holes must be inserted in panels for heat transfer, ventilation, or other purposes, the efficient reduction of sound transmission through holey panels becomes difficult, especially in the low-frequency ranges. Here, we propose slow-wave metamaterial open panels that can dramatically lower the working frequencies of sound transmission loss. Global resonances originating from slow waves realized by multiply inserted, elaborately designed subwavelength rigid partitions between two thin holey plates contribute to sound transmission reductions at lower frequencies. Owing to the dispersive characteristics of the present metamaterial panels, local resonances that trap sound in the partitions also occur at higher frequencies, exhibiting negative effective bulk moduli and zero effective velocities. As a result, low-frequency broadened sound transmission reduction is realized efficiently in the present metamaterial panels. The theoretical model of the proposed metamaterial open panels is derived using an effective medium approach and verified by numerical and experimental investigations.
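The mass-law behavior the abstract starts from can be made concrete with the common normal-incidence engineering approximation TL ≈ 20·log10(m·f) − 47 dB, where m is the panel's surface density in kg/m² and f the frequency in Hz; the panel mass here is illustrative, not from the paper:

```python
import math

def mass_law_tl(surface_density_kg_m2, freq_hz):
    """Normal-incidence mass-law estimate of sound transmission loss in dB
    (common engineering approximation TL ~= 20*log10(m*f) - 47)."""
    return 20.0 * math.log10(surface_density_kg_m2 * freq_hz) - 47.0

# halving the frequency costs ~6 dB of transmission loss, which is why
# low frequencies demand much heavier conventional panels
tl_500 = mass_law_tl(10.0, 500.0)  # a 10 kg/m^2 panel at 500 Hz
tl_250 = mass_law_tl(10.0, 250.0)
```

The 6 dB-per-octave penalty is the constraint the proposed slow-wave metamaterial panels are designed to beat without simply adding mass.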

  3. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the every-day understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  4. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  5. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    properties can be modified by sound absorption, refraction, and interference from multi paths caused by reflections.The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms to influence...... communication sounds for airborne acoustics and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals....

  6. Spatial hearing ability of the pigmented Guinea pig (Cavia porcellus): Minimum audible angle and spatial release from masking in azimuth.

    Science.gov (United States)

    Greene, Nathaniel T; Anbuhl, Kelsey L; Ferber, Alexander T; DeGuzman, Marisa; Allen, Paul D; Tollin, Daniel J

    2018-08-01

    time and high-frequency level difference sound localization cues, and 3) utilize spatial release from masking to discriminate sound sources. This report confirms the guinea pig as a suitable spatial hearing model and reinforces prior estimates of guinea pig hearing ability from acoustical and physiological measurements. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related...... to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy...... of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences....

  8. Effects of Interaural Level and Time Differences on the Externalization of Sound

    DEFF Research Database (Denmark)

    Dau, Torsten; Catic, Jasmina; Santurette, Sébastien

    Distant sound sources in our environment are perceived as externalized and are thus properly localized in both direction and distance. This is due to the acoustic filtering by the head, torso, and external ears, which provides frequency dependent shaping of binaural cues, such as interaural level...... differences (ILDs) and interaural time differences (ITDs). Further, the binaural cues provided by reverberation in an enclosed space may also contribute to externalization. While these spatial cues are available in their natural form when listening to real-world sound sources, hearing-aid signal processing...... is consistent with the physical analysis that showed that a decreased distance to the sound source also reduced the fluctuations in ILDs....
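One of the binaural cues discussed above, the ITD, can be estimated from a pair of ear signals as the lag that maximizes their cross-correlation. This is a minimal sketch with a synthetic noise source and an invented 0.5 ms delay, not the processing used in the study:

```python
import numpy as np

def estimate_itd(left, right, sr):
    """Estimate the interaural time difference (seconds) as the lag that
    maximizes the cross-correlation of the two ear signals. A positive
    value means the left-ear signal lags (source toward the right)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / sr

sr = 48000
rng = np.random.default_rng(1)
src = rng.normal(size=4800)
delay = 24  # samples = 0.5 ms: a source well off the median plane
left = np.concatenate([np.zeros(delay), src])   # delayed at the left ear
right = np.concatenate([src, np.zeros(delay)])
itd = estimate_itd(left, right, sr)
```

np.correlate is O(N²); for long recordings an FFT-based cross-correlation would normally be substituted.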

  9. Contribution of self-motion perception to acoustic target localization.

    Science.gov (United States)

    Pettorossi, V E; Brosch, M; Panichi, R; Botti, F; Grassi, S; Troiani, D

    2005-05-01

    The findings of this study suggest that acoustic spatial perception during head movement is achieved by the vestibular system, which is responsible for the correct dynamic of acoustic target pursuit. The ability to localize sounds in space during whole-body rotation relies on the auditory localization system, which recognizes the position of sound in a head-related frame, and on the sensory systems, namely the vestibular system, which perceive head and body movement. The aim of this study was to analyse the contribution of head motion cues to the spatial representation of acoustic targets in humans. Healthy subjects standing on a rotating platform in the dark were asked to pursue, with a laser pointer, an acoustic target that was either rotated horizontally while the body was kept stationary or held stationary while the whole body was rotated. The contribution of head motion to the spatial acoustic representation could be inferred by comparing the gains and phases of the pursuit in the two experimental conditions when the frequency was varied. During acoustic target rotation there was a reduction in the gain and an increase in the phase lag, while during whole-body rotations the gain tended to increase and the phase remained constant. The different contributions of the vestibular and acoustic systems were confirmed by analysing the acoustic pursuit during asymmetric body rotation. In this particular condition, in which self-motion perception gradually diminished, an increasing delay in target pursuit was observed.

  10. Puget Sound Area Electric Reliability Plan : Draft Environmental Impact Statement.

    Energy Technology Data Exchange (ETDEWEB)

    United States. Bonneville Power Administration.

    1991-09-01

    The Puget Sound Area Electric Reliability Plan Draft Environmental Impact Statement (DEIS) identifies the alternatives for solving a power system problem in the Puget Sound area. This Plan is undertaken by Bonneville Power Administration (BPA), Puget Sound Power & Light, Seattle City Light, Snohomish Public Utility District No. 1 (PUD), and Tacoma Public Utilities. The Plan consists of potential actions in Puget Sound and other areas in the State of Washington. A specific need exists in the Puget Sound area for balance between east-west transmission capacity and the increasing demand to import power generated east of the Cascades. At certain times of the year, there is more demand for power than the electric system can supply in the Puget Sound area. This high demand, called peak demand, occurs during the winter months when unusually cold weather increases electricity use for heating. The existing power system can supply enough power if no emergencies occur. However, during emergencies, the system will not operate properly. As demand grows, the system becomes more strained. To meet demand, the rate of growth of demand must be reduced or the ability to serve the demand must be increased, or both. The plan to balance Puget Sound's power demand and supply has these purposes: The plan should define a set of actions that would accommodate ten years of load growth (1994-2003). Federal and State environmental quality requirements should be met. The plan should be consistent with the plans of the Northwest Power Planning Council. The plan should serve as a consensus guideline for coordinated utility action. The plan should be flexible to accommodate uncertainties and differing utility needs. The plan should balance environmental impacts and economic costs. The plan should provide electric system reliability consistent with customer expectations. 29 figs., 24 tabs.

  11. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

    Previous works in the literature about one-tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings, together with microphone positions and source directivity cues, allows information about source position and bearing to be obtained. Moreover, sound sources have been included into sensor systems together … Orientation estimates, one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor …

  12. Comparison of sound propagation and perception of three types of backup alarms with regards to worker safety

    Directory of Open Access Journals (Sweden)

    Véronique Vaillancourt

    2013-01-01

    A technology of backup alarms based on the use of a broadband signal has recently gained popularity in many countries. In this study, the performance of this broadband technology is compared to that of a conventional tonal alarm and a multi-tone alarm from a worker-safety standpoint. Field measurements of sound pressure level patterns behind heavy vehicles were performed in real work environments, and psychoacoustic measurements (sound detection thresholds, equal loudness, perceived urgency and sound localization) were carried out in the laboratory with human subjects. Compared with the conventional tonal alarm, the broadband alarm generates a much more uniform sound field behind vehicles, is easier to localize in space and is judged slightly louder at representative alarm levels. Slight advantages were found with the tonal alarm for sound detection and for perceived urgency at low levels, but these benefits observed in laboratory conditions would not overcome the detrimental effects associated with the large and abrupt variations in sound pressure levels (up to 15-20 dB within short distances) observed in the field behind vehicles for this alarm, which are significantly higher than those obtained with the broadband alarm. Performance with the multi-tone alarm generally fell between that of the tonal and broadband alarms on most measures.

  13. Effects of task-switching on neural representations of ambiguous sound input.

    Science.gov (United States)

    Sussman, Elyse S; Bregman, Albert S; Lee, Wei-Wei

    2014-11-01

    The ability to perceive discrete sound streams in the presence of competing sound sources relies on multiple mechanisms that organize the mixture of the auditory input entering the ears. Many studies have focused on mechanisms that contribute to integrating sounds that belong together into one perceptual stream (integration) and segregating those that come from different sound sources (segregation). However, little is known about mechanisms that allow us to perceive individual sound sources within a dynamically changing auditory scene, when the input may be ambiguous, and heard as either integrated or segregated. This study tested whether focusing on one of two possible sound organizations suppressed representation of the alternative organization. We presented listeners with ambiguous input and cued them to switch between tasks that used either the integrated or the segregated percept. Electrophysiological measures indicated which organization was currently maintained in memory. If mutual exclusivity at the neural level was the rule, attention to one of two possible organizations would preclude neural representation of the other. However, significant MMNs were elicited to both the target organization and the unattended, alternative organization, along with the target-related P3b component elicited only to the designated target organization. Results thus indicate that both organizations (integrated and segregated) were simultaneously maintained in memory regardless of which task was performed. Focusing attention on one aspect of the sounds did not abolish the alternative, unattended organization when the stimulus input was ambiguous. In noisy environments, such as walking on a city street, rapid and flexible adaptive processes are needed to help facilitate rapid switching to different sound sources in the environment. Having multiple representations available to the attentive system would allow for such flexibility, needed in everyday situations to …

  14. Speed of sound reflects Young's modulus as assessed by microstructural finite element analysis

    NARCIS (Netherlands)

    Bergh, van den J.P.W.; Lenthe, van G.H.; Hermus, A.R.M.M.; Corstens, F.H.M.; Smals, A.G.H.; Huiskes, H.W.J.

    2000-01-01

    We analyzed the ability of the quantitative ultrasound (QUS) parameter, speed of sound (SOS), and bone mineral density (BMD), as measured by dual-energy X-ray absorptiometry (DXA), to predict Young's modulus, as assessed by microstructural finite element analysis (μFEA) from microcomputed tomography …

  15. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  16. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    Science.gov (United States)

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401

  17. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2011-02-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by a magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability.
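    The time-delay-estimation step that both versions of this record describe can be sketched in a few lines (a minimal illustration, not the authors' code; the 343 m/s speed of sound and the 0.2 m two-microphone spacing are assumed values): the inter-microphone delay is read off the cross-correlation peak and converted to a bearing with the far-field relation θ = arcsin(cτ/d).

```python
import numpy as np

C = 343.0    # speed of sound in air (m/s), assumed
D = 0.2      # microphone spacing (m), assumed geometry

def bearing_from_tde(sig_a, sig_b, fs, d=D, c=C):
    """Estimate source bearing from the time delay between two microphones.

    Returns the angle (radians) off broadside; positive means the source
    is on microphone A's side (A hears it first).
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    tau = -lag / fs                              # delay of B relative to A (s)
    return np.arcsin(np.clip(c * tau / d, -1.0, 1.0))

# Synthetic check: B hears the source 14 samples after A at 48 kHz,
# which corresponds to roughly 30 degrees off broadside for this geometry
fs, delay = 48000, 14
src = np.random.default_rng(1).standard_normal(8192)
mic_a = np.concatenate([src, np.zeros(delay)])
mic_b = np.concatenate([np.zeros(delay), src])
angle = bearing_from_tde(mic_a, mic_b, fs)
```

    A real system would refine the correlation peak (e.g. with generalized cross-correlation weighting) and fuse bearings from more than one microphone pair, as the multi-robot setup in the record implies.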

  18. Memory for product sounds: the effect of sound and label type.

    Science.gov (United States)

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between auditory, visual, and semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for sound type suggest that the amount of spectral-temporal structure in a sound can be indicative of memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinder memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that memory performance for product sounds is task-dependent.

  19. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false In Prince William Sound: Prince William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST GUARD … § 167.1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound …

  20. A randomized trial of nature scenery and sounds versus urban scenery and sounds to reduce pain in adults undergoing bone marrow aspirate and biopsy.

    Science.gov (United States)

    Lechtzin, Noah; Busse, Anne M; Smith, Michael T; Grossman, Stuart; Nesbit, Suzanne; Diette, Gregory B

    2010-09-01

    Bone marrow aspiration and biopsy (BMAB) is painful when performed with only local anesthetic. Our objective was to determine whether viewing nature scenes and listening to nature sounds can reduce pain during BMAB. This was a randomized, controlled clinical trial. Adult patients undergoing outpatient BMAB with only local anesthetic were assigned to use either a nature scene with accompanying nature sounds, city scene with city sounds, or standard care. The primary outcome was a visual analog scale (0-10) of pain. Prespecified secondary analyses included categorizing pain as mild and moderate to severe and using multiple logistic regression to adjust for potential confounding variables. One hundred and twenty (120) subjects were enrolled: 44 in the Nature arm, 39 in the City arm, and 37 in the Standard Care arm. The mean pain scores, which were the primary outcome, were not significantly different between the three arms. A higher proportion in the Standard Care arm had moderate-to-severe pain (pain rating ≥4) than in the Nature arm (78.4% versus 60.5%), though this was not statistically significant (p = 0.097). This difference was statistically significant after adjusting for differences in the operators who performed the procedures (odds ratio = 3.71, p = 0.02). We confirmed earlier findings showing that BMAB is poorly tolerated. While mean pain scores were not significantly different between the study arms, secondary analyses suggest that viewing a nature scene while listening to nature sounds is a safe, inexpensive method that may reduce pain during BMAB. This approach should be considered to alleviate pain during invasive procedures.

  1. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  2. Use of Income as a Measure of Local Fiscal Ability in the State School Aid Formula. Occasional Paper #10.

    Science.gov (United States)

    Samter, Eugene C.

    It is often suggested that measuring local fiscal ability by full valuation of property per public school pupil is inaccurate and inequitable. One substitute measure proposed is district income per pupil or a combination of district income and property value per pupil. However, using this measure would result in a rise in the aid ratios in only…

  3. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: first, the filtered responses should generate an acoustic separation between the control regions; secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound …

  4. Can road traffic mask sound from wind turbines? Response to wind turbine sound at different levels of road traffic sound

    International Nuclear Information System (INIS)

    Pedersen, Eja; Berg, Frits van den; Bakker, Roel; Bouma, Jelte

    2010-01-01

    Wind turbines are favoured in the switch-over to renewable energy. Suitable sites for further developments could be difficult to find, as the sound emitted from the rotor blades calls for a sufficient distance to residents to avoid negative effects. The aim of this study was to explore whether road traffic sound could mask wind turbine sound or, in contrast, increase annoyance due to wind turbine noise. Annoyance of road traffic and wind turbine noise was measured in the WINDFARMperception survey in the Netherlands in 2007 (n=725) and related to calculated levels of sound. The presence of road traffic sound did not in general decrease annoyance with wind turbine noise, except when levels of wind turbine sound were moderate (35-40 dB(A) Lden) and the road traffic sound level exceeded that level by at least 20 dB(A). Annoyance with both noises was intercorrelated, but this correlation was probably due to the influence of individual factors. Furthermore, visibility of and attitude towards wind turbines were significantly related to noise annoyance from modern wind turbines. The results can be used for the selection of suitable sites, possibly favouring already noise-exposed areas if wind turbine sound levels are sufficiently low.

  5. Structure-borne sound structural vibrations and sound radiation at audio frequencies

    CERN Document Server

    Cremer, L; Petersson, Björn AT

    2005-01-01

    "Structure-Borne Sound" is a thorough introduction to structural vibrations with emphasis on audio frequencies and the associated radiation of sound. The book presents in-depth discussions of fundamental principles and basic problems, in order to enable the reader to understand and solve his own problems. It includes chapters dealing with measurement and generation of vibrations and sound, various types of structural wave motion, structural damping and its effects, impedances and vibration responses of the important types of structures, as well as with attenuation of vibrations, and sound radiation.

  6. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  7. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  8. Sound Environments Surrounding Preterm Infants Within an Occupied Closed Incubator.

    Science.gov (United States)

    Shimizu, Aya; Matsuo, Hiroya

    2016-01-01

    Preterm infants often exhibit functional disorders due to the stressful environment in the neonatal intensive care unit (NICU). The sound pressure level (SPL) in the NICU is often much higher than the levels recommended by the American Academy of Pediatrics. Our study aims to describe the SPL and sound frequency levels surrounding preterm infants within closed incubators that utilize high-frequency oscillation (HFO) or nasal directional positive airway pressure (nasal-DPAP) respiratory settings. This is a descriptive research study of eight preterm infants (corrected age …). Noise levels were observed and the results were compared to the recommendations made by neonatal experts. Increased noise levels, which have been reported to affect neonates' ability to self-regulate, could increase the risk of developing attention deficit disorder and may result in tachycardia, bradycardia, increased intracranial pressure, and hypoxia. The care provider should closely assess for adverse effects of higher sound levels generated by different modes of respiratory support and take measures to ensure that preterm infants are protected from exposure to noise exceeding the optimal safe levels. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Documenting Horror: The Use of Sound in Non-Fiction 9/11 Films

    Directory of Open Access Journals (Sweden)

    Jesse Schlotterbeck

    2011-09-01

    While conventional 9/11 documentaries focus on the most known and visible images of the attack, three films that work against this tendency, 9/11 (2002), 11'09''01 - September 11 (2002) and Fahrenheit 9/11 (2004), avoid the television news coverage of the towers and portray the attacks primarily through sound. These films avoid, or only sparingly interject, the too-familiar footage, working instead with the audio track's ability to convey the horrors of the event. By emphasizing sound, these films address a challenge familiar to documentary studies: how to appropriately represent a historical event whose tragic scale makes aesthetic representation questionable.

  10. Mercury in Long Island Sound sediments

    Science.gov (United States)

    Varekamp, J.C.; Buchholtz ten Brink, Marilyn R.; Mecray, E.I.; Kreulen, B.

    2000-01-01

    Mercury (Hg) concentrations were measured in 394 surface and core samples from Long Island Sound (LIS). The surface sediment Hg concentration data show a wide spread, ranging up to 600 ppb Hg in westernmost LIS. Part of the observed range is related to variations in the bottom sedimentary environments, with higher Hg concentrations in the muddy depositional areas of central and western LIS. A strong residual trend of higher Hg values to the west remains when the data are normalized to grain size. Relationships between a tracer for sewage effluents (C. perfringens) and Hg concentrations indicate that between 0 and 50% of the Hg is derived from sewage sources for most samples from the western and central basins. A higher percentage of sewage-derived Hg is found in samples from the westernmost section of LIS and in some local spots near urban centers. The remainder of the Hg is carried into the Sound with contaminated sediments from the watersheds, and a small fraction enters the Sound as in situ atmospheric deposition. The Hg-depth profiles of several cores have well-defined contamination records that extend to pre-industrial background values. These data indicate that Hg levels in the Sound have increased by a factor of 5-6 over the last few centuries, but Hg levels in LIS sediments have declined in modern times by up to 30%. The concentrations of C. perfringens increased exponentially in the top core sections, which had declining Hg concentrations, suggesting a recent decline in Hg fluxes unrelated to sewage effluents. The observed spatial and historical trends reflect Hg fluxes to LIS from sewage effluents; contaminated sediment input from the Connecticut River; point-source inputs of strongly contaminated sediment from the Housatonic River; variations in the abundance of Hg carrier phases such as TOC and Fe; and focusing of sediment-bound Hg in association with westward sediment transport within the Sound.

  11. Food approach conditioning and discrimination learning using sound cues in benthic sharks.

    Science.gov (United States)

    Vila Pouca, Catarina; Brown, Culum

    2018-07-01

    The marine environment is filled with biotic and abiotic sounds. Some of these sounds predict important events that influence fitness while others are unimportant. Individuals can learn specific sound cues and 'soundscapes' and use them for vital activities such as foraging, predator avoidance, communication and orientation. Most research with sounds in elasmobranchs has focused on hearing thresholds and attractiveness to sound sources, but very little is known about their abilities to learn about sounds, especially in benthic species. Here we investigated if juvenile Port Jackson sharks could learn to associate a musical stimulus with a food reward, discriminate between two distinct musical stimuli, and whether individual personality traits were linked to cognitive performance. Five out of eight sharks were successfully conditioned to associate a jazz song with a food reward delivered in a specific corner of the tank. We observed repeatable individual differences in activity and boldness in all eight sharks, but these personality traits were not linked to the learning performance assays we examined. These sharks were later trained in a discrimination task, where they had to distinguish between the same jazz and a novel classical music song, and swim to opposite corners of the tank according to the stimulus played. The sharks' performance to the jazz stimulus declined to chance levels in the discrimination task. Interestingly, some sharks developed a strong side bias to the right, which in some cases was not the correct side for the jazz stimulus.

  12. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear-but our uses for light and sound go far beyond simply seeing a photo or hearing a song. A concentrated beam of light, lasers are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  13. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with sound? This paper addresses the problem of how to access the complexity of sound and how to make textile material reveal the form … geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form.

  14. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    During the recording of lung sound (LS) signals from the chest wall of a subject, heart sound (HS) signals always interfere. This obscures the features of the lung sound signals and creates confusion about pathological states, if any, of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference from the desired lung sound signals. In this method, the mixed signal is split into several components. Some of these components contain larger proportions of interfering signals, such as heart sound and environmental noise, and are filtered out. Experiments have been conducted on simulated and real recorded mixtures of heart and lung sounds. The proposed method is found to be superior in terms of time-domain, frequency-domain, and time-frequency-domain representations, and also in a listening test performed by a pulmonologist.
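    The decompose-discard-reconstruct scheme described in this record can be sketched in simplified form. This is not the authors' implementation: real EMD uses cubic-spline envelopes and proper sifting stop criteria, whereas this toy version uses linear interpolation between extrema, and the signal frequencies are made-up stand-ins for lung and heart content.

```python
import numpy as np

def sift(x, n_sift=10):
    """Extract one crude IMF by repeatedly subtracting the mean of the
    upper and lower envelopes (linear interpolation between extrema)."""
    h = x.astype(float).copy()
    t = np.arange(len(h))
    for _ in range(n_sift):
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 2 or len(minima) < 2:
            break                                 # no oscillation left
        upper = np.interp(t, maxima, h[maxima])
        lower = np.interp(t, minima, h[minima])
        h -= (upper + lower) / 2.0
    return h

def emd(x, n_imfs=2):
    """Split x into IMFs plus a residue; fast components come out first."""
    imfs, resid = [], x.astype(float).copy()
    for _ in range(n_imfs):
        imf = sift(resid)
        imfs.append(imf)
        resid = resid - imf
    return imfs, resid

# Toy mixture: a fast "lung-like" tone plus a slow "heart-like" oscillation
fs = 2000
t = np.arange(0, 2, 1 / fs)
lung = 0.5 * np.sin(2 * np.pi * 150 * t)
heart = np.sin(2 * np.pi * 1.5 * t)        # slow interference to be removed
mixed = lung + heart
imfs, resid = emd(mixed, n_imfs=2)
clean = imfs[0]        # first IMF keeps the fast (lung-like) content;
                       # mixed - clean retains the slow heart-like part
```

    The key design point mirrors the abstract: components dominated by the interference are simply dropped before reconstruction, rather than filtered in the frequency domain.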

  15. Three-dimensional interpretation of TEM soundings

    Science.gov (United States)

    Barsukov, P. O.; Fainberg, E. B.

    2013-07-01

    We describe the approach to the interpretation of electromagnetic (EM) sounding data which iteratively adjusts the three-dimensional (3D) model of the environment by local one-dimensional (1D) transformations and inversions and reconstructs the geometrical skeleton of the model. The final 3D inversion is carried out with the minimal number of the sought parameters. At each step of the interpretation, the model of the medium is corrected according to the geological information. The practical examples of the suggested method are presented.

  16. A Measure Based on Beamforming Power for Evaluation of Sound Field Reproduction Performance

    Directory of Open Access Journals (Sweden)

    Ji-Ho Chang

    2017-03-01

    Full Text Available This paper proposes a measure to evaluate sound field reproduction systems with an array of loudspeakers. The spatially-averaged squared error of the sound pressure between the desired and the reproduced field, namely the spatial error, has been widely used, but it has considerable problems under two conditions. First, in non-anechoic conditions, room reflections substantially deteriorate the spatial error, although these room reflections affect human localization to a lesser degree. Second, for 2.5-dimensional reproduction of spherical waves, the spatial error increases consistently due to the difference in the amplitude decay rate, whereas the degradation of human localization performance is limited. The measure proposed in this study is based on the beamforming powers of the desired and the reproduced fields. Simulation and experimental results show that the proposed measure is less sensitive to room reflections and to the amplitude decay than the spatial error, and is thus likely to agree better with human perception of source localization.
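The contrast the abstract draws can be illustrated numerically: a reproduced field with the correct phase but a mismatched amplitude decay scores a large spatial error, yet delay-and-sum beamforming still localizes it to the correct direction. The array geometry, frequency, and decay profile below are assumptions made for this sketch, not values from the paper:

```python
import numpy as np

c, f = 343.0, 1000.0           # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c          # wavenumber
M = 16
x = np.arange(M) * 0.05        # line array of 16 points, 5 cm spacing

def plane_wave(theta_deg, amp=None):
    """Complex pressure of a plane wave sampled at the array positions."""
    a = np.ones(M) if amp is None else amp
    return a * np.exp(1j * k * x * np.sin(np.radians(theta_deg)))

def beam_power(p, angles_deg):
    """Delay-and-sum beamforming power over a grid of steering angles."""
    s = np.exp(-1j * k * np.outer(np.sin(np.radians(angles_deg)), x))
    return np.abs(s @ p) ** 2

theta0 = 20.0
desired = plane_wave(theta0)
# "Reproduced" field: right phase, but an amplitude decay across the
# array, mimicking the 2.5D amplitude-mismatch case in the abstract.
reproduced = plane_wave(theta0, amp=1.0 - 0.6 * np.arange(M) / (M - 1))

# Conventional measure: spatially-averaged squared error (large here).
spatial_error = (np.mean(np.abs(reproduced - desired) ** 2)
                 / np.mean(np.abs(desired) ** 2))

# Beamforming-power comparison: both fields peak at the same direction.
angles = np.arange(-90.0, 90.5, 1.0)
peak_des = angles[np.argmax(beam_power(desired, angles))]
peak_rep = angles[np.argmax(beam_power(reproduced, angles))]
```

The spatial error exceeds 10% while both beam-power maps peak at 20°, which is the qualitative point of the proposed measure.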

  17. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  18. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  19. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  20. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  1. Cortical processing of dynamic sound envelope transitions.

    Science.gov (United States)

    Zhou, Yi; Wang, Xiaoqin

    2010-12-08

    Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.
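The envelope features described above, the local average and the rate of change of sound level, can be computed from an envelope waveform with a sliding mean and a discrete derivative. A minimal sketch with an illustrative 4 Hz envelope (window length and fluctuation rate are assumptions, chosen within the 2-20 Hz range the abstract discusses):

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
# A 4 Hz envelope fluctuation riding on a constant level.
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)

win = int(0.050 * fs)                    # 50 ms analysis window (illustrative)
kernel = np.ones(win) / win
local_avg = np.convolve(envelope, kernel, mode="same")  # local average level
local_rate = np.gradient(envelope, 1.0 / fs)            # rate of change (1/s)

# Rise/fall features: rising edges have positive rate of change,
# falling edges negative.
rising = local_rate > 0
```

A feature-selective neuron in this framing would respond preferentially to particular combinations of `local_avg` and `local_rate`, e.g. high level with steeply rising envelope.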

  2. Puget Sound Area Electric Reliability Plan : Final Environmental Impact Statement.

    Energy Technology Data Exchange (ETDEWEB)

    United States. Bonneville Power Administration.

    1992-04-01

    A specific need exists in the Puget Sound area for balance between east-west transmission capacity and the increasing demand to import power generated east of the Cascades. At certain times of the year, and under certain conditions, there is more demand for power in the Puget Sound area than the transmission system and existing generation can reliably supply. This high demand, called peak demand, occurs during the winter months when unusually cold weather increases electricity use for heating. The existing power system can supply enough power if no emergencies occur; during emergencies, however, the system will not operate properly. As demand grows, the system becomes more strained. To meet demand, the rate of demand growth must be reduced, the ability to serve the demand must be increased, or both.

  3. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity...... and initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. A European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs...... 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24...

  4. Possibilities of spatial hearing testing in occupational medicine

    Directory of Open Access Journals (Sweden)

    Tomasz Przewoźny

    2016-08-01

    Full Text Available Dysfunctions of the organ of hearing are a significant limitation in the performance of occupations that require its full efficiency (vehicle driving, army, police, fire brigades, mining). Hearing impairment is associated with poorer understanding of speech and disturbed sound localization, which directly affects the worker's orientation in space and his/her assessment of the distance and location of other workers or, most importantly, of dangerous machines. Testing sound localization abilities is not a standard procedure, even in highly specialized audiological examining rooms. It should be pointed out that the ability to localize sounds, particularly loud ones, is not directly associated with the condition of the hearing organ but is rather considered a higher-level auditory function. Disturbances in sound localization are mainly associated with structural and functional disturbances of the central nervous system and also occur in patients whose hearing is normal when tested with standard methods. The article presents different theories explaining the phenomenon of sound localization, such as interaural differences in time, interaural differences in sound intensity, and monaural spectrum shape, as well as the anatomical and physiological basis of these processes. It also describes methods of measuring disturbances in sound localization which are used in Poland and around the world, including by the author of this work. The author analyzed accessible reports on sound localization testing in occupational medicine and the possibilities of using such tests in various occupations requiring full fitness of the organ of hearing.
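The interaural-time-difference cue mentioned above can be estimated from a binaural recording by cross-correlating the two ear signals. A minimal sketch, with the signal and delay values chosen purely for illustration:

```python
import numpy as np

fs = 44100
rng = np.random.default_rng(0)
sig = rng.standard_normal(4096)          # broadband source signal

# Simulate a source on the listener's right: the left ear receives the
# sound a few samples later than the right ear.
delay = 20                               # ~0.45 ms at 44.1 kHz (illustrative)
right = sig
left = np.concatenate([np.zeros(delay), sig[:-delay]])

# Estimate the ITD as the lag maximizing the cross-correlation.
corr = np.correlate(left, right, mode="full")
lag = np.argmax(corr) - (len(right) - 1)  # positive lag: left lags right
itd_ms = 1000.0 * lag / fs
```

The sign of the recovered lag indicates the side of the source, and its magnitude (bounded by roughly 0.6-0.7 ms for a human head) maps to azimuth.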

  5. Perception of Animacy from the Motion of a Single Sound Object.

    Science.gov (United States)

    Nielsen, Rasmus Høll; Vuust, Peter; Wallentin, Mikkel

    2015-02-01

    Research in the visual modality has shown that the presence of certain dynamics in the motion of an object has a strong effect on whether or not the entity is perceived as animate. Cues for animacy are, among others, self-propelled motion and direction changes that are seemingly not caused by entities external to, or in direct contact with, the moving object. The present study aimed to extend this research into the auditory domain by determining if similar dynamics could influence the perceived animacy of a sound source. In two experiments, participants were presented with single, synthetically generated 'mosquito' sounds moving along trajectories in space, and asked to rate how certain they were that each sound-emitting entity was alive. At a random point on a linear motion trajectory, the sound source would deviate from its initial path and speed. Results confirm findings from the visual domain that a change in the velocity of motion is positively correlated with perceived animacy, and changes in direction were found to influence animacy judgment as well. This suggests that an ability to facilitate and sustain self-movement is perceived as a living quality not only in the visual domain, but in the auditory domain as well. © 2015 SAGE Publications.

  6. Remembering that big things sound big: Sound symbolism and associative memory.

    Science.gov (United States)

    Preziosi, Melissa A; Coane, Jennifer H

    2017-01-01

    According to sound symbolism theory, individual sounds or clusters of sounds can convey meaning. To examine the role of sound symbolic effects on processing and memory for nonwords, we developed a novel set of 100 nonwords designed to convey largeness (nonwords containing plosive consonants and back vowels) and smallness (nonwords containing fricative consonants and front vowels). In Experiments 1A and 1B, participants rated the size of the 100 nonwords and provided definitions for them as if they were products. Nonwords composed of fricatives/front vowels were rated as smaller than those composed of plosives/back vowels. In Experiment 2, participants studied sound symbolically congruent and incongruent pairings of nonwords with participant-generated definitions. Definitions paired with nonwords whose size matched the participant-generated meanings were recalled better than those that did not match. When the participant-generated definitions were re-paired with other nonwords, this mnemonic advantage was reduced, although still reliable. In a final free association study, the possibility that plosive/back vowel and fricative/front vowel nonwords elicit sound symbolic size effects through mediation from word neighbors was ruled out. Together, these results suggest that definitions that are sound symbolically congruent with a nonword are more memorable than incongruent definition-nonword pairings. This work has implications for the creation of brand names that not only convey desired product characteristics but are also memorable for consumers.

  7. An Anthropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, for a talk about the anthropology of sound, sound studies, musical canons and ideology.

  8. Background noise exerts diverse effects on the cortical encoding of foreground sounds.

    Science.gov (United States)

    Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E

    2017-08-01

    In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal-in-noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood. We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may

  9. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    Science.gov (United States)

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. 
In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the

  10. Interaction of Number Magnitude and Auditory Localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Jungilligens, Johannes; Getzmann, Stephan

    2016-01-01

    The interplay of perception and memory is very evident when we perceive and then recognize familiar stimuli. Conversely, information in long-term memory may also influence how a stimulus is perceived. Prior work on number cognition in the visual modality has shown that in Western number systems long-term memory for the magnitude of smaller numbers can influence performance involving the left side of space, while larger numbers have an influence toward the right. Here, we investigated in the auditory modality whether a related effect may bias the perception of sound location. Subjects (n = 28) used a swivel pointer to localize noise bursts presented from various azimuth positions. The noise bursts were preceded by a spoken number (1-9) or, as a nonsemantic control condition, numbers that were played in reverse. The relative constant error in noise localization (forward minus reversed speech) indicated a systematic shift in localization toward more central locations when the number was smaller and toward more peripheral positions when the preceding number magnitude was larger. These findings do not support the traditional left-right number mapping. Instead, the results may reflect an overlap between codes for number magnitude and codes for sound location as implemented by two channel models of sound localization, or possibly a categorical mapping stage of small versus large magnitudes. © The Author(s) 2015.

  11. An Algorithm for the Accurate Localization of Sounds

    National Research Council Canada - National Science Library

    MacDonald, Justin A

    2005-01-01

    .... The algorithm requires no a priori knowledge of the stimuli to be localized. The accuracy of the algorithm was tested using binaural recordings from a pair of microphones mounted in the ear canals of an acoustic mannequin...

  12. A Measure Based on Beamforming Power for Evaluation of Sound Field Reproduction Performance

    DEFF Research Database (Denmark)

    Chang, Ji-ho; Jeong, Cheol-Ho

    2017-01-01

    This paper proposes a measure to evaluate sound field reproduction systems with an array of loudspeakers. The spatially-averaged squared error of the sound pressure between the desired and the reproduced field, namely the spatial error, has been widely used, which has considerable problems in two...... conditions. First, in non-anechoic conditions, room reflections substantially deteriorate the spatial error, although these room reflections affect human localization to a lesser degree. Second, for 2.5-dimensional reproduction of spherical waves, the spatial error increases consistently due...... to the difference in the amplitude decay rate, whereas the degradation of human localization performance is limited. The measure proposed in this study is based on the beamforming powers of the desired and the reproduced fields. Simulation and experimental results show that the proposed measure is less sensitive...

  13. Phonological Encoding in Speech-Sound Disorder: Evidence from a Cross-Modal Priming Experiment

    Science.gov (United States)

    Munson, Benjamin; Krause, Miriam O. P.

    2017-01-01

    Background: Psycholinguistic models of language production provide a framework for determining the locus of language breakdown that leads to speech-sound disorder (SSD) in children. Aims: To examine whether children with SSD differ from their age-matched peers with typical speech and language development (TD) in the ability phonologically to…

  14. Propagation of Finite Amplitude Sound in Multiple Waveguide Modes.

    Science.gov (United States)

    van Doren, Thomas Walter

    1993-01-01

    This dissertation describes a theoretical and experimental investigation of the propagation of finite amplitude sound in multiple waveguide modes. Quasilinear analytical solutions of the full second order nonlinear wave equation, the Westervelt equation, and the KZK parabolic wave equation are obtained for the fundamental and second harmonic sound fields in a rectangular rigid-wall waveguide. It is shown that the Westervelt equation is an acceptable approximation of the full nonlinear wave equation for describing guided sound waves of finite amplitude. A system of first order equations based on both a modal and harmonic expansion of the Westervelt equation is developed for waveguides with locally reactive wall impedances. Fully nonlinear numerical solutions of the system of coupled equations are presented for waveguides formed by two parallel planes which are either both rigid, or one rigid and one pressure release. These numerical solutions are compared to finite-difference solutions of the KZK equation, and it is shown that solutions of the KZK equation are valid only at frequencies which are high compared to the cutoff frequencies of the most important modes of propagation (i.e., for which sound propagates at small grazing angles). Numerical solutions of both the Westervelt and KZK equations are compared to experiments performed in an air-filled, rigid-wall, rectangular waveguide. Solutions of the Westervelt equation are in good agreement with experiment for low source frequencies, at which sound propagates at large grazing angles, whereas solutions of the KZK equation are not valid for these cases. At higher frequencies, at which sound propagates at small grazing angles, agreement between numerical solutions of the Westervelt and KZK equations and experiment is only fair, because of problems in specifying the experimental source condition with sufficient accuracy.

  15. Subband Approach to Bandlimited Crosstalk Cancellation System in Spatial Sound Reproduction

    Science.gov (United States)

    Bai, Mingsian R.; Lee, Chih-Chung

    2006-12-01

    A crosstalk cancellation system (CCS) plays a vital role in spatial sound reproduction using multichannel loudspeakers. However, this technique is still not in widespread practical use due to its heavy computational load. To reduce the computational load, a bandlimited CCS based on a subband filtering approach is presented in this paper. A pseudoquadrature mirror filter (QMF) bank is employed in the implementation of the CCS filters, which are bandlimited to 6 kHz, where human localization is most sensitive. In addition, a frequency-dependent regularization scheme is adopted in designing the CCS inverse filters. To justify the proposed system, subjective listening experiments were undertaken in an anechoic room. The experiments included two parts: a source localization test and a sound quality test. Analysis of variance (ANOVA) was applied to process the data and assess the statistical significance of the subjective experiments. The results indicate that the bandlimited CCS performed comparably to the fullband CCS, whereas the computational load was reduced by approximately eighty percent.

  16. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound...... from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  17. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound

  18. An open-structure sound insulator against low-frequency and wide-band acoustic waves

    Science.gov (United States)

    Chen, Zhe; Fan, Li; Zhang, Shu-yi; Zhang, Hui; Li, Xiao-juan; Ding, Jin

    2015-10-01

    To block sound, i.e., the vibration of air, most insulators rely on sealed structures that prevent the flow of air. In this research, an acoustic metamaterial adopting side structures, loops and labyrinths, arranged along a main tube, is presented. By combining the accurately designed side structures, an extremely wide forbidden band with a low cut-off frequency of 80 Hz is produced, which demonstrates a powerful low-frequency, wide-band sound insulation ability. Moreover, by virtue of the bypass arrangement, the metamaterial has an open structure, so air flow is allowed while acoustic waves are insulated.

  19. Engaging teachers & students in geosciences by exploring local geoheritage sites

    Science.gov (United States)

    Gochis, E. E.; Gierke, J. S.

    2014-12-01

    Understanding geoscience concepts and the interactions of Earth system processes in one's own community has the potential to foster sound decision making for environmental, economic and social wellbeing. School-age children are an appropriate target audience for improving Earth Science literacy and attitudes towards scientific practices. However, many teachers charged with geoscience instruction lack awareness of geologically significant local examples or the pedagogical ability to integrate place-based examples into their classroom practice. This situation is further complicated because many teachers of Earth science lack a firm background in geoscience course work. Strategies for effective K-12 teacher professional development programs that promote Earth Science literacy by integrating inquiry-based investigations of local and regional geoheritage sites into standards-based curriculum were developed and tested with teachers at a rural school on the Hannahville Indian Reservation located in Michigan's Upper Peninsula. The workshops initiated long-term partnerships between classroom teachers and geoscience experts. We hypothesize that this model of professional development, where teachers of school-age children are prepared to teach local examples of earth system science, will lead to increased engagement in Earth Science content and increased awareness of local geoscience examples by K-12 students and the public.

  20. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    Full Text Available The relationship between meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages we find commonalities among sound shapes for words referring to same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  1. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads and is especially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  2. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    Full Text Available In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  3. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

    The question whether there is a natural connection between sound and meaning or if they are related only by convention has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming more and more clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441 For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.

  4. Fast detection of unexpected sound intensity decrements as revealed by human evoked potentials.

    Directory of Open Access Journals (Sweden)

    Heike Althen

    Full Text Available The detection of deviant sounds is a crucial function of the auditory system and is reflected by the automatically elicited mismatch negativity (MMN, an auditory evoked potential at 100 to 250 ms from stimulus onset. It has recently been shown that rarely occurring frequency and location deviants in an oddball paradigm trigger a more negative response than standard sounds at very early latencies in the middle latency response of the human auditory evoked potential. This fast and early ability of the auditory system is corroborated by the finding of neurons in the animal auditory cortex and subcortical structures which restore their adapted responsiveness to standard sounds when a rare change in a sound feature occurs. In this study, we investigated whether the detection of intensity deviants is also reflected at shorter latencies than those of the MMN. Auditory evoked potentials in response to click sounds were analyzed regarding the auditory brain stem response, the middle latency response (MLR and the MMN. Rare stimuli with a lower intensity level than standard stimuli elicited (in addition to an MMN a more negative potential in the MLR at the transition from the Na to the Pa component at circa 24 ms from stimulus onset. This finding, together with the studies about frequency and location changes, suggests that the early automatic detection of deviant sounds in an oddball paradigm is a general property of the auditory system.

  5. Applications of Hilbert Spectral Analysis for Speech and Sound Signals

    Science.gov (United States)

    Huang, Norden E.

    2003-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed, and its natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same numbers of zero-crossings and extrema, and having symmetric envelopes defined by the local maxima and minima, respectively. An IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identification of embedded structures. This method can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound signal enhancement and filtering. Additionally, the acoustical signals from machinery are essentially the way the machines talk to us: those signals, whether transmitted through the air as sound or as vibration on the machines themselves, can tell us the operating conditions of the machines. Thus, acoustic signals can be used to diagnose machine problems.
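To make the sifting idea concrete, here is a minimal sketch of a single EMD sifting pass, assuming numpy/scipy are available: the candidate IMF is obtained by repeatedly subtracting the mean of cubic-spline envelopes through the local maxima and minima. Function names and the stopping rule are illustrative simplifications, not a full EMD implementation (which extracts successive IMFs with formal stopping criteria).

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(x, t, n_sifts=10):
    """Toy first-IMF extraction: subtract the mean of the upper and
    lower extrema envelopes until the local mean is (roughly) zero."""
    h = x.copy()
    for _ in range(n_sifts):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break  # too few extrema to build spline envelopes
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - 0.5 * (upper + lower)  # remove the local mean
    return h

# Two-tone test signal: the first IMF should capture the fast component.
t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
imf = sift_imf(x, t)
residue = x - imf  # by construction, imf + residue reconstructs x
```

Away from the edges (where spline extrapolation is unreliable), the residue carries the slow 5 Hz component while the IMF carries the 50 Hz oscillation.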

  6. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and more varied forms of modulation: the modulation was shorter lived, and its effect varied between different acoustic stimuli, including for different male calls, suggesting that modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  7. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  8. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

    Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking, with several examples shown. Various aspects of musical sound spectra, such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity, are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis, "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment", are covered, and methods for timbre morphing are given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.
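One of the spectral-shape descriptors mentioned above, the spectral centroid, is simply the amplitude-weighted mean frequency of the magnitude spectrum. A minimal sketch, assuming numpy (the function name is illustrative):

```python
import numpy as np

def spectral_centroid(x, fs):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(freqs * mag) / np.sum(mag)

fs = 44100
t = np.arange(fs) / fs  # 1 s of signal
tone = np.sin(2 * np.pi * 1000 * t)
print(spectral_centroid(tone, fs))  # close to 1000 Hz for a pure tone
```

For a pure tone the centroid sits at the tone's frequency; brighter sounds, with more high-frequency energy, pull the centroid upward.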

  9. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to analytically discuss spatial aspects of recorded sound, William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm that includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  10. Sound source location in cavitating tip vortices

    International Nuclear Information System (INIS)

    Higuchi, H.; Taghavi, R.; Arndt, R.E.A.

    1985-01-01

    Utilizing an array of three hydrophones, individual cavitation bursts in a tip vortex could be located. Theoretically, four hydrophones are necessary. Hence the data from three hydrophones are supplemented with photographic observation of the cavitating tip vortex. The cavitation sound sources are found to be localized to within one base chord length from the hydrofoil tip. This appears to correspond to the region of initial tip vortex roll-up. A more extensive study with a four sensor array is now in progress.
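The localization principle behind such an array is time-difference-of-arrival (TDOA): each pair of sensors constrains the source to a hyperbola, and the intersection locates it. The sketch below is a hypothetical two-dimensional analogue (not the study's actual geometry or method), assuming numpy; it recovers a synthetic source position by brute-force search over candidate points.

```python
import numpy as np

C = 1482.0  # nominal speed of sound in water, m/s (illustrative)

# Illustrative 2-D geometry: three sensors and a hidden source (meters).
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
source = np.array([0.4, 0.7])

# Arrival-time differences relative to the first sensor.
dists = np.linalg.norm(sensors - source, axis=1)
tdoa = (dists - dists[0]) / C

# Brute-force search for the point whose predicted TDOAs fit best.
xs = np.linspace(0, 1, 201)
best, best_err = None, np.inf
for x in xs:
    for y in xs:
        d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        err = np.sum(((d - d[0]) / C - tdoa) ** 2)
        if err < best_err:
            best, best_err = (x, y), err
print(best)  # near (0.4, 0.7)
```

With three sensors in a plane, two independent TDOAs suffice for a 2-D fix; in three dimensions a fourth sensor is needed, which is why the study supplements its three hydrophones with photographs.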

  11. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. The article attempts to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds, and offers a range of research methods, considering video game scoring as a contemporary creative practice.

  12. Initial uncertainty impacts statistical learning in sound sequence processing.

    Science.gov (United States)

    Todd, Juanita; Provost, Alexander; Whitson, Lisa; Mullens, Daniel

    2016-11-01

    This paper features two studies confirming a lasting impact of first learning on how subsequent experience is weighted in early relevance-filtering processes. In both studies participants were exposed to sequences of sound that contained a regular pattern on two different timescales. Regular patterning in sound is readily detected by the auditory system and used to form "prediction models" that define the most likely properties of sound to be encountered in a given context. The presence and strength of these prediction models is inferred from changes in automatically elicited components of auditory evoked potentials. Both studies employed sound sequences that contained both a local and longer-term pattern. The local pattern was defined by a regular repeating pure tone occasionally interrupted by a rare deviating tone (p=0.125) that was physically different (a 30 ms vs. 60 ms duration difference in one condition and a 1000 Hz vs. 1500 Hz frequency difference in the other). The longer-term pattern was defined by the rate at which the two tones alternated probabilities (i.e., the tone that was first rare became common and the tone that was first common became rare). There was no task related to the tones and participants were asked to ignore them while focussing attention on a movie with subtitles. Auditory-evoked potentials revealed long lasting modulatory influences based on whether the tone was initially encountered as rare and unpredictable or common and predictable. The results are interpreted as evidence that probability (or indeed predictability) assigns a differential information-value to the two tones that in turn affects the extent to which prediction models are updated and imposed. These effects are exposed for both common and rare occurrences of the tones. The studies contribute to a body of work that reveals that probabilistic information is not faithfully represented in these early evoked potentials and instead exposes that predictability (or conversely

  13. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade an increased attention...... has been paid to, for instance, a category such as ‘sound art’ together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term as musical sound – a recurring example being ‘noise’....

  14. Songbirds and humans apply different strategies in a sound sequence discrimination task

    Directory of Open Access Journals (Sweden)

    Yoshimasa eSeki

    2013-07-01

    Full Text Available The abilities of animals and humans to extract rules from sound sequences have previously been compared using observation of spontaneous responses and conditioning techniques. However, the results were inconsistently interpreted across studies possibly due to methodological and/or species differences. Therefore, we examined the strategies for discrimination of sound sequences in Bengalese finches and humans using the same protocol. Birds were trained on a GO/NOGO task to discriminate between two categories of sound stimulus generated based on an AAB or ABB rule. The sound elements used were taken from a variety of male (M) and female (F) calls, such that the sequences could be represented as MMF and MFF. In test sessions, FFM and FMM sequences, which were never presented in the training sessions but conformed to the rule, were presented as probe stimuli. The results suggested two discriminative strategies were being applied: (1) memorizing sound patterns of either GO or NOGO stimuli and generating the appropriate responses for only those sounds; and (2) using the repeated element as a cue. There was no evidence that the birds successfully extracted the abstract rule (i.e. AAB and ABB); MMF-GO subjects did not produce a GO response for FFM and vice versa. Next we examined whether those strategies were also applicable for human participants on the same task. The results and questionnaires revealed that participants extracted the abstract rule, and most of them employed it to discriminate the sequences. This strategy was never observed in bird subjects, although some participants used strategies similar to the birds when responding to the probe stimuli. Our results showed that the human participants applied the abstract rule in the task even without instruction but Bengalese finches did not, thereby reconfirming that humans have an ability to extract abstract rules from sound sequences that is distinct from that of non-human animals.

  15. Songbirds and humans apply different strategies in a sound sequence discrimination task.

    Science.gov (United States)

    Seki, Yoshimasa; Suzuki, Kenta; Osawa, Ayumi M; Okanoya, Kazuo

    2013-01-01

    The abilities of animals and humans to extract rules from sound sequences have previously been compared using observation of spontaneous responses and conditioning techniques. However, the results were inconsistently interpreted across studies possibly due to methodological and/or species differences. Therefore, we examined the strategies for discrimination of sound sequences in Bengalese finches and humans using the same protocol. Birds were trained on a GO/NOGO task to discriminate between two categories of sound stimulus generated based on an "AAB" or "ABB" rule. The sound elements used were taken from a variety of male (M) and female (F) calls, such that the sequences could be represented as MMF and MFF. In test sessions, FFM and FMM sequences, which were never presented in the training sessions but conformed to the rule, were presented as probe stimuli. The results suggested two discriminative strategies were being applied: (1) memorizing sound patterns of either GO or NOGO stimuli and generating the appropriate responses for only those sounds; and (2) using the repeated element as a cue. There was no evidence that the birds successfully extracted the abstract rule (i.e., AAB and ABB); MMF-GO subjects did not produce a GO response for FFM and vice versa. Next we examined whether those strategies were also applicable for human participants on the same task. The results and questionnaires revealed that participants extracted the abstract rule, and most of them employed it to discriminate the sequences. This strategy was never observed in bird subjects, although some participants used strategies similar to the birds when responding to the probe stimuli. Our results showed that the human participants applied the abstract rule in the task even without instruction but Bengalese finches did not, thereby reconfirming that humans have an ability to extract abstract rules from sound sequences that is distinct from that of non-human animals.

  16. Combined effect of boundary layer recirculation factor and stable energy on local air quality in the Pearl River Delta over southern China.

    Science.gov (United States)

    Li, Haowen; Wang, Baomin; Fang, Xingqin; Zhu, Wei; Fan, Qi; Liao, Zhiheng; Liu, Jian; Zhang, Asi; Fan, Shaojia

    2018-03-01

    The atmospheric boundary layer (ABL) has a significant impact on the spatial and temporal distribution of air pollutants. In order to gain a better understanding of how the ABL affects the variation of air pollutants, atmospheric boundary layer observations were performed at Sanshui in the Pearl River Delta (PRD) region over southern China during the winter of 2013. Two types of typical ABL status that can lead to air pollution were analyzed comparatively: the weak vertical diffusion ability type (WVDAT) and the weak horizontal transportation ability type (WHTAT). Results show that (1) WVDAT was characterized by moderate wind speed, consistent wind direction, and a thick inversion layer at 600~1000 m above ground level (AGL), with air pollutants restricted to low altitudes by the stable atmospheric structure; (2) WHTAT was characterized by calm wind, varied wind direction, and a shallow intense ground inversion layer, with air pollutants accumulating locally because of strong recirculation in the low ABL; and (3) the recirculation factor (RF) and stable energy (SE) proved to be good indicators of the horizontal transportation ability and vertical diffusion ability of the atmosphere, respectively. Combined use of RF and SE can be very helpful in evaluating the air pollution potential of the ABL. Ground-level air quality data and meteorological data from radio soundings at Sanshui showed that local air quality was poor when wind reversal was pronounced or the temperature stratification was stable. Both the horizontal and the vertical transportation ability of the local atmosphere should be taken into consideration when evaluating the local environmental bearing capacity for air pollution.
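A common formulation of the recirculation factor divides the net (vector) transport distance by the total scalar wind run over a period, so that values near 1 indicate steady transport and values near 0 indicate stagnant, recirculating air; note that some authors instead define recirculation as one minus this ratio. The sketch below, assuming numpy and using made-up wind records, follows the former convention, which matches the abstract's usage (higher RF = better horizontal transport).

```python
import numpy as np

def recirculation_factor(u, v):
    """RF = |vector sum of hourly winds| / sum of hourly wind speeds."""
    net = np.hypot(u.sum(), v.sum())       # net transport distance (per unit time step)
    total = np.hypot(u, v).sum()           # scalar wind run
    return net / total

# Illustrative 24-hour wind records (m/s components).
steady_u, steady_v = np.full(24, 3.0), np.zeros(24)      # constant westerly
sea_breeze_u = np.where(np.arange(24) < 12, 3.0, -3.0)   # flow reversal

print(recirculation_factor(steady_u, steady_v))          # 1.0: good transport
print(recirculation_factor(sea_breeze_u, np.zeros(24)))  # 0.0: full recirculation
```

A pronounced land-sea breeze reversal, as in the second record, drives RF toward zero even when instantaneous wind speeds are moderate.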

  17. Beneath sci-fi sound: primer, science fiction sound design, and American independent cinema

    OpenAIRE

    Johnston, Nessa

    2012-01-01

    Primer is a very low budget science-fiction film that deals with the subject of time travel; however, it looks and sounds quite distinctively different from other films associated with the genre. While Hollywood blockbuster sci-fi relies on “sound spectacle” as a key attraction, in contrast Primer sounds “lo-fi” and screen-centred, mixed to two channel stereo rather than the now industry-standard 5.1 surround sound. Although this is partly a consequence of the economics of its production, the...

  18. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between...... dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity...... is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  19. Bilateral capacity for speech sound processing in auditory comprehension: evidence from Wada procedures.

    Science.gov (United States)

    Hickok, G; Okada, K; Barr, W; Pa, J; Rogalsky, C; Donnelly, K; Barde, L; Grant, A

    2008-12-01

    Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics) nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients, their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension in acute left versus right hemisphere deactivation during Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word, and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.

  20. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  1. The diagnostics of a nuclear reactor by the analysis of boiling sound

    International Nuclear Information System (INIS)

    Kudo, Kazuhiko; Tanaka, Yoshihisa; Ohsawa, Takaaki; Ohta, Masao

    1980-01-01

    This paper describes basic research on a method of detecting abnormality by analyzing boiling sound when heat transfer to the coolant becomes locally abnormal in a pressurized nuclear reactor. In this study, the power spectra of the sound were treated as a kind of pattern, and the aim was to diagnose accurately the state in the reactor by analyzing their changes with an electronic computer. As the calculation method, the theory of the linear discriminant function was applied. A subcritical experimental apparatus was used as a simulated reactor core vessel, and the boiling sound was received with a hydrophone, amplified, digitized and processed with a computer. The power spectra of the boiling sound were displayed on an oscilloscope, and the digital values were stored in a microcomputer. The method of processing the power spectra stored as data in the microcomputer is explained. The magnitude of the power spectra was large in the low-frequency region and decreased as the frequency became higher. The experimental conditions and results are described. According to the results, considerably good discrimination capability was obtained: by utilizing the power spectra in the relatively low-frequency region, boiling sound can be detected with considerably high accuracy. (Kako, I.)
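The approach described, treating band powers of the sound spectrum as a feature vector and separating "boiling" from "normal" records with a linear discriminant, can be sketched as follows. This is a hedged illustration in pure numpy with synthetic signals, not the paper's data or exact procedure; the extra low-frequency energy standing in for boiling noise is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000  # samples per 1-s record

def band_powers(x, n_bands=8):
    """Mean power in n_bands contiguous frequency bands of the spectrum."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    return np.array([b.mean() for b in np.array_split(psd, n_bands)])

def make_record(boiling):
    x = rng.normal(size=fs)                  # broadband background noise
    if boiling:
        t = np.arange(fs) / fs               # extra low-frequency energy
        x += 3 * np.sin(2 * np.pi * 180 * t + rng.uniform(0, 2 * np.pi))
    return band_powers(x)

X0 = np.array([make_record(False) for _ in range(20)])  # normal class
X1 = np.array([make_record(True) for _ in range(20)])   # boiling class

# Fisher-style linear discriminant from class means and pooled covariance.
m0, m1 = X0.mean(0), X1.mean(0)
S = np.cov(np.vstack([X0 - m0, X1 - m1]).T)
w = np.linalg.solve(S + 1e-6 * np.eye(len(m0)), m1 - m0)
threshold = w @ (m0 + m1) / 2

def classify(features):
    """1 = boiling, 0 = normal."""
    return int(w @ features > threshold)
```

Consistent with the paper's observation, the discriminating information here sits in the low-frequency bands, where the simulated boiling component adds power.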

  2. The improvement of PWR(OPR-1000) Local Control Pannel

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joo-Youl; Kim, Min-Soo; Kim, Kyung-Min; Lee, Jun-Kou [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    Malfunctions of equipment in a nuclear power plant can be caused by physical aging, spurious electrical signals, and natural disasters. A malfunction is usually first recognized through the alarm system; because of its importance, the design basis of the alarm system is described in FSAR 18.1.4.20 (alarm system design review). In the main control room, operators can recognize a malfunction and the importance of an alarm at short distance, and the alarm sound also varies in frequency, which aids recognition. This system, however, is not helpful for field operators. In this study, applying the FSAR provisions (alarm priority and color indication) to local control panels is suggested. Alarm sounds suited to field conditions, alarm names, and status indication on circuit breakers are proposed to improve the overall local control panel, based on Hanul Unit 6, which can contribute to safe operation. This paper is drawn from improvement items for local control panels from the perspective of field operators; further research on local panels is necessary to apply these improvements, and collaboration with the related departments is also needed. If the improvements are applied, the qualitative effect on safe operation will increase, and the fatigue from work stress will be reduced.

  3. How does experience modulate auditory spatial processing in individuals with blindness?

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-jia; Li, Jian-jun; Ting, Kin-hung; Wang, Jun; Lee, Tatia M C

    2015-05-01

    Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience would modulate auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while performing a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and specify out of 15 locations where the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, the accuracy on sound localization only correlated with BOLD responses in the right middle occipital gyrus among the early-onset counterpart. The findings support the notion that early-onset blind individuals rely more on the occipital areas as a result of cross-modal plasticity for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas which subserve visuospatial working memory.

  4. Relevance of Spectral Cues for Auditory Spatial Processing in the Occipital Cortex of the Blind

    Science.gov (United States)

    Voss, Patrice; Lepore, Franco; Gougoux, Frédéric; Zatorre, Robert J.

    2011-01-01

    We have previously shown that some blind individuals can localize sounds more accurately than their sighted counterparts when one ear is obstructed, and that this ability is strongly associated with occipital cortex activity. Given that spectral cues are important for monaurally localizing sounds when one ear is obstructed, and that blind individuals are more sensitive to small spectral differences, we hypothesized that enhanced use of spectral cues via occipital cortex mechanisms could explain the better performance of blind individuals in monaural localization. Using positron-emission tomography (PET), we scanned blind and sighted persons as they discriminated between sounds originating from a single spatial position, but with different spectral profiles that simulated different spatial positions based on head-related transfer functions. We show here that a sub-group of early blind individuals showing superior monaural sound localization abilities performed significantly better than any other group on this spectral discrimination task. For all groups, performance was best for stimuli simulating peripheral positions, consistent with the notion that spectral cues are more helpful for discriminating peripheral sources. PET results showed that all blind groups showed cerebral blood flow increases in the occipital cortex; but this was also the case in the sighted group. A voxel-wise covariation analysis showed that more occipital recruitment was associated with better performance across all blind subjects but not the sighted. An inter-regional covariation analysis showed that the occipital activity in the blind covaried with that of several frontal and parietal regions known for their role in auditory spatial processing. 
Overall, these results support the notion that the superior ability of a sub-group of early-blind individuals to localize sounds is mediated by their superior ability to use spectral cues, and that this ability is subserved by cortical processing in

  5. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

This article is an analysis of two sound art performances that took place June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors of this article suggest exploring the events through approaching them as ‘situations’ (Doherty 2009). With this approach it becomes possible to engage...... and combine theories from several fields. Aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces: visual, auditory, performative, social, spatial and durational dimensions become......

  6. Inner-ear sound pressures near the base of the cochlea in chinchilla: Further investigation

    Science.gov (United States)

    Ravicz, Michael E.; Rosowski, John J.

    2013-01-01

    The middle-ear pressure gain GMEP, the ratio of sound pressure in the cochlear vestibule PV to sound pressure at the tympanic membrane PTM, is a descriptor of middle-ear sound transfer and the cochlear input for a given stimulus in the ear canal. GMEP and the cochlear partition differential pressure near the cochlear base ΔPCP, which determines the stimulus for cochlear partition motion and has been linked to hearing ability, were computed from simultaneous measurements of PV, PTM, and the sound pressure in scala tympani near the round window PST in chinchilla. GMEP magnitude was approximately 30 dB between 0.1 and 10 kHz and decreased sharply above 20 kHz, which is not consistent with an ideal transformer or a lossless transmission line. The GMEP phase was consistent with a roughly 50-μs delay between PV and PTM. GMEP was little affected by the inner-ear modifications necessary to measure PST. GMEP is a good predictor of ΔPCP at low and moderate frequencies where PV ⪢ PST but overestimates ΔPCP above a few kilohertz where PV ≈ PST. The ratio of PST to PV provides insight into the distribution of sound pressure within the cochlear scalae. PMID:23556590
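The headline numbers in this abstract (a roughly 30 dB gain and a roughly 50-μs delay) can be sanity-checked with a small sketch. The helper names are invented; the relations used, 20·log10(|PV|/|PTM|) for a pressure gain in dB and phase = −f·τ for a pure time delay, are standard acoustics, not details taken from the paper:

```python
import math

def middle_ear_gain_db(p_vestibule, p_tympanic):
    """GMEP in dB: magnitude ratio of vestibule sound pressure PV to
    tympanic-membrane sound pressure PTM, as 20*log10(|PV|/|PTM|)."""
    return 20 * math.log10(abs(p_vestibule) / abs(p_tympanic))

def delay_phase_cycles(freq_hz, delay_s=50e-6):
    """Phase lag of PV relative to PTM (in cycles) implied by a pure
    time delay, defaulting to the ~50-us delay reported above."""
    return -freq_hz * delay_s

# A ~30 dB gain corresponds to a pressure ratio of about 31.6:
print(round(middle_ear_gain_db(31.6228, 1.0), 2))  # 30.0
# At 10 kHz, a 50-us delay amounts to half a cycle of phase lag:
print(round(delay_phase_cycles(10e3), 3))  # -0.5
```

A pure delay predicts phase growing linearly with frequency, which is how a roughly 50-μs delay can be read off a measured GMEP phase curve.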

  7. Determining the speed of sound in the air by sound wave interference

    Science.gov (United States)

    Silva, Abel A.

    2017-07-01

    Mechanical waves propagate through material media. Sound is an example of a mechanical wave. In fluids like air, sound waves propagate through successive longitudinal perturbations of compression and decompression. Audible sound frequencies for human ears range from 20 to 20 000 Hz. In this study, the speed of sound v in air is determined by identifying maxima of interference between two synchronous waves at frequency f. The values of v were corrected to 0 °C. An experimental average value of v̄_exp = 336 ± 4 m s⁻¹ was found, which is 1.5% larger than the reference value. The standard deviation of 4 m s⁻¹ (1.2% of v̄_exp) was improved through use of the central limit theorem. The proposed procedure for determining the speed of sound in air is intended as an academic activity for physics classes in scientific and technological courses in college.
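The procedure described can be sketched numerically. This is a hedged illustration, not the paper's actual analysis: the function name is invented, the geometry is simplified so that adjacent interference maxima are one wavelength apart along the measurement axis, and the common v(T) = v₀·√(1 + T/273.15) law is assumed for the 0 °C correction:

```python
import statistics

def speed_of_sound_from_maxima(freq_hz, maxima_positions_m, temp_c):
    """Estimate the speed of sound from the spacing of interference
    maxima of two synchronous sources at frequency freq_hz, then
    reduce the result to 0 degrees C."""
    spacings = [b - a for a, b in zip(maxima_positions_m, maxima_positions_m[1:])]
    wavelength = statistics.mean(spacings)  # simplified: one wavelength per spacing
    v_measured = freq_hz * wavelength       # v = f * lambda
    # invert v(T) = v0 * sqrt(1 + T/273.15) to reduce to 0 degrees C
    v_at_0c = v_measured / (1 + temp_c / 273.15) ** 0.5
    return v_measured, v_at_0c

# 3.4 kHz tone, maxima every 10 cm, measured at 20 degrees C:
v, v0 = speed_of_sound_from_maxima(3400.0, [0.0, 0.1, 0.2, 0.3], 20.0)
print(round(v, 1), round(v0, 1))  # 340.0 328.2
```

Averaging many repeated runs is what lets the central limit theorem shrink the standard deviation of the final mean, as the abstract notes.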

  8. Students' Understanding of Genetics Concepts: The Effect of Reasoning Ability and Learning Approaches

    Science.gov (United States)

    Kiliç, Didem; Saglam, Necdet

    2014-01-01

    Students tend to learn genetics by rote and may not realise the interrelationships in daily life. Because reasoning abilities are necessary to construct relationships between concepts and rote learning impedes the students' sound understanding, it was predicted that having high level of formal reasoning and adopting meaningful learning orientation…

  9. Fluid Sounds

    DEFF Research Database (Denmark)

    Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970s, developed into various established research and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field, embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation…

  10. Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration

    OpenAIRE

    Park, Saebyul; Ban, Seonghoon; Hong, Dae Ryong; Yeo, Woon Seung

    2013-01-01

    SSN (Sound Surfing Network) is a performance system that provides a new musical experience by incorporating mobile phone-based spatial sound control into collaborative music performance. SSN enables both the performer and the audience to manipulate the spatial distribution of sound using the smartphones of the audience as a distributed speaker system. Proposing a new perspective on the social aspect of music appreciation, SSN will provide a new possibility for mobile music performances in the context of in…

  11. Sound Exposure of Symphony Orchestra Musicians

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Pedersen, Ellen Raben; Juhl, Peter Møller

    2011-01-01

    Background: Assessment of sound exposure by noise dosimetry can be challenging, especially when measuring the exposure of classical orchestra musicians, where sound originates from many different instruments. A new measurement method of bilateral sound exposure of classical musicians was developed and used to characterize sound exposure of the left and right ear simultaneously in two different symphony orchestras. Objectives: To measure binaural sound exposure of professional classical musicians and to identify possible exposure risk factors of specific musicians. Methods: Sound exposure was measured… Musicians were exposed up to an LAeq8h of 92 dB, and a majority of musicians were exposed to sound levels exceeding … dBA; their left ear was exposed 4.6 dB more than the right ear. Percussionists were exposed to high sound peaks (>115 dBC), but less continuous sound exposure was observed in this group.

  12. Developmental changes in brain activation involved in the production of novel speech sounds in children.

    Science.gov (United States)

    Hashizume, Hiroshi; Taki, Yasuyuki; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Asano, Kohei; Takeuchi, Hikaru; Nouchi, Rui; Kotozaki, Yuka; Jeong, Hyeonjeong; Sugiura, Motoaki; Kawashima, Ryuta

    2014-08-01

    Older children are more successful at producing unfamiliar, non-native speech sounds than younger children during the initial stages of learning. To reveal the neuronal underpinning of the age-related increase in the accuracy of non-native speech production, we examined the developmental changes in activation involved in the production of novel speech sounds using functional magnetic resonance imaging. Healthy right-handed children (aged 6-18 years) were scanned while performing an overt repetition task and a perceptual task involving aurally presented non-native and native syllables. Productions of non-native speech sounds were recorded and evaluated by native speakers. The mouth regions in the bilateral primary sensorimotor areas were activated more significantly during the repetition task relative to the perceptual task. The hemodynamic response in the left inferior frontal gyrus pars opercularis (IFG pOp) specific to non-native speech sound production (defined by prior hypothesis) increased with age. Additionally, the accuracy of non-native speech sound production increased with age. These results provide the first evidence of developmental changes in the neural processes underlying the production of novel speech sounds. Our data further suggest that the recruitment of the left IFG pOp during the production of novel speech sounds was possibly enhanced due to the maturation of the neuronal circuits needed for speech motor planning. This, in turn, would lead to improvement in the ability to immediately imitate non-native speech. Copyright © 2014 Wiley Periodicals, Inc.

  13. Ionospheric Electron Densities at Mars: Comparison of Mars Express Ionospheric Sounding and MAVEN Local Measurement

    Czech Academy of Sciences Publication Activity Database

    Němec, F.; Morgan, D. D.; Fowler, C.M.; Kopf, A.J.; Andersson, L.; Gurnett, D. A.; Andrews, D.J.; Truhlík, Vladimír

    2017-01-01

    Vol. 122, No. 12 (2017), pp. 12393-12405. E-ISSN 2169-9402. Institutional support: RVO:68378289. Keywords: Mars * ionosphere * MARSIS * Mars Express * MAVEN * radar sounding. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics. OECD field: Astronomy (including astrophysics, space science). http://onlinelibrary.wiley.com/doi/10.1002/2017JA024629/full

  14. Analysis of radiation fields in tomography on diffusion gaseous sound

    International Nuclear Information System (INIS)

    Bekman, I.N.

    1999-01-01

    Perspectives on the application of equilibrium and stationary variants of diffusion tomography with radioactive gaseous sounds for the spatial reconstruction of heterogeneous media in materials technology were considered. Primary attention was given to the creation of simple algorithms for detecting sound accumulation against the background of a monotonically varying concentration field. Algorithms for transforming a two-dimensional radiation field into a three-dimensional distribution of radiation sources were suggested. Methods of analytical elongation of the concentration field, permitting separation of regional anomalies from local ones and vice versa, were discussed. It was shown that both equilibrium and stationary variants of diffusion tomography detect the heterogeneity of the tested material, provide reconstruction of the spatial distribution of elements of its structure, and give an estimate of the relative degree of defectiveness.

  15. Song and speech: examining the link between singing talent and speech imitation ability

    Directory of Open Access Journals (Sweden)

    Markus Christiner

    2013-11-01

    Full Text Available In previous research on speech imitation, musicality and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We therefore wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study, and their ability to sing, their ability to imitate speech, their musical talent and their working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance for completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer’s sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and sound memory, with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. 1. Motor flexibility and the ability to sing improve language and musical function. 2. Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood, both perceptually and productively. 3. The ability to sing improves the memory span of auditory short-term memory.

  16. Spontaneous brain activity predicts learning ability of foreign sounds.

    Science.gov (United States)

    Ventura-Campos, Noelia; Sanjuán, Ana; González, Julio; Palomar-García, María-Ángeles; Rodríguez-Pujadas, Aina; Sebastián-Gallés, Núria; Deco, Gustavo; Ávila, César

    2013-05-29

    Can learning capacity of the human brain be predicted from initial spontaneous functional connectivity (FC) between brain areas involved in a task? We combined task-related functional magnetic resonance imaging (fMRI) and resting-state fMRI (rs-fMRI) before and after training with a Hindi dental-retroflex nonnative contrast. Previous fMRI results were replicated, demonstrating that this learning recruited the left insula/frontal operculum and the left superior parietal lobe, among other areas of the brain. Crucially, resting-state FC (rs-FC) between these two areas at pretraining predicted individual differences in learning outcomes after distributed (Experiment 1) and intensive training (Experiment 2). Furthermore, this rs-FC was reduced at posttraining, a change that may also account for learning. Finally, resting-state network analyses showed that the mechanism underlying this reduction of rs-FC was mainly a transfer in intrinsic activity of the left frontal operculum/anterior insula from the left frontoparietal network to the salience network. Thus, rs-FC may contribute to predict learning ability and to understand how learning modifies the functioning of the brain. The discovery of this correspondence between initial spontaneous brain activity in task-related areas and posttraining performance opens new avenues to find predictors of learning capacities in the brain using task-related fMRI and rs-fMRI combined.

  17. Migration patterns of post-spawning Pacific herring in a subarctic sound

    Science.gov (United States)

    Bishop, Mary Anne; Eiler, John H.

    2018-01-01

    Understanding the distribution of Pacific herring (Clupea pallasii) can be challenging because spawning, feeding and overwintering may take place in different areas separated by 1000s of kilometers. Along the northern Gulf of Alaska, Pacific herring movements after spring spawning are largely unknown. During the fall and spring, herring have been seen moving from the Gulf of Alaska into Prince William Sound, a large embayment, suggesting that fish spawning in the Sound migrate out into the Gulf of Alaska. We acoustic-tagged 69 adult herring on spawning grounds in Prince William Sound during April 2013 to determine seasonal migratory patterns. We monitored departures from the spawning grounds as well as herring arrivals and movements between the major entrances connecting Prince William Sound and the Gulf of Alaska. Departures of herring from the spawning grounds coincided with cessation of major spawning events in the immediate area. After spawning, 43 of 69 tagged herring (62%) moved to the entrances of Prince William Sound over a span of 104 d, although most fish arrived within 10 d of their departure from the spawning grounds. A large proportion remained in these areas until mid-June, most likely foraging on the seasonal bloom of large, Neocalanus copepods. Pulses of tagged herring detected during September and October at Montague Strait suggest that some herring returned from the Gulf of Alaska. Intermittent detections at Montague Strait and the Port Bainbridge passages from September through early January (when the transmitters expired) indicate that herring schools are highly mobile and are overwintering in this area. The pattern of detections at the entrances to Prince William Sound suggest that some herring remain in the Gulf of Alaska until late winter. The results of this study confirm the connectivity between local herring stocks in Prince William Sound and the Gulf of Alaska.

  18. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

    The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using both physical and digital models at three distinct scales. The results suggest that hyperboloid geometry, while difficult to fabricate, facilitates sound scattering.

  19. Three integrated photovoltaic/sound barrier power plants. Construction and operational experience

    International Nuclear Information System (INIS)

    Nordmann, T.; Froelich, A.; Clavadetscher, L.

    2002-01-01

    After an international ideas competition by TNC Switzerland and Germany in 1996, six companies were given the opportunity to construct a prototype of their newly developed integrated PV-sound barrier concepts. The main goal was to develop highly integrated concepts allowing a reduction of PV sound barrier system costs, as well as the demonstration of specific concepts for different noise situations. This project is closely related to a German project: three of the concepts from the competition were demonstrated along a highway near Munich, constructed in 1997. The three Swiss installations had to be constructed at different locations, reflecting three typical situations for sound barriers. The first Swiss installation was the world's first bifacial PV-sound barrier. It was built on a highway bridge at Wallisellen-Aubrugg in 1997. The operational experience of the installation is positive, but due to the different efficiencies of the two cell sides, its specific yield lies somewhat behind that of a conventional PV installation. The second Swiss plant was finished in autumn 1998. The 'zig-zag' construction is situated along the railway line at Wallisellen in a densely inhabited area with some local shadowing. Its performance and specific yield are comparatively low due to a combination of several factors (geometry of the concept, inverter, high module temperature, local shadows). The third installation was constructed along the motorway A1 at Bruettisellen in 1999. Its vertical panels are equipped with amorphous modules. The report shows that the performance of the system is reasonable, but the mechanical construction has to be improved. A small trial field with cells laminated directly onto the steel panel, also installed at Bruettisellen, could be the key development for this concept. This final report includes the evaluation and comparison of the monitored data from the past 24 months of operation. (author)

  20. In Conversation: David Brooks on Water Scarcity and Local-level ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2010-11-26

    Nov 26, 2010 ... While sound water management requires action from all levels, ... Local management is certainly an essential component in managing the world's water crisis. ... case studies that show the promise of local water management.

  1. 77 FR 37318 - Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort...

    Science.gov (United States)

    2012-06-21

    ...-AA00 Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort... Coast Guard will enforce a Safety Zone for the Sound of Independence event in the Santa Rosa Sound, Fort... during the Sound of Independence. During the enforcement period, entry into, transiting or anchoring in...

  2. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduate students.

  3. Gefinex 400S (Sampo) EM-soundings at Olkiluoto 2008

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2008-09-01

    In the beginning of June 2008, the Geological Survey of Finland (GTK) carried out electromagnetic frequency soundings with Gefinex 400S equipment (Sampo) in the vicinity of ONKALO at the Olkiluoto site investigation area. The same sounding sites were first measured and marked in 2004 and have been remeasured yearly since then in the same season. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electrical conductivity of the earth at ONKALO and the repository area. The measurements form two 1400 m long broadside profiles with a mutual distance of 200 m and a station separation of 200 m. The profiles have been measured using 200, 500, and 800 m coil separations. Because of strong electromagnetic noise, not all of the planned sites (48) could be measured. In 2008 the measurements were performed at the sites that were successful in 2007 (43 soundings). The numerous power lines and cables in the area generate local disturbances on the sounding curves, but the signal/noise ratio, even with long coil separations, and the repeatability of the results are reasonably good. However, the sites without strong surficial 3D effects are the most suitable for monitoring purposes. Comparison of the results of the 2004 to 2008 surveys shows differences in some ARD (apparent resistivity-depth) curves. These are mainly the result of modified man-made structures. The effects of changes in groundwater conditions are evidently slight. (orig.)

  4. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  5. Sounds of silence: How to animate virtual worlds with sound

    Science.gov (United States)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  6. How Pleasant Sounds Promote and Annoying Sounds Impede Health: A Cognitive Approach

    Directory of Open Access Journals (Sweden)

    Tjeerd C. Andringa

    2013-04-01

    Full Text Available This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of the perceiver can be understood in terms of core affect and motivation. This conceptual basis allows the formulation of a detailed cognitive model describing how sonic content, related to indicators of safety and danger, either allows full freedom over mind-states or forces the activation of a vigilance function with associated arousal. The model leads to a number of detailed predictions that can be used to provide existing soundscape approaches with a solid cognitive science foundation that may lead to novel approaches to soundscape design. These will take into account that louder sounds typically contribute to distal situational awareness while subtle environmental sounds provide proximal situational awareness. The role of safety indicators, mediated by proximal situational awareness and subtle sounds, should become more important in future soundscape research.

  7. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...

  8. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.
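The frequency relations underlying scale construction can be illustrated with a minimal sketch. This assumes the common 12-tone equal-tempered tuning with A4 = 440 Hz, one of several systems a book like this typically reviews, and the function name is invented:

```python
def equal_tempered_freq(semitones_from_ref, f_ref=440.0):
    """Frequency of a pitch n semitones from a reference (A4 = 440 Hz
    assumed): each semitone multiplies frequency by 2**(1/12), so an
    octave (12 semitones) exactly doubles it."""
    return f_ref * 2.0 ** (semitones_from_ref / 12.0)

print(equal_tempered_freq(12))            # 880.0  (A5, one octave above A4)
print(round(equal_tempered_freq(-9), 2))  # 261.63 (middle C)
```

The same relation explains why harmonics of one instrument line up with scale degrees only approximately: equal-tempered intervals are irrational frequency ratios, unlike the integer ratios of the harmonic series.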

  9. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is experimentally studied using the time-difference method. It is found that the sound velocity increases with increasing bubble diameter and asymptotically approaches the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to scattering by the bubble walls is equivalently described as the effect of an additional length. This simplification reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that an increase in frequency markedly slows down the sound velocity, whereas the latter does not display a strong dependence on the solution concentration.

  10. Investigation of fourth sound propagation in HeII in the presence of superflow

    International Nuclear Information System (INIS)

    Andrei, Y.E.

    1980-01-01

    The temperature dependence of a superflow-induced downshift of the fourth sound velocity in HeII confined in various restrictive media was measured. We found that the magnitude of the downshift strongly depends on the restrictive medium, whereas the temperature dependence is universal. The results are interpreted in terms of local superflow velocities approaching the Landau critical velocity. This model provides an understanding of the nature of the downshift and correctly predicts its temperature dependence. The results show that the Landau excitation model, even when used at high velocities, where interactions between elementary excitations are substantial, yields good agreement with experiment when a first-order correction is introduced to account for these interactions. In a separate series of experiments, fourth sound-like propagation in HeII in a grafoil-filled resonator was observed. The sound velocity was found to be more than an order of magnitude smaller than that of ordinary fourth sound. This significant reduction is explained in terms of a model in which the pore structure in grafoil is pictured as an ensemble of coupled Helmholtz resonators.

  11. The sound of arousal in music is context-dependent.

    Science.gov (United States)

    Blumstein, Daniel T; Bryant, Gregory A; Kaye, Peter

    2012-10-23

    Humans, and many non-human animals, produce and respond to harsh, unpredictable, nonlinear sounds when alarmed, possibly because these are produced when acoustic production systems (vocal cords and syrinxes) are overblown in stressful, dangerous situations. Humans can simulate nonlinearities in music and soundtracks through the use of technological manipulations. Recent work found that film soundtracks from different genres differentially contain such sounds. We designed two experiments to determine specifically how simulated nonlinearities in soundtracks influence perceptions of arousal and valence. Subjects were presented with emotionally neutral musical exemplars that had neither noise nor abrupt frequency transitions, or versions of these musical exemplars that had noise or abrupt frequency upshifts or downshifts experimentally added. In a second experiment, these acoustic exemplars were paired with benign videos. Judgements of both arousal and valence were altered by the addition of these simulated nonlinearities in the first, music-only, experiment. In the second, multi-modal, experiment, valence (but not arousal) decreased with the addition of noise or frequency downshifts. Thus, the presence of a video image suppressed the ability of simulated nonlinearities to modify arousal. This is the first study examining how nonlinear simulations in music affect emotional judgements. These results demonstrate that the perception of potentially fearful or arousing sounds is influenced by the perceptual context and that the addition of a visual modality can antagonistically suppress the response to an acoustic stimulus.

  12. OMNIDIRECTIONAL SOUND SOURCE

    DEFF Research Database (Denmark)

    1996-01-01

    A sound source comprising a loudspeaker (6) and a hollow coupler (4) with an open inlet which communicates with and is closed by the loudspeaker (6) and an open outlet, said coupler (4) comprising rigid walls which cannot respond to the sound pressures produced by the loudspeaker (6). According...

  13. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods used to measure the sound velocity are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  14. Product sounds : Fundamentals and application

    NARCIS (Netherlands)

    Ozcan-Vieira, E.

    2008-01-01

    Products are ubiquitous, and so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about

  15. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    Science.gov (United States)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated by an infinite vibrating plate attenuates exponentially with increasing distance from the surface of the plate. In practice, however, a discontinuity of the sound field exists at the edge of a finite vibrating plate, which broadens the wavenumber spectrum, and a sound wave radiates beyond the evanescent sound field as a result. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on window functions, which are used in signal analysis to reduce the broadening of the frequency spectrum. An optimization calculation is necessary to design a window function that suppresses sound radiation while securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. We therefore investigated a suitable method for calculating the sound pressure level in the far field, to confirm how the distribution of sound pressure level varies with window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinite far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by calculating the distribution of the sound pressure level at an infinite far field from the wavenumber spectrum on the vibrating plate. Comparing the distributions of the sound pressure level with and without the window function confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was
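
The windowing idea described above can be illustrated numerically: tapering the particle-velocity distribution across a finite plate narrows its wavenumber spectrum, so less energy leaks into radiating components. A minimal sketch (all parameters hypothetical, and using a plain Hann window rather than the paper's optimized window):

```python
import numpy as np

# Hypothetical 1-D model: particle-velocity distribution across a finite
# plate, sampled at N points. Uniform drive has abrupt edges; a Hann
# window tapers them toward zero at the plate edges.
N = 512
v_uniform = np.ones(N)       # abrupt edges -> broad wavenumber spectrum
v_windowed = np.hanning(N)   # tapered edges -> less broadening

def wavenumber_spectrum_db(v, pad=8):
    """Normalized magnitude of the spatial (wavenumber) spectrum, in dB."""
    V = np.abs(np.fft.rfft(v, n=pad * len(v)))
    return 20 * np.log10(V / V.max() + 1e-12)

s_uniform = wavenumber_spectrum_db(v_uniform)
s_windowed = wavenumber_spectrum_db(v_windowed)

# Far from the main lobe the windowed plate puts much less energy into
# high-wavenumber components (peak sidelobes: ~ -13 dB rectangular,
# ~ -31 dB Hann), i.e. less sound escapes the evanescent near field.
print(s_uniform[50:400].max() > s_windowed[50:400].max())  # True
```

The trade-off the abstract mentions is visible here: the Hann taper also narrows the region of the plate driven at full amplitude, which is why the paper optimizes the window rather than using a standard one.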

  16. Neural Correlates of Indicators of Sound Change in Cantonese: Evidence from Cortical and Subcortical Processes.

    Science.gov (United States)

    Maggu, Akshay R; Liu, Fang; Antoniou, Mark; Wong, Patrick C M

    2016-01-01

    Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language change over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, we examined whether such misperceptions are represented in neural processes of the auditory system. We investigated behavioral, subcortical (via the frequency-following response, FFR), and cortical (via P300) manifestations of sound change processing in Cantonese, a Chinese language in which several lexical tones are merging. Across the merging categories, we observed a similar gradation of speech perception abilities in both behavior and the brain (subcortical and cortical processes). Further, we found that behavioral evidence of tone merging correlated with subjects' encoding at the subcortical and cortical levels. These findings indicate that tone-merger categories, which are indicators of sound change in Cantonese, are represented neurophysiologically with high fidelity. We speculate that innovators encode speech in a slightly deviant neurophysiological manner and thus produce speech divergently; this divergence eventually spreads across the community and contributes to sound change.

  17. Parent-administered computer-assisted tutoring targeting letter-sound knowledge: Evaluation via multiple-baseline across three preschool students.

    Science.gov (United States)

    DuBois, Matthew R; Volpe, Robert J; Burns, Matthew K; Hoffman, Jessica A

    2016-12-01

    Knowledge of letter sounds has been identified as a primary objective of preschool instruction and intervention. Despite this designation, large disparities exist in the number of letter sounds children know at school entry. Enhancing caregivers' ability to teach their preschool-aged children letter sounds may represent an effective practice for reducing this variability and ensuring that more children are prepared to experience early school success. This study used a non-concurrent multiple-baseline-across-participants design to evaluate the effectiveness of caregivers (N=3) delivering a computer-assisted tutoring program (Tutoring Buddy) targeting letter sound knowledge to their preschool-aged children. Visual analyses and effect size estimates derived from Percentage of All Non-Overlapping Data (PAND) statistics indicated consistent results for letter sound acquisition, as 6 weeks of intervention yielded large effects for letter sound knowledge (LSK) across all three children. Large effect sizes were also found for letter sound fluency (LSF) and nonsense word fluency (NWF) for two children. All three caregivers rated the intervention as highly usable and were able to administer it with high levels of fidelity. Taken together, the results of the present study found Tutoring Buddy to be an effective, simple, and usable way for caregivers to support their children's literacy development. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  18. 33 CFR 334.410 - Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Albemarle Sound, Pamlico Sound... AND RESTRICTED AREA REGULATIONS § 334.410 Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations. (a) Target areas—(1) North Landing River (Currituck Sound...

  19. Gefinex 400S (SAMPO) EM-soundings at Olkiluoto 2009

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.; Korhonen, K.

    2009-09-01

    In the beginning of June 2009 the Geological Survey of Finland (GTK) carried out electromagnetic (EM) frequency soundings with Gefinex 400S equipment (Sampo) in the vicinity of ONKALO at the Olkiluoto site investigation area. The EM-monitoring sounding program started in 2004 and has since been repeated yearly in the same season. The aim of the study is to monitor variations in groundwater properties down to 500 m depth through changes in the electrical conductivity of the earth at ONKALO and the repository area. The original measurement grid was based on two 1400 m long broadside profiles, 200 m apart with 200 m station separation. The receiver and transmitter sites are marked with stakes, and the profiles were measured using 200, 500, and 800 m coil separations. The measurement program was revised in 2007 and again in 2009. This time 15 noisy soundings were removed from the program and 3 new points were selected in the area east of ONKALO. The new receiver/transmitter sites, called ABC points, were marked with stakes and measured using transmitter-receiver separations of 200, 400 and 800 metres. In 2009 the new EM-Sampo monitoring program included 28+9 soundings. The numerous power lines and cables in the area generate local disturbances on the sounding curves, but the SN (signal-to-noise) ratio and the repeatability of the results are reasonably good even with long coil separations. However, the sites without strong shallow 3D effects are the most suitable for monitoring purposes. Comparison of the new results with the old 2004-2008 surveys shows differences on some ARD (apparent resistivity-depth) curves. These mainly result from modified shallow structures. The changes in groundwater conditions based on the monitoring results appear insignificant. (orig.)

  20. Alternative Paths to Hearing (A Conjecture): Photonic and Tactile Hearing Systems Displaying the Frequency Spectrum of Sound

    Directory of Open Access Journals (Sweden)

    E. H. Hara

    2006-01-01

    In this article, the hearing process is considered from a systems engineering perspective. For those with total hearing loss, a cochlear implant is the only direct remedy. It first acts as a spectrum analyser and then electronically stimulates the neurons in the cochlea with a number of electrodes. Each electrode carries information on a separate frequency band (i.e., part of the spectrum) of the original sound signal. The neurons then relay the signals in a parallel manner to the section of the brain where sound signals are processed. Photonic and tactile hearing systems displaying the spectrum of sound are proposed as alternative paths to the section of the brain that processes sound. In view of the plasticity of the brain, which can rewire itself, the following conjectures are offered. After a certain period of training, a person without the ability to hear should be able to decipher the patterns of photonic or tactile displays of the sound spectrum and learn to 'hear'. This is very similar to the case of a blind person learning to 'read' by recognizing the patterns created by the series of bumps as their fingers scan Braille writing. The conjectures are yet to be tested. Designs of photonic and tactile systems displaying the sound spectrum are outlined.

  1. Chronic scream sound exposure alters memory and monoamine levels in female rat brain.

    Science.gov (United States)

    Hu, Lili; Zhao, Xiaoge; Yang, Juan; Wang, Lumin; Yang, Yang; Song, Tusheng; Huang, Chen

    2014-10-01

    Chronic scream sound alters the cognitive performance of male rats and their brain monoamine levels; these stress-induced alterations are sexually dimorphic. To determine the effects of sound stress on female rats, we examined their serum corticosterone levels; their adrenal, splenic, and thymic weights; their cognitive performance; and the levels of monoamine neurotransmitters and their metabolites in the brain. Adult female Sprague-Dawley rats, with and without exposure to scream sound (4 h/day for 21 days), were tested for spatial learning and memory using a Morris water maze. Stress decreased serum corticosterone levels, as well as splenic and adrenal weight. It also impaired spatial memory but did not affect learning ability. Monoamines and metabolites were measured in the prefrontal cortex (PFC), striatum, hypothalamus, and hippocampus. The dopamine (DA) level in the PFC decreased but the homovanillic acid/DA ratio increased. Decreased DA and increased 5-hydroxyindoleacetic acid (5-HIAA) levels were observed in the striatum. Only the 5-HIAA level increased in the hypothalamus. In the hippocampus, stress did not affect the levels of monoamines and metabolites. The results suggest that scream sound stress influences most physiologic parameters, memory, and the levels of monoamine neurotransmitters and their metabolites in female rats. Copyright © 2014. Published by Elsevier Inc.

  2. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
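
The excess sound level proposed above is simply the difference between the computed level and the free-field level at the same receiver; since numerical dissipation in the LBM affects both terms similarly, it largely cancels in the difference. A minimal sketch with hypothetical pressure values (not taken from the article):

```python
import numpy as np

def spl(p_rms, p_ref=20e-6):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * np.log10(p_rms / p_ref)

# Hypothetical values: rms pressure at a receiver from a simulation
# (e.g. including a ground reflection) and the free-field rms pressure
# at the same distance (direct path only).
p_simulated = 0.02   # Pa
p_free_field = 0.01  # Pa

# Excess sound level: level minus free-field level. Dissipation biases
# both levels in the same direction, so it largely drops out here.
excess = spl(p_simulated) - spl(p_free_field)
print(round(excess, 2))  # 6.02 dB for a pressure doubling
```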

  3. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types, such as filled pauses, will require future research.
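
As a much-simplified illustration of frame-level LSS/NLSS classification: the sketch below uses a single hypothetical 1-D acoustic feature and a per-class Gaussian likelihood, with no HMM temporal model (the paper's actual features and HMM are richer than this):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D feature (e.g. log frame energy): LSS frames here are
# assumed higher-energy and more variable than breaths/clicks (NLSS).
lss_train = rng.normal(loc=-2.0, scale=1.0, size=500)
nlss_train = rng.normal(loc=-6.0, scale=0.5, size=500)

def gaussian_loglik(x, mu, sigma):
    """Log-likelihood of x under a 1-D Gaussian N(mu, sigma^2)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# Fit one Gaussian per class from the training frames.
mu_l, sd_l = lss_train.mean(), lss_train.std()
mu_n, sd_n = nlss_train.mean(), nlss_train.std()

def classify(frame):
    """Label a frame 'LSS' or 'NLSS' by maximum likelihood."""
    ll_l = gaussian_loglik(frame, mu_l, sd_l)
    ll_n = gaussian_loglik(frame, mu_n, sd_n)
    return "LSS" if ll_l >= ll_n else "NLSS"

print(classify(-1.5), classify(-6.5))  # LSS NLSS
```

An HMM adds a transition model on top of such per-state likelihoods, which smooths decisions over time instead of classifying each frame independently.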

  4. Digital stethoscopes compared to standard auscultation for detecting abnormal paediatric breath sounds.

    Science.gov (United States)

    Kevat, Ajay C; Kalirajah, Anaath; Roseby, Robert

    2017-07-01

    Our study aimed to objectively describe the audiological characteristics of wheeze and crackles in children by using digital stethoscope (DS) auscultation, as well as assess concordance between standard auscultation and two different DS devices in their ability to detect pathological breath sounds. Twenty children were auscultated by a paediatric consultant doctor and digitally recorded using the Littman™ 3200 Digital Electronic Stethoscope and a Clinicloud™ DS with smart device. Using spectrographic analysis, we found those with clinically described wheeze had prominent periodic waveform segments spanning expiration for a period of 0.03-1.2 s at frequencies of 100-1050 Hz, and occasionally spanning shorter inspiratory segments; paediatric crackles were brief discontinuous sounds with a distinguishing waveform. There was moderate concordance with respect to wheeze detection between digital and standard binaural stethoscopes, and 100% concordance for crackle detection. Importantly, DS devices were more sensitive than clinician auscultation in detecting wheeze in our study. Objective definition of audio characteristics of abnormal paediatric breath sounds was achieved using DS technology. We demonstrated superiority of our DS method compared to traditional auscultation for detection of wheeze. What is Known: • The audiological characteristics of abnormal breath sounds have been well-described in adult populations but not in children. • Inter-observer agreement for detection of pathological breath sounds using standard auscultation has been shown to be poor, but the clinical value of now easily available digital stethoscopes has not been sufficiently examined. What is New: • Digital stethoscopes can objectively define the nature of pathological breath sounds such as wheeze and crackles in children. • Paediatric wheeze was better detected by digital stethoscopes than by standard auscultation performed by an expert paediatric clinician.

  5. Neural Correlates of Early Sound Encoding and their Relationship to Speech-in-Noise Perception

    Directory of Open Access Journals (Sweden)

    Emily B. J. Coffey

    2017-08-01

    Speech-in-noise (SIN) perception is a complex cognitive skill that affects social, vocational, and educational activities. Poor SIN ability particularly affects young and elderly populations, yet varies considerably even among healthy young adults with normal hearing. Although SIN skills are known to be influenced by top-down processes that can selectively enhance lower-level sound representations, the complementary role of feed-forward mechanisms and their relationship to musical training is poorly understood. Using a paradigm that minimizes the main top-down factors that have been implicated in SIN performance, such as working memory, we aimed to better understand how robust encoding of periodicity in the auditory system (as measured by the frequency-following response, FFR) contributes to SIN perception. Using magnetoencephalography, we found that the strength of encoding at the fundamental frequency in the brainstem, thalamus, and cortex is correlated with SIN accuracy. The amplitude of the slower cortical P2 wave was previously also shown to be related to SIN accuracy and FFR strength; we use MEG source localization to show that the P2 wave originates in a temporal region anterior to that of the cortical FFR. We also confirm that the observed enhancements were related to the extent and timing of musicianship. These results are consistent with the hypothesis that basic feed-forward sound encoding affects SIN perception by providing better information to later processing stages, and that modifying this process may be one mechanism through which musical training might enhance the auditory networks that subserve both musical and language functions.

  6. Examining Word Factors and Child Factors for Acquisition of Conditional Sound-Spelling Consistencies: A Longitudinal Study

    Science.gov (United States)

    Kim, Young-Suk Grace; Petscher, Yaacov; Park, Younghee

    2016-01-01

    It has been suggested that children acquire spelling by picking up conditional sound-spelling consistencies. To examine this hypothesis, we investigated how variation in word characteristics (words that vary systematically in terms of phoneme-grapheme correspondences) and child factors (individual differences in the ability to extract…

  7. Sounding out the logo shot

    OpenAIRE

    Nicolai Jørgensgaard Graakjær

    2013-01-01

    This article focuses on how sound in combination with visuals (i.e. ‘branding by’) may possibly affect the signifying potentials (i.e. ‘branding effect’) of products and corporate brands (i.e. ‘branding of’) during logo shots in television commercials (i.e. ‘branding through’). This particular focus adds both to the understanding of sound in television commercials and to the understanding of sound brands. The article firstly presents a typology of sounds. Secondly, this typology is applied...

  8. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  9. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  10. SoleSound

    DEFF Research Database (Denmark)

    Zanotto, Damiano; Turchet, Luca; Boggs, Emily Marie

    2014-01-01

    This paper introduces the design of SoleSound, a wearable system designed to deliver ecological, audio-tactile, underfoot feedback. The device, which primarily targets clinical applications, uses an audio-tactile footstep synthesis engine informed by the readings of pressure and inertial sensors embedded in the footwear to integrate enhanced feedback modalities into the authors' previously developed instrumented footwear. The synthesis models currently implemented in the SoleSound simulate different ground surface interactions. Unlike similar devices, the system presented here is fully portable...

  11. Sound engineering for diesel engines; Sound Engineering an Dieselmotoren

    Energy Technology Data Exchange (ETDEWEB)

    Enderich, A.; Fischer, R. [MAHLE Filtersysteme GmbH, Stuttgart (Germany)

    2006-07-01

    The strong acceptance of vehicles powered by turbo-charged diesel engines encourages several manufacturers to think about sportive diesel concepts. The approach of suppressing unpleasant noise through extensive insulation is not adequate for sportive needs: the acoustics cannot keep up with the engine's performance. This report documents that it is possible to give diesel-powered vehicles a sportive sound characteristic by using an advanced MAHLE motor-sound-system with a pressure-resistant membrane and an integrated load-controlled flap. With this, the specific acoustic disadvantages of the diesel engine, like 'diesel knock' or rough engine running, can be masked. However, a motor-sound-system must not negate the original character of the diesel engine concept, but should accentuate its strong torque characteristic in the middle engine speed range. (orig.)

  12. Spatial localization of sound stimuli in congenitally blind individuals: a comparative study of the three-dimensional position of the head in congenitally blind and sighted adults

    Directory of Open Access Journals (Sweden)

    Juliana Gonçalves da Silva Gerente

    2008-04-01

    The ability to locate stationary or moving objects in three-dimensional space depends on visual function. It is thought that for blind individuals, the remaining sensory modalities, in particular hearing, will compensate for the absence of vision in spatial localization. This study aimed to analyze the role of hearing in the spatial localization mechanism by examining the ability to accurately direct the head toward the source of a sound. Five congenitally blind adults were compared to five sighted people who wore blindfolds. The task consisted of turning the head toward the sound stimulus, coming from seven different fixed point sources. The three-dimensional position of the head and trunk was registered by an electromagnetic tracking system (Flock of Birds System). For each sound produced, a "localization error" was calculated, corresponding to the difference between the position recorded during the test and during a control position. The results revealed that the magnitude of the localization error for auditory stimuli was greater in the congenitally blind individuals than in the sighted individuals. It is concluded that the mental representation formed on the basis of vision is one of the prerequisites for good performance in spatial tasks.

  13. Sonic mediations: body, sound, technology

    NARCIS (Netherlands)

    Birdsall, C.; Enns, A.

    2008-01-01

    Sonic Mediations: Body, Sound, Technology is a collection of original essays that represents an invaluable contribution to the burgeoning field of sound studies. While sound is often posited as having a bridging function, as a passive in-between, this volume invites readers to rethink the concept of

  14. System for actively reducing sound

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2005-01-01

    A system for actively reducing sound from a primary noise source, such as traffic noise, comprising: a loudspeaker connector for connecting to at least one loudspeaker for generating anti-sound for reducing said noisy sound; a microphone connector for connecting to at least a first microphone placed

  15. Very low sound velocities in iron-rich (Mg,Fe)O: Implications for the core-mantle boundary region

    International Nuclear Information System (INIS)

    Wicks, J.K.; Jackson, J.M.; Sturhahn, W.

    2010-01-01

    The sound velocities of (Mg0.16Fe0.84)O have been measured to 121 GPa at ambient temperature using nuclear resonant inelastic x-ray scattering. The effect of the electronic environment of the iron sites on the sound velocities was tracked in situ using synchrotron Moessbauer spectroscopy. We found the sound velocities of (Mg0.16Fe0.84)O to be much lower than those of other presumed mantle phases at similar conditions, most notably at very high pressures. Conservative estimates of the effect of temperature and dilution on aggregate sound velocities show that only a small amount of iron-rich (Mg,Fe)O can greatly reduce the average sound velocity of an assemblage. We propose that iron-rich (Mg,Fe)O may be a source of ultra-low velocity zones. Other properties of this phase, such as enhanced density and dynamic stability, strongly support the presence of iron-rich (Mg,Fe)O in localized patches above the core-mantle boundary.
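
The claim that a small fraction of a slow phase strongly lowers the aggregate velocity can be illustrated with a simple Wyllie-style time-average (volume-fraction-weighted harmonic mean). This is our own illustrative averaging scheme, not the paper's mixing model, and the phase velocities below are hypothetical placeholders, not measured values:

```python
# Wyllie-style time average: the aggregate slowness (1/v) is the
# volume-fraction-weighted sum of the phase slownesses.
def time_average_velocity(fractions, velocities):
    return 1.0 / sum(f / v for f, v in zip(fractions, velocities))

v_host = 7.0      # km/s, hypothetical host mantle assemblage
v_slow_feO = 3.5  # km/s, hypothetical slow iron-rich (Mg,Fe)O

for frac in (0.0, 0.1, 0.2):
    v = time_average_velocity([1.0 - frac, frac], [v_host, v_slow_feO])
    print(f"{frac:.0%} slow phase -> {v:.2f} km/s")
# 10% of the slow phase already drops the aggregate from 7.00 to ~6.36 km/s
```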

  16. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

    consumer videos in conjunction with user studies. We model the soundtrack of each video, regardless of its original duration, as a fixed-sized clip-level summary feature. For each concept, an SVM-based classifier is trained according to three distance measures (Kullback-Leibler, Bhattacharyya, and Mahalanobis distance). Detecting the time of occurrence of a local object (for instance, a cheering sound) embedded in a longer soundtrack is useful and important for applications such as search and retrieval in consumer video archives. We finally present a Markov-model based clustering algorithm able to identify and segment consistent sets of temporal frames into regions associated with different ground-truth labels, and at the same time to exclude a set of uninformative frames shared in common by all clips. The labels are provided at the clip level, so this refinement of the time axis represents a variant of Multiple-Instance Learning (MIL). Quantitative evaluation shows that the performance of our proposed approaches, tested on 60 h of personal audio archives and 1,900 YouTube video clips, is significantly better than that of existing algorithms for detecting these useful concepts in real-world personal audio recordings.
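
The three distance measures named above all have closed forms when each clip's soundtrack is summarized as a Gaussian (mean vector and covariance of its frame features). The Gaussian-summary assumption is ours for illustration; the paper's exact summary feature may differ. A sketch of the standard formulas:

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """KL divergence KL(N0 || N1) between two multivariate Gaussians."""
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def bhattacharyya(mu0, S0, mu1, S1):
    """Bhattacharyya distance between two multivariate Gaussians."""
    S = 0.5 * (S0 + S1)
    diff = mu1 - mu0
    term1 = 0.125 * diff @ np.linalg.inv(S) @ diff
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(S0) * np.linalg.det(S1)))
    return term1 + term2

def mahalanobis(x, mu, S):
    """Mahalanobis distance of a point x from N(mu, S)."""
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(S) @ diff))

# Sanity check: identical Gaussians are at distance zero.
mu, S = np.zeros(2), np.eye(2)
print(kl_gauss(mu, S, mu, S))  # 0.0
```

Any of these can then be turned into an SVM kernel, e.g. `exp(-gamma * d)` over the pairwise distance matrix.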

  17. Might as well jump: sound affects muscle activation in skateboarding.

    Directory of Open Access Journals (Sweden)

    Paola Cesari

    The aim of the study is to reveal the role of sound in action anticipation and performance, and to test whether the level of precision in action planning and execution is related to the level of sensorimotor skills and experience that listeners possess about a specific action. Individuals ranging from 18 to 75 years of age, some of them without any skills in skateboarding and others experts in this sport, were compared in their ability to anticipate and simulate a skateboarding jump by listening to the sound it produces. Only skaters were able to modulate the forces underfoot and to apply muscle synergies that closely resembled the ones that a skater would use if actually jumping on a skateboard. More importantly, we showed that only skaters were able to plan the action by activating anticipatory postural adjustments about 200 ms after the jump event. We conclude that expert patterns are guided by auditory events that trigger proper anticipations of the corresponding patterns of movements.

  18. Might as well jump: sound affects muscle activation in skateboarding.

    Science.gov (United States)

    Cesari, Paola; Camponogara, Ivan; Papetti, Stefano; Rocchesso, Davide; Fontana, Federico

    2014-01-01

    The aim of the study is to reveal the role of sound in action anticipation and performance, and to test whether the level of precision in action planning and execution is related to the level of sensorimotor skills and experience that listeners possess about a specific action. Individuals ranging from 18 to 75 years of age, some of them without any skills in skateboarding and others experts in this sport, were compared in their ability to anticipate and simulate a skateboarding jump by listening to the sound it produces. Only skaters were able to modulate the forces underfoot and to apply muscle synergies that closely resembled the ones that a skater would use if actually jumping on a skateboard. More importantly, we showed that only skaters were able to plan the action by activating anticipatory postural adjustments about 200 ms after the jump event. We conclude that expert patterns are guided by auditory events that trigger proper anticipations of the corresponding patterns of movements.

  19. Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.

    Science.gov (United States)

    Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang

    2007-01-01

    Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.

  20. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    Sounds in the natural environment form an important class of biologically relevant non-stationary signals. We propose a dynamic spectral measure to characterize the spectral dynamics of such non-stationary sound signals and classify them based on the rate of change of spectral dynamics. We categorize sounds with slowly ...

  1. Say what? Coral reef sounds as indicators of community assemblages and reef conditions

    Science.gov (United States)

    Mooney, T. A.; Kaplan, M. B.

    2016-02-01

    Coral reefs host some of the highest diversity of life on the planet. Unfortunately, reef health and biodiversity are declining or threatened as a result of climate change and human influences. Tracking these changes is necessary for effective resource management, yet estimating marine biodiversity and tracking trends in ecosystem health is a challenging and expensive task, especially on many pristine reefs, which are remote and difficult to access. Many fishes, mammals and invertebrates make sound. These sounds reflect a number of vital biological processes and are a cue for settling reef larvae. Biological sounds may be a means to quantify ecosystem health and biodiversity; however, the relationship between coral reef soundscapes and the actual taxa present remains largely unknown. This study presents a comparative evaluation of the soundscapes of multiple reefs, naturally differing in benthic cover and fish diversity, in the U.S. Virgin Islands National Park. Using multiple recorders per reef, we characterized spatio-temporal variation in biological sound production within and among reefs. Analyses of sounds recorded over 4 summer months indicated diel trends in both fish and snapping-shrimp acoustic frequency bands, with crepuscular peaks at all reefs. There were small but statistically significant acoustic differences among sites on a given reef, raising the possibility of localized acoustic habitats. The strength of diel trends in the lower, fish-frequency bands was correlated with coral cover and fish density, yet no such relationship was found with shrimp sounds, suggesting that fish sounds may be more relevant for tracking certain coral reef conditions. These findings indicate that, in spite of considerable variability within reef soundscapes, diel trends in low-frequency sound production reflect reef community assemblages. Further, monitoring soundscapes may be an efficient means of establishing and monitoring reef conditions.

  2. Controlling sound with acoustic metamaterials

    DEFF Research Database (Denmark)

    Cummer, Steven A.; Christensen, Johan; Alù, Andrea

    2016-01-01

    Acoustic metamaterials can manipulate and control sound waves in ways that are not possible in conventional materials. Metamaterials with zero, or even negative, refractive index for sound offer new possibilities for acoustic imaging and for the control of sound at subwavelength scales. ... The combination of transformation acoustics theory and highly anisotropic acoustic metamaterials enables precise control over the deformation of sound fields, which can be used, for example, to hide or cloak objects from incident acoustic energy. Active acoustic metamaterials use external control to create ... -scale metamaterial structures and converting laboratory experiments into useful devices. In this Review, we outline the designs and properties of materials with unusual acoustic parameters (for example, negative refractive index), discuss examples of extreme manipulation of sound and, finally, provide an overview ...

  3. Laminar differences in response to simple and spectro-temporally complex sounds in the primary auditory cortex of ketamine-anesthetized gerbils.

    Directory of Open Access Journals (Sweden)

    Markus K Schaefer

    Full Text Available In mammals, acoustic communication plays an important role during social behaviors. Despite their ethological relevance, the mechanisms by which the auditory cortex represents different communication call properties remain elusive. Recent studies have pointed out that communication-sound encoding could be based on discharge patterns of neuronal populations. Following this idea, we investigated whether the activity of local neuronal networks, such as those occurring within individual cortical columns, is sufficient for distinguishing between sounds that differ in their spectro-temporal properties. To accomplish this aim, we analyzed multi-unit activity (MUA) elicited by simple pure tones and complex communication calls, as well as local field potential (LFP) and current source density (CSD) waveforms, at the single-layer and columnar level from the primary auditory cortex of anesthetized Mongolian gerbils. Multi-dimensional scaling analysis was used to evaluate the degree of "call-specificity" in the evoked activity. The results showed that whole laminar profiles segregated 1.8-2.6 times better across calls than single-layer activity. Also, laminar LFP and CSD profiles segregated better than MUA profiles. Significant differences between CSD profiles evoked by different sounds were more pronounced at mid and late latencies in the granular and infragranular layers, and these differences were based on the absence and/or presence of current sinks and on sink timing. The stimulus-specific activity patterns observed within cortical columns suggest that the joint activity of local cortical populations (as local as single columns) could indeed be important for encoding sounds that differ in their acoustic attributes.

  4. A telescopic cinema sound camera for observing high altitude aerospace vehicles

    Science.gov (United States)

    Slater, Dan

    2014-09-01

    Rockets and other high altitude aerospace vehicles produce interesting visual and aural phenomena that can be remotely observed from long distances. This paper describes a compact, passive and covert remote sensing system that can produce high resolution sound movies at >100 km viewing distances. The telescopic high resolution camera is capable of resolving and quantifying space launch vehicle dynamics including plume formation, staging events and payload fairing jettison. Flight vehicles produce sounds and vibrations that modulate the local electromagnetic environment. These audio frequency modulations can be remotely sensed by passive optical and radio wave detectors. Acousto-optic sensing methods were primarily used, but an experimental radioacoustic sensor using passive micro-Doppler radar techniques was also tested. The synchronized combination of high resolution flight vehicle imagery with the associated vehicle sounds produces a cinema-like experience that is useful in both an aerospace engineering and a Hollywood film production context. Examples of visual, aural and radar observations of the first SpaceX Falcon 9 v1.1 rocket launch are shown and discussed.

  5. Sound intensity as a function of sound insulation partition

    OpenAIRE

    Cvetkovic, S.; Prascevic, R.

    1994-01-01

    In modern engineering practice, the sound insulation of partitions is a synthesis of theory and of experience acquired in field and laboratory measurement. The scientific and research community treats sound insulation in the context of the emission and propagation of acoustic energy in media with different acoustic impedances. In this paper, starting from the essence of the physical concept of intensity as an energy vector, the authors g...

  6. Consonant Differentiation Mediates the Discrepancy between Non-verbal and Verbal Abilities in Children with ASD

    Science.gov (United States)

    Key, A. P.; Yoder, P. J.; Stone, W. L.

    2016-01-01

    Background: Many children with autism spectrum disorder (ASD) demonstrate verbal communication disorders reflected in lower verbal than non-verbal abilities. The present study examined the extent to which this discrepancy is associated with atypical speech sound differentiation. Methods: Differences in the amplitude of auditory event-related…

  7. 27 CFR 9.151 - Puget Sound.

    Science.gov (United States)

    2010-04-01

    27 CFR 9.151 (Alcohol, Tobacco Products and Firearms, 2010-04-01): Puget Sound. (a) Name. The name of the viticultural area described in this section is "Puget Sound." (b) Approved maps. The appropriate maps for determining the boundary of the Puget Sound viticultural area are...

  8. How Pleasant Sounds Promote and Annoying Sounds Impede Health : A Cognitive Approach

    NARCIS (Netherlands)

    Andringa, Tjeerd C.; Lanser, J. Jolie L.

    2013-01-01

    This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of ...

  9. Of Sound Mind: Mental Distress and Sound in Twentieth-Century Media Culture

    NARCIS (Netherlands)

    Birdsall, C.; Siewert, S.

    2013-01-01

    This article seeks to specify the representation of mental disturbance in sound media during the twentieth century. It engages perspectives on societal and technological change across the twentieth century as crucial for aesthetic strategies developed in radio and sound film production. The analysis ...

  10. Sounds scary? Lack of habituation following the presentation of novel sounds.

    Directory of Open Access Journals (Sweden)

    Tine A Biedenweg

    Full Text Available BACKGROUND: Animals typically show less habituation to biologically meaningful sounds than to novel signals. We might therefore expect that acoustic deterrents should be based on natural sounds. METHODOLOGY: We investigated responses by western grey kangaroos (Macropus fuliginosus) towards playback of natural sounds (alarm foot stomps and Australian raven, Corvus coronoides, calls) and artificial sounds (faux snake hiss and bull-whip crack). We then increased the rate of presentation to examine whether animals would habituate. Finally, we varied the frequency of playback to investigate optimal rates of delivery. PRINCIPAL FINDINGS: Nine behaviors clustered into five Principal Components. PC factors 1 and 2 (animals alert or looking, or hopping and moving out of area) accounted for 36% of variance. PC factor 3 (eating cessation, taking flight, movement out of area) accounted for 13% of variance. Factors 4 and 5 (relaxing, grooming and walking; 12% and 11% of variation, respectively) discontinued upon playback. The whip crack was most evocative; eating was reduced from 75% of time spent prior to playback to 6% following playback (alarm stomp: 32%; raven call: 49%; hiss: 75%). Additionally, 24% of individuals took flight and moved out of the area (50 m radius) in response to the whip crack (foot stomp: 0%; raven call: 8% and 4%; hiss: 6%). Increasing the rate of presentation (12×/min for 2 min) caused 71% of animals to move out of the area. CONCLUSIONS/SIGNIFICANCE: The bull-whip crack, an artificial sound, was as effective as the alarm stomp at eliciting aversive behaviors. Kangaroos did not fully habituate despite hearing the signal up to 20×/min. The highest rates of playback did not elicit the greatest responses, suggesting that 'more is not always better'. Ultimately, by utilizing both artificial and biological sounds, predictability may be masked or offset, so that habituation is delayed and more effective deterrents may be produced.

  11. Movement and Perceptual Strategies to Intercept Virtual Sound Sources.

    Directory of Open Access Journals (Sweden)

    Naeem Komeilipoor

    2015-05-01

    Full Text Available To intercept a moving object, one needs to be in the right place at the right time. In order to do this, it is necessary to pick up and use perceptual information that specifies the time to arrival of an object at an interception point. In the present study, we examined the ability to intercept a laterally moving virtual sound object by controlling the displacement of a sliding handle, and tested whether and how the interaural time difference (ITD) could be the main source of perceptual information for successfully intercepting the virtual object. The results revealed that in order to accomplish the task, one might need to vary the duration of the movement, control the hand velocity and time to reach the peak velocity (speed coupling), while the adjustment of movement initiation did not facilitate performance. Furthermore, the overall performance was more successful when subjects employed a time-to-contact (tau) coupling strategy. This result shows that prospective information is available in sound for guiding goal-directed actions.
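
The ITD cue discussed in this abstract can be illustrated with the classic Woodworth spherical-head approximation. The sketch below is not the study's model; the head radius and speed of sound are assumed nominal values chosen for illustration only.

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a source at the
    given azimuth (0 deg = straight ahead, 90 deg = fully lateral), using the
    Woodworth spherical-head model: ITD = (a/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# A source directly ahead produces no ITD; a fully lateral source
# yields roughly 0.65 ms with these assumed head dimensions.
print(woodworth_itd(0))                 # 0.0
print(round(woodworth_itd(90) * 1e6))   # microseconds
```

The monotonic growth of ITD with azimuth is what makes it usable as prospective information about a laterally moving source.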

  12. The effect of sound speed profile on shallow water shipping sound maps

    NARCIS (Netherlands)

    Sertlek, H.Ö.; Binnerts, B.; Ainslie, M.A.

    2016-01-01

    Sound mapping over large areas can be computationally expensive because of the large number of sources and large source-receiver separations involved. In order to facilitate computation, a simplifying assumption sometimes made is to neglect the sound speed gradient in shallow water. The accuracy of ...

  13. Sound wave transmission (image)

    Science.gov (United States)

    When sound waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain, where they are interpreted as sound. The hearing mechanisms within the inner ear can ...

  14. Controversial Embodiment: Sport, Masculinity, Dis/Ability

    Directory of Open Access Journals (Sweden)

    Marilena Parlati

    2015-11-01

    Full Text Available This essay is an attempt at investigating visible forms of complex, indeed controversial embodiment, with the specific intention of concentrating on the ways they interrogate delicate issues such as disability, masculinity and prosthetic sport performance. I intend to sound the shifting boundaries between dis-ability and super-ability as manifested in iconic figures such as Stelarc and, in other fields, Oscar Pistorius, whose unsteady position as privileged/disabled bladerunner seems to require – and indeed to gather – particularly intense scrutiny. I shall introduce a few contemporary discourses on corporeality and embodiment, which focus on the ‘troubling’ nature of auxiliary organs Freud refers to in the much contended paragraph I adopt as epigraph and guiding procedural light; I shall move from Butler and Giddens to Jean-Luc Nancy’s work on transplants and/as prostheses to include theoretical debates on disintegrating embodiment and disability studies, in order to proceed towards an analysis of the short-circuiting of allegedly secure practices of (masculine embodiment in sport culture and theory.

  15. Seafloor environments in the Long Island Sound estuarine system

    Science.gov (United States)

    Knebel, H.J.; Signell, R.P.; Rendigs, R. R.; Poppe, L.J.; List, J.H.

    1999-01-01

    ... broad areas of the basin floor in the western part of the Sound. The regional distribution of seafloor environments reflects fundamental differences in marine-geologic conditions between the eastern and western parts of the Sound. In the funnel-shaped eastern part, a gradient of strong tidal currents, coupled with the net nontidal (estuarine) bottom drift, produces a westward progression of environments ranging from erosion or nondeposition at the narrow entrance to the Sound, through an extensive area of bedload transport, to a peripheral zone of sediment sorting. In the generally broader western part of the Sound, a weak tidal-current regime, combined with the production of particle aggregates by biologic or chemical processes, causes large areas of deposition that are locally interrupted by a patchy distribution of various other environments where the bottom currents are enhanced by and interact with the seafloor topography.

  16. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    Contents: 1. Introduction; 2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium; 3. Predicting the Acoustical Properties of Outdoor Ground Surfaces; 4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models; 5. Predicting Effects of Source Characteristics on Outdoor Sound; 6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects; 7. Influence of Source Motion on Ground Effect and Diffraction; 8. Predicting Effects of Mixed Impedance Ground; 9. Predicting the Performance of Outdoor Noise Barriers; 10. Predicting Effects of Vegetation, Trees and Turbulence; 11. Analytical Approximations including Ground Effect, Refraction and Turbulence; 12. Prediction Schemes; 13. Predicting Sound in an Urban Environment.

  17. Difficulty in Learning Similar-Sounding Words: A Developmental Stage or a General Property of Learning?

    Science.gov (United States)

    Pajak, Bozena; Creel, Sarah C.; Levy, Roger

    2016-01-01

    How are languages learned, and to what extent are learning mechanisms similar in infant native-language (L1) and adult second-language (L2) acquisition? In terms of vocabulary acquisition, we know from the infant literature that the ability to discriminate similar-sounding words at a particular age does not guarantee successful word-meaning…

  18. Synchronized tapping facilitates learning sound sequences as indexed by the P300.

    Science.gov (United States)

    Kamiyama, Keiko S; Okanoya, Kazuo

    2014-01-01

    The purpose of the present study was to determine whether and how single-finger tapping in synchrony with sound sequences contributes to their auditory processing. The participants learned two unfamiliar sound sequences via different methods. In the tapping condition, they learned an auditory sequence while tapping in synchrony with each sound onset. In the no-tapping condition, they learned another sequence while keeping a key pressed until the sequence ended. After these learning sessions, we presented the two melodies again and recorded event-related potentials (ERPs). During the ERP recordings, 10% of the tones within each melody deviated from the original tones. An analysis of the grand-average ERPs showed that deviant stimuli elicited a significant P300 in the tapping but not in the no-tapping condition. In addition, the significance of the P300 effect in the tapping condition increased with the degree of tapping synchrony the participants achieved during the learning sessions. These results indicate that single-finger tapping promotes the conscious detection and evaluation of deviants within learned sequences. The effect was related to individuals' musical ability to coordinate their finger movements with external auditory events.

  19. Speed of sound in hadronic matter using non-extensive statistics

    International Nuclear Information System (INIS)

    Khuntia, Arvind; Sahoo, Pragati; Garg, Prakhar; Sahoo, Raghunath; Cleymans, Jean

    2015-01-01

    The evolution of the dense matter formed in high-energy hadronic and nuclear collisions is controlled by the initial energy density and temperature. The expansion of the system is driven by the very high initial pressure, with the temperature and energy density decreasing as it proceeds. The pressure (P) and energy density (ϵ) are related through the squared speed of sound (c_s^2) under the condition of local thermal equilibrium. The speed of sound plays a crucial role in the hydrodynamical expansion of the dense matter created and in the critical behaviour of the system as it evolves from the deconfined Quark Gluon Phase (QGP) to the confined hadronic phase. There have been several experimental and theoretical studies in this direction. Non-extensive Tsallis statistics gives a better description of the transverse momentum spectra of the particles produced in high-energy p + p (p + p̄) and e⁺ + e⁻ collisions
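
The pressure/energy-density relation invoked in this abstract is the standard thermodynamic definition of the squared speed of sound (a textbook relation, not a result of the cited work), evaluated at constant entropy:

```latex
c_{s}^{2} = \left( \frac{\partial P}{\partial \epsilon} \right)_{s}
```

In hydrodynamic models of heavy-ion collisions, a softening of c_s^2 signals the transition region between the QGP and hadronic phases.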

  20. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as visual phenomena: commercial messages, as for instance banner ads, that we watch, read, and eventually click on, but only rarely as something that we listen to. The present chapter presents ... an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main ... argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content.

  1. The Aesthetic Experience of Sound

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2005-01-01

    The use of sound in (3D) computer games basically falls in two. Sound is used as an element in the design of the set and as a narrative. As set design, sound stages the nature of the environment; it brings it to life. As a narrative, it brings us information that we can choose to, or perhaps need to, react on. In an ecological understanding of hearing, our detection of audible information affords us ways of responding to our environment. In my paper I will address both these ways of using sound in relation to computer games. Since a game player is responsible for the unfolding of the game, his exploration of the virtual space laid out before him is pertinent. In this mood of exploration, sound is important and contributes heavily to the aesthetic of the experience.

  2. Principles of underwater sound

    National Research Council Canada - National Science Library

    Urick, Robert J

    1983-01-01

    ... the immediately useful help they need for sonar problem solving. Its coverage is broad, ranging from the basic concepts of sound in the sea to making performance predictions in such applications as depth sounding, fish finding, and submarine detection...

  3. Musical, language and reading abilities in early Portuguese readers

    Directory of Open Access Journals (Sweden)

    Jennifer Zuk

    2013-06-01

    Full Text Available Early language and reading abilities have been shown to correlate with a variety of musical skills and elements of music perception in children. It has also been shown that reading-impaired children can show difficulties with music perception. However, it is still unclear to what extent different aspects of music perception are associated with language and reading abilities. Here we investigated the relationship between cognitive-linguistic abilities and a music discrimination task that preserves an ecologically valid musical experience. Forty-three Portuguese-speaking students from an elementary school in Brazil participated in this study. Children completed a comprehensive cognitive-linguistic battery of assessments. The music task was presented live in the music classroom, and children were asked to code sequences of four sounds played on the guitar. Results show a strong relationship between performance on the music task and a number of linguistic variables. A Principal Component Analysis of the cognitive-linguistic battery revealed that the strongest component (Prin1) accounted for 33% of the variance, and Prin1 was significantly related to the music task. The highest loadings on Prin1 were found for reading measures such as Reading Speed and Reading Accuracy. Interestingly, twenty-two children recorded responses for more than four sounds within a trial on the music task, which was classified as Superfluous Responses (SR). SR was negatively correlated with a variety of linguistic variables and showed a negative correlation with Prin1. When analyzing children with and without SR separately, only children with SR showed a significant correlation between Prin1 and the music task. Our results have implications for the use of an ecologically valid music-based screening tool for the early identification of reading disabilities in a classroom setting.

  4. Sounding the field: recent works in sound studies.

    Science.gov (United States)

    Boon, Tim

    2015-09-01

    For sound studies, the publication of a 593-page handbook, not to mention the establishment of at least one society (the European Sound Studies Association), might seem to signify the emergence of a new academic discipline. Certainly, the books under consideration here, alongside many others, testify to an intensification of concern with the aural dimensions of culture. Some of this work comes from HPS and STS, some from musicology and cultural studies. But all of it should concern members of our disciplines, as it represents a long-overdue foregrounding of the aural in how we think about the intersections of science, technology and culture.

  5. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which 'acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ, with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
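
The Lorentz factor quoted in this abstract has its usual special-relativistic form; here c is reinterpreted as the speed of sound in the medium rather than the speed of light (standard expression, stated for reference):

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad c = \text{speed of sound in the medium}
```

For clock-chain speeds v approaching the speed of sound, γ diverges, reproducing the familiar kinematics of special relativity within the acoustic analogue.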

  6. Non-Wovens as Sound Reducers

    Science.gov (United States)

    Belakova, D.; Seile, A.; Kukle, S.; Plamus, T.

    2018-04-01

    Within the present study, the effect of hemp (40 wt%) and polylactide (60 wt%) non-woven surface density, thickness and number of fibre web layers on the sound absorption coefficient and the sound transmission loss in the frequency range from 50 to 5000 Hz is analysed. The sound insulation properties of the experimental samples have been determined, compared to those of materials in practical use, and the possible use of the material has been defined. Non-woven materials are ideally suited for use in acoustic insulation products because the arrangement of fibres produces a porous material structure, which leads to a greater interaction between sound waves and fibre structure. Of all the tested samples (A, B and D), the non-woven variant B exceeded the surface density of sample A by a factor of 1.22 and that of sample D by 1.15. By placing non-wovens one above the other in 2 layers, it is possible to increase the absorption coefficient of the material, which, depending on the frequency, corresponds to sound absorption classes C, D and E. Sample A demonstrates the best sound absorption of all the three samples in the frequency range from 250 to 2000 Hz. In the test frequency range from 50 to 5000 Hz, the sound transmission loss varies from 0.76 (Sample D at 63 Hz) to 3.90 (Sample B at 5000 Hz).
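
As a rough illustration of why surface density matters for transmission loss, the empirical single-panel mass law can be sketched. This is a generic textbook approximation, not the authors' measurement model, and the sample values below are hypothetical:

```python
import math

def mass_law_tl(surface_density, frequency):
    """Approximate airborne sound transmission loss (dB) of a single limp
    panel via the empirical normal-incidence mass law:
    TL ~= 20*log10(m*f) - 47, with m in kg/m^2 and f in Hz.
    Valid well below the coincidence frequency."""
    return 20.0 * math.log10(surface_density * frequency) - 47.0

# Doubling the surface density adds about 6 dB at any given frequency.
print(mass_law_tl(10.0, 1000.0))  # 33.0
print(mass_law_tl(20.0, 1000.0) - mass_law_tl(10.0, 1000.0))
```

The ~6 dB-per-doubling behaviour is consistent with the abstract's finding that the denser non-woven variant insulates better.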

  7. Sound Synthesis and Evaluation of Interactive Footsteps and Environmental Sounds Rendering for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-01-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based ... a soundscape significantly improves the recognition of the simulated environment.

  8. The production and perception of emotionally expressive walking sounds: similarities between musical performance and everyday motor activity.

    Directory of Open Access Journals (Sweden)

    Bruno L Giordano

    Full Text Available Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor-origin hypothesis for the musical expression of emotions.

  9. Development of inquiry-based learning activities integrated with the local learning resource to promote learning achievement and analytical thinking ability of Mathayomsuksa 3 student

    Science.gov (United States)

    Sukji, Paweena; Wichaidit, Pacharee Rompayom; Wichaidit, Sittichai

    2018-01-01

    The objectives of this study were to: 1) compare the learning achievement and analytical thinking ability of Mathayomsuksa 3 students before and after learning through inquiry-based learning activities integrated with the local learning resource, and 2) compare the average post-test scores of learning achievement and analytical thinking ability with their cutting scores. The target of this study was 23 Mathayomsuksa 3 students who were studying in the second semester of the 2016 academic year at Banchatfang School, Chainat Province. The research instruments comprised: 1) 6 lesson plans on Environment and Natural Resources, 2) a learning achievement test, and 3) an analytical thinking ability test. The results showed that 1) students' learning achievement and analytical thinking ability after learning were higher than before at the .05 level of statistical significance, and 2) the average post-test scores of students' learning achievement and analytical thinking ability were higher than their cutting scores at the .05 level of statistical significance. The implication of this research is for science teachers and curriculum developers to design inquiry activities that relate to students' context.

  10. Using therapeutic sound with progressive audiologic tinnitus management.

    Science.gov (United States)

    Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A

    2008-09-01

    Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound-broadly categorized as environmental sound, music, and speech-resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).

  11. Optimization of sound absorbing performance for gradient multi-layer-assembled sintered fibrous absorbers

    Science.gov (United States)

    Zhang, Bo; Zhang, Weiyong; Zhu, Jian

    2012-04-01

    The transfer matrix method, based on plane wave theory, of multi-layer equivalent fluid is employed to evaluate the sound absorbing properties of two-layer-assembled and three-layer-assembled sintered fibrous sheets (generally regarded as a kind of compound absorber or structure). Two objective functions that are better suited to optimizing the sound absorption properties of multi-layer absorbers over wide frequency ranges are developed, and the optimized results obtained with the two objective functions are compared with each other. It is found that using the two objective functions, especially the second one, is more helpful for exploiting the sound absorbing potential of the absorbers at lower frequencies. The calculation and optimization of the sound absorption properties of multi-layer-assembled structures are then performed by developing a simulated annealing genetic algorithm program and using the above-mentioned objective functions. Finally, based on this optimization, a gradient design over the acoustic parameters of the porous metals (the porosity, the tortuosity, the viscous and thermal characteristic lengths, and the thickness of each layer) is put forth, and some useful design criteria for the acoustic parameters of each layer of porous fibrous metals are given for applying multi-layer-assembled compound absorbers in noise control engineering.
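
    The transfer matrix evaluation the record relies on can be sketched compactly. In the sketch below, each layer is modeled as a Delany-Bazley equivalent fluid characterized by its flow resistivity and thickness; the stack values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

RHO0, C0 = 1.21, 343.0  # air density (kg/m^3) and sound speed (m/s)

def delany_bazley(sigma, f):
    """Equivalent-fluid characteristic impedance Zc and complex
    wavenumber k for a fibrous layer of flow resistivity sigma
    (Pa.s/m^2), using the empirical Delany-Bazley model."""
    X = RHO0 * f / sigma
    Zc = RHO0 * C0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    k = (2 * np.pi * f / C0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
    return Zc, k

def alpha_multilayer(layers, f):
    """Normal-incidence absorption coefficient of a rigid-backed
    stack; layers = [(sigma, thickness_m), ...], front to back."""
    T = np.eye(2, dtype=complex)
    for sigma, d in layers:
        Zc, k = delany_bazley(sigma, f)
        Tl = np.array([[np.cos(k * d), 1j * Zc * np.sin(k * d)],
                       [1j * np.sin(k * d) / Zc, np.cos(k * d)]])
        T = T @ Tl  # chain the pressure/velocity transfer matrices
    Zs = T[0, 0] / T[1, 0]                  # rigid backing: v = 0 behind
    R = (Zs - RHO0 * C0) / (Zs + RHO0 * C0)  # plane-wave reflection
    return 1 - abs(R)**2

# hypothetical graded stack: less resistive layer facing the sound
stack = [(10_000, 0.02), (30_000, 0.02)]
alphas = [alpha_multilayer(stack, f) for f in (250, 500, 1000, 2000)]
```

    An optimizer (such as the simulated annealing genetic algorithm mentioned above) would then search over the per-layer parameters to maximize an objective built from such absorption curves.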

  12. Letter-Sound Knowledge: Exploring Gender Differences in Children When They Start School Regarding Knowledge of Large Letters, Small Letters, Sound Large Letters, and Sound Small Letters

    Directory of Open Access Journals (Sweden)

    Hermundur Sigmundsson

    2017-09-01

    Full Text Available This study explored whether there is a gender difference in letter-sound knowledge when children start school. 485 children aged 5–6 years completed an assessment of letter-sound knowledge, i.e., large letters; sound of large letters; small letters; sound of small letters. The findings indicate a significant difference between girls and boys, in favor of the girls, on all four factors tested in this study. There is still no clear explanation for the basis of this presumed gender difference in letter-sound knowledge. That the findings originate in neuro-biological factors cannot be excluded; however, the fact that girls have probably been exposed to more language experience/stimulation than boys lends support to explanations derived from environmental aspects.

  13. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Elena Geangu

    2015-04-01

    Full Text Available Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERPs), this study investigated neural correlates of 7-month-olds’ processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds.

  14. Deltas, freshwater discharge, and waves along the Young Sound, NE Greenland

    DEFF Research Database (Denmark)

    Kroon, Aart; Abermann, Jakob; Bendixen, Mette

    2017-01-01

    A wide range of delta morphologies occurs along the fringes of the Young Sound in Northeast Greenland due to spatial heterogeneity of delta regimes. In general, the delta regime is related to catchment and basin characteristics (geology, topography, drainage pattern, sediment availability, and bathymetry), fluvial discharges and associated sediment load, and processes by waves and currents. Main factors steering the Arctic fluvial discharges into the Young Sound are the snow and ice melt and precipitation in the catchment, and extreme events like glacier lake outburst floods (GLOFs). Waves are subordinate and only rework fringes of the delta plain, forming sandy bars if the exposure and fetch are optimal. Spatial gradients and variability in driving forces (snow and precipitation) and catchment characteristics (amount of glacier coverage, sediment characteristics) as well as the strong and local...

  15. Sound as Popular Culture

    DEFF Research Database (Denmark)

    The wide-ranging texts in this book take as their premise the idea that sound is a subject through which popular culture can be analyzed in an innovative way. From an infant’s gurgles over a baby monitor to the roar of the crowd in a stadium to the sub-bass frequencies produced by sound systems...... in the disco era, sound—not necessarily aestheticized as music—is inextricably part of the many domains of popular culture. Expanding the view taken by many scholars of cultural studies, the contributors consider cultural practices concerning sound not merely as semiotic or signifying processes but as material......, physical, perceptual, and sensory processes that integrate a multitude of cultural traditions and forms of knowledge. The chapters discuss conceptual issues as well as terminologies and research methods; analyze historical and contemporary case studies of listening in various sound cultures; and consider...

  16. Fourth sound in relativistic superfluidity theory

    International Nuclear Information System (INIS)

    Vil'chinskij, S.I.; Fomin, P.I.

    1995-01-01

    The Lorentz-covariant equations describing the propagation of fourth sound in the relativistic theory of superfluidity are derived. Expressions for the velocity of the fourth sound are obtained, and the character of the oscillations in this sound mode is determined.

  17. The science of sound recording

    CERN Document Server

    Kadis, Jay

    2012-01-01

    The Science of Sound Recording will provide you with more than just an introduction to sound and recording; it will allow you to dive right into some of the technical areas that often appear overwhelming to anyone without an electrical engineering or physics background. The Science of Sound Recording helps you build a basic foundation of scientific principles, explaining how recording really works. Packed with valuable must-know information, illustrations, and examples of worked-through equations, this book introduces the theory behind sound recording practices in a logical and prac...

  18. Nuclear sound

    International Nuclear Information System (INIS)

    Wambach, J.

    1991-01-01

    Nuclei, like more familiar mechanical systems, undergo simple vibrational motion. Among these vibrations, sound modes are of particular interest since they reveal important information on the effective interactions among the constituents and, through extrapolation, on the bulk behaviour of nuclear and neutron matter. Sound wave propagation in nuclei shows strong quantum effects familiar from other quantum systems. Microscopic theory suggests that the restoring forces are caused by the complex structure of the many-Fermion wavefunction and, in some cases, have no classical analogue. The damping of the vibrational amplitude is strongly influenced by phase coherence among the particles participating in the motion. (author)

  19. Students' Learning of a Generalized Theory of Sound Transmission from a Teaching-Learning Sequence about Sound, Hearing and Health

    Science.gov (United States)

    West, Eva; Wallin, Anita

    2013-04-01

    Learning abstract concepts such as sound often involves an ontological shift because to conceptualize sound transmission as a process of motion demands abandoning sound transmission as a transfer of matter. Thus, for students to be able to grasp and use a generalized model of sound transmission poses great challenges for them. This study involved 199 students aged 10-14. Their views about sound transmission were investigated before and after teaching by comparing their written answers about sound transfer in different media. The teaching was built on a research-based teaching-learning sequence (TLS), which was developed within a framework of design research. The analysis involved interpreting students' underlying theories of sound transmission, including the different conceptual categories that were found in their answers. The results indicated a shift in students' understandings from the use of a theory of matter before the intervention to embracing a theory of process afterwards. The described pattern was found in all groups of students irrespective of age. Thus, teaching about sound and sound transmission is fruitful already at the ages of 10-11. However, the older the students, the more advanced is their understanding of the process of motion. In conclusion, the use of a TLS about sound, hearing and auditory health promotes students' conceptualization of sound transmission as a process in all grades. The results also imply some crucial points in teaching and learning about the scientific content of sound.

  20. Digitizing a sound archive

    DEFF Research Database (Denmark)

    Cone, Louise

    2017-01-01

    In 1990 an artist by the name of William Louis Sørensen was hired by the National Gallery of Denmark to collect important works of art – made from sound. His job was to acquire sound art, but also recordings that captured rare artistic occurrences, music, performances and happenings from both Danish and international artists. His methodology left us with a large collection of unique and inspirational time-based media sound artworks that have, until very recently, been inaccessible. Existing on an array of different media formats, such as open reel tapes, 8-track and 4-track cassettes, VHS...

  1. Parallel-plate third sound waveguides with fixed and variable plate spacings for the study of fifth sound in superfluid helium

    International Nuclear Information System (INIS)

    Jelatis, G.J.

    1983-01-01

    Third sound in superfluid helium-4 films has been investigated using two parallel-plate waveguides. These investigations led to the observation of fifth sound, a new mode of sound propagation. Both waveguides consisted of two parallel pieces of vitreous quartz. The sound speed was obtained by measuring the time-of-flight of pulsed third sound over a known distance. Investigations from 1.0-1.7 K were possible with the use of superconducting bolometers, which measure the temperature component of the third sound wave. Observations were initially made with a waveguide having a plate separation fixed at five microns. Adiabatic third sound was measured in this geometry. Isothermal third sound was also observed, using the usual, single-substrate technique. Fifth sound speeds, calculated from the two-fluid theory of helium and the speeds of the two forms of third sound, agreed in size and temperature dependence with theoretical predictions. Nevertheless, only equivocal observations of fifth sound were made. As a result, the film-substrate interaction was examined, and estimates of the Kapitza conductance were made. Assuming the dominance of the effects of this conductance over those due to the ECEs led to a new expression for fifth sound. A reanalysis of the initial data was made, which contained no adjustable parameters. The observation of fifth sound was seen to be consistent with the existence of an anomalously low boundary conductance.

  2. Compressive Sensing Based Source Localization for Controlled Acoustic Signals Using Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    Wei Ke

    2017-01-01

    Full Text Available In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at a few points in the discrete spatial domain, our method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT) based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary to match changes in the audio signals, so that the sparse solution better represents the location estimates. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate l0 norm minimization to enhance reconstruction performance for sparse signals in low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation and experimental results, where substantial improvement in localization performance is obtained in noisy and reverberant conditions.
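
    The paper's block-sparse, approximate l0 reconstruction is more elaborate, but the core idea of recovering a few active grid points from compressive measurements can be illustrated with plain orthogonal matching pursuit, used here as a stand-in for the authors' algorithm; the matrix and source values are synthetic.

```python
import numpy as np

def omp(A, y, n_sources):
    """Orthogonal matching pursuit: greedily recover a sparse vector x
    (nonzeros = active grid locations) from measurements y = A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_sources):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef  # re-fit on current support
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))     # compressive measurement matrix
A /= np.linalg.norm(A, axis=0)        # unit-norm columns (grid atoms)
x_true = np.zeros(60)
x_true[[5, 33]] = [2.0, -1.5]         # two active "source" locations
x_hat = omp(A, A @ x_true, n_sources=2)
```

    With noiseless data and well-conditioned random atoms, the two active locations are recovered exactly; the paper's contribution is making this step robust at low SNR.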

  3. Sound propagation in cities

    NARCIS (Netherlands)

    Salomons, E.; Polinder, H.; Lohman, W.; Zhou, H.; Borst, H.

    2009-01-01

    A new engineering model for sound propagation in cities is presented. The model is based on numerical and experimental studies of sound propagation between street canyons. Multiple reflections in the source canyon and the receiver canyon are taken into account in an efficient way, while weak

  4. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  5. Exploring Noise: Sound Pollution.

    Science.gov (United States)

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  6. Photoacoustic Sounds from Meteors.

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, Richard E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tencer, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sweatt, William C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hogan, Roy E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Spurny, Pavel [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic)

    2015-03-01

    High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer’s ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  7. Urban Sound Interfaces

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2012-01-01

    This paper draws on the theories of Michel de Certeau and Gaston Bachelard to discuss how media architecture, in the form of urban sound interfaces, can help us perceive the complexity of the spaces we inhabit, by exploring the history and the narratives of the places in which we live. In this paper, three sound works are discussed in relation to the iPod, which is considered as a more private way to explore urban environments, and as a way to control the individual perception of urban spaces.

  8. Sound field separation with sound pressure and particle velocity measurements

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-01-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array; thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance...
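
    The separation idea can be illustrated in its simplest, one-dimensional special case: for plane waves, collocated pressure and particle velocity measurements split algebraically into outgoing and incoming components. This is a textbook reduction, not the paper's full equivalent-source method.

```python
import numpy as np

Z0 = 1.21 * 343.0  # characteristic impedance of air, rho0 * c0

def separate_plane_waves(p, u):
    """Split total pressure into outgoing (+x) and incoming (-x)
    plane-wave components from collocated p and u:
    p = p_out + p_in,  u = (p_out - p_in) / Z0."""
    p_out = 0.5 * (p + Z0 * u)
    p_in = 0.5 * (p - Z0 * u)
    return p_out, p_in

# two interfering 500 Hz plane waves travelling in opposite directions
t = np.linspace(0.0, 0.01, 1000)
p_fwd = 1.0 * np.cos(2 * np.pi * 500 * t)       # outgoing wave
p_bwd = 0.3 * np.cos(2 * np.pi * 500 * t + 1.0)  # incoming wave
p = p_fwd + p_bwd
u = (p_fwd - p_bwd) / Z0
p_out, p_in = separate_plane_waves(p, u)
```

    The equivalent-source methods in the paper generalize this decomposition to arbitrary wavefronts by fitting separate source models for each side of the array.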

  9. 21 CFR 876.4590 - Interlocking urethral sound.

    Science.gov (United States)

    2010-04-01

    21 CFR Title 21 (Food and Drugs), Medical Devices; Gastroenterology-Urology Devices; Surgical Devices, § 876.4590 Interlocking urethral sound. (a) Identification. An interlocking urethral sound is a device that consists of two metal sounds...

  10. Extended abstracts from the Coastal Habitats in Puget Sound (CHIPS) 2006 Workshop

    Science.gov (United States)

    Gelfenbaum, Guy R.; Fuentes, Tracy L.; Duda, Jeffrey J.; Grossman, Eric E.; Takesue, Renee K.

    2010-01-01

    Puget Sound is the second largest estuary in the United States. Its unique geology, climate, and nutrient-rich waters produce and sustain biologically productive coastal habitats. These same natural characteristics also contribute to a high quality of life that has led to a significant growth in human population and associated development. This population growth, and the accompanying rural and urban development, has played a role in degrading Puget Sound ecosystems, including declines in fish and wildlife populations, water-quality issues, and loss and degradation of coastal habitats.In response to these ecosystem declines and the potential for strategic large-scale preservation and restoration, a coalition of local, State, and Federal agencies, including the private sector, Tribes, and local universities, initiated the Puget Sound Nearshore Ecosystem Restoration Project (PSNERP). The Nearshore Science Team (NST) of PSNERP, along with the U.S. Geological Survey, developed a Science Strategy and Research Plan (Gelfenbaum and others, 2006) to help guide science activities associated with nearshore ecosystem restoration. Implementation of the Research Plan includes a call for State and Federal agencies to direct scientific studies to support PSNERP information needs. In addition, the overall Science Strategy promotes greater communication with decision makers and dissemination of scientific results to the broader scientific community.On November 14–16, 2006, the U.S. Geological Survey sponsored an interdisciplinary Coastal Habitats in Puget Sound (CHIPS) Research Workshop at Fort Worden State Park, Port Townsend, Washington. The main goals of the workshop were to coordinate, integrate, and link research on the nearshore of Puget Sound. Presented research focused on three themes: (1) restoration of large river deltas; (2) recovery of the nearshore ecosystem of the Elwha River; and (3) effects of urbanization on nearshore ecosystems. The more than 35 presentations

  11. Mapping saltwater intrusion in the Biscayne Aquifer, Miami-Dade County, Florida using transient electromagnetic sounding

    Science.gov (United States)

    Fitterman, David V.

    2014-01-01

    Saltwater intrusion in southern Florida poses a potential threat to the public drinking-water supply that is typically monitored using water samples and electromagnetic induction logs collected from a network of wells. Transient electromagnetic (TEM) soundings are a complementary addition to the monitoring program because of their ease of use, low cost, and ability to fill in data gaps between wells. TEM soundings have been used to map saltwater intrusion in the Biscayne aquifer over a large part of south Florida including eastern Miami-Dade County and the Everglades. These two areas are very different with one being urban and the other undeveloped. Each poses different conditions that affect data collection and data quality. In the developed areas, finding sites large enough to make soundings is difficult. The presence of underground pipes further restricts useable locations. Electromagnetic noise, which reduces data quality, is also an issue. In the Everglades, access to field sites is difficult and working in water-covered terrain is challenging. Nonetheless, TEM soundings are an effective tool for mapping saltwater intrusion. Direct estimates of water quality can be obtained from the inverted TEM data using a formation factor determined for the Biscayne aquifer. This formation factor is remarkably constant over Miami-Dade County owing to the uniformity of the aquifer and the absence of clay. Thirty-six TEM soundings were collected in the Model Land area of southeast Miami-Dade County to aid in calibration of a helicopter electromagnetic (HEM) survey. The soundings and HEM survey revealed an area of saltwater intrusion aligned with canals and drainage ditches along U.S. Highway 1 and the Card Sound Road. These canals and ditches likely reduced freshwater levels through unregulated drainage and provided pathways for seawater to flow at least 12.4 km inland.
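
    The conversion from inverted TEM resistivity to a water-quality estimate follows from the definition of the formation factor, F = rho_bulk / rho_water. The function below sketches that arithmetic; the numerical values are illustrative, not the calibrated Biscayne aquifer values.

```python
def water_quality_from_tem(rho_bulk_ohm_m, formation_factor):
    """Estimate pore-water resistivity and specific conductance from an
    inverted TEM bulk resistivity via F = rho_bulk / rho_water."""
    rho_w = rho_bulk_ohm_m / formation_factor  # water resistivity, ohm-m
    # sigma [S/m] = 1 / rho [ohm-m]; 1 S/m = 10^4 microS/cm
    cond_uS_cm = 1e4 / rho_w
    return rho_w, cond_uS_cm

# hypothetical sounding: 20 ohm-m bulk resistivity, F = 4
rho_w, cond = water_quality_from_tem(20.0, 4.0)  # 5 ohm-m, 2000 uS/cm
```

    Because the aquifer's formation factor is nearly constant (uniform lithology, no clay), a single calibrated F maps every sounding directly to a water-quality estimate.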

  12. Correlation Factors Describing Primary and Spatial Sensations of Sound Fields

    Science.gov (United States)

    ANDO, Y.

    2002-11-01

    The theory of subjective preference of the sound field in a concert hall is established based on a model of the human auditory-brain system. The model consists of the autocorrelation function (ACF) mechanism and the interaural crosscorrelation function (IACF) mechanism for signals arriving at the two ear entrances, and the specialization of the human cerebral hemispheres. This theory can be developed to describe primary sensations such as pitch or missing fundamental, loudness, timbre and, in addition, duration sensation, which is introduced here as a fourth. These four primary sensations may be formulated by the temporal factors extracted from the ACF associated with the left hemisphere, and spatial sensations such as localization in the horizontal plane, apparent source width and subjective diffuseness are described by the spatial factors extracted from the IACF associated with the right hemisphere. Any important subjective responses of sound fields may be described by both temporal and spatial factors.
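
    The two correlation functions at the heart of this model are straightforward to compute from binaural signals. The sketch below shows a normalized ACF and the interaural cross-correlation coefficient (IACC, the peak of the IACF within about +/-1 ms of lag); the signal and sampling values are illustrative.

```python
import numpy as np

def normalized_acf(x):
    """Normalized autocorrelation function (source of temporal factors)."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags >= 0
    return acf / acf[0]

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: peak magnitude of the
    normalized cross-correlation within +/- max_lag_ms of lag."""
    left = left - left.mean()
    right = right - right.mean()
    cc = np.correlate(left, right, mode="full")
    lags = np.arange(-len(right) + 1, len(left))
    keep = np.abs(lags) <= max_lag_ms * 1e-3 * fs
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return np.max(np.abs(cc[keep])) / norm

fs = 16000
t = np.arange(fs // 10) / fs
tone = np.sin(2 * np.pi * 440 * t)
coherence = iacc(tone, tone, fs)  # identical ear signals -> IACC of 1
```

    In the theory, factors extracted from the ACF (e.g. effective duration) feed the "temporal" sensations, while the IACC and its lag feed the "spatial" ones.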

  13. Poetry Pages. Sound Effects.

    Science.gov (United States)

    Fina, Allan de

    1992-01-01

    Explains how elementary teachers can help students understand onomatopoeia, suggesting that they define onomatopoeia, share examples of it, read poems and have students discuss onomatopoeic words, act out common household sounds, write about sound effects, and create choral readings of onomatopoeic poems. Two appropriate poems are included. (SM)

  14. Mobile sound: media art in hybrid spaces

    OpenAIRE

    Behrendt, Frauke

    2010-01-01

    The thesis explores the relationships between sound and mobility through an examination of sound art. The research engages with the intersection of sound, mobility and art through original empirical work and theoretically through a critical engagement with sound studies. In dialogue with the work of De Certeau, Lefebvre, Huhtamo and Habermas in terms of the poetics of walking, rhythms, media archeology and questions of publicness, I understand sound art as an experimental mobil...

  15. Sound source measurement by using a passive sound insulation and a statistical approach

    Science.gov (United States)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

    This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves, at low frequencies, the sound insulation provided by the passive system alone. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method, and on the measurement error related to its application, are reported as well.

  16. Film sound in preservation and presentation

    NARCIS (Netherlands)

    Campanini, S.

    2014-01-01

    What is the nature of film sound? How does it change through time? How can film sound be conceptually defined? To address these issues, this work assumes the perspective of film preservation and presentation practices, describing the preservation of early sound systems, as well as the presentation

  17. Populations of auditory cortical neurons can accurately encode acoustic space across stimulus intensity.

    Science.gov (United States)

    Miller, Lee M; Recanzone, Gregg H

    2009-04-07

    The auditory cortex is critical for perceiving a sound's location. However, there is no topographic representation of acoustic space, and individual auditory cortical neurons are often broadly tuned to stimulus location. It thus remains unclear how acoustic space is represented in the mammalian cerebral cortex and how it could contribute to sound localization. This report tests whether the firing rates of populations of neurons in different auditory cortical fields in the macaque monkey carry sufficient information to account for horizontal sound localization ability. We applied an optimal neural decoding technique, based on maximum likelihood estimation, to populations of neurons from 6 different cortical fields encompassing core and belt areas. We found that the firing rate of neurons in the caudolateral area contain enough information to account for sound localization ability, but neurons in other tested core and belt cortical areas do not. These results provide a detailed and plausible population model of how acoustic space could be represented in the primate cerebral cortex and support a dual stream processing model of auditory cortical processing.
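
    The decoding scheme described here can be sketched as maximum-likelihood estimation over a grid of candidate azimuths, assuming independent Poisson spiking. The tuning curves, neuron counts, and trial numbers below are hypothetical illustrations, not data from the macaque study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50
azimuths = np.linspace(-90.0, 90.0, 37)  # candidate locations, degrees

# hypothetical broadly tuned Gaussian rate functions (spikes/s)
centers = rng.uniform(-90.0, 90.0, n_neurons)
widths = rng.uniform(40.0, 80.0, n_neurons)

def rates(theta):
    return 5.0 + 20.0 * np.exp(-0.5 * ((theta - centers) / widths) ** 2)

def ml_decode(counts, n_trials=1):
    """Maximum-likelihood azimuth under independent Poisson spiking:
    argmax_theta sum_i [n_i * log f_i(theta) - n_trials * f_i(theta)]."""
    ll = [np.sum(counts * np.log(rates(th)) - n_trials * rates(th))
          for th in azimuths]
    return azimuths[int(np.argmax(ll))]

true_theta = 30.0
counts = rng.poisson(rates(true_theta), size=(10, n_neurons)).sum(axis=0)
estimate = ml_decode(counts, n_trials=10)
```

    Even though each individual neuron is broadly tuned, pooling the population log-likelihoods yields a sharp location estimate, which is the point the report makes about area CL.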

  18. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds occurred on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. These patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. In addition, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  20. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
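The opponent-channel idea can be sketched numerically. Assuming two broadly tuned hemifield channels whose responses scale with sound level (the sigmoid shape and its 20° slope constant are illustrative, not from the study), the normalized difference of the channels is exactly level invariant:

```python
import numpy as np

def channel(azimuth, sign, level):
    """Hemifield-tuned response: a broad sigmoid over azimuth (deg), scaled by level."""
    return level / (1.0 + np.exp(-sign * azimuth / 20.0))

def opponent_code(azimuth, level):
    """Read-out: difference of right- and left-preferring channels, normalized by their sum."""
    r = channel(azimuth, +1.0, level)
    l = channel(azimuth, -1.0, level)
    return (r - l) / (r + l)

soft = [opponent_code(a, 1.0) for a in (-60.0, 0.0, 60.0)]
loud = [opponent_code(a, 10.0) for a in (-60.0, 0.0, 60.0)]   # identical code
```

The code varies monotonically with azimuth but is unchanged by an overall level scaling, which is the decoding property the fMRI results support.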

  1. Sound production to electric discharge: sonic muscle evolution in progress in Synodontis spp. catfishes (Mochokidae).

    Science.gov (United States)

    Boyle, Kelly S; Colleye, Orphal; Parmentier, Eric

    2014-09-22

    Elucidating the origins of complex biological structures has been one of the major challenges of evolutionary studies. Within vertebrates, the capacity to produce regular coordinated electric organ discharges (EODs) has evolved independently in different fish lineages. Intermediate stages, however, are not known. We show that, within a single catfish genus, some species are able to produce sounds, electric discharges or both signals (though not simultaneously). We highlight that both acoustic and electric communication result from actions of the same muscle. In parallel to their abilities, the studied species show different degrees of myofibril development in the sonic and electric muscle. The lowest myofibril density was observed in Synodontis nigriventris, which produced EODs but no swim bladder sounds, whereas the greatest myofibril density was observed in Synodontis grandiops, the species that produced the longest sound trains but did not emit EODs. Additionally, S. grandiops exhibited the lowest auditory thresholds. Swim bladder sounds were similar among species, while EODs were distinctive at the species level. We hypothesize that communication with conspecifics favoured the development of species-specific EOD signals and suggest an evolutionary explanation for the transition from a fast sonic muscle to electrocytes. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  2. Floquet topological insulators for sound

    Science.gov (United States)

    Fleury, Romain; Khanikaev, Alexander B.; Alù, Andrea

    2016-06-01

    The unique conduction properties of condensed matter systems with topological order have recently inspired a quest for the similar effects in classical wave phenomena. Acoustic topological insulators, in particular, hold the promise to revolutionize our ability to control sound, allowing for large isolation in the bulk and broadband one-way transport along their edges, with topological immunity against structural defects and disorder. So far, these fascinating properties have been obtained relying on moving media, which may introduce noise and absorption losses, hindering the practical potential of topological acoustics. Here we overcome these limitations by modulating in time the acoustic properties of a lattice of resonators, introducing the concept of acoustic Floquet topological insulators. We show that acoustic waves provide a fertile ground to apply the anomalous physics of Floquet topological insulators, and demonstrate their relevance for a wide range of acoustic applications, including broadband acoustic isolation and topologically protected, nonreciprocal acoustic emitters.

  3. Cochlear Neuropathy and the Coding of Supra-threshold Sound

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds, behavioral ability in paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation, correlates with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in subcortical steady-state responses in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.

  4. Reef Sound as an Orientation Cue for Shoreward Migration by Pueruli of the Rock Lobster, Jasus edwardsii.

    Science.gov (United States)

    Hinojosa, Ivan A; Green, Bridget S; Gardner, Caleb; Hesse, Jan; Stanley, Jenni A; Jeffs, Andrew G

    2016-01-01

    The post-larval or puerulus stage of spiny, or rock, lobsters (Palinuridae) swim many kilometres from the open ocean into coastal waters, where they subsequently settle. The orientation cues used by the puerulus for this migration are unclear, but are presumed to be critical to finding a place to settle. Understanding this process may help explain the biological processes of dispersal and settlement, and be useful for developing realistic dispersal models. In this study, we examined the use of reef sound as an orientation cue by the puerulus stage of the southern rock lobster, Jasus edwardsii. Experiments were conducted using in situ binary choice chambers together with replayed recordings of underwater reef sound. The experiment was conducted in a sandy lagoon under varying wind conditions. A significant proportion of pueruli (69%) swam towards the reef sound in calm wind conditions. However, in windy conditions (>25 m s⁻¹) the orientation behaviour appeared to be less consistent; with these results included, the overall proportion of pueruli that swam towards the reef sound fell to 59.3%. These results resolve previous speculation that underwater reef sound is used as an orientation cue in the shoreward migration of the puerulus of spiny lobsters, and suggest that sea surface winds may moderate the ability of migrating pueruli to use this cue to locate coastal reef habitat in which to settle. Underwater sound may increase the chance of successful settlement and survival of this valuable species.
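Whether 69% towards-sound choices differ reliably from the 50% chance level can be checked with an exact binomial test. The counts below (40 of 58) are illustrative only, since the abstract reports the percentage but not the sample size:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: total probability of outcomes
    no more likely than the observed count under the null hypothesis."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(q for q in probs if q <= observed + 1e-12)

# Hypothetical counts matching the reported 69%: 40 of 58 pueruli chose the sound side.
pval = binom_two_sided_p(40, 58)
```

With these assumed counts the departure from chance is significant at the conventional 5% level; the conclusion for the real data depends on the actual sample size.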

  5. Breaking the Sound Barrier

    Science.gov (United States)

    Brown, Tom; Boehringer, Kim

    2007-01-01

    Students in a fourth-grade class participated in a series of dynamic sound learning centers followed by a dramatic capstone event--an exploration of the amazing Trashcan Whoosh Waves. It's a notoriously difficult subject to teach, but this hands-on, exploratory approach ignited student interest in sound, promoted language acquisition, and built…

  6. Sound therapies for tinnitus management.

    Science.gov (United States)

    Jastreboff, Margaret M

    2007-01-01

    Many people with bothersome (suffering) tinnitus notice that their tinnitus changes in different acoustical surroundings: it is more intrusive in silence and less prominent in sound-enriched environments. This observation led to the development of treatment methods for tinnitus utilizing sound. Many of these methods are still under investigation with respect to their specific protocols and effectiveness, and only some have been objectively evaluated in clinical trials. This chapter will review therapies for tinnitus using sound stimulation.

  7. Numerical Model on Sound-Solid Coupling in Human Ear and Study on Sound Pressure of Tympanic Membrane

    Directory of Open Access Journals (Sweden)

    Yao Wen-juan

    2011-01-01

    Full Text Available A three-dimensional finite-element model of the whole auditory system, including the external ear, middle ear, and inner ear, was established, and a sound-solid-liquid coupling frequency-response analysis of the model was carried out. The correctness of the FE model was verified by comparing the vibration modes of the tympanic membrane and stapes footplate with experimental data. Based on the calculated results, the least-squares method was used to fit the distribution of sound pressure in the external auditory canal and to obtain a sound pressure function on the tympanic membrane that varies with frequency. Using this function, the pressure distribution on the tympanic membrane can be derived directly from the sound pressure at the external auditory canal opening. The sound pressure function makes the boundary conditions of the middle-ear structure more accurate in mechanical research and improves on the previous boundary treatment, which applied only a uniform pressure to the tympanic membrane.
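The least-squares step can be sketched as follows: fit a low-order polynomial in log-frequency to the computed canal-opening-to-membrane pressure transfer, then evaluate the fitted function at any frequency. The gain values below are invented placeholders, not the paper's finite-element results:

```python
import numpy as np

# Hypothetical model output: pressure gain (dB) at the tympanic membrane
# relative to the external auditory canal opening, at sampled frequencies.
freqs = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0])  # Hz
gain_db = np.array([0.5, 1.0, 2.5, 8.0, 12.0, 6.0])               # placeholder data

# Least-squares cubic fit in log10(frequency) gives a smooth pressure function.
coeffs = np.polyfit(np.log10(freqs), gain_db, deg=3)
pressure_fn = np.poly1d(coeffs)

# Membrane pressure at an unsampled frequency follows directly from the fit.
gain_1500 = pressure_fn(np.log10(1500.0))
```

Fitting in log-frequency keeps the sample points evenly spread over the audio band, so one low-order polynomial captures the rise-and-fall of the transfer curve.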

  8. Sounds in one-dimensional superfluid helium

    International Nuclear Information System (INIS)

    Um, C.I.; Kahng, W.H.; Whang, E.H.; Hong, S.K.; Oh, H.G.; George, T.F.

    1989-01-01

    The temperature variations of the first-, second-, and third-sound velocities and attenuation coefficients in one-dimensional superfluid helium are evaluated explicitly for very low temperatures and frequencies (ω_sτ ≪ 1). The ratio of the second-sound velocity to the first-sound velocity becomes unity as the temperature decreases to absolute zero.

  9. Measurement-based local quantum filters and their ability to ...

    Indian Academy of Sciences (India)

    Debmalya Das

    2017-05-30

    May 30, 2017 ... Entanglement; local filters; quantum measurement. PACS No. 03.65 ...

  10. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    Full Text Available This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifield. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds, irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  11. The stability of second sound waves in a rotating Darcy–Brinkman porous layer in local thermal non-equilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Eltayeb, I A; Elbashir, T B A, E-mail: ieltayeb@squ.edu.om, E-mail: elbashir@squ.edu.om [Department of Mathematics and Statistics, College of Science, Sultan Qaboos University, Muscat 123 (Oman)

    2017-08-15

    The linear and nonlinear stabilities of second sound waves in a rotating porous Darcy–Brinkman layer in local thermal non-equilibrium are studied when the heat flux in the solid obeys the Cattaneo law. The simultaneous action of the Brinkman effect (effective viscosity) and rotation is shown to destabilise the layer, as compared to either of them acting alone, for both stationary and overstable modes. The effective viscosity tends to favour overstable modes while rotation tends to favour stationary convection. Rapid rotation invokes a negative viscosity effect that suppresses the stabilising effect of porosity so that the stability characteristics resemble those of the classical rotating Bénard layer. A formal weakly nonlinear analysis yields evolution equations of the Landau–Stuart type governing the slow time development of the amplitudes of the unstable waves. The equilibrium points of the evolution equations are analysed and the overall development of the amplitudes is examined. Both overstable and stationary modes can exhibit supercritical stability; supercritical instability, subcritical instability and stability are not possible. The dependence of the supercritical stability on the relative values of the six dimensionless parameters representing thermal non-equilibrium, rotation, porosity, relaxation time, thermal diffusivities and Brinkman effect is illustrated as regions in regime diagrams in the parameter space. The dependence of the heat transfer and the mean heat flux on the parameters of the problem is also discussed. (paper)
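The Landau–Stuart (Stuart–Landau) evolution equations referred to take the generic form below; the coefficients here are schematic, since in the paper they depend on the six dimensionless parameters listed:

```latex
\frac{dA}{d\tau} = \sigma A - \ell \,\lvert A\rvert^{2} A ,
\qquad
\lvert A\rvert_{\mathrm{eq}}^{2} = \frac{\operatorname{Re}\sigma}{\operatorname{Re}\ell},
```

where A(τ) is the slowly varying wave amplitude and σ the linear growth rate. Supercritical stability corresponds to Re ℓ > 0, so that the amplitude saturates at the finite equilibrium value above; Re ℓ < 0 would give subcritical behaviour, which the analysis rules out.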

  12. Sound-Symbolism Boosts Novel Word Learning

    Science.gov (United States)

    Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which is representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…

  13. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common for a physical system to resonate at a particular frequency that depends on physical parameters, which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to use standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuations when the tracking system is used.
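The lock-in demodulation stage can be sketched in software: multiply the received signal by quadrature references at the drive frequency and average, recovering amplitude and phase. The frequency, amplitude, and noise level below are arbitrary test values, not the experiment's:

```python
import numpy as np

def lock_in(signal, ref_freq, fs):
    """Demodulate `signal` at `ref_freq` (Hz): project onto quadrature
    references and average, returning (amplitude, phase)."""
    t = np.arange(signal.size) / fs
    i = 2.0 * np.mean(signal * np.cos(2 * np.pi * ref_freq * t))  # in-phase
    q = 2.0 * np.mean(signal * np.sin(2 * np.pi * ref_freq * t))  # quadrature
    return np.hypot(i, q), np.arctan2(q, i)

# A 1200 Hz "second sound" tone of amplitude 0.3 buried in unit-variance noise.
fs = 50_000
t = np.arange(fs) / fs                       # 1 s record
rng = np.random.default_rng(2)
received = 0.3 * np.cos(2 * np.pi * 1200 * t + 0.5) + rng.standard_normal(t.size)
amp, phase = lock_in(received, 1200.0, fs)
```

Averaging over one second pulls the 0.3-amplitude tone cleanly out of noise more than three times larger, which is the benefit a lock-in-based tracking system exploits.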

  14. Underwater Sound Propagation from Marine Pile Driving.

    Science.gov (United States)

    Reyff, James A

    2016-01-01

    Pile driving occurs in a variety of nearshore environments that typically have very shallow-water depths. The propagation of pile-driving sound in water is complex, where sound is directly radiated from the pile as well as through the ground substrate. Piles driven in the ground near water bodies can produce considerable underwater sound energy. This paper presents examples of sound propagation through shallow-water environments. Some of these examples illustrate the substantial variation in sound amplitude over time that can be critical to understand when computing an acoustic-based safety zone for aquatic species.

  15. Sound topology, duality, coherence and wave-mixing an introduction to the emerging new science of sound

    CERN Document Server

    Deymier, Pierre

    2017-01-01

    This book offers an essential introduction to the notions of sound wave topology, duality, coherence and wave-mixing, which constitute the emerging new science of sound. It includes general principles and specific examples that illuminate new non-conventional forms of sound (sound topology), unconventional quantum-like behavior of phonons (duality), radical linear and nonlinear phenomena associated with loss and its control (coherence), and exquisite effects that emerge from the interaction of sound with other physical and biological waves (wave mixing). The book provides the reader with the foundations needed to master these complex notions through simple yet meaningful examples. General principles for unraveling and describing the topology of acoustic wave functions in the space of their eigenvalues are presented. These principles are then applied to uncover intrinsic and extrinsic approaches to achieving non-conventional topologies by breaking the time reversal symmetry of acoustic waves. Symmetry brea...

  16. Diffuse sound field: challenges and misconceptions

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho

    2016-01-01

    Diffuse sound field is a popular, yet widely misused concept. Although its definition is relatively well established, acousticians use the term with different meanings. The diffuse sound field is defined by a uniform sound pressure distribution (spatial diffusion or homogeneity) and uniform...... tremendously in different chambers because the chambers are non-diffuse in various ways. Therefore, good objective measures that can quantify the degree of diffusion and potentially indicate how to fix such problems in reverberation chambers are needed. Acousticians often blend the concept...... of mixing and diffuse sound field. Acousticians often attribute diffuse reflections from surfaces to diffuseness in rooms, and vice versa. Subjective aspects of diffuseness have not been much investigated. Finally, ways to realize a diffuse sound field in a finite space are discussed....

  17. WODA Technical Guidance on Underwater Sound from Dredging.

    Science.gov (United States)

    Thomsen, Frank; Borsani, Fabrizio; Clarke, Douglas; de Jong, Christ; de Wit, Pim; Goethals, Fredrik; Holtkamp, Martine; Martin, Elena San; Spadaro, Philip; van Raalte, Gerard; Victor, George Yesu Vedha; Jensen, Anders

    2016-01-01

    The World Organization of Dredging Associations (WODA) has identified underwater sound as an environmental issue that needs further consideration. A WODA Expert Group on Underwater Sound (WEGUS) prepared a guidance paper in 2013 on dredging sound, including a summary of potential impacts on aquatic biota and advice on underwater sound monitoring procedures. The paper follows a risk-based approach and provides guidance for standardization of acoustic terminology and methods for data collection and analysis. Furthermore, the literature on dredging-related sounds and the effects of dredging sounds on marine life is surveyed and guidance on the management of dredging-related sound risks is provided.

  18. Directional sound radiation from substation transformers

    International Nuclear Information System (INIS)

    Maybee, N.

    2009-01-01

    This paper presented the results of a study in which acoustical measurements at two substations were analyzed to investigate the directional behaviour of typical arrays having 2 or 3 transformers. Substation transformers produce a characteristic humming sound that is caused primarily by vibration of the core at twice the frequency of the power supply. The humming noise radiates predominantly from the tank enclosing the core. The main components of the sound are harmonics of 120 Hz. Sound pressure level data were obtained for various directions and distances from the arrays, ranging from 0.5 m to over 100 m. The measured sound pressure levels of the transformer tones displayed substantial positive and negative excursions from the calculated average values for many distances and directions. The results support the concept that the directional effects are associated with constructive and destructive interference of tonal sound waves emanating from different parts of the array. Significant variations in the directional sound pattern can occur in the near field of a single transformer or an array, and the extent of the near field is significantly larger than the scale of the array. Based on typical dimensions for substation sites, the distance to the far field may be much beyond the substation boundary and beyond typical setbacks to the closest dwellings. As such, the directional sound radiation produced by transformer arrays introduces additional uncertainty in the prediction of substation sound levels at dwellings within a few hundred meters of a substation site. 4 refs., 4 figs.
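The constructive/destructive interference argument can be illustrated with a far-field phasor sum over an array of coherent 120 Hz monopoles. The three-transformer geometry and 5 m spacing below are notional, not taken from the measured substations:

```python
import numpy as np

def array_level_db(angles_deg, positions_m, freq=120.0, c=343.0):
    """Far-field level (dB, relative to the incoherent average) of equal
    monopoles on a line, versus observation direction."""
    k = 2.0 * np.pi * freq / c
    theta = np.radians(angles_deg)
    # Path-length difference for each source is x*sin(theta) in the far field.
    phasors = np.exp(1j * k * np.outer(np.sin(theta), positions_m))
    p = np.abs(phasors.sum(axis=1))
    return 20.0 * np.log10(p / np.sqrt(len(positions_m)))

# Three transformers 5 m apart: the 120 Hz tone shows deep direction-dependent swings.
levels = array_level_db(np.arange(-90, 91, 5), [0.0, 5.0, 10.0])
```

Because the assumed 5 m spacing exceeds the 2.9 m wavelength at 120 Hz, the pattern has strong lobes and nulls, consistent with the substantial positive and negative excursions reported.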

  19. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2014-10-01

    Full Text Available Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

  20. Encoding audio motion: spatial impairment in early blind individuals

    Directory of Open Access Journals (Sweden)

    Sara eFinocchietti

    2015-09-01

    Full Text Available The consequence of blindness for auditory spatial localization has been an interesting issue of research in the last decade, providing mixed results. Enhanced auditory spatial skills in individuals with visual impairment have been reported by multiple studies, while some aspects of spatial hearing seem to be impaired in the absence of vision. In this study, the ability to encode the trajectory of a two-dimensional sound motion, reproducing the complete movement and reaching the correct end-point sound position, is evaluated in 12 early blind individuals, 8 late blind individuals, and 20 age-matched sighted blindfolded controls. Early blind individuals correctly determine the direction of the sound motion on the horizontal axis, but show a clear deficit in encoding the sound motion in the lower side of the plane. On the contrary, late blind individuals and blindfolded controls perform much better, with no deficit in the lower side of the plane. In fact, the mean localization error was 271 ± 10 mm for early blind individuals, 65 ± 4 mm for late blind individuals, and 68 ± 2 mm for sighted blindfolded controls. These results support the hypotheses that (i) there exists a trade-off between the development of enhanced perceptual abilities and the role of vision in the sound localization abilities of early blind individuals, and (ii) visual information is fundamental in calibrating some aspects of the representation of auditory space in the brain.

  1. On the sound absorption coefficient of porous asphalt pavements for oblique incident sound waves

    NARCIS (Netherlands)

    Bezemer-Krijnen, Marieke; Wijnant, Ysbrand H.; de Boer, Andries; Bekke, Dirk; Davy, J.; Don, Ch.; McMinn, T.; Dowsett, L.; Broner, N.; Burgess, M.

    2014-01-01

    A rolling tyre will radiate noise in all directions. However, conventional measurement techniques for the sound absorption of surfaces only give the absorption coefficient for normal incidence. In this paper, a measurement technique is described with which it is possible to perform in situ sound

  2. Whose Line Sound is it Anyway? Identifying the Vocalizer on Underwater Video by Localizing with a Hydrophone Array

    Directory of Open Access Journals (Sweden)

    Matthias Hoffmann-Kuhnt

    2016-11-01

    Full Text Available A new device that combined high-resolution (1080p) wide-angle video and three channels of high-frequency acoustic recordings (at 500 kHz per channel) in a portable underwater housing was designed and tested with wild bottlenose and spotted dolphins in the Bahamas. It consisted of three hydrophones, a GoPro camera, a small Fit PC, a set of custom preamplifiers and a high-frequency data acquisition board. Recordings were obtained to identify individual vocalizing animals through time-delay-of-arrival localization in post-processing. The calculated source positions were then overlaid onto the video, providing the ability to identify the vocalizing animal on the recorded video. The new tool allowed for much clearer analysis of the acoustic behavior of cetaceans than was possible before.
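The time-delay-of-arrival step can be sketched for a single hydrophone pair: estimate the inter-channel delay from the cross-correlation peak and convert it to a bearing. The sampling rate, 0.5 m baseline, and Gaussian click below are invented for the demo; the real device used three hydrophones to localize in two dimensions:

```python
import numpy as np

C_WATER = 1500.0  # approximate speed of sound in seawater, m/s

def tdoa(sig_a, sig_b, fs):
    """Delay (s) of sig_b relative to sig_a from the cross-correlation peak."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs

def bearing_deg(delay, baseline):
    """Far-field bearing from a two-hydrophone delay and baseline (m)."""
    return np.degrees(np.arcsin(np.clip(C_WATER * delay / baseline, -1.0, 1.0)))

# Synthetic dolphin click arriving 0.2 ms later at the second hydrophone.
fs = 500_000
t = np.arange(2000) / fs
click_a = np.exp(-((t - 1e-3) / 5e-5) ** 2)          # Gaussian pulse
click_b = np.exp(-((t - 1e-3 - 2e-4) / 5e-5) ** 2)   # same pulse, delayed
delay = tdoa(click_a, click_b, fs)
angle = bearing_deg(delay, 0.5)
```

With three hydrophones, pairwise delays of this kind constrain the source position rather than just a bearing, which is what allows the overlay onto the video frame.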

  3. Neuroplasticity beyond sounds

    DEFF Research Database (Denmark)

    Reybrouck, Mark; Brattico, Elvira

    2015-01-01

    Capitalizing from neuroscience knowledge on how individuals are affected by the sound environment, we propose to adopt a cybernetic and ecological point of view on the musical aesthetic experience, which includes subprocesses, such as feature extraction and integration, early affective reactions...... and motor actions, style mastering and conceptualization, emotion and proprioception, evaluation and preference. In this perspective, the role of the listener/composer/performer is seen as that of an active "agent" coping in highly individual ways with the sounds. The findings concerning the neural...

  4. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which implicitly and conceptually underlies any survey performed. (author)

  5. Estimation of probability of coastal flooding: A case study in the Norton Sound, Alaska

    Science.gov (United States)

    Kim, S.; Chapman, R. S.; Jensen, R. E.; Azleton, M. T.; Eisses, K. J.

    2010-12-01

    Along the Norton Sound, Alaska, coastal communities have been exposed to flooding induced by extra-tropical storms. A lack of observational data, especially on long-term variability, makes it difficult to assess the probability of coastal flooding, which is critical in planning for development and evacuation of the coastal communities. We estimated the probability of coastal flooding with the help of an existing storm surge model using ADCIRC and a wave model using WAM for Western Alaska, which includes the Norton Sound as well as the adjacent Bering Sea and Chukchi Sea. The surface pressure and winds as well as ice coverage were analyzed and put in a gridded format with a 3 hour interval over the entire Alaskan Shelf by Ocean Weather Inc. (OWI) for the period between 1985 and 2009. OWI also analyzed the surface conditions for the storm events over the 31 year period between 1954 and 1984. The correlation between water levels recorded by the NOAA tide gage and local meteorological conditions at Nome between 1992 and 2005 suggested that strong local winds with prevailing southerly components are good proxies for high water events. We also heuristically selected events with local winds with prevailing westerly components at Shaktoolik, which lies at the eastern end of the Norton Sound, to provide an extra selection of flood events during the continuous meteorological data record between 1985 and 2009. The frequency analyses were performed using the simulated water levels and wave heights for the 56 year period between 1954 and 2009. Different methods of estimating return periods were compared, including the method from the FEMA guidelines, extreme value statistics, and fits to statistical distributions such as Weibull and Gumbel. As expected, the estimates are similar, but with some variation.
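One of the distribution fits compared, the Gumbel distribution, can be sketched with a method-of-moments fit to annual maxima plus the resulting return-level formula. The surge heights below are synthetic, not the Norton Sound record:

```python
import numpy as np

def gumbel_fit(annual_maxima):
    """Method-of-moments Gumbel fit: location `mu` and scale `beta`."""
    beta = np.std(annual_maxima, ddof=1) * np.sqrt(6.0) / np.pi
    mu = np.mean(annual_maxima) - 0.5772 * beta   # Euler-Mascheroni constant
    return mu, beta

def return_level(mu, beta, T):
    """Level exceeded on average once every T years: solve F(x) = 1 - 1/T."""
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

# Synthetic 56-year record of annual-maximum surge heights (m).
rng = np.random.default_rng(3)
annual_max = rng.gumbel(loc=2.0, scale=0.5, size=56)
mu, beta = gumbel_fit(annual_max)
surge_100yr = return_level(mu, beta, 100.0)
```

Maximum-likelihood fitting (or the FEMA and extreme-value methods mentioned) would give slightly different parameter estimates; the comparison of such methods is the point of the study.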

  6. Measurement-based local quantum filters and their ability to ...

    Indian Academy of Sciences (India)

    Debmalya Das

Berhampur (Transit Campus), National Highway 59, Berhampur 760 010, India. Corresponding author e-mail: arvind@iisermohali.ac.in. MS received 29 July 2016; revised 21 October 2016; accepted 16 December 2016; published online 30 May 2017. Abstract: We introduce local filters as a means to detect the ...

  7. Moth hearing and sound communication

    DEFF Research Database (Denmark)

    Nakano, Ryo; Takanashi, Takuma; Surlykke, Annemarie

    2015-01-01

Active echolocation enables bats to orient and hunt the night sky for insects. As a counter-measure against this severe predation pressure, many nocturnal insects have evolved ears sensitive to ultrasonic bat calls. In moths, bat detection was the principal purpose of hearing, as evidenced by comparable hearing physiology, with best sensitivity in the bat echolocation range, 20–60 kHz, across moths in spite of diverse ear morphology. Some eared moths subsequently developed sound-producing organs to warn/startle/jam attacking bats and/or to communicate intraspecifically with sound. Not only the sounds for interaction with bats, but also mating signals are within the frequency range where bats echolocate, indicating that sound communication developed after hearing by "sensory exploitation". Recent findings on moth sound communication reveal that close-range (~a few cm) communication with low…

  8. High frequency ion sound waves associated with Langmuir waves in type III radio burst source regions

    Directory of Open Access Journals (Sweden)

    G. Thejappa

    2004-01-01

Short-wavelength ion sound waves (2–4 kHz) are detected in association with Langmuir waves (~15–30 kHz) in the source regions of several local type III radio bursts. They are most probably not due to any resonant wave-wave interaction such as the electrostatic decay instability, because their wavelengths are much shorter than those of the Langmuir waves. The Langmuir waves occur as coherent field structures with peak intensities exceeding the Langmuir collapse thresholds, and their scale sizes are of the order of the wavelength of an ion sound wave. These field characteristics indicate that the observed short-wavelength ion sound waves are most probably generated during the thermalization of the burnt-out cavitons left behind by Langmuir collapse. Moreover, the peak intensities of the observed short-wavelength ion sound waves are comparable to the expected intensities of ion sound waves radiated by burnt-out cavitons. However, the speeds of the electron beams derived from the frequency drift of the type III radio bursts are too slow to satisfy the required adiabatic ion approximation. Therefore, some non-linear process, such as induced scattering on thermal ions, most probably pumps the beam-excited Langmuir waves towards lower wavenumbers, where the adiabatic ion approximation is justified.

  9. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, sound collected directly in the industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by the wavelet transform (WT) to obtain the coefficients of each level. As the wavelet threshold functions proposed by Donoho are discontinuous, many modified functions with continuous first- and second-order derivatives have been presented to realize adaptive denoising. However, the function-based denoising process is time-consuming, and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) is introduced into the process. Moreover, to avoid falling into local extremes, an improved fly distance range obeying a normal distribution is proposed on the basis of the original FOA. A sound signal of a motor was then recorded in a soundproof laboratory, and Gaussian white noise was added to the signal. The simulation results illustrate the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer at a coal mining working face was performed to demonstrate the practical effect.
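
The core wavelet-threshold step (before any FOA optimization) can be sketched in a few lines. This uses a hand-rolled single-level Haar transform and Donoho's universal threshold purely for illustration; the paper's method uses multi-level decomposition and IFOA-optimized thresholds, and the signal here is synthetic:

```python
import numpy as np

def haar_dwt(x):
    # Single-level Haar transform: approximation (a) and detail (d) coefficients
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    # Inverse single-level Haar transform
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    # Donoho's soft-thresholding rule: shrink coefficients toward zero by t
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Denoise a noisy tone with the universal threshold t = sigma * sqrt(2 ln N),
# sigma estimated from the median absolute deviation of the detail coefficients.
rng = np.random.default_rng(1)
n = 1024
clean = np.sin(2 * np.pi * 8 * np.arange(n) / n)
noisy = clean + 0.3 * rng.standard_normal(n)
a, d = haar_dwt(noisy)
sigma = np.median(np.abs(d)) / 0.6745
t = sigma * np.sqrt(2 * np.log(n))
denoised = haar_idwt(a, soft_threshold(d, t))
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

Because the slowly varying tone puts almost no energy in the detail band, thresholding the details removes noise while barely touching the signal, so the mean squared error drops.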

  10. Differences in binaural interaction at low and high frequencies

    NARCIS (Netherlands)

    Par, van de S.L.J.D.E.; Kohlrausch, A.G.

    1993-01-01

Differences in the acoustic signals received by the two ears form the major basis of our ability to localize sound sources. They can also help in detecting a signal against a background of interfering (masking) sounds. A quantitative measure for this 'binaural advantage' is the binaural masking level difference (BMLD).
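
One of the interaural differences referred to above, the interaural time difference (ITD), can be illustrated with a toy cross-correlation estimate. The signals, delay, sample rate, head diameter, and speed of sound below are all invented for illustration, not taken from the paper:

```python
import numpy as np

# Estimate the interaural time difference (ITD) between left- and right-ear
# signals by finding the lag that maximizes their cross-correlation.
fs = 44100                       # sample rate in Hz (assumed)
rng = np.random.default_rng(2)
src = rng.standard_normal(4096)  # broadband source signal
true_delay = 20                  # right ear lags by 20 samples (~0.45 ms)
left = src
right = np.roll(src, true_delay)

lags = np.arange(-50, 51)
corr = [np.dot(left, np.roll(right, -k)) for k in lags]
itd_samples = int(lags[int(np.argmax(corr))])

# Map ITD to an azimuth with a simple spherical-head model (assumed values):
c, head_d = 343.0, 0.18          # speed of sound (m/s), ear spacing (m)
az_deg = np.degrees(np.arcsin(c * itd_samples / fs / head_d))
print(itd_samples, az_deg)
```

With a broadband source the correlation peak is sharp, so the recovered lag matches the imposed delay exactly; narrowband sounds would make the peak ambiguous, which is one reason low- and high-frequency localization behave differently.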

  11. Simulation of sound waves using the Lattice Boltzmann Method for fluid flow: Benchmark cases for outdoor sound propagation

    NARCIS (Netherlands)

    Salomons, E.M.; Lohman, W.J.A.; Zhou, H.

    2016-01-01

Propagation of sound waves in air can be considered a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used to simulate sound propagation. In this article, the application of the LBM to sound propagation is illustrated for various cases:
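
The idea that sound falls out of an LBM fluid solver can be sketched in one dimension. The following D1Q3 BGK scheme is a minimal illustration only; the grid size, relaxation time, and pulse shape are arbitrary choices, not the benchmark cases of the article. A small density pulse splits into two waves travelling at the lattice speed of sound c_s = 1/sqrt(3):

```python
import numpy as np

nx, tau, steps = 400, 0.6, 150
w = np.array([2 / 3, 1 / 6, 1 / 6])   # D1Q3 lattice weights
e = np.array([0, 1, -1])              # lattice velocities
cs2 = 1.0 / 3.0                       # lattice speed of sound squared

def feq(rho, u):
    # Second-order equilibrium distribution of the BGK model
    eu = np.outer(e, u)
    return w[:, None] * rho * (1 + eu / cs2 + eu**2 / (2 * cs2**2)
                               - u**2 / (2 * cs2))

x = np.arange(nx)
rho = 1.0 + 0.01 * np.exp(-((x - nx // 2) ** 2) / 20.0)  # small density pulse
u = np.zeros(nx)
f = feq(rho, u)

for _ in range(steps):
    rho = f.sum(axis=0)
    u = (e[:, None] * f).sum(axis=0) / rho
    f += -(f - feq(rho, u)) / tau          # BGK collision
    for i in range(3):                     # streaming with periodic boundaries
        f[i] = np.roll(f[i], e[i])

rho = f.sum(axis=0)
peak = int(np.argmax(rho[: nx // 2]))      # left-moving wavefront position
print(peak)
```

After 150 steps the left-moving front should sit near nx/2 - 150/sqrt(3) ≈ 113, which is a quick check that the scheme reproduces the correct sound speed.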

  12. A hybrid finite element - statistical energy analysis approach to robust sound transmission modeling

    Science.gov (United States)

    Reynders, Edwin; Langley, Robin S.; Dijckmans, Arne; Vermeir, Gerrit

    2014-09-01

When considering sound transmission through a wall between two rooms, the local response of the rooms is, in an important part of the audio frequency range, highly sensitive to uncertainty from spatial variations in geometry, material properties and boundary conditions, which have a wave-scattering effect, while the local response of the wall itself is rather insensitive to such uncertainty. For this mid-frequency range, a computationally efficient modeling strategy is adopted that accounts for this uncertainty. The partitioning wall is modeled deterministically, e.g. with finite elements. The rooms are modeled in a very efficient, nonparametric stochastic way, as in statistical energy analysis. All components are coupled by means of a rigorous power balance. This hybrid strategy is extended so that the mean and variance of the sound transmission loss can be computed, as well as the transition frequency that loosely marks the boundary between low- and high-frequency behavior of a vibro-acoustic component. The method is first validated in a simulation study, and then applied for predicting the airborne sound insulation of a series of partition walls of increasing complexity: a thin plastic plate, a wall consisting of gypsum blocks, a thicker masonry wall and a double glazing. It is found that the uncertainty caused by random scattering is important except at very high frequencies, where the modal overlap of the rooms is very high. The results are compared with laboratory measurements, and both are found to agree within the prediction uncertainty in the considered frequency range.
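
For context, the classic normal-incidence mass law gives a quick baseline against which detailed transmission-loss predictions such as these are often compared. This is textbook acoustics, not the hybrid FE-SEA method of the paper, and it is only valid well above the wall's fundamental resonance and below its coincidence frequency:

```python
import numpy as np

def mass_law_tl(surface_mass, freq):
    """Normal-incidence mass law for airborne sound insulation:
    TL ≈ 20*log10(m'' * f) - 47 dB, with m'' in kg/m^2 and f in Hz."""
    return 20.0 * np.log10(surface_mass * freq) - 47.0

# Example: a 100 kg/m^2 masonry wall across the octave bands 125–2000 Hz
f = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0])
tl = mass_law_tl(100.0, f)
print(tl)
```

The formula predicts a 6 dB increase in transmission loss per doubling of frequency or of surface mass; deviations from that slope in measured data are precisely what motivates the more detailed modeling above.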

  13. Beacons of Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2018-01-01

The chapter discusses expectations and imaginations vis-à-vis the concert hall of the twenty-first century. It outlines some of the central historical implications of western culture's haven for sounding music. Based on the author's study of the Icelandic concert house Harpa, the chapter considers how these implications, together with the prime mover's visions, have been transformed as private investors and politicians took over. The chapter furthermore investigates the objectives regarding musical sound and the far-reaching demands concerning acoustics that modern concert halls are required…

  14. Are you a good mimic? Neuro-acoustic signatures for speech imitation ability

    Directory of Open Access Journals (Sweden)

    Susanne Maria Reiterer

    2013-10-01

We investigated individual differences in speech imitation ability in late bilinguals using a neuro-acoustic approach. 138 German-English bilinguals matched on various behavioral measures were tested for speech imitation ability in a foreign language, Hindi, and categorised into high- and low-ability groups. Brain activations and speech recordings were obtained from 26 participants from the two extreme groups as they performed a functional neuroimaging experiment that required them to imitate sentences in three conditions: (A) German, (B) English and (C) German with a fake English accent. We used a recently developed acoustic analysis, the 'articulation space', as a metric to compare the speech imitation abilities of the two groups. Across all three conditions, direct comparisons between the two groups revealed brain activations (FWE corrected, p < 0.05) that were more widespread, with significantly higher peak activity in the left supramarginal gyrus and postcentral areas, for the low-ability group. The high-ability group, on the other hand, showed a significantly larger articulation space in all three conditions. In addition, articulation space correlated positively with imitation ability (Pearson's r = 0.7, p < 0.01). Our results suggest that an expanded articulation space for high-ability individuals allows access to a larger repertoire of sounds, thereby giving skilled imitators greater flexibility in pronunciation and language learning.

  15. Population diversity in Pacific herring of the Puget Sound, USA.

    Science.gov (United States)

    Siple, Margaret C; Francis, Tessa B

    2016-01-01

    Demographic, functional, or habitat diversity can confer stability on populations via portfolio effects (PEs) that integrate across multiple ecological responses and buffer against environmental impacts. The prevalence of these PEs in aquatic organisms is as yet unknown, and can be difficult to quantify; however, understanding mechanisms that stabilize populations in the face of environmental change is a key concern in ecology. Here, we examine PEs in Pacific herring (Clupea pallasii) in Puget Sound (USA) using a 40-year time series of biomass data for 19 distinct spawning population units collected using two survey types. Multivariate auto-regressive state-space models show independent dynamics among spawning subpopulations, suggesting that variation in herring production is partially driven by local effects at spawning grounds or during the earliest life history stages. This independence at the subpopulation level confers a stabilizing effect on the overall Puget Sound spawning stock, with herring being as much as three times more stable in the face of environmental perturbation than a single population unit of the same size. Herring populations within Puget Sound are highly asynchronous but share a common negative growth rate and may be influenced by the Pacific Decadal Oscillation. The biocomplexity in the herring stock shown here demonstrates that preserving spatial and demographic diversity can increase the stability of this herring population and its availability as a resource for consumers.
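
The stabilizing portfolio effect described above can be illustrated numerically: the summed biomass of many asynchronous subpopulations has a lower coefficient of variation (CV) than any single subpopulation of comparable behavior. The data below are synthetic and purely illustrative (independent lognormal fluctuations, with 19 subpopulations and 40 years as in the study):

```python
import numpy as np

rng = np.random.default_rng(3)
n_pops, n_years = 19, 40
# Independent lognormal biomass indices, one row per spawning subpopulation
biomass = np.exp(0.5 * rng.standard_normal((n_pops, n_years)))

def cv(x):
    # Coefficient of variation: a scale-free measure of variability
    return x.std(ddof=1) / x.mean()

cv_single = float(np.mean([cv(biomass[i]) for i in range(n_pops)]))
cv_total = float(cv(biomass.sum(axis=0)))
print(cv_single, cv_total, cv_single / cv_total)
```

For fully independent subpopulations the aggregate CV shrinks roughly by a factor of sqrt(19); real subpopulations are partially correlated, which is why the study reports a more modest (roughly threefold) stability gain.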

  16. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From analysis of the heart sound's S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician diagnose heart problems. The results show that most known heart sounds were successfully detected. There are some murmur cases where detection failed. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic-stethoscope biomedical system.
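
A deterministic sketch in the spirit of this abstract: no machine learning, just envelope smoothing and a fixed amplitude threshold applied to a synthetic phonocardiogram, with bursts labeled by their timing. The timings, frequencies, and threshold below are invented for illustration and are not the paper's actual parameters:

```python
import numpy as np

fs = 1000                                  # samples per second (assumed)
t = np.arange(0, 2.0, 1 / fs)              # two seconds of signal
rng = np.random.default_rng(4)
pcg = 0.02 * rng.standard_normal(t.size)   # background noise

# Synthesize two beats: an S1-like burst at each beat onset and an
# S2-like burst 0.35 s later (timings chosen for illustration).
for beat in (0.1, 1.1):
    for onset, width in ((0.0, 0.06), (0.35, 0.05)):
        idx = (t >= beat + onset) & (t < beat + onset + width)
        pcg[idx] += 0.5 * np.sin(2 * np.pi * 60 * t[idx])

env = np.abs(pcg)
env = np.convolve(env, np.ones(30) / 30, mode="same")  # smoothed envelope
above = env > 0.1                                      # fixed amplitude threshold
onsets = (np.flatnonzero(above[1:] & ~above[:-1]) + 1) / fs
print(onsets)
```

Each rising threshold crossing marks a component onset; because S1 and S2 occur at predictable points of the cardiac cycle, the detected bursts can be labeled deterministically from their relative timing alone.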

  17. Acoustic source localization : Exploring theory and practice

    NARCIS (Netherlands)

    Wind, Jelmer

    2009-01-01

Over the past few decades, noise pollution has become an important issue in modern society. This has led to an increased effort in industry to reduce noise. Acoustic source localization methods determine the location and strength of the vibrations that are the cause of sound, based on measurements of

  18. Sound For Animation And Virtual Reality

    Science.gov (United States)

    Hahn, James K.; Docter, Pete; Foster, Scott H.; Mangini, Mark; Myers, Tom; Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)

    1995-01-01

    Sound is an integral part of the experience in computer animation and virtual reality. In this course, we will present some of the important technical issues in sound modeling, rendering, and synchronization as well as the "art" and business of sound that are being applied in animations, feature films, and virtual reality. The central theme is to bring leading researchers and practitioners from various disciplines to share their experiences in this interdisciplinary field. The course will give the participants an understanding of the problems and techniques involved in producing and synchronizing sounds, sound effects, dialogue, and music. The problem spans a number of domains including computer animation and virtual reality. Since sound has been an integral part of animations and films much longer than for computer-related domains, we have much to learn from traditional animation and film production. By bringing leading researchers and practitioners from a wide variety of disciplines, the course seeks to give the audience a rich mixture of experiences. It is expected that the audience will be able to apply what they have learned from this course in their research or production.

  19. On non-local energy transfer via zonal flow in the Dimits shift

    International Nuclear Information System (INIS)

    St-Onge, Denis A.

    2017-01-01

    The two-dimensional Terry–Horton equation is shown to exhibit the Dimits shift when suitably modified to capture both the nonlinear enhancement of zonal/drift-wave interactions and the existence of residual Rosenbluth–Hinton states. This phenomenon persists through numerous simplifications of the equation, including a quasilinear approximation as well as a four-mode truncation. It is shown that the use of an appropriate adiabatic electron response, for which the electrons are not affected by the flux-averaged potential, results in an E×B nonlinearity that can efficiently transfer energy non-locally to length scales of the order of the sound radius. The size of the shift for the nonlinear system is heuristically calculated and found to be in excellent agreement with numerical solutions. The existence of the Dimits shift for this system is then understood as an ability of the unstable primary modes to efficiently couple to stable modes at smaller scales, and the shift ends when these stable modes eventually destabilize as the density gradient is increased. This non-local mechanism of energy transfer is argued to be generically important even for more physically complete systems.

  20. On non-local energy transfer via zonal flow in the Dimits shift

    Science.gov (United States)

    St-Onge, Denis A.

    2017-10-01

The two-dimensional Terry-Horton equation is shown to exhibit the Dimits shift when suitably modified to capture both the nonlinear enhancement of zonal/drift-wave interactions and the existence of residual Rosenbluth-Hinton states. This phenomenon persists through numerous simplifications of the equation, including a quasilinear approximation as well as a four-mode truncation. It is shown that the use of an appropriate adiabatic electron response, for which the electrons are not affected by the flux-averaged potential, results in an E×B nonlinearity that can efficiently transfer energy non-locally to length scales of the order of the sound radius. The size of the shift for the nonlinear system is heuristically calculated and found to be in excellent agreement with numerical solutions. The existence of the Dimits shift for this system is then understood as an ability of the unstable primary modes to efficiently couple to stable modes at smaller scales, and the shift ends when these stable modes eventually destabilize as the density gradient is increased. This non-local mechanism of energy transfer is argued to be generically important even for more physically complete systems.