WorldWideScience

Sample records for human-caused sounds heard

  1. Infra-sound cancellation and mitigation in wind turbines

    Science.gov (United States)

    Boretti, Albert; Ordys, Andrew; Al Zubaidy, Sarim

    2018-03-01

    The infra-sound spectra recorded inside homes located even several kilometres from wind turbine installations are characterized by large pressure fluctuations in the low-frequency range. A significant body of literature suggests that inaudible low-frequency sounds are sensed by humans and affect well-being through different mechanisms, including amplitude modulation of heard sounds, stimulation of subconscious pathways, induction of endolymphatic hydrops, and possibly potentiation of noise-induced hearing loss. We suggest the study of active infra-sound cancellation and mitigation to address these low-frequency noise issues. Loudspeakers generate pressure-wave components of the same amplitude and frequency as the recorded infra-sound but of opposite phase. They also produce pressure-wave components within the audible range that reduce the perception of the residual infra-sound.
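    The cancellation principle in this abstract can be illustrated numerically: a wave summed with an equal-amplitude, opposite-phase copy leaves a near-zero residual. This is only a sketch of the underlying superposition idea; the 8 Hz frequency, sample rate, and function names below are invented for illustration and are not taken from the paper.

```python
import math

def sample_wave(freq_hz, amp, phase, rate_hz, n):
    """Sample a pure tone amp*sin(2*pi*f*t + phase) at n points."""
    return [amp * math.sin(2 * math.pi * freq_hz * k / rate_hz + phase)
            for k in range(n)]

rate = 1000  # samples per second (hypothetical)
infra = sample_wave(freq_hz=8, amp=1.0, phase=0.0, rate_hz=rate, n=rate)
anti = sample_wave(freq_hz=8, amp=1.0, phase=math.pi, rate_hz=rate, n=rate)

# Superposition: equal amplitude and opposite phase give near-total cancellation.
residual = [a + b for a, b in zip(infra, anti)]
peak = max(abs(x) for x in residual)
print(f"peak residual pressure: {peak:.2e}")  # near zero (floating-point noise)
```

    In practice the hard part is not the arithmetic but estimating the incoming wave's amplitude and phase at the listener's position in real time, which is what an active-noise-control system must do.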

  2. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the everyday understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  3. Two- and 3-Year-Olds Know What Others Have and Have Not Heard

    Science.gov (United States)

    Moll, Henrike; Carpenter, Malinda; Tomasello, Michael

    2014-01-01

    Recent studies have established that even infants can determine what others know based on previous visual experience. In the current study, we investigated whether 2- and 3-year-olds know what others know based on previous auditory experience. A child and an adult heard the sound of one object together, but only the child heard the sound of another…

  4. Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex

    Science.gov (United States)

    Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie

    2013-01-01

    Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how can attention be focused on selected sounds while others are ignored? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low-frequency (250 Hz) and high-frequency (4000 Hz) streams of tones presented at the same time (dual-stream) and were instructed to focus attention on one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high-frequency-tuned ones, and when attention switched, the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225

  5. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  6. Predictors of having heard about human papillomavirus vaccination: Critical aspects for cervical cancer prevention among Colombian women.

    Science.gov (United States)

    Bermedo-Carrasco, Silvia; Feng, Cindy Xin; Peña-Sánchez, Juan Nicolás; Lepnurm, Rein

    2015-01-01

    To determine whether the probability of having heard about human papillomavirus (HPV) vaccination differs by socio-demographic characteristics among Colombian women, and whether the effect of predictors of having heard about HPV vaccination varies by educational level and rural/urban area of residence. Data on 53,521 women aged 13-49 years were drawn from the 2010 Colombian National Demographic and Health Survey. Women were asked about aspects of their health and their socio-demographic characteristics. A logistic regression model was used to identify factors associated with having heard about HPV vaccination. Educational level and rural/urban area of residence were tested as effect modifiers of the predictors. 26.8% of the women had heard about HPV vaccination. The odds of having heard about HPV vaccination were lower among women in low wealth quintiles, without health insurance, with subsidized health insurance, and among those who had children (p<0.001). Although women in older age groups and with better education had higher probabilities of having heard about HPV vaccination, differences in these probabilities by age group were more evident among educated women than among non-educated ones. Probability gaps between non-educated and highly educated women were widest in the Eastern region. Living in rural areas decreased the probability of having heard about HPV vaccination, although narrower rural/urban gaps were observed in the Atlantic and Amazon-Orinoquía regions. Almost three quarters of the Colombian women had not heard about HPV vaccination, with variations by socio-demographic characteristics. Women in disadvantaged groups were less likely to have heard about HPV vaccination. Copyright © 2014 SESPAS. Published by Elsevier España. All rights reserved.

  7. Listening to Birds in the Anthropocene: The Anxious Semiotics of Sound in a Human-Dominated World

    Directory of Open Access Journals (Sweden)

    Whitehouse, Andrew

    2015-05-01

    Full Text Available Ever since Rachel Carson predicted a “silent spring”, environmentalists have been carefully and anxiously listening to birds. More recently, the musician and scientist Bernie Krause has examined the effects of human activity on avian soundscapes throughout the world. He argues that human activities cause ecological and sonic disruptions that really are rendering the world silent or discordant, submerging the “animal orchestra” beneath noise. A healthy natural environment can be heard, according to Krause, in a rich and harmonious soundscape that has evolved over millions of years. The loss of wildness thus elicits a loss of harmony. I consider these Anthropocene interpretations of silence, noise and dissonance by comparing the environmentalist concerns of Krause with responses to the Listening to Birds project—an anthropological investigation of bird sounds. These responses emphasise the significance of bird sounds for people’s sense of place, time and season, and the longing that many have for their own lives to resonate with the birds around them. I argue that this has less to do with a desire to hear harmony in pristine nature than with developing relations of companionship with birds living alongside humans. While listening to birds can still iconically and indexically ground people, signs of absence and change can precipitate anxieties that stem from the ambiguities implicit in the Anthropocene’s formulation of human relations with other species. Using narratives and field recordings, I explore the anxious semiotics of listening to birds in the Anthropocene by drawing on Kohn’s recent arguments on the semiotics of more-than-human relations and Ingold’s understanding of the world as a meshwork.

  8. Understanding and managing experiential aspects of soundscapes at Muir woods national monument.

    Science.gov (United States)

    Pilcher, Ericka J; Newman, Peter; Manning, Robert E

    2009-03-01

    Research has found that human-caused noise can detract from the quality of the visitor experience in national parks and related areas. Moreover, impacts to the visitor experience can be managed by formulating indicators and standards of quality, as suggested in park and outdoor recreation management frameworks such as Visitor Experience and Resource Protection (VERP), developed by the U.S. National Park Service. The research reported in this article supports the formulation of indicators and standards of quality for human-caused noise at Muir Woods National Monument, California. Phase I identified potential indicators of quality for the soundscape of Muir Woods. A visitor "listening exercise" was conducted, in which respondents identified natural and human-caused sounds heard in the park and rated the degree to which each sound was "pleasing" or "annoying." Certain visitor-caused sounds, such as groups talking, were heard by most respondents and were rated as annoying, suggesting that these sounds may be a good indicator of quality. Loud groups were heard by few people but were rated as highly annoying, whereas wind and water were heard by most visitors and were rated as highly pleasing. Phase II measured standards of quality for visitor-caused noise. Visitors were presented with a series of 30-second audio clips representing increasing amounts of visitor-caused sound in the park and were asked to rate the acceptability of each clip on a survey. Findings suggest a threshold at which visitor-caused sound is judged to be unacceptable and is therefore considered noise. A parallel program of sound monitoring in the park found that current levels of visitor-caused sound sometimes violate this threshold. Study findings provide an empirical basis to help formulate noise-related indicators and standards of quality in parks and related areas.

  9. Speech versus singing: Infants choose happier sounds

    Directory of Open Access Journals (Sweden)

    Marieve Corbeil

    2013-06-01

    Full Text Available Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants’ attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech versus hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children’s song spoken versus sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children’s song versus a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.

  10. Hearing sounds, understanding actions: action representation in mirror neurons.

    Science.gov (United States)

    Kohler, Evelyne; Keysers, Christian; Umiltà, M Alessandra; Fogassi, Leonardo; Gallese, Vittorio; Rizzolatti, Giacomo

    2002-08-02

    Many object-related actions can be recognized by their sound. We found neurons in monkey premotor cortex that discharge when the animal performs a specific action and when it hears the related sound. Most of the neurons also discharge when the monkey observes the same action. These audiovisual mirror neurons code actions independently of whether these actions are performed, heard, or seen. This discovery in the monkey homolog of Broca's area might shed light on the origin of language: audiovisual mirror neurons code abstract contents (the meaning of actions) and have the auditory access to these contents that is typical of human language.

  11. Designing a Sound Reducing Wall

    Science.gov (United States)

    Erk, Kendra; Lumkes, John; Shambach, Jill; Braile, Larry; Brickler, Anne; Matthys, Anna

    2015-01-01

    Acoustical engineers use their knowledge of sound to design quiet environments (e.g., classrooms and libraries) as well as to design environments that are supposed to be loud (e.g., concert halls and football stadiums). They also design sound barriers, such as the walls along busy roadways that decrease the traffic noise heard by people in…

  12. Developmental change in children's sensitivity to sound symbolism.

    Science.gov (United States)

    Tzeng, Christina Y; Nygaard, Lynne C; Namy, Laura L

    2017-08-01

    The current study examined developmental change in children's sensitivity to sound symbolism. Three-, five-, and seven-year-old children heard sound symbolic novel words and foreign words meaning round and pointy and chose which of two pictures (one round and one pointy) best corresponded to each word they heard. Task performance varied as a function of both word type and age group such that accuracy was greater for novel words than for foreign words, and task performance increased with age for both word types. For novel words, children in all age groups reliably chose the correct corresponding picture. For foreign words, 3-year-olds showed chance performance, whereas 5- and 7-year-olds showed reliably above-chance performance. Results suggest increased sensitivity to sound symbolic cues with development and imply that although sensitivity to sound symbolism may be available early and facilitate children's word-referent mappings, sensitivity to subtler sound symbolic cues requires greater language experience. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Deformation of a sound field caused by a manikin

    DEFF Research Database (Denmark)

    Weinrich, Søren G.

    1981-01-01

    …around the head at distances of 1 cm to 2 m, measured from the tip of the nose. The signals were pure tones at 1, 2, 4, 6, 8, and 10 kHz. It was found that the presence of the manikin caused changes in the SPL of the sound field of at most ±2.5 dB at a distance of 1 m from the surface of the manikin. Only over an interval of approximately 20° behind the manikin (i.e., opposite the sound source) did the manikin cause much larger changes, up to 9 dB. These changes are caused by destructive interference between sounds coming from opposite sides of the manikin. In front of the manikin, the changes…

  14. Understanding Animal Detection of Precursor Earthquake Sounds.

    Science.gov (United States)

    Garstang, Michael; Kelley, Michael C

    2017-08-31

    We use recent research to provide an explanation of how animals might detect earthquakes before they occur. While the intrinsic value of such warnings is immense, we show that the complexity of the process may result in inconsistent responses of animals to the possible precursor signal. Using the results of our research, we describe a logical but complex sequence of geophysical events triggered by precursor earthquake crustal movements that ultimately result in a sound signal detectable by animals. The sound heard by animals occurs only when metal or other surfaces (glass) respond to vibrations produced by electric currents induced by distortions of the earth's electric fields caused by the crustal movements. A combination of existing measurement systems combined with more careful monitoring of animal response could nevertheless be of value, particularly in remote locations.

  15. Cardiovascular Sound and the Stethoscope, 1816 to 2016

    Science.gov (United States)

    Segall, Harold N.

    1963-01-01

    Cardiovascular sound escaped attention until Laennec invented and demonstrated the usefulness of the stethoscope. Accuracy of diagnosis using cardiovascular sounds as clues increased with improvement in knowledge of the physiology of circulation. Nearly all currently acceptable clinicopathological correlations were established by physicians who used the simplest of stethoscopes or listened with the bare ear. Certain refinements followed the use of modern methods which afford greater precision in timing cardiovascular sounds. These methods contribute to educating the human ear, so that those advantages may be applied which accrue from auscultation, plus the method of writing quantitative symbols to describe what is heard, by focusing the sense of hearing on each segment of the cardiac cycle in turn. By the year 2016, electronic systems of collecting and analyzing data about the cardiovascular system may render the stethoscope obsolete. PMID:13987676

  16. Hearing illusory sounds in noise: sensory-perceptual transformations in primary auditory cortex.

    NARCIS (Netherlands)

    Riecke, L.; Opstal, A.J. van; Goebel, R.; Formisano, E.

    2007-01-01

    A sound that is interrupted by silence is perceived as discontinuous. However, when the silence is replaced by noise, the target sound may be heard as uninterrupted. Understanding the neural basis of this continuity illusion may elucidate the ability to track sounds of interest in noisy auditory environments.

  17. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Elena Geangu

    2015-04-01

    Full Text Available Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds’ processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human-produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds.

  18. Development of an Amplifier for Electronic Stethoscope System and Heart Sound Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. J.; Kang, D. K. [Chongju University, Chongju (Korea)

    2001-05-01

    The conventional stethoscope cannot store its stethoscopic sounds. A doctor therefore diagnoses a patient from the instantaneous stethoscopic sounds heard at that moment and cannot recall the state of the patient's stethoscopic sounds at the next visit, which prevents accurate and objective diagnosis. If an electronic stethoscope that can store the stethoscopic sound is developed, auscultation will be greatly improved. This study describes an amplifier for an electronic stethoscope system that can extract the heart sounds of a fetus as well as an adult and allows us to hear and record the sounds. Using the developed stethoscopic amplifier, clean heart sounds of fetus and adult could be heard in noisy environments, such as a consultation room of a university hospital or a university laboratory. Notably, the heart sound of a 22-week fetus was heard through the developed electronic stethoscope. Pitch-detection experiments using the detected heart sounds showed that the signal exhibits distinct periodicity. It can be expected that the developed electronic stethoscope can substitute for conventional stethoscopes and that, if a proper analysis method for the stethoscopic signal is developed, a good electronic stethoscope system can be produced. (author). 17 refs., 6 figs.
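    The pitch-detection experiments mentioned above rest on the periodicity of the recorded heart sound. A standard way to expose such periodicity is autocorrelation: the lag at which a signal best matches a shifted copy of itself gives the period. The sketch below is not the authors' method; the synthetic "heartbeat" signal, sample rate, and helper names are all hypothetical, chosen only to illustrate the idea.

```python
import math

RATE = 200      # samples per second (hypothetical)
BEAT_HZ = 1.25  # ~75 beats per minute (hypothetical)

# Synthesize a crude heart-sound-like signal: a decaying 40 Hz burst per beat.
n = 4 * RATE
signal = [0.0] * n
period = int(RATE / BEAT_HZ)
for start in range(0, n, period):
    for k in range(40):
        if start + k < n:
            signal[start + k] += math.exp(-k / 10) * math.sin(2 * math.pi * 40 * k / RATE)

def autocorr_peak_lag(x, min_lag, max_lag):
    """Return the lag (in samples) with the highest autocorrelation."""
    best_lag, best_val = min_lag, float("-inf")
    for lag in range(min_lag, max_lag):
        val = sum(x[i] * x[i - lag] for i in range(lag, len(x)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# Search lags between 0.25 s and 2 s, i.e. heart rates of 30-240 bpm.
lag = autocorr_peak_lag(signal, min_lag=RATE // 4, max_lag=2 * RATE)
print(f"estimated heart rate: {60 * RATE / lag:.0f} bpm")
```

    The same idea carries over to a recorded signal: the autocorrelation peak of a genuinely periodic heart sound stands clearly above the background, which is presumably what "distinct periodicity" refers to in the abstract.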

  19. Half pitch lower sound perception caused by carbamazepine.

    Science.gov (United States)

    Konno, Shyu; Yamazaki, Etsuko; Kudoh, Masako; Abe, Takashi; Tohgi, Hideo

    2003-09-01

    We report a 16-year-old woman with partial seizures with secondary generalization who complained of an auditory disturbance after carbamazepine (CBZ) administration. She had been taking sodium valproate (VPA) from the age of 15; however, her seizures remained poorly controlled, so we changed her antiepileptic drug from VPA to CBZ. One week after CBZ administration, she noticed that Electone (electronic organ) musical performances were heard a semitone lower. When oral administration of CBZ was stopped, her pitch perception returned to normal. Had she not possessed absolute pitch, she might have been unable to recognize her lowered pitch perception. Auditory disturbance caused by CBZ is reversible and very rare.

  20. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  21. Annoyance caused by the sounds of a magnetic levitation train

    NARCIS (Netherlands)

    Vos, J.

    2004-01-01

    In a laboratory study, the annoyance caused by the passby sounds from a magnetic levitation (maglev) train was investigated. The listeners were presented with various sound fragments. The task of the listeners was to respond after each presentation to the question: "How annoying would you find the

  22. Human-assisted sound event recognition for home service robots.

    Science.gov (United States)

    Do, Ha Manh; Sheng, Weihua; Liu, Meiqin

    This paper proposes and implements an open framework of active auditory learning for a home service robot serving the elderly living alone at home. The framework realizes various auditory perception capabilities while enabling a remote human operator to be involved in the sound event recognition process for elderly care. The home service robot is able to estimate the sound source position and collaborate with the human operator in sound event recognition while protecting the privacy of the elderly. Our experimental results validated the proposed framework and evaluated its auditory perception capabilities and human-robot collaboration in sound event recognition.

  23. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...

  24. Applying cybernetic technology to diagnose human pulmonary sounds.

    Science.gov (United States)

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) lie mostly below 120 Hz, where the human ear is not sensitive, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and a data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, decomposing the PS signals into frequency subbands. Using a statistical method, we extracted 17 features that served as the input vectors of a neural network. We proposed a 2-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy over a single neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To extend traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waveforms, characteristic values, and spectral-analysis charts are provided to elucidate the design of the human-machine interface.
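    The feature-extraction pipeline described above (wavelet decomposition into frequency subbands, then a statistic per subband as a feature) can be sketched in miniature. The Haar wavelet, the energy statistic, and every name below are illustrative stand-ins; the paper's 17 features and its two-stage BP/LVQ classifier are not reproduced here.

```python
import math

def haar_dwt(x):
    """One Haar DWT level: returns (approximation, detail) coefficients."""
    approx = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return approx, detail

def subband_energies(x, levels):
    """Energy of each detail subband, plus the final approximation band."""
    feats = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        feats.append(sum(c * c for c in d))  # detail energy at this level
    feats.append(sum(c * c for c in x))      # residual low-frequency energy
    return feats

# A low-frequency tone should concentrate its energy in the coarsest band,
# which is the kind of cue a subband-feature classifier exploits.
rate, n = 256, 256
tone = [math.sin(2 * math.pi * 4 * k / rate) for k in range(n)]
feats = subband_energies(tone, levels=4)
print([round(f, 3) for f in feats])
```

    An orthonormal transform like this preserves total signal energy, so the feature vector is simply a redistribution of that energy across frequency bands; a statistical step (means, variances, etc.) over such subbands could then feed a neural-network classifier.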

  25. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  26. Have You Heard of GERD? (For Kids)

    Science.gov (United States)


  27. Photoacoustic Sounds from Meteors.

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, Richard E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tencer, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sweatt, William C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hogan, Roy E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Spurny, Pavel [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic)

    2015-03-01

    High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer’s ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  28. PREFACE: Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates, as is well known, from Lighthill's two papers of 1952 and 1954. I have heard that Lighthill was motivated to write the papers by the jet noise emitted by the newly commercialized jet-engined airplanes of that time. The technology of aerodynamic sound is inevitably bound up with environmental problems, so the theory should always be applied to newly emerging public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University first organized it. Most of the Japanese authors in this issue are members of the annual symposium, and I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium, and I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays and is expected to reform the theoretical model of the generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research; they apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise.
The paper by T Sueki et al also reports new technology for the
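For readers unfamiliar with the theory this preface refers to, Lighthill's acoustic analogy recasts the exact equations of fluid motion as an inhomogeneous wave equation for the density perturbation ρ′; this background equation is added here for context and is not part of the original record:

```latex
\frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho'
  = \frac{\partial^2 T_{ij}}{\partial x_i \partial x_j},
\qquad
T_{ij} = \rho u_i u_j + \bigl(p' - c_0^2 \rho'\bigr)\delta_{ij} - \tau_{ij}
```

where c₀ is the ambient speed of sound and T_ij is the Lighthill stress tensor; the turbulence on the right-hand side acts as a distribution of quadrupole sources radiating into a fictitious uniform medium.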

  9. Low-frequency sound affects active micromechanics in the human inner ear

    Science.gov (United States)

    Kugler, Kathrin; Wiegrebe, Lutz; Grothe, Benedikt; Kössl, Manfred; Gürkov, Robert; Krause, Eike; Drexl, Markus

    2014-01-01

    Noise-induced hearing loss is one of the most common auditory pathologies, resulting from overstimulation of the human cochlea, an exquisitely sensitive micromechanical device. At very low frequencies (less than 250 Hz), however, the sensitivity of human hearing, and therefore the perceived loudness, is poor. The perceived loudness is mediated by the inner hair cells of the cochlea, which are driven very inadequately at low frequencies. To assess the impact of low-frequency (LF) sound, we exploited a by-product of the active amplification of sound that outer hair cells (OHCs) perform: so-called spontaneous otoacoustic emissions. These are faint sounds produced by the inner ear that can be used to detect changes in cochlear physiology. We show that a short exposure to perceptually unobtrusive LF sounds significantly affects OHCs: a 90 s, 80 dB(A) LF sound induced slow, concordant and positively correlated frequency and level oscillations of spontaneous otoacoustic emissions that lasted for about 2 min after LF sound offset. LF sounds, contrary to their unobtrusive perception, strongly stimulate the human cochlea and affect amplification processes in the most sensitive and important frequency range of human hearing. PMID:26064536

  10. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  11. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain.

    Science.gov (United States)

    Lu, Kai; Vicario, David S

    2014-10-07

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used with human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.

  12. Integrated Human Factors Design Guidelines for Sound Interface

    International Nuclear Information System (INIS)

    Lee, Jung Woon; Lee, Yong Hee; Oh, In Seok; Lee, Hyun Chul; Cha, Woo Chang

    2004-05-01

    Digital MMI, such as CRT and LCD displays, has been used increasingly in the design of the main control rooms of the Korean standard nuclear power plants following the YGN units 3 and 4. The utilization of digital MMI may introduce various kinds of sound interfaces into the control room design. In this project, for five top-level guideline items, including Sound Formats, Alarms, Sound Controls, Communications, and Environments, a total of 147 detailed guidelines were developed, along with a database system for these guidelines. The integrated human factors design guidelines for sound interface and the database system developed in this project will be useful for the design of the sound interfaces of digital MMI in Korean NPPs.

  13. Integrated Human Factors Design Guidelines for Sound Interface

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Woon; Lee, Yong Hee; Oh, In Seok; Lee, Hyun Chul [KAERI, Daejeon (Korea, Republic of); Cha, Woo Chang [Kumoh National Univ. of Technology, Gumi (Korea, Republic of)

    2004-05-15

    Digital MMI, such as CRT and LCD displays, has been used increasingly in the design of the main control rooms of the Korean standard nuclear power plants following the YGN units 3 and 4. The utilization of digital MMI may introduce various kinds of sound interfaces into the control room design. In this project, for five top-level guideline items, including Sound Formats, Alarms, Sound Controls, Communications, and Environments, a total of 147 detailed guidelines were developed, along with a database system for these guidelines. The integrated human factors design guidelines for sound interface and the database system developed in this project will be useful for the design of the sound interfaces of digital MMI in Korean NPPs.

  14. Hearing Mouth Shapes: Sound Symbolism and the Reverse McGurk Effect

    Directory of Open Access Journals (Sweden)

    Charles Spence

    2012-09-01

    Full Text Available In their recent article, Sweeny, Guzman-Martinez, Ortega, Grabowecky, and Suzuki (2012) demonstrate that heard speech sounds modulate the perceived shape of briefly presented visual stimuli. Ovals, whose aspect ratio (relating width to height) varied on a trial-by-trial basis, were rated as looking wider when a /woo/ sound was presented, and as taller when a /wee/ sound was presented instead. On the one hand, these findings add to a growing body of evidence demonstrating that audiovisual correspondences can have perceptual (as well as decisional) effects. On the other hand, they prompt a question concerning their origin. Although the currently popular view is that crossmodal correspondences are based on the internalization of the natural multisensory statistics of the environment (see Spence, 2011), these new results suggest instead that certain correspondences may actually be based on the sensorimotor responses associated with human vocalizations. As such, the findings of Sweeny et al. help to breathe new life into Sapir's (1929) once-popular “embodied” explanation of sound symbolism. Furthermore, they pose a challenge for those psychologists wanting to determine which among a number of plausible accounts best explains the available data on crossmodal correspondences.

  15. Techniques and applications for binaural sound manipulation in human-machine interfaces

    Science.gov (United States)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1992-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.

  16. ``Hiss, clicks and pops'' - The enigmatic sounds of meteors

    Science.gov (United States)

    Finnegan, J. A.

    2015-04-01

    The improbability of sounds heard simultaneously with meteors has allowed the phenomenon to remain on the margins of scientific interest and research. This is unjustified, since these audibly perceived electric field effects indicate complex, inconsistent and still unresolved electromagnetic coupling and charge dynamics interacting between the meteor, the ionosphere, the mesosphere, the stratosphere, the troposphere and the surface of the Earth. This paper reviews meteor acoustic effects, presents illustrative reports and hypotheses, and includes a summary of similar and additional phenomena observed during the 2013 February 15 asteroid fragment disintegration above the Russian district of Chelyabinsk. An augmenting theory involving near-ground, non-uniform electric field production of ozone, as a stimulated geophysical phenomenon to explain some hissing 'meteor sounds', is suggested in section 2.2. Unlike previous theories, electromagnetic field fluctuation rates are not required to occur in the audio frequency range for this process to acoustically emit hissing and intermittent impulsive sounds, removing the requirements of direct conversion, passive human transduction or excited, localised acoustic 'emitters'. Links to the Armagh Observatory all-sky meteor cameras, electrophonic meteor research and full construction plans for an extremely low frequency (ELF) receiver are also included.

  17. Reading drift in flow rate sensors caused by steady sound waves

    International Nuclear Information System (INIS)

    Maximiano, Celso; Nieble, Marcio D.; Migliavacca, Sylvana C.P.; Silva, Eduardo R.F.

    1995-01-01

    The use of thermal sensors is very common for the measurement of small flows of gases. In this kind of sensor, a little tube forming a bypass is heated symmetrically, and the temperature distribution in the tube changes with the mass flow along it. When a stationary wave appears in the principal tube, it causes an oscillation of pressure around the average value. The sensor, located between two points of the principal tube, indicates not only the principal mass flow, but also the flow caused by the difference of pressure induced by the sound wave. When the gas flows at low pressures, the equipment indicates a value that does not correspond to the real one. Tests were carried out by generating a sound wave in the principal tube, without mass flow, and the sensor detected flux. In order to solve this problem, a wave damper was constructed, installed and tested in the system, and it worked satisfactorily, efficiently eliminating the sound wave. (author). 2 refs., 3 figs

  18. A- and C-weighted sound levels as predictors of the annoyance caused by shooting sounds, for various facade attenuation types

    NARCIS (Netherlands)

    Vos, J.

    2003-01-01

    In a previous study on the annoyance caused by a great variety of shooting sounds [J. Acoust. Soc. Am. 109, 244-253 (2001)], it was shown that the annoyance, as rated indoors with the windows closed, could be adequately predicted from the outdoor A-weighted and C-weighted sound-exposure levels [ASEL
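The A-weighted and C-weighted levels compared in the record above are defined by the standard frequency weightings of IEC 61672. As background (this is a sketch of the published standard's formulas, not code from the study), both weightings can be computed directly:

```python
import math

def a_weighting_db(f: float) -> float:
    """A-weighting relative response in dB at frequency f (Hz), per IEC 61672."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00  # normalised to 0 dB at 1 kHz

def c_weighting_db(f: float) -> float:
    """C-weighting relative response in dB at frequency f (Hz), per IEC 61672."""
    f2 = f * f
    rc = (12194.0**2 * f2) / ((f2 + 20.6**2) * (f2 + 12194.0**2))
    return 20.0 * math.log10(rc) + 0.06  # normalised to 0 dB at 1 kHz
```

At 31.5 Hz the A-weighting attenuates by roughly 39 dB while the C-weighting attenuates by only about 3 dB, which is why C-weighted levels retain the low-frequency energy of shooting sounds that A-weighted levels discard.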

  19. Investigating emotional contagion in dogs (Canis familiaris) to emotional sounds of humans and conspecifics.

    Science.gov (United States)

    Huber, Annika; Barber, Anjuli L A; Faragó, Tamás; Müller, Corsin A; Huber, Ludwig

    2017-07-01

    Emotional contagion, a basic component of empathy defined as emotional state-matching between individuals, has previously been shown in dogs even upon solely hearing negative emotional sounds of humans or conspecifics. The current investigation further sheds light on this phenomenon by directly contrasting emotional sounds of both species (humans and dogs) as well as opposed valences (positive and negative) to gain insights into intra- and interspecies empathy as well as differences between positively and negatively valenced sounds. Different types of sounds were played back to measure the influence of three dimensions on the dogs' behavioural response. We found that dogs behaved differently after hearing non-emotional sounds of their environment compared to emotional sounds of humans and conspecifics ("Emotionality" dimension), but the subjects responded similarly to human and conspecific sounds ("Species" dimension). However, dogs expressed more freezing behaviour after conspecific sounds, independent of the valence. Comparing positively with negatively valenced sounds of both species ("Valence" dimension), we found that, independent of the species from which the sound originated, dogs expressed more behavioural indicators for arousal and negatively valenced states after hearing negative emotional sounds. This response pattern indicates emotional state-matching or emotional contagion for negative sounds of humans and conspecifics. It furthermore indicates that dogs recognized the different valences of the emotional sounds, which is a promising finding for future studies on empathy for positive emotional states in dogs.

  20. Sound in Ergonomics

    Directory of Open Access Journals (Sweden)

    Jebreil Seraji

    1999-03-01

    Full Text Available The word “Ergonomics” is composed of two separate parts, “Ergo” and “Nomos”, and refers to human factors engineering. Indeed, ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance. It draws on different sciences, such as anatomy and physiology, anthropometry, engineering, psychology, biophysics and biochemistry, for different ergonomic purposes. Sound, when referred to as noise pollution, can affect this balance in human life. The industrial noise caused by factories, traffic, media, and modern human activity can affect the health of society. Here we aim to discuss sound from an ergonomic point of view.

  1. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can
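The frame-based acoustic feature extraction described above (a short frame of raw audio converted into a summary feature vector) can be sketched minimally; the 50 ms frame length matches the abstract's example, but the two features chosen here are illustrative assumptions, not the dissertation's actual feature set, and the DBN segmentation stage is omitted:

```python
import numpy as np

def frame_features(signal: np.ndarray, sr: int, frame_ms: float = 50.0) -> np.ndarray:
    """Split audio into non-overlapping frames and compute a small feature
    vector per frame: log energy and spectral centroid (illustrative choices)."""
    frame_len = int(sr * frame_ms / 1000.0)
    n_frames = len(signal) // frame_len
    feats = np.zeros((n_frames, 2))
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        feats[i, 0] = np.log(np.sum(frame**2) + 1e-12)  # log energy
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        # Spectral centroid: amplitude-weighted mean frequency of the frame
        feats[i, 1] = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return feats
```

A segmenter would then watch this feature sequence for abrupt changes (e.g., a jump in log energy or centroid) to mark where one sound event ends and another begins.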

  2. Beliefs in the population about cracking sounds produced during spinal manipulation.

    Science.gov (United States)

    Demoulin, Christophe; Baeri, Damien; Toussaint, Geoffrey; Cagnie, Barbara; Beernaert, Axel; Kaux, Jean-François; Vanderthommen, Marc

    2018-03-01

    To examine beliefs about cracking sounds heard during high-velocity low-amplitude (HVLA) thrust spinal manipulation in individuals with and without personal experience of this technique. We included 100 individuals. Among them, 60 had no history of spinal manipulation, including 40 who were asymptomatic with or without a past history of spinal pain and 20 who had nonspecific spinal pain. The remaining 40 patients had a history of spinal manipulation; among them, 20 were asymptomatic and 20 had spinal pain. Participants attended a one-on-one interview during which they completed a questionnaire about their history of spinal manipulation and their beliefs regarding sounds heard during spinal manipulation. Mean age was 43.5±15.4 years. The sounds were ascribed to vertebral repositioning by 49% of participants and to friction between two vertebrae by 23% of participants; only 9% of participants correctly ascribed the sound to the formation of a gas bubble in the joint. The sound was mistakenly considered to indicate successful spinal manipulation by 40% of participants. No differences in beliefs were found between the groups with and without a history of spinal manipulation. Certain beliefs have documented adverse effects. This study showed a high prevalence of unfounded beliefs regarding spinal manipulation. These beliefs deserve greater attention from healthcare providers, particularly those who practice spinal manipulation. Copyright © 2017 Société française de rhumatologie. Published by Elsevier SAS. All rights reserved.

  3. Generation of ultra-sound during tape peeling

    KAUST Repository

    Marston, Jeremy O.

    2014-03-21

    We investigate the generation of the screeching sound commonly heard during tape peeling using synchronised high-speed video and audio acquisition. We determine the peak frequencies in the audio spectrum and, in addition to a peak frequency at the upper end of the audible range (around 20 kHz), we find an unexpectedly strong sound with a frequency far above the audible range, typically around 50 kHz. Using the corresponding video data, the origins of the key frequencies are confirmed as being due to the substructure "fracture" bands, which we herein observe in both high-speed continuous peeling motions and in the slip phases of stick-slip peeling motions.

  4. Generation of ultra-sound during tape peeling

    KAUST Repository

    Marston, Jeremy O.; Riker, Paul W.; Thoroddsen, Sigurdur T.

    2014-01-01

    We investigate the generation of the screeching sound commonly heard during tape peeling using synchronised high-speed video and audio acquisition. We determine the peak frequencies in the audio spectrum and, in addition to a peak frequency at the upper end of the audible range (around 20 kHz), we find an unexpectedly strong sound with a frequency far above the audible range, typically around 50 kHz. Using the corresponding video data, the origins of the key frequencies are confirmed as being due to the substructure "fracture" bands, which we herein observe in both high-speed continuous peeling motions and in the slip phases of stick-slip peeling motions.

  5. Sleep disturbance caused by meaningful sounds and the effect of background noise

    Science.gov (United States)

    Namba, Seiichiro; Kuwano, Sonoko; Okamoto, Takehisa

    2004-10-01

    To study noise-induced sleep disturbance, a new procedure called the "noise interrupted method" has been developed. The experiment is conducted in the bedroom of the house of each subject. The sounds are reproduced with a mini-disk player which has an automatic reverse function. If the sound is disturbing and subjects cannot sleep, they are allowed to switch off the sound 1 h after they start trying to sleep. This switch-off (noise interrupted behavior) is an important index of sleep disturbance. The next morning they fill in a questionnaire asking about the quality of sleep, the disturbance caused by the sounds, the time when they switched off the sound, etc. The results showed a good relationship between L and the percentage of subjects who could not sleep within an hour, and between L and the disturbance reported in the questionnaire. This suggests that this method is a useful tool to measure the sleep disturbance caused by noise under well-controlled conditions.

  6. Sound of a cup with and without instant coffee

    Science.gov (United States)

    Morrison, Andrew; Rossing, Thomas D.

    2002-05-01

    An empty coffee cup, like an ancient Chinese two-tone bell, emits two distinctly different tones, depending upon where it is tapped. When it is filled with hot water and some instant coffee is added, however, a whole new set of sounds is heard when the cup is tapped. The pitch rises an octave or more as the foam clears, due to the dramatic change in the speed of sound in the bubble-filled liquid. A similar, but smaller, effect was noted in beer by Bragg [The World of Sound (1968)] and in hot chocolate by Crawford [Am. J. Phys. (1982)]. We describe the modes of vibration of a coffee cup and the sound emitted by a cup filled with instant coffee as the bubble density changes.
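The dramatic change in sound speed that the abstract invokes is usually modelled with Wood's equation for a two-phase mixture (the standard textbook model, not necessarily the one used in the paper): the mixture takes the liquid's high density but the gas's high compressibility, so even a tiny air fraction collapses the sound speed, and with it the pitch of the tapped cup.

```python
import math

def wood_sound_speed(phi: float,
                     rho_gas: float = 1.2, c_gas: float = 343.0,
                     rho_liq: float = 1000.0, c_liq: float = 1480.0) -> float:
    """Sound speed (m/s) in a bubbly liquid with gas volume fraction phi,
    from Wood's equation: mixture density and compressibility are the
    volume-weighted averages of the two phases."""
    rho_mix = phi * rho_gas + (1.0 - phi) * rho_liq
    compressibility = (phi / (rho_gas * c_gas**2)
                       + (1.0 - phi) / (rho_liq * c_liq**2))
    return 1.0 / math.sqrt(rho_mix * compressibility)
```

With a 1% air fraction the mixture sound speed drops to roughly 120 m/s, far below that of either pure water or pure air; as the bubbles clear (phi → 0) it recovers toward ~1480 m/s, raising the resonance pitch by well over an octave.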

  7. Temporal coherence for pure tones in budgerigars (Melopsittacus undulatus) and humans (Homo sapiens).

    Science.gov (United States)

    Neilans, Erikson G; Dent, Micheal L

    2015-02-01

    Auditory scene analysis has been suggested as a universal process that exists across all animals. Relative to humans, however, little work has been devoted to how animals perceptually isolate different sound sources. Frequency separation of sounds is arguably the most common parameter studied in auditory streaming, but it is not the only factor contributing to how the auditory scene is perceived. Researchers have found that in humans, even at large frequency separations, synchronous tones are heard as a single auditory stream, whereas asynchronous tones with the same frequency separations are perceived as 2 distinct sounds. These findings demonstrate how both the timing and frequency separation of sounds are important for auditory scene analysis. It is unclear how animals, such as budgerigars (Melopsittacus undulatus), perceive synchronous and asynchronous sounds. In this study, budgerigars and humans (Homo sapiens) were tested on their perception of synchronous, asynchronous, and partially overlapping pure tones using the same psychophysical procedures. Species differences were found between budgerigars and humans in how partially overlapping sounds were perceived, with budgerigars more likely to segregate overlapping sounds and humans more apt to fuse the 2 sounds together. The results also illustrated that temporal cues are particularly important for stream segregation of overlapping sounds. Lastly, budgerigars were found to segregate partially overlapping sounds in a manner predicted by computational models of streaming, whereas humans were not. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  8. Songbirds and humans apply different strategies in a sound sequence discrimination task

    Directory of Open Access Journals (Sweden)

    Yoshimasa Seki

    2013-07-01

    Full Text Available The abilities of animals and humans to extract rules from sound sequences have previously been compared using observation of spontaneous responses and conditioning techniques. However, the results were inconsistently interpreted across studies, possibly due to methodological and/or species differences. Therefore, we examined the strategies for discrimination of sound sequences in Bengalese finches and humans using the same protocol. Birds were trained on a GO/NOGO task to discriminate between two categories of sound stimulus generated based on an AAB or ABB rule. The sound elements used were taken from a variety of male (M) and female (F) calls, such that the sequences could be represented as MMF and MFF. In test sessions, FFM and FMM sequences, which were never presented in the training sessions but conformed to the rule, were presented as probe stimuli. The results suggested two discriminative strategies were being applied: (1) memorizing sound patterns of either GO or NOGO stimuli and generating the appropriate responses for only those sounds; and (2) using the repeated element as a cue. There was no evidence that the birds successfully extracted the abstract rule (i.e., AAB and ABB); MMF-GO subjects did not produce a GO response for FFM and vice versa. Next we examined whether those strategies were also applicable for human participants on the same task. The results and questionnaires revealed that participants extracted the abstract rule, and most of them employed it to discriminate the sequences. This strategy was never observed in bird subjects, although some participants used strategies similar to the birds when responding to the probe stimuli. Our results showed that the human participants applied the abstract rule in the task even without instruction but Bengalese finches did not, thereby reconfirming that humans can extract abstract rules from sound sequences in a way that is distinct from non-human animals.

  9. Songbirds and humans apply different strategies in a sound sequence discrimination task.

    Science.gov (United States)

    Seki, Yoshimasa; Suzuki, Kenta; Osawa, Ayumi M; Okanoya, Kazuo

    2013-01-01

    The abilities of animals and humans to extract rules from sound sequences have previously been compared using observation of spontaneous responses and conditioning techniques. However, the results were inconsistently interpreted across studies, possibly due to methodological and/or species differences. Therefore, we examined the strategies for discrimination of sound sequences in Bengalese finches and humans using the same protocol. Birds were trained on a GO/NOGO task to discriminate between two categories of sound stimulus generated based on an "AAB" or "ABB" rule. The sound elements used were taken from a variety of male (M) and female (F) calls, such that the sequences could be represented as MMF and MFF. In test sessions, FFM and FMM sequences, which were never presented in the training sessions but conformed to the rule, were presented as probe stimuli. The results suggested two discriminative strategies were being applied: (1) memorizing sound patterns of either GO or NOGO stimuli and generating the appropriate responses for only those sounds; and (2) using the repeated element as a cue. There was no evidence that the birds successfully extracted the abstract rule (i.e., AAB and ABB); MMF-GO subjects did not produce a GO response for FFM and vice versa. Next we examined whether those strategies were also applicable for human participants on the same task. The results and questionnaires revealed that participants extracted the abstract rule, and most of them employed it to discriminate the sequences. This strategy was never observed in bird subjects, although some participants used strategies similar to the birds when responding to the probe stimuli. Our results showed that the human participants applied the abstract rule in the task even without instruction but Bengalese finches did not, thereby reconfirming that humans can extract abstract rules from sound sequences in a way that is distinct from non-human animals.

  10. Wind Turbine Infra and Low-Frequency Sound: Warning Signs that Were Not Heard

    Science.gov (United States)

    James, Richard R.

    2012-01-01

    Industrial wind turbines are frequently thought of as benign. However, the literature is reporting adverse health effects associated with the implementation of industrial-scale wind developments. This article explores the historical evidence about what was known regarding infra and low-frequency sound from wind turbines and other noise sources…

  11. Effect of gap detection threshold on consistency of speech in children with speech sound disorder.

    Science.gov (United States)

    Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz

    2017-02-01

    The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age who were categorized into three groups: typical speech, consistent speech disorder (CSD) and inconsistent speech disorder (ISD). The phonetic gap detection threshold test was used for this study, which is a validated test comprising six syllables with inter-stimulus intervals between 20 and 300 ms. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and between the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech reflects inconsistency in auditory perception, which is caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.
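A gap-detection stimulus of the kind described above (two sounds separated by an inter-stimulus interval, judged as one sound or two) can be sketched as follows; the study used syllable recordings, whereas pure-tone bursts and the burst parameters are used here purely for illustration:

```python
import numpy as np

def gap_stimulus(isi_ms: float, sr: int = 44100,
                 burst_ms: float = 50.0, freq: float = 1000.0) -> np.ndarray:
    """Two identical tone bursts separated by a silent gap of isi_ms milliseconds.
    A listener with a gap detection threshold above isi_ms would report one sound."""
    n_burst = int(sr * burst_ms / 1000.0)
    n_gap = int(sr * isi_ms / 1000.0)
    t = np.arange(n_burst) / sr
    burst = 0.5 * np.sin(2 * np.pi * freq * t)
    return np.concatenate([burst, np.zeros(n_gap), burst])
```

Sweeping `isi_ms` over the 20–300 ms range mentioned in the abstract, and asking the listener "one sound or two?" at each step, yields the gap detection threshold as the shortest interval reliably reported as two sounds.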

  12. Mechanisms underlying speech sound discrimination and categorization in humans and zebra finches

    NARCIS (Netherlands)

    Burgering, Merel A.; ten Cate, Carel; Vroomen, Jean

    Speech sound categorization in birds seems in many ways comparable to that by humans, but it is unclear what mechanisms underlie such categorization. To examine this, we trained zebra finches and humans to discriminate two pairs of edited speech sounds that varied either along one dimension (vowel

  13. Determination of the mechanical thermostat electrical contacts switching quality with sound and vibration analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rejc, Jure; Munih, Marko [University of Ljubljana, Ljubljana (Slovenia)

    2017-05-15

    A mechanical thermostat is a device that switches heating or cooling appliances on or off based on temperature. For this kind of use, electronic or mechanical switching concepts are applied. During the production of electrical contacts, several irregularities can occur, leading to improper switching events of the thermostat electrical contacts. This paper presents a non-obstructive method based on the fact that when the switching event occurs, it can be heard and felt by human senses. We performed several laboratory tests with two different methods. The first method involves analysis of the thermostat switch sound signal during the switching event. The second method is based on sampling of the accelerometer signal during the switching event. The results show that the sound analysis approach has great potential. The approach enables an accurate determination of the switching event even if the sampled signal also carries the switching event of the neighbouring thermostat.
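The sound-analysis approach described above, locating the audible switching click in a recorded signal, can be sketched as a simple short-time-energy onset detector; the frame size and threshold ratio here are illustrative assumptions, not the authors' actual method:

```python
import numpy as np

def detect_click(signal: np.ndarray, sr: int, frame_ms: float = 5.0,
                 threshold_ratio: float = 10.0) -> float:
    """Return the time (s) of the first frame whose energy exceeds
    threshold_ratio times the median frame energy, or -1.0 if none does."""
    frame_len = max(1, int(sr * frame_ms / 1000.0))
    n = len(signal) // frame_len
    energy = np.array([np.sum(signal[i * frame_len:(i + 1) * frame_len]**2)
                       for i in range(n)])
    baseline = np.median(energy) + 1e-12  # robust estimate of background level
    hits = np.where(energy > threshold_ratio * baseline)[0]
    return hits[0] * frame_len / sr if len(hits) else -1.0
```

Comparing the detected click times from a microphone and from an accelerometer channel would then give the kind of cross-validation between the two methods that the abstract describes.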

  14. The influence of ski helmets on sound perception and sound localisation on the ski slope

    Directory of Open Access Journals (Sweden)

    Lana Ružić

    2015-04-01

    Objectives: The aim of the study was to investigate whether a ski helmet interferes with sound localization and the time of sound perception in the frontal plane. Material and Methods: Twenty-three participants (age 30.7±10.2) were tested on the slope under 2 conditions, with and without a ski helmet, using 6 spatially distributed sound stimuli per condition. Each subject had to react as soon as possible upon hearing a sound and to indicate the side from which it arrived. Results: The results showed a significant difference in the ability to localize the specific ski sounds: 72.5±15.6% correct answers without a helmet vs. 61.3±16.2% with a helmet (p < 0.01). However, performance on this test did not depend on whether the participants were used to wearing a helmet (p = 0.89). In identifying the time at which the sound was first perceived, the results also favored the subjects not wearing a helmet: subjects reported hearing the ski sound cues at 73.4±5.56 m without a helmet vs. 60.29±6.34 m with a helmet (p < 0.001). In that case the results did depend on previous helmet use (p < 0.05), meaning that regular use of helmets might help diminish the attenuation of sound identification that the helmet causes. Conclusions: Ski helmets might limit a skier's ability to localize the direction of sounds of danger and might delay the moment at which a sound is first heard.

  15. Human-inspired sound environment recognition system for assistive vehicles

    Science.gov (United States)

    González Vidal, Eduardo; Fredes Zarricueta, Ernesto; Auat Cheein, Fernando

    2015-02-01

    Objective. The human auditory system acquires environmental information under sound stimuli faster than the visual or touch systems, which in turn allows for faster human responses to such stimuli. It also complements senses such as sight, which requires a direct line of view to identify objects, in the environment recognition process. This work focuses on implementing human reaction to sound stimuli and environment recognition on assistive robotic devices, such as robotic wheelchairs or robotized cars. These vehicles need environment information to ensure safe navigation. Approach. In the field of environment recognition, range sensors (such as LiDAR and ultrasonic systems) and artificial vision devices are widely used; however, these sensors depend on environment constraints (such as lighting variability or the color of objects), and sound can provide important information for the characterization of an environment. In this work, we propose a sound-based approach to enhance the environment recognition process, mainly for cases that compromise human integrity, according to the International Classification of Functioning (ICF). Our proposal is based on a neural network implementation that is able to classify up to 15 different environments, each selected according to the ICF considerations on environmental factors in the community-based physical activities of people with disabilities. Main results. The accuracy rates in environment classification range from 84% to 93%. This classification is later used to constrain assistive vehicle navigation in order to protect the user during daily activities. This work also includes real-time outdoor experimentation (performed on an assistive vehicle) by seven volunteers with different disabilities (but without cognitive impairment and experienced in the use of wheelchairs), statistical validation, comparison with previously published work, and a discussion section where the pros and cons of our system are evaluated. Significance
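The classification pipeline described above can be approximated, very loosely, by extracting spectral features from a sound clip and assigning the nearest class centroid. The paper uses a neural network on real recordings; everything below (the band-energy features, the nearest-centroid classifier, the class names, and the synthetic signals) is an invented stand-in to illustrate the idea:

```python
# Loose sketch of sound-based environment classification: band-energy
# features plus a nearest-centroid classifier. Class names and signals
# are synthetic; this is not the paper's neural-network system.
import numpy as np

def band_energies(signal, n_bands=8):
    """Log energy in n_bands equal-width frequency bands -> feature vector."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log(np.array([b.sum() for b in bands]) + 1e-12)

class NearestCentroid:
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = {c: np.mean([x for x, l in zip(X, y) if l == c], axis=0)
                          for c in self.labels}
        return self

    def predict(self, x):
        return min(self.labels,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))

# Two synthetic "environments": low-frequency rumble vs. broadband hiss
fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
def rumble(): return np.sin(2 * np.pi * 60 * t) + 0.05 * rng.standard_normal(fs)
def hiss(): return 0.05 * np.sin(2 * np.pi * 60 * t) + rng.standard_normal(fs)

X = [band_energies(rumble()) for _ in range(5)] + \
    [band_energies(hiss()) for _ in range(5)]
y = ["street_traffic"] * 5 + ["crowded_hall"] * 5
clf = NearestCentroid().fit(X, y)
print(clf.predict(band_energies(rumble())))  # prints: street_traffic
```

A real system in this vein would replace the two toy classes with recordings of the 15 ICF-derived environments and the centroid rule with a trained network.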

  16. SoundScapes - Beyond Interaction... in search of the ultimate human-centred interface

    DEFF Research Database (Denmark)

    Brooks, Tony

    2006-01-01

    that can also benefit communication. To achieve this, a new generation of intuitive natural interfaces will be required, and SoundScapes (see below) is a step toward this goal of discovering the ultimate interface for matching the human experience to technology. Emergent hypotheses that have developed...... as a result of the SoundScapes research will be discussed. Introduction to SoundScapes SoundScapes is a contemporary art concept that has become widely known as an interdisciplinary platform for knowledge exchange and for innovative product creation in creative and scientific work that uses non-invasive sensor...... Resonance. The multimedia content is adaptable so that the environment is tailored for each participant according to a user profile. Full body movement, or the smallest of gestures, results in human data input to SoundScapes. The same technology that enables this empowerment is used for performance art...

  17. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S

    2018-02-21

    Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. 
Listeners were presented with sounds drawn from

  18. Transformer sound level caused by core magnetostriction and winding stress displacement variation

    Directory of Open Access Journals (Sweden)

    Chang-Hung Hsu

    2017-05-01

    Magnetostriction caused by the excitation of the magnetic core, together with the current conducted by the winding wired to the core, has a significant impact on a power transformer. This paper presents the sound of factory transformers measured in no-load tests before on-site delivery, and also discusses the winding characteristics obtained from full-load tests. Simulations and measurements are performed for several transformers with capacities ranging from 15 to 60 MVA, from a high voltage of 132 kV to a low voltage of 33 kV. This study compares the sound levels of transformers in the no-load test (core/magnetostriction) and the full-load test (winding/displacement ε). The difference between the simulated and the measured sound levels is about 3 dB. The results show that the sound level depends on several parameters, including winding displacement, capacity, and the mass of the core and windings. Comparative results for the magnetic induction of the cores and the electromagnetic force of the windings under no-load and full-load conditions are examined.

  19. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals), and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter...... is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals' frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many...... of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...
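The chapter's point that efficient radiation requires an emitter comparable in size to the wavelength can be made concrete with the basic relation λ = c/f (wavelength equals sound speed divided by frequency):

```python
# Wavelength of sound in air for a given frequency, lambda = c / f.
# c ~ 343 m/s is the approximate speed of sound in air at 20 degrees C.

def wavelength_m(frequency_hz, speed_of_sound=343.0):
    """Wavelength in metres for a given frequency."""
    return speed_of_sound / frequency_hz

# A 1 kHz tone in air:
print(wavelength_m(1000.0))  # 0.343 m
# A 20 Hz infrasonic tone, far larger than any animal sound emitter:
print(wavelength_m(20.0))    # 17.15 m
```

The 17 m wavelength at 20 Hz illustrates why no animal-sized source radiates infrasonic pressure waves efficiently, while a centimetre-scale emitter handles ultrasonic wavelengths easily.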

  20. Is 9 louder than 1? Audiovisual cross-modal interactions between number magnitude and judged sound loudness.

    Science.gov (United States)

    Alards-Tomalin, Doug; Walker, Alexander C; Shaw, Joshua D M; Leboe-McGowan, Launa C

    2015-09-01

    The cross-modal impact of number magnitude (i.e., Arabic digits) on perceived sound loudness was examined. Participants compared a target sound's intensity level against a previously heard reference sound (judging it as quieter or louder). Paired with each target sound was a task-irrelevant Arabic digit that varied in magnitude, being either small (1, 2, 3) or large (7, 8, 9). The degree to which the sound and the digit were synchronized was manipulated, with the digit and sound occurring simultaneously in Experiment 1, and the digit preceding the sound in Experiment 2. First, when target sounds and digits occurred simultaneously, sounds paired with large digits were categorized as loud more frequently than sounds paired with small digits. Second, when the events were separated, number magnitude ceased to bias sound intensity judgments. In Experiment 3, the events were still separated, but the participants held the number in short-term memory; in this instance the bias returned. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization.

    Science.gov (United States)

    Billig, Alexander J; Davis, Matthew H; Carlyon, Robert P

    2018-03-14

    Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization. SIGNIFICANCE STATEMENT Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. 
Listeners reported spontaneous changes in their perception between these two interpretations while
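The HLH- streaming stimulus this record describes can be sketched as a waveform generator. The tone frequencies, durations, and sampling rate below are illustrative choices, not the study's actual parameters:

```python
# Sketch of the classic auditory streaming stimulus: pure high (H) and
# low (L) tones in a repeating HLH- pattern, where "-" is a silent gap.
# All frequencies and durations are assumptions for illustration.
import numpy as np

def tone(freq, dur_s, fs):
    t = np.arange(int(dur_s * fs)) / fs
    return np.sin(2 * np.pi * freq * t)

def hlh_sequence(n_triplets, fs=16000, f_high=1000.0, f_low=800.0,
                 tone_dur=0.1, gap_dur=0.1):
    """Concatenate n_triplets repetitions of H L H - into one waveform."""
    h = tone(f_high, tone_dur, fs)
    l = tone(f_low, tone_dur, fs)
    silence = np.zeros(int(gap_dur * fs))
    triplet = np.concatenate([h, l, h, silence])
    return np.tile(triplet, n_triplets)

seq = hlh_sequence(5)
# 5 triplets x (3 tones + 1 gap) x 0.1 s at 16 kHz = 32000 samples (2 s)
```

Whether such a sequence is heard as one integrated stream (HLH-) or as segregated high and low streams depends, among other things, on the frequency separation between f_high and f_low and the presentation rate.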

  2. Human and animal sounds influence recognition of body language.

    Science.gov (United States)

    Van den Stock, Jan; Grèzes, Julie; de Gelder, Beatrice

    2008-11-25

    In naturalistic settings, emotional events have multiple correlates and are simultaneously perceived by several sensory systems. Recent studies have shown that recognition of facial expressions is biased towards the emotion expressed by a simultaneously presented emotional voice, even if attention is directed to the face only. So far, no study has examined whether this phenomenon also applies to whole-body expressions, although there is no obvious reason why this crossmodal influence would be specific to faces. Here we investigated whether perception of emotions expressed in whole-body movements is influenced by affective information provided by human and animal vocalizations. Participants were instructed to attend to the action displayed by the body and to categorize the expressed emotion. The results indicate that recognition of body language is biased towards the emotion expressed by the simultaneously presented auditory information, whether it consists of human or animal sounds. Our results show that a crossmodal influence from auditory to visual emotional information occurs for whole-body video images with the facial expression blanked, and that it includes human as well as animal sounds.

  3. Evidence of sound production by spawning lake trout (Salvelinus namaycush) in lakes Huron and Champlain

    Science.gov (United States)

    Johnson, Nicholas S.; Higgs, Dennis; Binder, Thomas R.; Marsden, J. Ellen; Buchinger, Tyler John; Brege, Linnea; Bruning, Tyler; Farha, Steve A.; Krueger, Charles C.

    2018-01-01

    Two sounds associated with spawning lake trout (Salvelinus namaycush) in lakes Huron and Champlain were characterized by comparing sound recordings to behavioral data collected using acoustic telemetry and video. These sounds, named growls and snaps, were heard on lake trout spawning reefs but not on a non-spawning reef, and were more common at night than during the day. Growls also occurred more often during the spawning period than the pre-spawning period, while the trend for snaps was reversed. In a laboratory flume, sounds occurred when male lake trout were displaying spawning behaviors: growls when males were quivering and parallel swimming, and snaps when males moved their jaws. Combining our results with the observation of possible sound production by spawning splake (Salvelinus fontinalis × Salvelinus namaycush hybrid) provides rare evidence of spawning-related sound production by a salmonid, or by any other fish in the superorder Protacanthopterygii. Further characterization of these sounds could be useful for lake trout assessment, restoration, and control.

  4. Effect which environmental sound causes for memory retrieval

    OpenAIRE

    武良,徹文

    1999-01-01

    This study examined related and unrelated pairings of prime and target stimuli while participants heard pleasant or unpleasant sounds. Its purpose was to determine the influence of these sounds on task performance, using reaction time and response error rate as indices. The subjects were 50 students, ranging from first-year university students to graduate students. The subjects were distributed with 21 to the pleasant-sound condition group, 10 to the unpleasant-sound condition gro...

  5. Differential Intracochlear Sound Pressure Measurements in Normal Human Temporal Bones

    Science.gov (United States)

    Nakajima, Hideko Heidi; Dong, Wei; Olson, Elizabeth S.; Merchant, Saumil N.; Ravicz, Michael E.; Rosowski, John J.

    2009-02-01

    We present the first simultaneous sound pressure measurements in the scala vestibuli and scala tympani of the cochlea in human cadaveric temporal bones. Micro-scale fiberoptic pressure sensors enabled the study of differential sound pressure at the cochlear base. This differential pressure is the input to the cochlear partition, driving cochlear waves and auditory transduction. Results showed that: the pressure in the scala vestibuli was much greater than in the scala tympani, except at low and high frequencies where scala tympani pressure affects the input to the cochlea; the differential pressure proved to be an excellent measure of normal ossicular transduction of sound (shown to decrease 30-50 dB with ossicular disarticulation, whereas the individual scala pressures were significantly affected by non-ossicular conduction of sound at high frequencies); the middle-ear gain and differential pressure were generally bandpass in frequency dependence; and the middle-ear delay in the human was over twice that of the gerbil. Concurrent stapes velocity measurements allowed determination of the differential impedance across the partition and the round-window impedance. The differential impedance was generally resistive, while the round-window impedance was consistent with a compliance in conjunction with distributed inertia and damping. Our techniques can be used to study inner-ear conductive pathologies (e.g., semicircular dehiscence), as well as non-ossicular cochlear stimulation (e.g., round-window stimulation) - situations that cannot be completely quantified by measurements of stapes velocity or scala-vestibuli pressure by themselves.
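The differential-pressure idea can be illustrated numerically: the drive to the cochlear partition is the complex difference P_sv - P_st, and its magnitude is conventionally expressed in dB re 20 µPa. The pressure values below are made up for illustration, not measured data from the study:

```python
# Hedged illustration of differential intracochlear pressure: the input
# to the cochlear partition is P_sv - P_st (complex phasors at one
# frequency). The numbers here are invented, not the paper's data.
import cmath
import math

def db_spl(p, p_ref=20e-6):
    """Sound pressure magnitude in dB re 20 uPa."""
    return 20 * math.log10(abs(p) / p_ref)

# Hypothetical scala pressures (Pa) at one frequency:
p_sv = 2.0 * cmath.exp(1j * 0.0)   # scala vestibuli
p_st = 0.2 * cmath.exp(1j * 0.3)   # scala tympani, smaller, phase-shifted
p_diff = p_sv - p_st               # drive to the cochlear partition

print(round(db_spl(p_sv), 1), round(db_spl(p_diff), 1))  # ~100.0 and ~99.1
```

Because the pressures are complex, relative phase matters: when the two scala pressures become comparable in magnitude (as at the low and high frequencies mentioned above), the difference can be far smaller than either pressure alone.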

  6. Effects of Natural Sounds on Pain: A Randomized Controlled Trial with Patients Receiving Mechanical Ventilation Support.

    Science.gov (United States)

    Saadatmand, Vahid; Rejeh, Nahid; Heravi-Karimooi, Majideh; Tadrisi, Sayed Davood; Vaismoradi, Mojtaba; Jordan, Sue

    2015-08-01

    Nonpharmacologic pain management in patients receiving mechanical ventilation support in critical care units is under-investigated. Natural sounds may help reduce the potentially harmful effects of anxiety and pain in hospitalized patients. The aim of this study was to examine the effect of pleasant, natural sounds on self-reported pain in patients receiving mechanical ventilation support, using a pragmatic parallel-arm, randomized controlled trial. The study was conducted in a general adult intensive care unit of a high-turnover teaching hospital in Tehran, Iran. Between October 2011 and June 2012, we recruited 60 patients receiving mechanical ventilation support to the intervention (n = 30) and control (n = 30) arms of the trial. Participants in both arms wore headphones for 90 minutes. Those in the intervention arm heard pleasant, natural sounds, whereas those in the control arm heard nothing. Outcome measures included the self-reported visual analog scale for pain at baseline; 30, 60, and 90 minutes into the intervention; and 30 minutes post-intervention. All patients approached agreed to participate. The trial arms were similar at baseline. Pain scores in the intervention arm fell and were significantly lower than in the control arm at each time point (p < …). Listening to pleasant, natural sounds via headphones is a simple, safe, nonpharmacologic nursing intervention that may be used to allay pain for up to 120 minutes in patients receiving mechanical ventilation support. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.

  7. Lung and Heart Sounds Analysis: State-of-the-Art and Future Trends.

    Science.gov (United States)

    Padilla-Ortiz, Ana L; Ibarra, David

    2018-01-01

    Lung sounds, which include all sounds produced during the mechanics of respiration, may be classified into normal breath sounds and adventitious sounds. Normal breath sounds occur when no respiratory problems exist, whereas adventitious lung sounds (wheezes, rhonchi, crackles, etc.) are usually associated with certain pulmonary pathologies. Heart and lung sounds heard through a stethoscope are the result of mechanical interactions that indicate the operation of the cardiac and respiratory systems, respectively. In this article, we review the research conducted during the last six years on lung and heart sounds: instrumentation and data sources (sensors and databases), technological advances, and perspectives in processing and data analysis. Our review suggests that chronic obstructive pulmonary disease (COPD) and asthma are the respiratory diseases most commonly reported on in the literature; related diseases that are less analyzed include chronic bronchitis, idiopathic pulmonary fibrosis, congestive heart failure, and parenchymal pathology. New findings are presented regarding methodologies, associated with advances in the electronic stethoscope, for processing auscultatory heart sound signals, including analysis and classification of the resulting sounds to support a diagnosis based on a quantifiable medical assessment. The availability of high-precision automatic interpretation of heart and lung sounds opens interesting possibilities for cardiovascular diagnosis as well as for intelligent diagnosis of heart and lung diseases.
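As a toy illustration of the kind of automated analysis this review surveys: a wheeze is often modeled as a narrowband (tonal) component riding on broadband breath noise, so a crude detector can look for an unusually dominant spectral peak. This is an invented sketch with synthetic signals and an ad hoc threshold, not a clinically validated method from the article:

```python
# Toy wheeze detector: compare the largest power-spectrum bin to the
# mean bin power. Signals, threshold, and the detector itself are
# illustrative assumptions, not a validated clinical algorithm.
import numpy as np

def spectral_peakiness(signal):
    """Ratio of the largest power-spectrum bin to the mean bin power."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return psd.max() / psd.mean()

def is_wheeze(signal, threshold=50.0):
    """Flag signals whose spectrum is dominated by one narrow peak."""
    return spectral_peakiness(signal) > threshold

# Synthetic examples: broadband "breath" noise vs. the same noise with
# an added 400 Hz tonal component standing in for a wheeze.
fs = 4000
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(42)
breath = rng.standard_normal(t.size)
wheeze = breath + 3.0 * np.sin(2 * np.pi * 400 * t)

print(is_wheeze(breath), is_wheeze(wheeze))
```

Real systems reviewed in the article use far richer features and learned classifiers, but the broadband-versus-tonal contrast is the physical cue this sketch isolates.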

  8. Sound pressure distribution within natural and artificial human ear canals: forward stimulation.

    Science.gov (United States)

    Ravicz, Michael E; Tao Cheng, Jeffrey; Rosowski, John J

    2014-12-01

    This work is part of a study of the interaction of sound pressure in the ear canal (EC) with tympanic membrane (TM) surface displacement. Sound pressures were measured with 0.5-2 mm spacing at three locations within the shortened natural EC or an artificial EC in human temporal bones: near the TM surface, within the tympanic ring plane, and in a plane transverse to the long axis of the EC. Sound pressure was also measured at 2-mm intervals along the long EC axis. The sound field is described well by the size and direction of planar sound pressure gradients, the location and orientation of standing-wave nodal lines, and the location of longitudinal standing waves along the EC axis. Standing-wave nodal lines perpendicular to the long EC axis are present on the TM surface above 11-16 kHz in the natural or artificial EC. The range of sound pressures was larger in the tympanic ring plane than at the TM surface or in the transverse EC plane. Longitudinal standing-wave patterns were stretched. The tympanic-ring sound field is a useful approximation of the TM sound field, and the artificial EC approximates the natural EC.

  9. Responses of the ear to low frequency sounds, infrasound and wind turbines.

    Science.gov (United States)

    Salt, Alec N; Hullar, Timothy E

    2010-09-01

    Infrasonic sounds are generated internally in the body (by respiration, heartbeat, coughing, etc) and by external sources, such as air conditioning systems, inside vehicles, some industrial processes and, now becoming increasingly prevalent, wind turbines. It is widely assumed that infrasound presented at an amplitude below what is audible has no influence on the ear. In this review, we consider possible ways that low frequency sounds, at levels that may or may not be heard, could influence the function of the ear. The inner ear has elaborate mechanisms to attenuate low frequency sound components before they are transmitted to the brain. The auditory portion of the ear, the cochlea, has two types of sensory cells, inner hair cells (IHC) and outer hair cells (OHC), of which the IHC are coupled to the afferent fibers that transmit "hearing" to the brain. The sensory stereocilia ("hairs") on the IHC are "fluid coupled" to mechanical stimuli, so their responses depend on stimulus velocity and their sensitivity decreases as sound frequency is lowered. In contrast, the OHC are directly coupled to mechanical stimuli, so their input remains greater than for IHC at low frequencies. At very low frequencies the OHC are stimulated by sounds at levels below those that are heard. Although the hair cells in other sensory structures such as the saccule may be tuned to infrasonic frequencies, auditory stimulus coupling to these structures is inefficient so that they are unlikely to be influenced by airborne infrasound. Structures that are involved in endolymph volume regulation are also known to be influenced by infrasound, but their sensitivity is also thought to be low. There are, however, abnormal states in which the ear becomes hypersensitive to infrasound. In most cases, the inner ear's responses to infrasound can be considered normal, but they could be associated with unfamiliar sensations or subtle changes in physiology. This raises the possibility that exposure to the

  10. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  11. Memory for environmental sounds in sighted, congenitally blind and late blind adults: evidence for cross-modal compensation.

    Science.gov (United States)

    Röder, Brigitte; Rösler, Frank

    2003-10-01

    Several recent reports suggest compensatory performance changes in blind individuals. It has, however, been argued that the lack of visual input leads to impoverished semantic networks, resulting in the use of data-driven rather than conceptual encoding strategies on memory tasks. To test this hypothesis, congenitally blind and sighted participants encoded environmental sounds either physically or semantically. In the recognition phase, lures that were both conceptually and physically distinct, as well as lures that were physically distinct but conceptually highly related, were intermixed with the environmental sounds encountered during study. Participants indicated whether or not they had heard a sound in the study phase. Congenitally blind adults showed elevated memory after both physical and semantic encoding. After physical encoding, blind participants had lower false memory rates than sighted participants, whereas the false memory rates of sighted and blind participants did not differ after semantic encoding. In order to address the question of whether compensatory changes in memory skills are restricted to critical periods during early childhood, late blind adults were tested with the same paradigm. When matched for age, they showed memory scores as high as those of the congenitally blind. These results demonstrate compensatory performance changes in long-term memory functions due to the loss of a sensory system and provide evidence for the high adaptive capabilities of the human cognitive system.

  12. Speaking up, being heard: registered nurses' perceptions of workplace communication.

    Science.gov (United States)

    Garon, Maryanne

    2012-04-01

    The aim of the present study was to explore nurses' perceptions of their own ability to speak up and be heard in the workplace. Nurses are central to patient care and patient safety in hospitals, and their ability to speak up and be heard greatly affects their own work satisfaction, teamwork, and patient safety. The present study utilized a qualitative approach, consisting of focus group interviews with 33 registered nurses (RNs) in staff or management positions from a variety of healthcare settings in California, USA. Data were analysed using thematic content analysis. Findings were organized into three categories: influences on speaking up, transmission and reception of a message, and outcomes or results. The present study supported the importance of the manager in setting a culture of open communication. It is anticipated that findings from the present study may increase understanding of nurses' views of communication within healthcare settings. The study highlights the importance of nurse managers in creating a communication culture that allows nurses to speak up and be heard. Such open communication cultures lead to better patient care, increased safety and better staff satisfaction. © 2011 Blackwell Publishing Ltd.

  13. Comparison of sound transmission in human ears and coupler loaded by audiometric earphones

    DEFF Research Database (Denmark)

    Ciric, Dejan; Hammershøi, Dorte

    2005-01-01

    in the coupler, but since the "ear canal entrance" is not well-defined for the coupler, the mentioned measurements were done at different depths in the coupler. The sound transmission and coupling were described in terms of the pressure division at the entrance of the ear canal and the transmissions in human......, the differences among earphones as well as between human ears and the coupler affect the results of audiometric measurements inducing uncertainty. The influence of these differences is examined by investigating the sound transmission in both human ears and standardized coupler loaded by different audiometric......The thresholds of hearing are usually determined using audiometric earphones. They are calibrated by means of a standardized acoustical coupler. In order to have determined thresholds independent of the earphone type, the coupler should approximate the average human ear closely. Nevertheless...

  14. Assessment of the health effects of low-frequency sounds and infra-sounds from wind farms. ANSES Opinion. Collective expertise report

    International Nuclear Information System (INIS)

    Lepoutre, Philippe; Avan, Paul; Cheveigne, Alain de; Ecotiere, David; Evrard, Anne-Sophie; Hours, Martine; Lelong, Joel; Moati, Frederique; Michaud, David; Toppila, Esko; Beugnet, Laurent; Bounouh, Alexandre; Feltin, Nicolas; Campo, Pierre; Dore, Jean-Francois; Ducimetiere, Pierre; Douki, Thierry; Flahaut, Emmanuel; Gaffet, Eric; Lafaye, Murielle; Martinsons, Christophe; Mouneyrac, Catherine; Ndagijimana, Fabien; Soyez, Alain; Yardin, Catherine; Cadene, Anthony; Merckel, Olivier; Niaudet, Aurelie; Cadene, Anthony; Saddoki, Sophia; Debuire, Brigitte; Genet, Roger

    2017-03-01

    The French Agency for Food, Environmental and Occupational Health and Safety (ANSES) reiterates that wind turbines emit infra-sounds (sounds below 20 Hz) and low-frequency sounds. There are also other sources of infra-sound emissions that can be natural (wind in particular) or anthropogenic (heavy-goods vehicles, heat pumps, etc.). The measurement campaigns undertaken during the expert appraisal enabled these emissions from three wind farms to be characterised. In general, only very high intensities of infra-sound can be heard or perceived by humans. At the minimum distance (of 500 metres) separating homes from wind farm sites set out by the regulations, the infra-sounds produced by wind turbines do not exceed hearing thresholds. Therefore, the disturbance related to audible noise potentially felt by people around wind farms mainly relates to frequencies above 50 Hz. The expert appraisal showed that the mechanisms for health effects grouped under the term 'vibro-acoustic disease', reported in certain publications, have no serious scientific basis. There have been very few scientific studies on the potential health effects of infra-sounds and low frequencies produced by wind turbines. The review of these experimental and epidemiological data did not find any adequate scientific arguments for the occurrence of health effects related to exposure to noise from wind turbines, other than disturbance related to audible noise and a nocebo effect, which can help explain the occurrence of stress-related symptoms experienced by residents living near wind farms. However, recently acquired knowledge on the physiology of the cochleo-vestibular system has revealed physiological effects in animals induced by exposure to high-intensity infra-sounds. These effects, while plausible in humans, have yet to be demonstrated for exposure to levels comparable to those observed in residents living near wind farms. Moreover, the connection between these physiological effects and the occurrence of

  15. Tinnitus (Phantom Sound): Risk coming for future

    Directory of Open Access Journals (Sweden)

    Suresh Rewar

    2015-01-01

    Full Text Available The word 'tinnitus' comes from the Latin tinnire, meaning “to ring” or “a ringing.” Tinnitus is the perception of sound in the absence of any corresponding external sound. It can take the form of continuous buzzing, hissing, or ringing, or a combination of these or other characteristics, and affects 10% to 25% of the adult population. Tinnitus is classified into objective and subjective categories. Subjective tinnitus consists of sounds that are not associated with a physical sound source and that only the person who has the tinnitus can hear. Objective tinnitus results from a sound that can also be heard by the physician. Tinnitus is not a disease in itself but a common symptom, and because it involves the perception of sound, it is commonly associated with the hearing system; in fact, various parts of the hearing system, including the inner ear, are often responsible for this symptom. Tinnitus can lead to sleep disturbances, concentration problems, fatigue, depression, anxiety disorders, and sometimes even to suicide. The evaluation of tinnitus always begins with a thorough history and physical examination, with further testing performed when indicated. Diagnostic testing should include audiography and speech discrimination testing; when indicated, computed tomography angiography or magnetic resonance angiography should be performed. All patients with tinnitus can benefit from patient education and preventive measures, and oftentimes the physician's reassurance and assistance with the psychologic aftereffects of tinnitus can be the therapy most valuable to the patient. There are no specific medications for the treatment of tinnitus; sedatives and some other medications may prove helpful in the early stages. The ultimate goal of neuroimaging is to identify subtypes of tinnitus in order to better inform treatment strategies.

  16. A new signal development process and sound system for diverting fish from water intakes

    International Nuclear Information System (INIS)

    Klinet, D.A.; Loeffelman, P.H.; van Hassel, J.H.

    1992-01-01

    This paper reports that American Electric Power Service Corporation has explored the feasibility of using a patented signal development process and underwater sound system to divert fish away from water intake areas. The effect of water intakes on fish is being closely scrutinized as hydropower projects are re-licensed. The overall goal of this four-year research project was to develop an underwater guidance system which is biologically effective, reliable and cost-effective compared to other proposed methods of diversion, such as physical screens. Because different fish species have various listening ranges, it was essential to the success of this experiment that the sound system have a great amount of flexibility. Assuming a fish's sounds are heard by the same kind of fish, it was necessary to develop a procedure and acquire instrumentation to properly analyze the sounds that the target fish species create to communicate and any artificial signals being generated for diversion

  17. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  18. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  19. Voices of minor children heard and unheard in judicial divorce proceedings in the Netherlands

    NARCIS (Netherlands)

    Coenraad, L.M.

    2014-01-01

    Under Dutch divorce law, children in theory have ample opportunity to make their voices heard: the petition for divorce must state how the children have been involved in preparing a parenting plan; all children aged 12 or 16 (depending on the context) or older have the right to be heard by the

  20. Ormia ochracea as a Model Organism in Sound Localization Experiments and in Inventing Hearing Aids.

    Directory of Open Access Journals (Sweden)

    - -

    1998-09-01

    Full Text Available Hearing aid prescription for patients suffering hearing loss has always been one of the main concerns of audiologists. Thanks to technology, hearing aids have been equipped with digital and computerized systems that have improved the quality of the sound they deliver. We can also learn from nature in inventing such instruments, as in the current article, which is devoted to a kind of fly. Ormia ochracea is a small yellow nocturnal fly, a parasitoid of crickets. It is notable for its exceptionally acute directional hearing. In the current article we discuss how it has become a model organism in sound localization experiments and in inventing hearing aids.

  1. Neuroanatomic organization of sound memory in humans.

    Science.gov (United States)

    Kraut, Michael A; Pitcock, Jeffery A; Calhoun, Vince; Li, Juan; Freeman, Thomas; Hart, John

    2006-11-01

    The neural interface between sensory perception and memory is a central issue in neuroscience, particularly initial memory organization following perceptual analyses. We used functional magnetic resonance imaging to identify anatomic regions extracting initial auditory semantic memory information related to environmental sounds. Two distinct anatomic foci were detected in the right superior temporal gyrus when subjects identified sounds representing either animals or threatening items. Threatening animal stimuli elicited signal changes in both foci, suggesting a distributed neural representation. Our results demonstrate both category- and feature-specific responses to nonverbal sounds in early stages of extracting semantic memory information from these sounds. This organization allows for these category-feature detection nodes to extract early, semantic memory information for efficient processing of transient sound stimuli. Neural regions selective for threatening sounds are similar to those of nonhuman primates, demonstrating that semantic memory organization for basic biological/survival primitives is present across species.

  2. Hydrothermal venting on the flanks of Heard and McDonald islands, southern Indian Ocean

    Science.gov (United States)

    Lupton, J. E.; Arculus, R. J.; Coffin, M.; Bradney, A.; Baumberger, T.; Wilkinson, C.

    2017-12-01

    Heard Island and the nearby McDonald Islands are two sites of active volcanism associated with the so-called Kerguelen mantle plume or hot spot. In fact, it has been proposed that the Kerguelen hot spot is currently located beneath Heard Island. During its maiden maximum endurance voyage (IN2016_V01), the recently commissioned Australian R/V Investigator conducted a detailed bathymetric and water column survey of the waters around Heard Island and the McDonald Islands as well as other sites on the Kerguelen Plateau. Some 50 hydrographic profiles were completed using the CTD/rosette system equipped with trace metal sampling and a nephelometer for suspended particle concentrations. In addition to the hydrographic profiles, 244 bubble plumes were detected in the vicinity of the Heard and McDonald Islands using the ship's multibeam system. It is thought that the bubble plumes observed on sea knolls and other seafloor surrounding the McDonald Islands are likely hydrothermal in origin, while plumes northeast of Heard Island may be biogenic methane from cold seeps. At 29 of the hydrographic stations water samples for helium isotope measurements were drawn from the CTD rosette and hermetically sealed into copper tubing for subsequent shore-based mass spectrometer and gas chromatograph analysis. In this paper we report results for 3He/4He ratios and CO2 and CH4 concentrations in water samples collected with the CTD/rosette, and discuss how these results are correlated with suspended particle concentrations and temperature anomalies.

  3. Introducing the Oxford Vocal (OxVoc) Sounds Database: A validated set of non-acted affective sounds from human infants, adults and domestic animals

    Directory of Open Access Journals (Sweden)

    Christine eParsons

    2014-06-01

    Full Text Available Sound moves us. Nowhere is this more apparent than in our responses to genuine emotional vocalisations, be they heartfelt distress cries or raucous laughter. Here, we present perceptual ratings and a description of a freely available, large database of natural affective vocal sounds from human infants, adults and domestic animals, the Oxford Vocal (OxVoc) Sounds database. This database consists of 173 non-verbal sounds expressing a range of happy, sad and neutral emotional states. Ratings are presented for the sounds on a range of dimensions from a number of independent participant samples. Perceptions related to valence, including distress, vocaliser mood, and listener mood are presented in Study 1. Perceptions of the arousal of the sound, listener motivation to respond and valence (positive, negative) are presented in Study 2. Perceptions of the emotional content of the stimuli in both Study 1 and Study 2 were consistent with the predefined categories (e.g., laugh stimuli perceived as positive). While the adult vocalisations received more extreme valence ratings, rated motivation to respond to the sounds was highest for the infant sounds. The major advantages of this database are the inclusion of vocalisations from naturalistic situations, which represent genuine expressions of emotion, and the inclusion of vocalisations from animals and infants, providing comparison stimuli for use in cross-species and developmental studies. The associated website provides a detailed description of the physical properties of each sound stimulus along with cross-category descriptions.

  4. The Power of Talk: Who Gets Heard and Why.

    Science.gov (United States)

    Tannen, Deborah

    1995-01-01

    Conversational style often overrides what is said, affecting who gets heard and what gets done. Women's linguistic styles often make them seem less competent and self-assured than they are. Better understanding of speech styles will make managers better listeners and communicators. (SK)

  5. Focused sound: oncological therapy for transformed tissue

    International Nuclear Information System (INIS)

    Mares, C. E.; Cordova F, T.; Hernandez, A.

    2017-10-01

    The restlessness of the human being involves observing and questioning the world through the senses; in particular, a disturbance in the environment causes vibrations that can be registered by the sense of hearing through the eardrum, if what is produced lies in the frequency range of audible sound. What distinguishes sound from other forms of energy transfer is that its waves involve the progressive return of displacements, or vibrations, of the molecules of the medium in which they propagate. In this work, a frequency sweep from infrasound to ultrasound was performed on plants of different types and thicknesses, and on two people, in order to find the resonance of each and compare it with resonances reported in the literature; this allowed us to evaluate the secondary effect of focused sound on the tissue of the leaves and, in particular, of the people. We consider that this focused-sound modality has potential, if tuned to the resonance frequency of the transformed tissue, as a means of oncological therapy that does not affect neighboring cells. (Author)

  6. The role of pars flaccida in human middle ear sound transmission.

    Science.gov (United States)

    Aritomo, H; Goode, R L; Gonzalez, J

    1988-04-01

    The role of the pars flaccida in middle ear sound transmission was studied with the use of twelve otoscopically normal, fresh, human temporal bones. Peak-to-peak umbo displacement in response to a constant sound pressure level at the tympanic membrane was measured with a noncontacting video measuring system capable of repeatable measurements down to 0.2 micron. Measurements were made before and after pars flaccida modifications at 18 frequencies between 100 and 4000 Hz. Four pars flaccida modifications were studied: (1) acoustic insulation of the pars flaccida to the ear canal with a silicone rubber baffle, (2) stiffening the pars flaccida with cyanoacrylate cement, (3) decreasing the tension of the pars flaccida with a nonperforating incision, and (4) perforation of the pars flaccida. All of the modifications (except the perforation) had a minimal effect on umbo displacement; this seems to imply that the pars flaccida has a minor acoustic role in human beings.

  7. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    Science.gov (United States)

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the

  8. Social and emotional values of sounds influence human (Homo sapiens) and non-human primate (Cercopithecus campbelli) auditory laterality.

    Directory of Open Access Journals (Sweden)

    Muriel Basile

    Full Text Available The last decades evidenced auditory laterality in vertebrates, offering important new insights for the understanding of the origin of human language. Factors such as the social (e.g. specificity, familiarity) and emotional value of sounds have been proved to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head-turn paradigm, a widely used non-invasive method, to 8-9-year-old schoolgirls and to adult female Campbell's monkeys, by focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls, humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours; human girls: from the same vs a different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We evidenced a crossed-categorical effect of social and emotional values in both species, since only "negative" voices from same class/group members elicited a significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls, T = 4.5, p = 0.03). Moreover, we found differences between species, as a left and a right hemisphere preference was found in humans and monkeys, respectively. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes also just moved their eyes. This study supports theories defending differential roles played by the two hemispheres in primates' auditory laterality and shows that more systematic species comparisons are needed before raising evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies should be the focus of careful attention.

  9. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    properties can be modified by sound absorption, refraction, and interference from multipath propagation caused by reflections. The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms to influence...... communication sounds for airborne acoustics and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals....
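The geometrical attenuation and absorption mechanisms named in this record combine into a standard transmission-loss estimate: 20·log10(r/r0) dB of spherical spreading plus a linear absorption term. A minimal sketch (the absorption coefficient value is an illustrative assumption, not a figure from the record):

```python
import math

def received_level(source_level_db, r, alpha_db_per_m=0.01, r0=1.0):
    """Estimate the received sound level (dB) at range r metres from a
    point source, combining spherical spreading (20*log10(r/r0)) with a
    linear absorption term (alpha, dB per metre). Both mechanisms are
    named in the abstract; alpha's default here is illustrative only."""
    spreading_loss = 20.0 * math.log10(r / r0)
    absorption_loss = alpha_db_per_m * (r - r0)
    return source_level_db - spreading_loss - absorption_loss

# With spreading alone, doubling the distance costs ~6 dB.
print(round(received_level(100.0, 10.0, alpha_db_per_m=0.0), 1))  # 80.0
print(round(received_level(100.0, 20.0, alpha_db_per_m=0.0), 1))  # 74.0
```

At long ranges the absorption term dominates, which is why the record notes that absorption matters most for (high-frequency) echolocation signals.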

  10. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  11. Low-frequency sound exposure causes reversible long-term changes of cochlear transfer characteristics.

    Science.gov (United States)

    Drexl, Markus; Otto, Larissa; Wiegrebe, Lutz; Marquardt, Torsten; Gürkov, Robert; Krause, Eike

    2016-02-01

    Intense, low-frequency sound presented to the mammalian cochlea induces temporary changes of cochlear sensitivity, for which the term 'Bounce' phenomenon has been coined. Typical manifestations are slow oscillations of hearing thresholds or the level of otoacoustic emissions. It has been suggested that these alterations are caused by changes of the mechano-electrical transducer transfer function of outer hair cells (OHCs). Shape estimates of this transfer function can be derived from low-frequency-biased distortion product otoacoustic emissions (DPOAE). Here, we tracked the transfer function estimates before and after triggering a cochlear Bounce. Specifically, cubic DPOAEs, modulated by a low-frequency biasing tone, were followed over time before and after induction of the cochlear Bounce. Most subjects showed slow, biphasic changes of the transfer function estimates after low-frequency sound exposure relative to the preceding control period. Our data show that the operating point changes biphasically on the transfer function with an initial shift away from the inflection point followed by a shift towards the inflection point before returning to baseline values. Changes in transfer function and operating point lasted for about 180 s. Our results are consistent with the hypothesis that intense, low-frequency sound disturbs regulatory mechanisms in OHCs. The homeostatic readjustment of these mechanisms after low-frequency offset is reflected in slow oscillations of the estimated transfer functions. Copyright © 2015 Elsevier B.V. All rights reserved.
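The outer-hair-cell transducer transfer function discussed in this record is commonly modelled as a saturating Boltzmann curve, and the "operating point" is where the cell sits on that curve at rest. The sketch below is an illustration of that idea under a first-order Boltzmann assumption, not the authors' DPOAE-based estimation procedure:

```python
import math

def boltzmann(x, x0=0.0, s=1.0):
    """First-order Boltzmann curve, a common model of the OHC
    mechano-electrical transducer transfer function. x0 is the
    inflection point, s the slope (sensitivity) parameter."""
    return 1.0 / (1.0 + math.exp(-(x - x0) / s))

def response_asymmetry(operating_point, amplitude=1.0, x0=0.0, s=1.0):
    """Difference between upward and downward output excursions for a
    symmetric input about the operating point. It is zero at the
    inflection point and grows as the operating point shifts away from
    it -- the kind of shift tracked biphasically in the study above."""
    up = boltzmann(operating_point + amplitude, x0, s) - boltzmann(operating_point, x0, s)
    down = boltzmann(operating_point, x0, s) - boltzmann(operating_point - amplitude, x0, s)
    return up - down

# At the inflection point the response is symmetric...
print(abs(response_asymmetry(0.0)) < 1e-12)   # True
# ...and becomes asymmetric once the operating point shifts away.
print(response_asymmetry(1.0) < 0.0)          # True
```

This asymmetry is what makes low-frequency-biased DPOAEs usable as a probe: the distortion a biasing tone produces depends on where the operating point sits on the curve.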

  12. Can listening to sound sequences facilitate movement? The potential for motor rehabilitation

    DEFF Research Database (Denmark)

    Bodak, Rebeka; Stewart, Lauren; Stephan, Marianne

    examining the impact of auditory exposure on the formation of new motor memories in healthy nonmusicians. Following an audiomotor mapping session, participants will be asked to listen to and memorise sequence A or sequence B in a sound-only task. Employing a congruent/incongruent crossover design......, participants’ motor performance will be tested using visuospatial stimuli to cue key presses, either to the congruent sequence they heard, or to the incongruent unfamiliar sequence. It is predicted that the congruent group will perform faster than the incongruent group. The findings of this study have...

  13. Heard Island and McDonald Islands Acoustic Plumes: Split-beam Echo sounder and Deep Tow Camera Observations of Gas Seeps on the Central Kerguelen Plateau

    Science.gov (United States)

    Watson, S. J.; Spain, E. A.; Coffin, M. F.; Whittaker, J. M.; Fox, J. M.; Bowie, A. R.

    2016-12-01

    Heard and McDonald islands (HIMI) are two active volcanic edifices on the Central Kerguelen Plateau. Scientists aboard the Heard Earth-Ocean-Biosphere Interactions voyage in early 2016 explored how this volcanic activity manifests itself near HIMI. Using Simrad EK60 split-beam echo sounder and deep tow camera data from RV Investigator, we recorded the distribution of seafloor emissions, providing the first direct evidence of seabed discharge around HIMI, mapping >244 acoustic plume signals. Northeast of Heard, three distinct plume clusters are associated with bubbles (towed camera) and the largest directly overlies a sub-seafloor opaque zone (sub-bottom profiler) with >140 zones observed within 6.5 km. Large temperature anomalies did not characterize any of the acoustic plumes where temperature data were recorded. We therefore suggest that these plumes are cold methane seeps. Acoustic properties - mean volume backscattering and target strength - and morphology - height, width, depth to surface - of plumes around McDonald resembled those northeast of Heard, also suggesting gas bubbles. We observed no bubbles on extremely limited towed camera data around McDonald; however, visibility was poor. The acoustic response of the plumes at different frequencies (120 kHz vs. 18 kHz), a technique used to classify water column scatterers, differed between HIMI, suggesting dissimilar target size (bubble radii) distributions. Environmental context and temporal characteristics of the plumes differed between HIMI. Heard plumes were concentrated on flat, sediment rich plains, whereas around McDonald plumes emanated from sea knolls and mounds with hard volcanic seafloor. The Heard plumes were consistent temporally, while the McDonald plumes varied temporally, possibly related to tides or subsurface processes. Our data and analyses suggest that HIMI acoustic plumes were likely caused by gas bubbles; however, the bubbles may originate from two or more distinct processes.
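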

  14. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound
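Active sound reduction of the kind this patent record describes rests on destructive interference: the secondary source (speaker) emits the primary waveform with inverted phase so that the two sum, ideally, to zero at the listener. A minimal single-tone sketch under idealized assumptions (perfect amplitude match, no propagation delay or phase error):

```python
import math

FS = 8000   # sample rate, Hz
F = 50      # primary tone frequency, Hz (snoring energy is low-frequency)
N = 800     # number of samples (0.1 s)

# Primary source: a pure tone.
primary = [math.sin(2 * math.pi * F * n / FS) for n in range(N)]
# Secondary source: same amplitude and frequency, opposite phase.
anti = [-s for s in primary]
# What the listener hears is the superposition of the two.
residual = [p + a for p, a in zip(primary, anti)]

rms = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
print(rms(primary) > 0.5)      # True: the primary tone carries energy
print(rms(residual) < 1e-12)   # True: ideal cancellation leaves ~nothing
```

In practice the attenuation is limited by how accurately the system estimates the primary signal's amplitude and phase at the cancellation point; any mismatch leaves a residual.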

  15. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus) and Humans with Transformations in Amplitude, Duration and Frequency

    Science.gov (United States)

    Branstetter, Brian K.; DeLong, Caroline M.; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly

    2016-01-01

    Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin’s (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin’s ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin’s acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition. PMID:26863519
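Frequency transposition by ±½ octave, as used for the stimuli in this study, multiplies every frequency in the whistle contour by 2^(±0.5), preserving the contour's shape while shifting its absolute pitch. A small sketch (illustrative; the contour values are made up, and this is not the study's stimulus-generation code):

```python
def transpose(freq_hz, octaves):
    """Shift a frequency by a signed number of octaves. Applied to every
    point of a whistle contour, this preserves the contour's *shape*
    (relative ratios) while changing its absolute pitch."""
    return freq_hz * (2.0 ** octaves)

contour = [8000.0, 9000.0, 10000.0]           # a toy whistle contour, Hz
up = [transpose(f, 0.5) for f in contour]     # +1/2 octave, factor ~1.414
down = [transpose(f, -0.5) for f in contour]  # -1/2 octave, factor ~0.707

print(round(up[0]))    # 11314
print(round(down[0]))  # 5657
# Relative ratios between contour points are unchanged:
print(abs(up[1] / up[0] - contour[1] / contour[0]) < 1e-12)  # True
```

Because the ratios survive transposition, a listener relying on relative pitch (like the human participants) still recognizes the contour, whereas a listener constrained by absolute pitch (like this dolphin) does not.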

  16. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus) and Humans with Transformations in Amplitude, Duration and Frequency.

    Directory of Open Access Journals (Sweden)

    Brian K Branstetter

    Full Text Available Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin's (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin's ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin's acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition.

  17. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  18. Pectoral sound generation in the blue catfish Ictalurus furcatus.

    Science.gov (United States)

    Mohajer, Yasha; Ghahramani, Zachary; Fine, Michael L

    2015-03-01

    Catfishes produce pectoral stridulatory sounds by "jerk" movements that rub ridges on the dorsal process against the cleithrum. We recorded sound synchronized with high-speed video to investigate the hypothesis that blue catfish Ictalurus furcatus produce sounds by a slip-stick mechanism, previously described only in invertebrates. Blue catfish produce a variably paced series of sound pulses during abduction sweeps (pulsers) although some individuals (sliders) form longer duration sound units (slides) interspersed with pulses. Typical pulser sounds are evoked by short 1-2 ms movements with a rotation of 2°-3°. Jerks excite sounds that increase in amplitude after motion stops, suggesting constructive interference, which decays before the next jerk. Longer contact of the ridges produces a more steady-state sound in slides. Pulse pattern during stridulation is determined by pauses without movement: the spine moves during about 14 % of the abduction sweep in pulsers (~45 % in sliders) although movement appears continuous to the human eye. Spine rotation parameters do not predict pulse amplitude, but amplitude correlates with pause duration suggesting that force between the dorsal process and cleithrum increases with longer pauses. Sound production, stimulated by a series of rapid movements that set the pectoral girdle into resonance, is caused by a slip-stick mechanism.

  19. Human Sound Externalization in Reverberant Environments

    DEFF Research Database (Denmark)

    Catic, Jasmina

    In everyday environments, listeners perceive sound sources as externalized. In listening conditions where the spatial cues that are relevant for externalization are not represented correctly, such as when listening through headphones or hearing aids, a degraded perception of externalization may...... occur. In this thesis, the spatial cues that arise from a combined effect of filtering due to the head, torso, and pinna and the acoustic environment were analysed and the impact of such cues for the perception of externalization in different frequency regions was investigated. Distant sound sources...... were simulated via headphones using individualized binaural room impulse responses (BRIRs). An investigation of the influence of spectral content of a sound source on externalization showed that effective externalization cues are present across the entire frequency range. The fluctuation of interaural...

  20. Submarine geology and geomorphology of active Sub-Antarctic volcanoes: Heard and McDonald Islands

    Science.gov (United States)

    Watson, S. J.; Coffin, M. F.; Whittaker, J. M.; Lucieer, V.; Fox, J. M.; Carey, R.; Arculus, R. J.; Bowie, A. R.; Chase, Z.; Robertson, R.; Martin, T.; Cooke, F.

    2016-12-01

    Heard and McDonald Islands (HIMI) are World Heritage listed sub-Antarctic active volcanic islands in the Southern Indian Ocean. Built atop the Kerguelen Plateau by Neogene-Quaternary volcanism, HIMI represent subaerial exposures of the second largest submarine Large Igneous Province globally. Onshore, processes influencing island evolution include glaciers, weathering, volcanism, vertical tectonics and mass-wasting (Duncan et al. 2016). Waters surrounding HIMI are largely uncharted, due to their remote location. Hence, the extent to which these same processes shape the submarine environment around HIMI has not been investigated. In early 2016, we conducted marine geophysical and geologic surveys around HIMI aboard RV Investigator (IN2016_V01). Results show that volcanic and sedimentary features prominently trend east-west, likely a result of erosion by the eastward-flowing Antarctic Circumpolar Current and tidal currents. However, spatial patterns of submarine volcanism and sediment distribution differ substantially between the islands. More than 70 sea knolls surround McDonald Island, suggesting substantial submarine volcanism. Geophysical data reveal hard volcanic seafloor around McDonald Island, whereas Heard Island is characterised by sedimentary sequences tens of meters or more thick and by iceberg scours, indicative of glacial processes. Differences in submarine geomorphology are likely due to the active glaciation of Heard Island, differing rock types (Heard: alkali basalt; McDonald: phonolite), and dominant products (clastics vs. lava). Variations may also reflect different magmatic plumbing systems beneath the two active volcanoes (Heard produces larger volumes of more focused lava, whilst McDonald extrudes smaller volumes of more evolved lavas from multiple vents across the edifice). Using geophysical data, corroborated with new and existing geologic data, we present the first geomorphic map revealing the processes that shape the submarine environment around HIMI.

  1. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  3. Cortical representations of communication sounds.

    Science.gov (United States)

    Heiser, Marc A; Cheung, Steven W

    2008-10-01

    This review summarizes recent research into cortical processing of vocalizations in animals and humans. There has been a resurgent interest in this topic accompanied by an increased number of studies using animal models with complex vocalizations and new methods in human brain imaging. Recent results from such studies are discussed. Experiments have begun to reveal the bilateral cortical fields involved in communication sound processing and the transformations of neural representations that occur among those fields. Advances have also been made in understanding the neuronal basis of interaction between developmental exposures and behavioral experiences with vocalization perception. Exposure to sounds during the developmental period produces large effects on brain responses, as do a variety of specific trained tasks in adults. Studies have also uncovered a neural link between the motor production of vocalizations and the representation of vocalizations in cortex. Parallel experiments in humans and animals are answering important questions about vocalization processing in the central nervous system. This dual approach promises to reveal microscopic, mesoscopic, and macroscopic principles of large-scale dynamic interactions between brain regions that underlie the complex phenomenon of vocalization perception. Such advances will yield a greater understanding of the causes, consequences, and treatment of disorders related to speech processing.

  4. Retreat of Stephenson Glacier, Heard Island, from Remote Sensing and Field Observations

    Science.gov (United States)

    Mitchell, W.; Schmieder, R.

    2017-12-01

    Heard Island (Australian sub-Antarctic territory, 53 S, 73.5 E) is a volcanic island mantled in glaciers, and a UNESCO World Heritage Site both for its geology and ecology. Lying to the south of the Antarctic Convergence, the changes in response to climate seen on Heard Island are likely to be a bellwether for areas further south. Beginning in 1999, American satellites (Landsat 7, EO-1, and Landsat 8) have produced images of the island on a roughly weekly basis. Although the island is often shrouded in clouds, clear images of at least portions of the island are plentiful enough to create a nearly annual record of the toe of Stephenson Glacier. During this period, Stephenson Glacier retreated by nearly 5 km and lost 50% of its area. As a result of this retreat, a portion of the glacier could now be classified as a separate glacier. Additionally, in 2016, terrestrial photographs of Stephenson Glacier were taken during a three-week expedition to Heard Island, which accessed the Stephenson Glacier area by boat via the proglacial Stephenson Lagoon. During that work, sonar indicated some depths in the lagoon exceeding 100 m. Much of the loss in glacier length and area occurred during the mid- and late 2000s, with retreat rates slowing toward 2017. At this time, the glacier has retreated so that the main toe is not far from the base of a tall icefall, while another toe, perhaps now a separate glacier, is land-based. This type of retreat pattern, fast over water and slower on land, is typical of other tidewater glaciers. Further monitoring of Stephenson Glacier and other glaciers on Heard Island will continue using Landsat 8.

  5. Fast detection of unexpected sound intensity decrements as revealed by human evoked potentials.

    Directory of Open Access Journals (Sweden)

    Heike Althen

    Full Text Available The detection of deviant sounds is a crucial function of the auditory system and is reflected by the automatically elicited mismatch negativity (MMN), an auditory evoked potential at 100 to 250 ms from stimulus onset. It has recently been shown that rarely occurring frequency and location deviants in an oddball paradigm trigger a more negative response than standard sounds at very early latencies in the middle latency response of the human auditory evoked potential. This fast and early ability of the auditory system is corroborated by the finding of neurons in the animal auditory cortex and subcortical structures, which restore their adapted responsiveness to standard sounds when a rare change in a sound feature occurs. In this study, we investigated whether the detection of intensity deviants is also reflected at shorter latencies than those of the MMN. Auditory evoked potentials in response to click sounds were analyzed regarding the auditory brainstem response, the middle latency response (MLR), and the MMN. Rare stimuli with a lower intensity level than standard stimuli elicited (in addition to an MMN) a more negative potential in the MLR at the transition from the Na to the Pa component, at circa 24 ms from stimulus onset. This finding, together with the studies of frequency and location changes, suggests that the early automatic detection of deviant sounds in an oddball paradigm is a general property of the auditory system.

  6. Effects of task-switching on neural representations of ambiguous sound input.

    Science.gov (United States)

    Sussman, Elyse S; Bregman, Albert S; Lee, Wei-Wei

    2014-11-01

    The ability to perceive discrete sound streams in the presence of competing sound sources relies on multiple mechanisms that organize the mixture of the auditory input entering the ears. Many studies have focused on mechanisms that contribute to integrating sounds that belong together into one perceptual stream (integration) and segregating those that come from different sound sources (segregation). However, little is known about mechanisms that allow us to perceive individual sound sources within a dynamically changing auditory scene, when the input may be ambiguous, and heard as either integrated or segregated. This study tested the question of whether focusing on one of two possible sound organizations suppressed representation of the alternative organization. We presented listeners with ambiguous input and cued them to switch between tasks that used either the integrated or the segregated percept. Electrophysiological measures indicated which organization was currently maintained in memory. If mutual exclusivity at the neural level was the rule, attention to one of two possible organizations would preclude neural representation of the other. However, significant MMNs were elicited to both the target organization and the unattended, alternative organization, along with the target-related P3b component elicited only to the designated target organization. Results thus indicate that both organizations (integrated and segregated) were simultaneously maintained in memory regardless of which task was performed. Focusing attention to one aspect of the sounds did not abolish the alternative, unattended organization when the stimulus input was ambiguous. In noisy environments, such as walking on a city street, rapid and flexible adaptive processes are needed to help facilitate rapid switching to different sound sources in the environment. Having multiple representations available to the attentive system would allow for such flexibility, needed in everyday situations to

  7. Tinnitus is associated with reduced sound level tolerance in adolescents with normal audiograms and otoacoustic emissions

    Science.gov (United States)

    Sanchez, Tanit Ganz; Moraes, Fernanda; Casseb, Juliana; Cota, Jaci; Freire, Katya; Roberts, Larry E.

    2016-01-01

    Recent neuroscience research suggests that tinnitus may reflect synaptic loss in the cochlea that does not express in the audiogram but leads to neural changes in auditory pathways that reduce sound level tolerance (SLT). Adolescents (N = 170) completed a questionnaire addressing their prior experience with tinnitus, potentially risky listening habits, and sensitivity to ordinary sounds, followed by psychoacoustic measurements in a sound booth. Among all adolescents 54.7% reported by questionnaire that they had previously experienced tinnitus, while 28.8% heard tinnitus in the booth. Psychoacoustic properties of tinnitus measured in the sound booth corresponded with those of chronic adult tinnitus sufferers. Neither hearing thresholds (≤15 dB HL to 16 kHz) nor otoacoustic emissions discriminated between adolescents reporting or not reporting tinnitus in the sound booth, but loudness discomfort levels (a psychoacoustic measure of SLT) did so, averaging 11.3 dB lower in adolescents experiencing tinnitus in the acoustic chamber. Although risky listening habits were near universal, the teenagers experiencing tinnitus and reduced SLT tended to be more protective of their hearing. Tinnitus and reduced SLT could be early indications of a vulnerability to hidden synaptic injury that is prevalent among adolescents and expressed following exposure to high level environmental sounds. PMID:27265722

  8. Frictional Sound Analysis by Simulating the Human Arm Movement

    Directory of Open Access Journals (Sweden)

    Yosouf Khaldon

    2017-03-01

    Full Text Available Fabric noise generated by fabric-to-fabric friction is considered one of the auditory disturbances that can affect the quality of some textile products. For this reason, an instrument has been developed to analyse this phenomenon. The instrument is designed to simulate the relative movement of a human arm when walking. In order to understand the nature of this relative motion, films of the upper half of the human body were taken. These films helped to define the parameters required for the movement simulation: movement trajectory, movement velocity, arm pressure applied on the lateral part of the trunk, and the friction area. After building the instrument, a set of soundtracks of the noise generated by fabric-to-fabric friction was recorded. The recordings were processed with dedicated software to extract the sound parameters, and the acoustic imprints of the fabrics were obtained.

  9. A Generalized Model for Indoor Location Estimation Using Environmental Sound from Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2018-02-01

    Full Text Available The indoor location of individuals is a key contextual variable for commercial and assisted location-based services and applications. Commercial centers and medical buildings (e.g., hospitals) require location information of their users/patients to offer the services that are needed at the correct moment. Several approaches have been proposed to tackle this problem. In this paper, we present the development of an indoor location system that relies on human activity recognition, using sound as an information source to infer the indoor location from the contextual information of the activity being performed at the moment. Features are extracted from the sound signal to feed a random forest algorithm, which generates a model to estimate the location of the user. We evaluate the quality of the resulting model in terms of sensitivity and specificity for each location, and we also perform out-of-bag error estimation. Our experiments were carried out in five representative residential homes, each with four individual indoor rooms. Eleven activities (brewing coffee, cooking eggs, taking a shower, etc.) were performed to provide the contextual information. Experimental results show that an indoor location system (ILS) that uses contextual information from human activities (identified from environmental sound data) can achieve an estimation that is 95% correct.
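The abstract does not specify which acoustic features feed the random forest, so the following is a minimal sketch of the kind of per-frame feature extraction such a pipeline might use. The feature set (RMS energy, zero-crossing rate, spectral centroid) and the 16 kHz sampling rate are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np

def sound_features(frame, sr=16000):
    """Per-frame features: RMS energy, zero-crossing rate, spectral centroid.

    These three features and the 16 kHz rate are illustrative assumptions;
    the paper's actual feature set is not specified in the abstract.
    """
    frame = np.asarray(frame, dtype=float)
    rms = np.sqrt(np.mean(frame ** 2))                      # overall level
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0    # crossings per sample
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)  # "brightness", in Hz
    return np.array([rms, zcr, centroid])

# One second of a 440 Hz tone: the spectral centroid lands at the tone frequency.
t = np.arange(16000) / 16000.0
feats = sound_features(np.sin(2 * np.pi * 440.0 * t))
```

Vectors like this, computed per recording, could then be fed to a random-forest classifier (e.g. scikit-learn's `RandomForestClassifier`, which also supports the out-of-bag error estimation the paper reports).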

  10. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic signals, and the presence or absence of an abnormality is judged from the magnitude of the synchronized components. A synchronized-component sampling means can remove resonance sounds and other acoustic sounds generated asynchronously with the rotation, based on the knowledge that the acoustic components generated in a normal state are a sort of resonance sound and are not precisely synchronized with the rotation speed. On the other hand, abnormal sounds of a rotating body are often driven by forces accompanying the rotation, so such sounds can be detected by extracting only the rotation-synchronized components. Since the normal acoustic components are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing is avoided and, as a result, abnormal-sound detection sensitivity is improved. Further, since the occurrence of an abnormal sound is discriminated from the actually detected sounds, other frequency components that are predicted but not actually generated are not removed, which further improves detection sensitivity. (N.H.)
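The abstract does not give the sampling method, but a standard way to retain only rotation-synchronized components is time-synchronous averaging: slicing the signal into one-revolution segments and averaging them, so that asynchronous noise and resonances cancel. A minimal sketch, assuming a constant rotation speed and a known number of samples per revolution:

```python
import numpy as np

def synchronous_average(signal, samples_per_rev):
    """Time-synchronous average over whole revolutions.

    Components locked to the rotation repeat in every segment and add
    coherently; asynchronous noise and resonances average toward zero.
    Assumes constant rotation speed and known samples per revolution.
    """
    signal = np.asarray(signal, dtype=float)
    n_revs = len(signal) // samples_per_rev
    segments = signal[: n_revs * samples_per_rev].reshape(n_revs, samples_per_rev)
    return segments.mean(axis=0)

# A rotation-locked component (4 cycles per revolution) buried in noise.
rng = np.random.default_rng(0)
spr = 100                                  # samples per revolution (assumed known)
t = np.arange(200 * spr)                   # 200 revolutions
clean = np.sin(2 * np.pi * 4 * t / spr)    # synchronized component
avg = synchronous_average(clean + rng.normal(0.0, 1.0, t.size), spr)
# avg recovers one revolution of `clean`; the noise is suppressed ~sqrt(200)-fold.
```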

  11. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related...... to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy...... of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences....

  12. Fractal dimension to classify the heart sound recordings with KNN and fuzzy c-mean clustering methods

    Science.gov (United States)

    Juniati, D.; Khotimah, C.; Wardani, D. E. K.; Budayasa, K.

    2018-01-01

    Heart abnormalities can be detected from heart sounds. A heart sound can be heard directly with a stethoscope or indirectly via a phonocardiograph, a machine that records heart sounds. This paper presents an implementation of fractal dimension theory to classify phonocardiograms as a normal heart sound, a murmur, or an extrasystole. The main algorithm used to calculate the fractal dimension was Higuchi's algorithm. Classification of phonocardiograms involved two steps: feature extraction and classification. For feature extraction, we used the Discrete Wavelet Transform (DWT) to decompose the heart-sound signal into several sub-bands depending on the selected level. After the decomposition, the signal was processed using the Fast Fourier Transform (FFT) to determine the spectral frequency, and the fractal dimension of the FFT output was calculated using Higuchi's algorithm. The fractal dimensions of all phonocardiograms were then classified with KNN and fuzzy c-means clustering methods. Based on the research results, the best accuracy obtained was 86.17%, with feature extraction by DWT decomposition at level 3, kmax = 50, 5-fold cross-validation, and 5 neighbors in the KNN algorithm. For fuzzy c-means clustering, the accuracy was 78.56%.
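Higuchi's algorithm itself is compact enough to sketch. The version below follows the standard formulation: curve lengths L(k) are computed for lags k = 1..kmax and averaged over starting offsets, and the fractal dimension is the slope of log L(k) versus log(1/k). The kmax value here is illustrative, not the paper's kmax = 50.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1-D signal.

    For each lag k, average the normalised curve length L(k) over the k
    possible starting offsets; the fractal dimension is the slope of
    log L(k) versus log(1/k).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lk = []
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)               # subsampled series x_m^k
            dist = np.abs(np.diff(x[idx])).sum()   # total curve length
            norm = (n - 1) / ((len(idx) - 1) * k)  # Higuchi normalisation factor
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
    return slope

# A straight line has dimension 1; white noise approaches 2.
line_fd = higuchi_fd(np.linspace(0.0, 1.0, 1000))  # ~1.0
```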

  13. A "looming bias" in spatial hearing? Effects of acoustic intensity and spectrum on categorical sound source localization.

    Science.gov (United States)

    McCarthy, Lisa; Olsen, Kirk N

    2017-01-01

    Continuous increases of acoustic intensity (up-ramps) can indicate a looming (approaching) sound source in the environment, whereas continuous decreases of intensity (down-ramps) can indicate a receding sound source. From psychoacoustic experiments, an "adaptive perceptual bias" for up-ramp looming tonal stimuli has been proposed (Neuhoff, 1998). This theory postulates that (1) up-ramps are perceptually salient because of their association with looming and potentially threatening stimuli in the environment; (2) tonal stimuli are perceptually salient because of an association with single and potentially threatening biological sound sources in the environment, relative to white noise, which is more likely to arise from dispersed signals and nonthreatening/nonbiological sources (wind/ocean). In the present study, we extrapolated the "adaptive perceptual bias" theory and investigated its assumptions by measuring sound source localization in response to acoustic stimuli presented in azimuth to imply looming, stationary, and receding motion in depth. Participants (N = 26) heard three directions of intensity change (up-ramps, down-ramps, and steady state, associated with looming, receding, and stationary motion, respectively) and three levels of acoustic spectrum (a 1-kHz pure tone, the tonal vowel /ә/, and white noise) in a within-subjects design. We first hypothesized that if up-ramps are "perceptually salient" and capable of eliciting adaptive responses, then they would be localized faster and more accurately than down-ramps. This hypothesis was supported. However, the results did not support the second hypothesis. Rather, the white-noise and vowel conditions were localized faster and more accurately than the pure-tone conditions. These results are discussed in the context of auditory and visual theories of motion perception, auditory attentional capture, and the spectral causes of spatial ambiguity.

  14. What's that sound? Matches with auditory long-term memory induce gamma activity in human EEG.

    Science.gov (United States)

    Lenz, Daniel; Schadow, Jeanette; Thaerig, Stefanie; Busch, Niko A; Herrmann, Christoph S

    2007-04-01

    In recent years, the cognitive functions of human gamma-band activity (30-100 Hz) have moved steadily into scientific focus. Not only have bottom-up influences on 40 Hz activity been observed; top-down processes also seem to modulate responses in this frequency band. Among the various functions that have been related to gamma activity, a pivotal role has been assigned to memory processes. Visual experiments suggested that gamma activity is involved in matching visual input to memory representations. Based on these findings, we hypothesized that such memory-related modulations of gamma activity exist in the auditory modality as well. We therefore chose environmental sounds for which subjects already had a long-term memory (LTM) representation and compared them to unknown, but physically similar, sounds. 21 subjects had to classify sounds as 'recognized' or 'unrecognized' while EEG was recorded. Our data show significantly stronger activity in the induced gamma band for recognized sounds in the time window between 300 and 500 ms after stimulus onset, with a central topography. The results suggest that induced gamma-band activity reflects matches between sounds and their representations in auditory LTM.

  15. Exploring Noise: Sound Pollution.

    Science.gov (United States)

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  16. Ruptured Aortic Aneurysm Presenting as a Stridor

    Directory of Open Access Journals (Sweden)

    Feng Lin

    2010-06-01

    Full Text Available Stridor is an abnormal, high-pitched, whining breathing sound caused by a blockage in the throat or larynx that is usually heard in children. We describe an unusual case of an 81-year-old man brought to our emergency department with sudden onset of dyspnea and shortness of breath. Stridor could be heard without a stethoscope. We found a huge mass over the left upper chest on chest radiography, suggesting an aortic aneurysm. We believed that these symptoms were caused by a huge thoracic aortic aneurysm with trachea/bronchi compression. Chest computed tomography confirmed the diagnosis.

  17. Lumped parametric model of the human ear for sound transmission.

    Science.gov (United States)

    Feng, Bin; Gan, Rong Z

    2004-09-01

    A lumped parametric model of the human auditory periphery, consisting of six masses suspended by six springs and ten dashpots, was proposed. This model provides the quantitative basis for the construction of a physical model of the human middle ear. The lumped model parameters were first identified using published anatomical data and then refined through a parameter optimization process. The transfer function of the middle ear, obtained from human temporal bone experiments with laser Doppler interferometers, was used to create the target function during the optimization. It was found that, among the 14 spring and dashpot parameters, five had pronounced effects on the dynamic behavior of the model. A detailed discussion of the sensitivity of those parameters is provided, with applications to sound transmission in the ear. We expect that the methods for characterizing the lumped model of the human ear and the model parameters will be useful for theoretical modeling of ear function and construction of a physical model of the ear.
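The six-mass model is not given numerically in the abstract, but the building block of any such lumped model is the single mass-spring-dashpot element, whose displacement magnitude per unit force is |H(w)| = 1 / |k - m w^2 + j c w|. A minimal sketch with hypothetical parameter values, not the paper's identified ones:

```python
import numpy as np

def displacement_response(freq_hz, m, k, c):
    """Displacement per unit force, |H(w)| = 1 / |k - m w^2 + j c w|,
    for one mass-spring-dashpot element (the building block of lumped
    middle-ear models)."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return 1.0 / np.abs(k - m * w ** 2 + 1j * c * w)

# Hypothetical values (kg, N/m, N*s/m), not the paper's optimized parameters:
m, k, c = 1e-6, 1.0, 1e-4
f0 = np.sqrt(k / m) / (2.0 * np.pi)   # undamped resonance, ~159 Hz
peak = displacement_response(f0, m, k, c)
```

At the resonance f0 the stiffness and inertia terms cancel, leaving |H| = 1/(c w0); away from f0 the response falls off, which is the kind of sensitivity behavior the parameter study examines.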

  18. Science 101: What Causes Wind?

    Science.gov (United States)

    Robertson, William C.

    2010-01-01

    There's a quick and easy answer to this question. The Sun causes wind. Exactly how the Sun causes wind takes a bit to explain. We'll begin with what wind is. You've no doubt heard that wind is the motion of air molecules, which is true. Putting aside the huge leap of faith it takes for us to believe that we are experiencing the motion of millions…

  19. Mutation in the kv3.3 voltage-gated potassium channel causing spinocerebellar ataxia 13 disrupts sound-localization mechanisms.

    Directory of Open Access Journals (Sweden)

    John C Middlebrooks

    Full Text Available Normal sound localization requires precise comparisons of sound timing and pressure levels between the two ears. The primary localization cues are interaural time differences, ITD, and interaural level differences, ILD. Voltage-gated potassium channels, including Kv3.3, are highly expressed in the auditory brainstem and are thought to underlie the exquisite temporal precision and rapid spike rates that characterize brainstem binaural pathways. An autosomal dominant mutation in the gene encoding Kv3.3 has been demonstrated in a large Filipino kindred manifesting as spinocerebellar ataxia type 13 (SCA13. This kindred provides a rare opportunity to test in vivo the importance of a specific channel subunit for human hearing. Here, we demonstrate psychophysically that individuals with the mutant allele exhibit profound deficits in both ITD and ILD sensitivity, despite showing no obvious impairment in pure-tone sensitivity with either ear. Surprisingly, several individuals exhibited the auditory deficits even though they were pre-symptomatic for SCA13. We would expect that impairments of binaural processing as great as those observed in this family would result in prominent deficits in localization of sound sources and in loss of the "spatial release from masking" that aids in understanding speech in the presence of competing sounds.

  20. Numerical Model on Sound-Solid Coupling in Human Ear and Study on Sound Pressure of Tympanic Membrane

    Directory of Open Access Journals (Sweden)

    Yao Wen-juan

    2011-01-01

    Full Text Available A three-dimensional finite-element model of the whole auditory system, comprising the external ear, middle ear, and inner ear, was established, and a sound-solid-liquid coupling frequency-response analysis of the model was carried out. The correctness of the FE model was verified by comparing the vibration modes of the tympanic membrane and stapes footplate with experimental data. From the calculation results of the model, we used the least-squares method to fit the distribution of sound pressure in the external auditory canal and obtained the sound-pressure function on the tympanic membrane, which varies with frequency. Using this function, the pressure distribution on the tympanic membrane can be derived directly from the sound pressure at the external auditory canal opening. The sound-pressure function makes the boundary conditions of the middle-ear structure more accurate in mechanical research and improves on the previous boundary treatment, which applied only a uniform pressure to the tympanic membrane.
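The least-squares fitting step can be illustrated generically. The (frequency, pressure-ratio) samples below are hypothetical placeholders standing in for the FE model output, which the abstract does not give; the point is only the fit itself:

```python
import numpy as np

# Hypothetical (frequency, pressure-ratio) samples standing in for the FE
# model output; the paper's actual values are not given in the abstract.
freq_khz = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
ratio = np.array([1.02, 1.05, 1.12, 1.35, 1.90, 1.40])  # p_TM / p_canal

# Least-squares fit of a low-order polynomial in log-frequency, so the
# tympanic-membrane pressure can be evaluated at any frequency.
coeffs = np.polyfit(np.log(freq_khz), ratio, 2)

def pressure_ratio(f_khz):
    """Evaluate the fitted pressure function at frequency f (kHz)."""
    return np.polyval(coeffs, np.log(f_khz))
```

With such a function, the pressure applied to the tympanic membrane in the mechanical model can follow frequency rather than being a single uniform value.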

  1. Differential Intracochlear Sound Pressure Measurements in Human Temporal Bones with an Off-the-Shelf Sensor

    Directory of Open Access Journals (Sweden)

    Martin Grossöhmichen

    2016-01-01

    Full Text Available The standard method to determine the output level of acoustic and mechanical stimulation to the inner ear is measurement of vibration response of the stapes in human cadaveric temporal bones (TBs by laser Doppler vibrometry. However, this method is reliable only if the intact ossicular chain is stimulated. For other stimulation modes an alternative method is needed. The differential intracochlear sound pressure between scala vestibuli (SV and scala tympani (ST is assumed to correlate with excitation. Using a custom-made pressure sensor it has been successfully measured and used to determine the output level of acoustic and mechanical stimulation. To make this method generally accessible, an off-the-shelf pressure sensor (Samba Preclin 420 LP, Samba Sensors was tested here for intracochlear sound pressure measurements. During acoustic stimulation, intracochlear sound pressures were simultaneously measurable in SV and ST between 0.1 and 8 kHz with sufficient signal-to-noise ratios with this sensor. The pressure differences were comparable to results obtained with custom-made sensors. Our results demonstrated that the pressure sensor Samba Preclin 420 LP is usable for measurements of intracochlear sound pressures in SV and ST and for the determination of differential intracochlear sound pressures.

  2. Reading drift in flow rate sensors caused by standing sound waves; Desvios de leitura em sensores de vazao provocados por ondas sonoras estacionarias

    Energy Technology Data Exchange (ETDEWEB)

    Maximiano, Celso; Nieble, Marcio D. [Coordenadoria para Projetos Especiais (COPESP), Sao Paulo, SP (Brazil); Migliavacca, Sylvana C.P.; Silva, Eduardo R.F. [Instituto de Pesquisas Energeticas e Nucleares (IPEN), Sao Paulo, SP (Brazil)

    1995-12-31

    The use of thermal sensors is very common for the measurement of small gas flows. In this kind of sensor a small bypass tube is heated symmetrically, so that the temperature distribution along the tube changes with the mass flow through it. When a standing wave appears in the main tube, it causes the pressure to oscillate around its average value. The sensor, connected between two points of the main tube, then indicates not only the principal mass flow but also the flow caused by the pressure difference induced by the sound wave. When the gas flows at low pressure, the instrument indicates a value that does not correspond to the real flow. In tests, a sound wave was generated in the main tube without any mass flow, and the sensor nevertheless detected a flux. To solve this problem a wave damper was constructed, installed, and tested in the system; it worked satisfactorily, efficiently eliminating the sound wave. (author). 2 refs., 3 figs.

  3. Submarine glacial landforms and interactions with volcanism around Sub-Antarctic Heard and McDonald Islands

    Science.gov (United States)

    Picard, K.; Watson, S. J.; Fox, J. M.; Post, A.; Whittaker, J. M.; Lucieer, V.; Carey, R.; Coffin, M. F.; Hodgson, D.; Hogan, K.; Graham, A. G. C.

    2017-12-01

    Unravelling the glacial history of Sub-Antarctic islands can provide clues to past climate and Antarctic ice sheet stability. The glacial history of many sub-Antarctic islands is poorly understood, including the Heard and McDonald Islands (HIMI) located on the Kerguelen Plateau in the southern Indian Ocean. The geomorphologic development of HIMI has involved a combination of construction via hotspot volcanism and mechanical erosion caused by waves, weather, and glaciers. Today, the 2.5 km2 McDonald Islands are not glacierised; in contrast, the 368 km2 Heard Island has 12 major glaciers, some extending from the summit of 2813 m to sea level. Historical accounts from Heard Island suggest that the glaciers were more extensive in the 1850s to 1870s, and have retreated at least 12% (33.89 km2) since 1997. However, surrounding bathymetry suggests a much more extensive previous glaciation of the HIMI region that encompassed 9,585 km2, likely dating back at least to the Last Glacial Maximum (LGM) ca. 26.5 -19 ka. We present analyses of multibeam bathymetry and backscatter data, acquired aboard RV Investigator in early 2016, that support the previous existence of an extensive icecap. These data reveal widespread ice-marginal and subglacial features including moraines, over-deepened troughs, drumlins and crag-and-tails. Glacial landforms suggest paleo-ice flow directions and a glacial extent that are consistent with previously documented broad scale morphological features. We identify >660 iceberg keel scours in water depths ranging from 150 - 530 m. The orientations of the iceberg keel scours reflect the predominantly east-flowing Antarctic Circumpolar Current and westerly winds in the region. 40Ar/39Ar dating of volcanic rocks from submarine volcanoes around McDonald Islands suggests that volcanism and glaciation coincided. 
The flat-topped morphology of these volcanoes may result from lava-ice interaction or erosion by glaciers post eruption during a time of extensive ice

  4. Sounds of silence: How to animate virtual worlds with sound

    Science.gov (United States)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  5. Cochlear neuropathy and the coding of supra-threshold sound.

    Science.gov (United States)

    Bharadwaj, Hari M; Verhulst, Sarah; Shaheen, Luke; Liberman, M Charles; Shinn-Cunningham, Barbara G

    2014-01-01

    Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses (SSSRs) in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds (NHTs), paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation (FM), reveal individual differences that correlate with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers (ANFs) without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in SSSRs in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.

  6. Cochlear Neuropathy and the Coding of Supra-threshold Sound

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds, behavioral ability in paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation, correlate with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in subcortical steady-state responses in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.

  7. A comparison of ambient casino sound and music: effects on dissociation and on perceptions of elapsed time while playing slot machines.

    Science.gov (United States)

    Noseworthy, Theodore J; Finlay, Karen

    2009-09-01

    This research examined the effects of a casino's auditory character on estimates of elapsed time while gambling. More specifically, this study varied whether the sound heard while gambling was ambient casino sound alone or ambient casino sound accompanied by music. The tempo and volume of both the music and ambient sound were varied to manipulate temporal engagement and introspection. One hundred and sixty (males = 91) individuals played slot machines in groups of 5-8, after which they provided estimates of elapsed time. The findings showed that the typical ambient casino auditive environment, which characterizes the majority of gaming venues, promotes understated estimates of elapsed duration of play. In contrast, when music is introduced into the ambient casino environment, it appears to provide a cue of interval from which players can more accurately reconstruct elapsed duration of play. This is particularly the case when the tempo of the music is slow and the volume is high. Moreover, the confidence with which time estimates are held (as reflected by latency of response) is higher in an auditive environment with music than in an environment that is comprised of ambient casino sounds alone. Implications for casino management are discussed.

  8. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
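    The opponent-channel read-out described in this abstract can be illustrated with a toy model: two broadly tuned channels with opposite hemifield preferences, whose normalized difference cancels a multiplicative level scaling. The sigmoid tuning width and gain values are assumptions for illustration, not the study's fitted parameters.

```python
import math

def channel_response(azimuth_deg, gain, preferred_sign):
    """Toy hemifield channel: broad sigmoidal tuning over azimuth,
    scaled multiplicatively by overall sound level (gain)."""
    return gain / (1.0 + math.exp(-preferred_sign * azimuth_deg / 20.0))

def opponent_code(left, right):
    """Normalized difference of the two channels; the multiplicative
    level term cancels, making the code robust to sound level."""
    return (right - left) / (right + left)

# Same azimuth (30 degrees to the right) at a soft and a loud level:
soft = opponent_code(channel_response(30.0, 1.0, -1.0),
                     channel_response(30.0, 1.0, +1.0))
loud = opponent_code(channel_response(30.0, 4.0, -1.0),
                     channel_response(30.0, 4.0, +1.0))
```

    In this toy model the decoded value is identical at both levels, mirroring the paper's finding that azimuth decoding from opponent channels was unaffected by changes in sound level.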

  9. Musical Sounds, Motor Resonance, and Detectable Agency

    Directory of Open Access Journals (Sweden)

    Jacques Launay

    2015-09-01

    Full Text Available This paper discusses the paradox that while human music making evolved and spread in an environment where it could only occur in groups, it is now often apparently an enjoyable asocial phenomenon. Here I argue that music is, by definition, sound that we believe has been in some way organized by a human agent, meaning that listening to any musical sounds can be a social experience. There are a number of distinct mechanisms by which we might associate musical sound with agency. While some of these mechanisms involve learning motor associations with that sound, it is also possible to have a more direct relationship from musical sound to agency, and the relative importance of these potentially independent mechanisms should be further explored. Overall, I conclude that the apparent paradox of solipsistic musical engagement is in fact unproblematic, because the way that we perceive and experience musical sounds is inherently social.

  10. Nuclear sound

    International Nuclear Information System (INIS)

    Wambach, J.

    1991-01-01

    Nuclei, like more familiar mechanical systems, undergo simple vibrational motion. Among these vibrations, sound modes are of particular interest since they reveal important information on the effective interactions among the constituents and, through extrapolation, on the bulk behaviour of nuclear and neutron matter. Sound wave propagation in nuclei shows strong quantum effects familiar from other quantum systems. Microscopic theory suggests that the restoring forces are caused by the complex structure of the many-Fermion wavefunction and, in some cases, have no classical analogue. The damping of the vibrational amplitude is strongly influenced by phase coherence among the particles participating in the motion. (author)

  11. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can transfer to and facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

  12. The power to dismiss and the right to be heard under the common ...

    African Journals Online (AJOL)

    The power to dismiss and the right to be heard under the common law of Ghana: a need for statutory intervention. ... Journal of Business Research ... Based on the examination of selected Supreme Court cases, the paper demonstrates that, ...

  13. Determining the speed of sound in the air by sound wave interference

    Science.gov (United States)

    Silva, Abel A.

    2017-07-01

    Mechanical waves propagate through material media. Sound is an example of a mechanical wave. In fluids such as air, sound waves propagate through successive longitudinal perturbations of compression and decompression. Audible sound frequencies for human ears range from 20 to 20 000 Hz. In this study, the speed of sound v in air is determined by identifying the maxima of interference between two synchronous waves at frequency f. The values of v were corrected to 0 °C. The experimental average value v̄_exp = 336 ± 4 m s⁻¹ was found, which is 1.5% larger than the reference value. The standard deviation of 4 m s⁻¹ (1.2% of v̄_exp) is a value improved through the use of the central limit theorem. The proposed procedure for determining the speed of sound in air is intended as an academic activity for physics classes in scientific and technological courses in college.
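    The procedure the abstract outlines, estimating the wavelength from the spacing of interference maxima and correcting the result to 0 °C, can be sketched as follows. The maxima positions and room temperature are hypothetical example values, not the study's measurements.

```python
import statistics

# Hypothetical positions of interference maxima (path difference, m)
# for a 2.0 kHz tone -- example values, not the study's measurements.
f_hz = 2000.0
maxima_m = [0.000, 0.172, 0.344, 0.517, 0.688]

# Adjacent maxima are one wavelength apart, so the mean spacing
# estimates the wavelength, and v = f * lambda.
spacings = [b - a for a, b in zip(maxima_m, maxima_m[1:])]
wavelength_m = statistics.mean(spacings)
v_measured = f_hz * wavelength_m  # speed at room temperature

# Correct to 0 degC using v(T) = v0 * sqrt(1 + T / 273.15).
room_temp_c = 22.0
v_at_0c = v_measured / (1.0 + room_temp_c / 273.15) ** 0.5
```

    Averaging over several maxima spacings, as here, is what allows the central limit theorem to tighten the standard deviation of the estimate.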

  14. Feeling Heard & Understood in the Hospital Environment: Benchmarking Communication Quality Among Patients with Advanced Cancer Before and After Palliative Care Consultation.

    Science.gov (United States)

    Ingersoll, Luke T; Saeed, Fahad; Ladwig, Susan; Norton, Sally A; Anderson, Wendy; Alexander, Stewart C; Gramling, Robert

    2018-05-02

    Maximizing value in palliative care requires continued development and standardization of communication quality indicators. To describe the basic epidemiology of a newly-adopted patient-centered communication quality indicator for hospitalized palliative care patients with advanced cancer. Cross-sectional analysis of 207 advanced cancer patients who received palliative care consultation at two medical centers in the United States. Participants completed the Heard & Understood quality indicator immediately before and the day following the initial palliative care consultation: "Over the past two days ["24 hours" for the post-consultation version], how much have you felt heard and understood by the doctors, nurses and hospital staff? Completely/Quite a Bit/Moderately/Slightly/Not at All". We categorized "Completely" as indicating ideal quality. Approximately one-third indicated ideal Heard & Understood quality before palliative care consultation. Age, financial security, emotional distress, preferences for comfort-longevity tradeoffs at end-of-life, and prognosis expectations were associated with pre-consultation quality. Among those with less-than-ideal quality at baseline, 56% rated feeling more Heard & Understood the day following palliative care consultation. The greatest pre-post improvement was among people who had unformed end-of-life treatment preferences or who reported having "no idea" about their prognosis at baseline. Most patients felt incompletely heard and understood at the time of referral to palliative care consultation and more than half improved following consultation. Feeling heard and understood is an important quality indicator sensitive to interventions to improve care and key variations in the patient experience. Copyright © 2018. Published by Elsevier Inc.

  15. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    Science.gov (United States)

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  16. Sound segregation via embedded repetition is robust to inattention.

    Science.gov (United States)

    Masutomi, Keiko; Barascud, Nicolas; Kashino, Makio; McDermott, Josh H; Chait, Maria

    2016-03-01

    The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process is dependent on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual task design in which stimuli to be segregated were presented along with stimuli for a "decoy" task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention. (c) 2016 APA, all rights reserved.

  17. Organizational root causes for human factor accidents

    International Nuclear Information System (INIS)

    Dougherty, D.T.

    1997-01-01

    Accident prevention techniques and technologies have evolved significantly throughout this century, from the earliest establishment of standards and procedures to the safety-engineering improvements whose fruits we enjoy today. Most recent prevention efforts have focused on humans and on defining the human-factor causes of accidents. This paper builds upon the remarkable successes of the past by looking beyond the human's action in accident causation to the organizational factors that put the human in the position to cause the accident. This organizational approach crosses all functions and all career fields.

  18. The Voice of the Turtle is Heard Programs to Develop Military Writers in the Field of Strategy

    Science.gov (United States)

    1966-04-08

    8 April 1966. "The Voice of the Turtle is Heard": Programs to Develop Military Writers in the Field of Strategy, by Lt Col ... USAWC Research Element (Research Paper). ... extensively their own "original sources" of information. Such information as published is often nebulous, however, and as often fanciful as it is true.

  19. The Last Seat in the House: The Story of Hanley Sound

    Science.gov (United States)

    Kane, John

    Prior to the rush of live outdoor sound during the 1950s, a young, audio-savvy Bill Hanley recognized certain inadequacies within the widely used public address system marketplace. Hanley's techniques allowed him to construct systems of sound that changed what the audience heard during outdoor events. Through my research, I reveal a new insight into how Hanley and those who worked at his business (Hanley Sound) had a direct, innovative influence on specific sound applications, which are now widely used and often taken for granted. Hanley's innovations shifted an existing public address, oral-based sound industry into a new area of technology rich with clarity and intelligibility. What makes his story so unique is that, because his relationship with sound was so intimate, it superseded his immediate economic, safety, and political concerns. He acted selflessly and with extreme focus. As Hanley's reputation grew, so did audience and performer demand for clear, audible concert sound. Over time, he would provide sound for some of the largest antiwar peace rallies and concerts in American history. Hanley worked in the thickness of extreme civil unrest, not typical for the average soundman of the day. Conveniently, Hanley's passion for clarity in sound also happened to occur when popular music transitioned into an important conveyor of political message through festivals. Since May 2011 I have been exploring the life of Bill Hanley, an innovative leader in sound. I use interdisciplinary approaches to uncover cultural, historical, social, political, and psychological occasions in Hanley's life that were imperative to his ongoing development. Filmed action sequences, such as talking head interviews (friends, family members, and professional colleagues) and historical archival 8 mm footage, and family photos and music ephemera, help uncover this qualitative ethnographic analysis of not only Bill's development but also the world around him. Reflective, intimate interviews

  20. Hearing Things: Music and Sounds the Traveller Heard and Didn’t Hear on the Grand Tour

    Directory of Open Access Journals (Sweden)

    Vanessa Agnew

    2012-11-01

    Full Text Available For Charles Burney, as for other Enlightenment scholars engaged in historicising music, the problem was not only how to reconstruct a history of something as ephemeral as music, but the more intractable one of cultural boundaries. Non-European music could be excluded from a general history on the grounds that it was so much noise and no music. The music of Egypt and classical antiquity, on the other hand, were likely ancestors of European music and clearly had to be accorded a place within the general history. But before that place could be determined, Burney and his contemporaries were faced with a stunning silence. What was Egyptian music? What were its instruments? What its sound? The paper examines the work of scholars like Burney and James Bruce and their efforts to reconstruct past music by traveling to exotic places. Travel and a form of historical reenactment emerge as central not only to eighteenth-century historical method, but central, too, to the reconstruction of past sonic worlds. This essay argues that this method remains available to contemporary scholars as well.

  1. Bubbles That Change the Speed of Sound

    Science.gov (United States)

    Planinsic, Gorazd; Etkina, Eugenia

    2012-01-01

    The influence of bubbles on sound has long attracted the attention of physicists. In his 1920 book Sir William Bragg described sound absorption caused by foam in a glass of beer tapped by a spoon. Frank S. Crawford described and analyzed the change in the pitch of sound in a similar experiment and named the phenomenon the "hot chocolate effect."…

  2. Concurrent Acoustic Activation of the Medial Olivocochlear System Modifies the After-Effects of Intense Low-Frequency Sound on the Human Inner Ear.

    Science.gov (United States)

    Kugler, Kathrin; Wiegrebe, Lutz; Gürkov, Robert; Krause, Eike; Drexl, Markus

    2015-12-01

    Human hearing is rather insensitive to very low frequencies (i.e. below 100 Hz). Despite this insensitivity, low-frequency sound can cause oscillating changes of cochlear gain in inner ear regions processing even much higher frequencies. These alterations outlast the duration of the low-frequency stimulation by several minutes, for which the term 'bounce phenomenon' has been coined. Previously, we have shown that the bounce can be traced by monitoring frequency and level changes of spontaneous otoacoustic emissions (SOAEs) over time. It has been suggested elsewhere that large receptor potentials elicited by low-frequency stimulation produce a net Ca(2+) influx and associated gain decrease in outer hair cells. The bounce presumably reflects an underdamped, homeostatic readjustment of increased Ca(2+) concentrations and related gain changes after low-frequency sound offset. Here, we test this hypothesis by activating the medial olivocochlear efferent system during presentation of the bounce-evoking low-frequency (LF) sound. The efferent system is known to modulate outer hair cell Ca(2+) concentrations and receptor potentials, and therefore, it should modulate the characteristics of the bounce phenomenon. We show that simultaneous presentation of contralateral broadband noise (100 Hz-8 kHz, 65 and 70 dB SPL, 90 s, activating the efferent system) and ipsilateral low-frequency sound (30 Hz, 120 dB SPL, 90 s, inducing the bounce) affects the characteristics of bouncing SOAEs recorded after low-frequency sound offset. Specifically, the decay time constant of the SOAE level changes is shorter, and the transient SOAE suppression is less pronounced. Moreover, the number of new, transient SOAEs as they are seen during the bounce is reduced. Taken together, activation of the medial olivocochlear system during induction of the bounce phenomenon with low-frequency sound results in changed characteristics of the bounce phenomenon.
Thus, our data provide experimental support
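    One quantity this study compares across conditions is the decay time constant of the SOAE level changes. A minimal sketch of how such a constant might be extracted, via a log-linear least-squares fit of an exponential decay, follows; the sampled levels are invented illustrative values, not measured SOAE data.

```python
import math

# Hypothetical SOAE level shifts (dB re baseline) after LF-sound offset,
# sampled every 10 s -- invented values, not measured data.
times_s = [0, 10, 20, 30, 40, 50, 60]
levels_db = [-6.0, -3.6, -2.2, -1.3, -0.8, -0.5, -0.3]

# Model level(t) = L0 * exp(-t / tau); a log-linear least-squares fit
# of ln|level| against t gives slope = -1 / tau.
n = len(times_s)
ys = [math.log(abs(v)) for v in levels_db]
x_mean = sum(times_s) / n
y_mean = sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(times_s, ys))
         / sum((x - x_mean) ** 2 for x in times_s))
tau_s = -1.0 / slope  # decay time constant in seconds
```

    A shorter fitted tau under contralateral noise would correspond to the faster decay the abstract reports.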

  3. A description of externally recorded womb sounds in human subjects during gestation.

    Science.gov (United States)

    Parga, Joanna J; Daland, Robert; Kesavan, Kalpashri; Macey, Paul M; Zeltzer, Lonnie; Harper, Ronald M

    2018-01-01

    Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants, and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting, and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500-5,000 Hz) and mid-frequency (100-500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10-100 Hz) with gestational age. Variation appeared between mothers, suggesting a resonant chamber role for intra
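    The spectral comparison described above, energy in low (10-100 Hz), mid (100-500 Hz), and high (500-5,000 Hz) bands, can be sketched with a synthetic signal standing in for a recording. The component frequencies and amplitudes below are assumptions for illustration, not the study's data.

```python
import numpy as np

# Synthetic stand-in for a womb-sound recording: a strong low-frequency
# component plus a weaker mid-frequency one (assumed, illustrative).
fs = 16000
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 300 * t)

def band_energy(x, fs, lo, hi):
    """Total power-spectrum energy between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return float(spectrum[(freqs >= lo) & (freqs < hi)].sum())

# The three bands discussed in the abstract.
low = band_energy(signal, fs, 10, 100)
mid = band_energy(signal, fs, 100, 500)
high = band_energy(signal, fs, 500, 5000)
```

    Comparing such band energies across gestational ages is one way the reported filtering of mid- and high-frequency bands could be quantified.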

  4. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds produced by Javanese students. Focusing on the consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The results showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. These patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  6. A perceptual pitch boundary in a non-human primate

    Directory of Open Access Journals (Sweden)

    Olivier Joly

    2014-09-01

    Full Text Available Pitch is an auditory percept critical to the perception of music and speech, and for these harmonic sounds, pitch is closely related to the repetition rate of the acoustic wave. This paper reports a test of the assumption that non-human primates and especially rhesus monkeys perceive the pitch of these harmonic sounds much as humans do. A new procedure was developed to train macaques to discriminate the pitch of harmonic sounds and thereby demonstrate that the lower limit for pitch perception in macaques is close to 30 Hz, as it is in humans. Moreover, when the phases of successive harmonics are alternated to cause a pseudo-doubling of the repetition rate, the lower pitch boundary in macaques decreases substantially, as it does in humans. The results suggest that both species use neural firing times to discriminate pitch, at least for sounds with relatively low repetition rates.

  7. Effects of spectral complexity and sound duration on automatic complex-sound pitch processing in humans - a mismatch negativity study.

    Science.gov (United States)

    Tervaniemi, M; Schröger, E; Saher, M; Näätänen, R

    2000-08-18

    The pitch of a spectrally rich sound is known to be more easily perceived than that of a sinusoidal tone. The present study compared the importance of spectral complexity and sound duration in facilitated pitch discrimination. The mismatch negativity (MMN), which reflects automatic neural discrimination, was recorded to a 2.5% pitch change in pure tones with only one sinusoidal frequency component (500 Hz) and in spectrally rich tones with three (500-1500 Hz) and five (500-2500 Hz) harmonic partials. During the recordings, subjects concentrated on watching a silent movie. In separate blocks, stimuli were 100 and 250 ms in duration. The MMN amplitude was enhanced with both spectrally rich sounds when compared with pure tones. The prolonged sound duration did not significantly enhance the MMN. This suggests that increased spectral rather than temporal information facilitates pitch processing of spectrally rich sounds.
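The stimulus construction can be sketched as follows (an illustrative reconstruction in NumPy; the sample rate, amplitudes and normalization are assumptions, only the frequencies, partial counts and durations come from the abstract):

```python
import numpy as np

def harmonic_tone(f0, n_partials, dur_s, fs=44100):
    """Tone with equal-amplitude harmonic partials at f0, 2*f0, ...
    n_partials=1 gives the 500 Hz pure tone; 3 and 5 give the
    spectrally rich 500-1500 Hz and 500-2500 Hz tones."""
    t = np.arange(int(dur_s * fs)) / fs
    tone = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, n_partials + 1))
    return tone / n_partials  # keep peak amplitude bounded

standard = harmonic_tone(500.0, 5, 0.100)         # 100 ms, five partials
deviant = harmonic_tone(500.0 * 1.025, 5, 0.100)  # 2.5% pitch change
```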

  8. Echo, Not Quotation: What Conversation Analysis Reveals about Classroom Responses to Heard Poetry

    Science.gov (United States)

    Gordon, John

    2012-01-01

    This article applies conversation analysis to classroom talk-in-interaction where pupils respond to poetry they have heard. The phenomenon of repeating in discussion details from the poem, including patterns of delivery, is considered and named echo to distinguish it from quotation in writing. The phenomenon is significant to the pedagogy of…

  9. Fatigue sensation induced by the sounds associated with mental fatigue and its related neural activities: revealed by magnetoencephalography.

    Science.gov (United States)

    Ishii, Akira; Tanaka, Masaaki; Iwamae, Masayoshi; Kim, Chongsoo; Yamano, Emi; Watanabe, Yasuyoshi

    2013-06-13

    It has been proposed that an inappropriately conditioned fatigue sensation could be one cause of chronic fatigue. Although classical conditioning of the fatigue sensation has been reported in rats, there have been no reports in humans. Our aim was to examine whether classical conditioning of the mental fatigue sensation can take place in humans and to clarify the neural mechanisms of fatigue sensation using magnetoencephalography (MEG). Ten and 9 healthy volunteers participated in a conditioning and a control experiment, respectively. In the conditioning experiment, we used metronome sounds as conditioned stimuli and two-back task trials as unconditioned stimuli to cause fatigue sensation. Participants underwent MEG measurement while listening to the metronome sounds for 6 min. Thereafter, fatigue-inducing mental task trials (two-back task trials), which are demanding working-memory task trials, were performed for 60 min; metronome sounds were started 30 min after the start of the task trials (conditioning session). The next day, neural activities while listening to the metronome for 6 min were measured. Levels of fatigue sensation were also assessed using a visual analogue scale. In the control experiment, participants listened to the metronome on the first and second days, but they did not perform the conditioning session. MEG was not recorded in the control experiment. The level of fatigue sensation caused by listening to the metronome on the second day was significantly higher relative to that on the first day only when participants performed the conditioning session on the first day. Equivalent current dipoles (ECDs) in the insular cortex, with mean latencies of approximately 190 ms, were observed in six of eight participants after the conditioning session, although ECDs were not identified in any participant before the conditioning session. We demonstrated that the metronome sounds can cause mental fatigue sensation as a result of repeated pairings of the sounds

  10. Effect of sound stimuli on reciprocal interaction of antagonist muscles of lower extremities in humans under vestibular load

    Directory of Open Access Journals (Sweden)

    I. V. Dregval

    2015-05-01

    Full Text Available The results of the research provide evidence of changes in the reflex activity of human lower-extremity muscles under the influence of sound stimuli of various frequency ranges combined with vestibular load. The greatest change in the H-reflex was observed with a sound stimulus of 800 Hz. Not only the proprioceptive but also the auditory sensory system takes part in the regulation of reflex activity. The existence of different labyrinthine actions, depending on the situation, on the interneuronal inhibitory pathways of postsynaptic inhibition of the soleus muscle's motoneurons is supposed.

  11. Scorescapes : on sound, environment and sonic consciousness

    NARCIS (Netherlands)

    Harris, Yolande

    2011-01-01

    This dissertation explores sound, its image and its role in relating humans and our technologies to the environment. It investigates two related questions: How does sound mediate our relationship to environment? And how can contemporary multidisciplinary art practices articulate and explore this

  12. Seen and Heard: Children's Rights in Early Childhood Education. Early Childhood Education Series

    Science.gov (United States)

    Hall, Ellen Lynn; Rudkin, Jennifer Kofkin

    2011-01-01

    Using examples from a Reggio-inspired school with children from ages 6 weeks to 6 years, the authors emphasize the importance of children's rights and our responsibility as adults to hear their voices. "Seen and Heard" summarizes research and theory pertaining to young children's rights in the United States, and offers strategies educators can use…

  13. On Sound: Reconstructing a Zhuangzian Perspective of Music

    Directory of Open Access Journals (Sweden)

    So Jeong Park

    2015-12-01

    Full Text Available The devotion to music in Chinese classical texts is worth noticing. Early Chinese thinkers saw music as a significant part of human experience and a core practice for philosophy. While the Confucian endorsement of ritual and music has been discussed in the field, the Daoist understanding of music has hardly been explored. This paper makes a careful reading of the Xiánchí 咸池 music story in the Zhuangzi, one of the most interesting, but least noticed texts, and reconstructs a Zhuangzian perspective from it. While sounds had been regarded as mere building blocks of music and thus depreciated in the hierarchical understanding of music in the mainstream discourse of early China, sound is the alpha and omega of music in the Zhuangzian perspective. All kinds of sounds, both human and natural, are invited into musical discourse. Sound is regarded as the real source of our being moved by music, and therefore, musical consummation is depicted as embodiment through sound.

  14. Son et lumière: Sound and light effects on spatial distribution and swimming behavior in captive zebrafish.

    Science.gov (United States)

    Shafiei Sabet, Saeed; Van Dooren, Dirk; Slabbekoorn, Hans

    2016-05-01

    Aquatic and terrestrial habitats are heterogeneous by nature with respect to sound and light conditions. Fish may extract signals and exploit cues from both ambient modalities and they may also select their sound and light level of preference in free-ranging conditions. In recent decades, human activities in or near water have altered natural soundscapes and caused nocturnal light pollution to become more widespread. Artificial sound and light may cause anxiety, deterrence, disturbance or masking, but few studies have addressed in any detail how fishes respond to spatial variation in these two modalities. Here we investigated whether sound and light affected spatial distribution and swimming behavior of individual zebrafish that had a choice between two fish tanks: a treatment tank and a quiet and light escape tank. The treatments concerned a 2 × 2 design with noisy or quiet conditions and dim or bright light. Sound and light treatments did not induce spatial preferences for the treatment or escape tank, but caused various behavioral changes in both spatial distribution and swimming behavior within the treatment tank. Sound exposure led to more freezing and less time spent near the active speaker. Dim light conditions led to a lower number of crossings, more time spent in the upper layer and less time spent close to the tube for crossing. No interactions were found between sound and light conditions. This study highlights the potential relevance for studying multiple modalities when investigating fish behavior and further studies are needed to investigate whether similar patterns can be found for fish behavior in free-ranging conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Statistical aspects and risks of human-caused earthquakes

    Science.gov (United States)

    Klose, C. D.

    2013-12-01

    The seismological community invests ample human capital and financial resources to research and predict risks associated with earthquakes. Industries such as the insurance and re-insurance sector are equally interested in using probabilistic risk models developed by the scientific community to transfer risks. These models are used to predict expected losses due to naturally occurring earthquakes. But what about the risks associated with human-caused earthquakes? Such risk models are largely absent from both industry and academic discourse. In countries around the world, informed citizens are becoming increasingly aware and concerned that this economic bias is not sustainable for long-term economic growth, environmental and human security. Ultimately, citizens look to their government officials to hold industry accountable. In the Netherlands, for example, the hydrocarbon industry is held accountable for causing earthquakes near Groningen. In Switzerland, geothermal power plants were shut down or suspended because they caused earthquakes in canton Basel and St. Gallen. The public and the private non-extractive industry need access to information about earthquake risks in connection with sub/urban geoengineering activities, including natural gas production through fracking, geothermal energy production, carbon sequestration, mining and water irrigation. This presentation illuminates statistical aspects of human-caused earthquakes with respect to different geologic environments. Statistical findings are based on the first catalog of human-caused earthquakes (in Klose 2013). Findings are discussed which include the odds to die during a medium-size earthquake that is set off by geomechanical pollution. Any kind of geoengineering activity causes this type of pollution and increases the likelihood of triggering nearby faults to rupture.

  16. Measurement of energetic radiation caused by thunderstorm activities by a sounding balloon and ground observation

    Science.gov (United States)

    Torii, T.

    2015-12-01

    Energetic radiation caused by thunderstorm activity has been observed in various places, such as on the ground, in high mountain areas, and from artificial satellites. In order to investigate the radiation source and its energy distribution, we measured energetic radiation with a sounding balloon and with ground observations. For the measurement inside and above the thundercloud, we conducted a sounding observation using a radiosonde carrying two GM tubes (one for gamma-rays and one for beta/gamma-rays) in addition to meteorological instruments. The balloon passed through a region of strong echoes in a thundercloud shown in the radar image, at which time the counting rate of the GM tubes increased by about two orders of magnitude at altitudes from 5 km to 7.5 km. Furthermore, the counting rates of the two GM tubes showed different tendencies depending on the movement of the balloon. This result suggests that the ratio of gamma-rays (energetic photons) to beta-rays (energetic electrons) varies with position in the thundercloud. We also carried out a ground observation of energetic gamma rays during a winter thunderstorm at a coastal area facing the Sea of Japan, where two types of energetic radiation were observed. We report the outline of these measurements and their analysis in the session of the AGU meeting.

  17. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Throughout several audio and audio-video tests we have compared both Foley and real sounds originated by an identical action. The main purpose was to evaluate if sound effects...

  18. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  19. Structural and thermal behaviour of carious and sound powders of human tooth enamel and dentine

    International Nuclear Information System (INIS)

    Tiznado-Orozco, Gaby E; Garcia-Garcia, R; Reyes-Gasga, J

    2009-01-01

    Powders from carious human tooth enamel and dentine were structurally, chemically and thermally analysed and compared against those from sound (healthy) teeth. Structural and chemical analyses were performed using x-ray diffraction, energy-dispersive x-ray spectroscopy and transmission electron microscopy. Thermal analysis was carried out by thermogravimetric analysis, Fourier transform infrared spectroscopy and x-ray diffraction. Results demonstrate partially dissolved crystals of hydroxyapatite (HAP) with substitutions of Na, Mg, Cl and C, and a greater weight loss in carious dentine as compared with carious enamel. A greater amount of thermal decomposition is observed in carious dentine as compared with sound dentine, with greater variations in the a-axis of the HAP unit cell than in the c-axis. Variations in shape and intensity of the OH⁻, CO₃²⁻ and PO₄³⁻ FTIR bands were also found.

  20. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
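The binaural rendering step described above (convolving each sound with a head-related transfer function pair before headphone playback) can be sketched like this; the impulse responses below are toy stand-ins, not the measured non-individual HRTFs used in the study:

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono sound at a virtual position by convolving it with
    a head-related impulse response (HRIR) for each ear."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

click = np.zeros(64)
click[0] = 1.0                                # Delta ("click") sound
# Toy HRIRs for a source to the listener's right: the right ear
# receives the sound earlier and louder than the left ear.
hrir_r = np.array([1.0, 0.0, 0.0, 0.0])
hrir_l = np.array([0.0, 0.0, 0.0, 0.5])
binaural = spatialize(click, hrir_l, hrir_r)  # shape (2, 67)
```

The interaural time and level differences embedded in the HRIR pair are what let listeners localize the source through headphones.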

  1. Pervasive Sound Sensing: A Weakly Supervised Training Approach.

    Science.gov (United States)

    Kelly, Daniel; Caulfield, Brian

    2016-01-01

    Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and less time-consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework, to identify a target sound, and a bag trimming algorithm, which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one and results show that classifiers, trained using the automatically segmented training sets, were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving an average F-measure of 0.969 and 0.87 for two weakly supervised datasets.
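The F-measure reported above is the harmonic mean of precision and recall; it can be computed from a confusion matrix in a few lines (the counts below are illustrative, not the paper's data):

```python
def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts giving a score near the reported 0.969
score = f_measure(tp=94, fp=3, fn=3)
print(round(score, 3))  # 0.969
```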

  2. Human amygdala activation by the sound produced during dental treatment: A fMRI study

    Directory of Open Access Journals (Sweden)

    Jen-Fang Yu

    2015-01-01

    Full Text Available During dental treatments, patients may experience negative emotions associated with the procedure. This study was conducted with the aim of using functional magnetic resonance imaging (fMRI) to visualize cerebral cortical stimulation among dental patients in response to auditory stimuli produced by ultrasonic scaling and power suction equipment. Subjects (n = 7, aged 23-35 years) were recruited for this study. All were right-handed and underwent clinical pure-tone audiometry testing to confirm a normal hearing threshold below 20 dB hearing level (HL). As part of the study, subjects initially underwent a dental calculus removal treatment, during which they were exposed to ultrasonic auditory stimuli originating from the scaling handpiece and salivary suction instruments. After dental treatment, subjects were imaged with fMRI while being exposed to recordings of the noise from the same dental instruments so that cerebral cortical stimulation in response to aversive auditory stimulation could be observed. The independent-sample confirmatory t-test was used. Subjects showed stimulation in the amygdala and prefrontal cortex, indicating that the ultrasonic auditory stimuli elicited an unpleasant response. Patients experienced unpleasant sensations caused by contact stimuli in the treatment procedure; in addition, this study demonstrated that aversive auditory stimuli, such as sounds from the ultrasonic scaling handpiece, also cause aversive emotions. This was indicated by observed stimulation of the auditory cortex as well as the amygdala, showing that noise from the ultrasonic scaling handpiece was perceived as an aversive auditory stimulus by the subjects. Thus, subjects can experience unpleasant sensations caused by the sound of the ultrasonic scaling handpiece alone.

  3. Jump in the amplitude of a sound wave associated with contraction of a nitrogen discharge

    International Nuclear Information System (INIS)

    Galechyan, G.A.; Mkrtchyan, A.R.; Tavakalyan, L.B.

    1993-01-01

    The use of a sound wave created by an external source and directed along the positive column of a nitrogen discharge, in order to make the discharge pass to the contracted state, is studied experimentally. A phenomenon involving a jump in the sound wave amplitude, caused by the discharge contraction, is observed and studied. It is established that the amplitude of the sound wave as a function of the discharge current exhibits hysteresis near the jump. It is shown that a high-intensity sound wave, which causes the discharge to expand, eliminates the jump in the sound amplitude. The dependence of the rise time of the jump in sound amplitude on the sound wave intensity is also determined. 24 refs., 4 figs., 1 tab

  4. Sound pressure gain produced by the human middle ear.

    Science.gov (United States)

    Kurokawa, H; Goode, R L

    1995-10-01

    The acoustic function of the middle ear is to match sound passing from the low impedance of air to the high impedance of cochlear fluid. Little information is available on the actual middle ear pressure gain in human beings. This article describes experiments on middle ear pressure gain in six fresh human temporal bones. Stapes footplate displacement and phase were measured with a laser Doppler vibrometer before and after removal of the tympanic membrane, malleus, and incus. Acoustic insulation of the round window with clay was performed. Umbo displacement was also measured before tympanic membrane removal to assess baseline tympanic membrane function. The middle ear has its major gain in the lower frequencies, with a peak near 0.9 kHz. The mean gain was 23.0 dB below 1.0 kHz, the resonant frequency of the middle ear; the mean peak gain was 26.6 dB. Above 1.0 kHz, the pressure gain decreased at a rate of -8.6 dB/octave, with a mean gain of 6.5 dB at 4.0 kHz. Only a small amount of gain was present above 7.0 kHz. Significant individual differences in pressure gain were found between ears that appeared related to variations in tympanic membrane function and not to variations in cochlear impedance.
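The quoted roll-off can be sanity-checked from the reported endpoint gains: a slope in dB/octave is the gain difference divided by the number of octaves between the two frequencies.

```python
import math

def db_per_octave(gain_low_db, gain_high_db, f_low_hz, f_high_hz):
    """Average slope of a gain curve between two frequencies."""
    octaves = math.log2(f_high_hz / f_low_hz)
    return (gain_high_db - gain_low_db) / octaves

# Mean gains from the abstract: 23.0 dB near 1.0 kHz, 6.5 dB at 4.0 kHz
slope = db_per_octave(23.0, 6.5, 1000.0, 4000.0)
print(slope)  # -8.25 dB/octave
```

The two-endpoint estimate of -8.25 dB/octave is close to the reported -8.6 dB/octave, which is presumably fit over the full gain curve rather than just two points.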

  5. Social sciences in Puget Sound recovery

    Science.gov (United States)

    Katharine F. Wellman; Kelly Biedenweg; Kathleen Wolf

    2014-01-01

    Advancing the recovery of large-scale ecosystems, such as the Puget Sound in Washington State, requires improved knowledge of the interdependencies between nature and humans in that basin region. As Biedenweg et al. (this issue) illustrate, human wellbeing and human behavior do not occur independently of the biophysical environment. Natural environments contribute to...

  6. Determination of knowledge of Turkish midwifery students about human papilloma virus infection and its vaccines.

    Science.gov (United States)

    Genc, Rabia Ekti; Sarican, Emine Serap; Turgay, Ayse San; Icke, Sibel; Sari, Dilek; Saydam, Birsen Karaca

    2013-01-01

    Human papilloma virus (HPV) is one of the most common sexually transmitted agents and its infection is the most established cause of cervical cancer. Midwives play a key role in the prevention of cervical cancer. This descriptive study aimed to determine the level of knowledge concerning HPV and HPV vaccination among 268 midwifery students. Data were collected between November 15 and 30, 2011, through a self-reported questionnaire. The mean age of participants was 20.75 ± 1.60. Among all students, 44.4% had heard of HPV, while 40.4% had heard of HPV vaccination. The relationship between the midwifery students' knowledge of HPV and the HPV vaccine and their current educational year was significant (p=0.001). In conclusion, midwifery students have a moderate level of knowledge about HPV and its vaccine, and relevant information should be included in their teaching curriculum.

  7. Sound Synthesis of Objects Swinging through Air Using Physical Models

    Directory of Open Access Journals (Sweden)

    Rod Selfridge

    2017-11-01

    Full Text Available A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.
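A physically-derived model of this kind rests on the Aeolian tone relation from fluid dynamics, in which the fundamental frequency of vortex shedding behind a cylinder depends on airspeed and diameter. A minimal sketch follows; the Strouhal number of 0.2 is a standard textbook value for this flow regime, an assumption rather than a parameter taken from the paper:

```python
def aeolian_tone_hz(speed_m_s, diameter_m, strouhal=0.2):
    """Vortex-shedding frequency f = St * u / d for a cylinder of
    diameter d moving through air at speed u."""
    return strouhal * speed_m_s / diameter_m

# A thin 1 cm rod swung at 10 m/s sheds vortices at ~200 Hz
print(aeolian_tone_hz(10.0, 0.01))  # 200.0
```

Sweeping the speed parameter over the course of a swing is what produces the characteristic rising-and-falling "whoosh" pitch.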

  8. A Hearing-Based, Frequency Domain Sound Quality Model for Combined Aerodynamic and Power Transmission Response with Application to Rotorcraft Interior Noise

    Science.gov (United States)

    Sondkar, Pravin B.

    The severity of combined aerodynamic and power transmission response in high-speed, high power density systems such as a rotorcraft is still a major cause of annoyance in spite of recent advancements in passive, semi-active and active control. With further increases in the capacity and power of this class of machinery systems, acoustic noise levels are expected to increase even more. To achieve further improvements in sound quality, a more refined understanding of the factors and attributes controlling human perception is needed. In the case of rotorcraft systems, the perceived quality of the interior sound field is a major determining factor of passenger comfort. Traditionally, this sound quality factor is determined by measuring the response of a chosen set of juries who are asked to compare their qualitative reactions to two or more sounds based on their subjective impressions. This type of testing is very time-consuming, costly, often inconsistent, and not useful for practical design purposes. Furthermore, there is no known universal model for sound quality. The primary aim of this research is to achieve significant improvements in quantifying the sound quality of combined aerodynamic and power transmission response in high-speed, high power density machinery systems such as a rotorcraft by applying relevant objective measures related to the spectral characteristics of the sound field. Two models have been proposed in this dissertation research. First, a classical multivariate regression analysis model based on currently known sound quality metrics as well as some new metrics derived in this study is presented. Even though the analysis resulted in the best possible multivariate model as a measure of the acoustic noise quality, it lacks incorporation of the human judgment mechanism. The regression model can change depending on the specific application, the nature of the sounds and the types of juries used in the study. Also, it predicts only the averaged preference scores and
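The classical multivariate regression step can be sketched as ordinary least squares on synthetic data (the metric columns, weights and jury scores below are illustrative inventions, not the study's metrics or data):

```python
import numpy as np

# Toy setup: predict mean jury preference scores from sound quality metrics
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))                  # 30 sounds x 3 metric values
true_w = np.array([0.8, -0.5, 0.3])           # hypothetical metric weights
y = X @ true_w + 0.01 * rng.normal(size=30)   # averaged preference scores

# Fit metric weights (plus an intercept column) by ordinary least squares
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(w[:3], 2))                     # recovered weights, near true_w
```

As the abstract notes, such a fit only predicts averaged scores and is tied to the particular sounds, metrics and juries used; that limitation motivates the second, hearing-based model.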

  9. Game Sound from Behind the Sofa

    DEFF Research Database (Denmark)

    Garner, Tom Alexander

    2013-01-01

    The central concern of this thesis is upon the processes by which human beings perceive sound and experience emotions within a computer video gameplay context. The potential of quantitative sound parameters to evoke and modulate emotional experience is explored, working towards the development...... that provide additional support of the hypothetical frameworks: an ecological process of fear, a fear-related model of virtual and real acoustic ecologies, and an embodied virtual acoustic ecology framework. It is intended that this thesis will clearly support more effective and efficient sound design...... practices and also improve awareness of the capacity of sound to generate significant emotional experiences during computer video gameplay. It is further hoped that this thesis will elucidate the potential of biometrics/psychophysiology to allow game designers to better understand the player and to move...

  10. New perspectives on mechanisms of sound generation in songbirds

    DEFF Research Database (Denmark)

    Goller, Franz; Larsen, Ole Næsbye

    2002-01-01

    The physical mechanisms of sound generation in the vocal organ, the syrinx, of songbirds have been investigated mostly with indirect methods. Recent direct endoscopic observation identified vibrations of the labia as the principal sound source. This model suggests sound generation in a pulse-tone mechanism similar to human phonation, with the labia forming a pneumatic valve. The classical avian model proposed that vibrations of the thin medial tympaniform membranes are the primary sound generating mechanism. As a direct test of these two hypotheses we ablated the medial tympaniform membranes in two ... atmosphere) as well as direct (labial vibration during tonal sound) measurements of syringeal vibrations support a vibration-based sound-generating mechanism even for tonal sounds.

  11. Characteristics of epilepsy patients and caregivers who either have or have not heard of SUDEP.

    Science.gov (United States)

    Kroner, Barbara L; Wright, Cyndi; Friedman, Daniel; Macher, Kim; Preiss, Liliana; Misajon, Jade; Devinsky, Orrin

    2014-10-01

    Describe the characteristics of persons with epilepsy (PWEs) and caregivers that have or have not heard of sudden unexpected death in epilepsy (SUDEP) prior to completing a survey through the Internet or in the clinical setting. An online survey for adult PWEs and caregivers was solicited by e-mail and newsletter to Epilepsy Therapy Project members. A similar survey was implemented in a clinic setting of a community hospital. The survey asked about seizure characteristics, epilepsy management, fear of death, and familiarity with the term SUDEP. Respondents that never heard of SUDEP read a definition and responded to questions about their initial reactions. Surveys from 1,392 PWEs and 611 caregivers recruited through an epilepsy Website and a clinic demonstrated that Internet respondents were much more likely to have heard about SUDEP than the clinic population (71.1% vs. 38.8%; p vs. 65.2%; p fear, anxiety, and sadness after first hearing of SUDEP, they wanted to discuss it with their doctor. Persons with epilepsy, and especially their caregivers, often worry that the PWEs may die of epilepsy or seizures. This worry escalated with knowledge of SUDEP and increased epilepsy severity. Approximately half of PWEs and caregivers believed that knowledge about SUDEP would influence their epilepsy management. Our results may help epilepsy care providers determine when to facilitate a discussion about epilepsy-related mortality and SUDEP among patients and caregivers, and to educate those at high risk about the importance of seizure control as well as reduce fears about death in patients with well-controlled and nonconvulsive epilepsies. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.

  12. Darwin's Sacred Cause

    DEFF Research Database (Denmark)

    Kjærgaard, Peter C.

    2009-01-01

As we are being flooded by Darwin lollipops, t-shirts, quills and stamps it is becoming increasingly difficult to be heard or seen in the commercialised celebration in 2009. Some are in the business for the science, but a lot are in it for profit. Accordingly, the Darwin industry has left the hands...... of scholarly specialists and been appropriated by money makers. One could not help thinking about this as, in the autumn of 2008, the publisher began hyping Darwin's Sacred Cause as ‘one of the major contributions to the worldwide Darwin anniversary celebrations in 2009' Publication date: February...

  13. Tool-use-associated sound in the evolution of language.

    Science.gov (United States)

    Larsson, Matz

    2015-09-01

    Proponents of the motor theory of language evolution have primarily focused on the visual domain and communication through observation of movements. In the present paper, it is hypothesized that the production and perception of sound, particularly of incidental sound of locomotion (ISOL) and tool-use sound (TUS), also contributed. Human bipedalism resulted in rhythmic and more predictable ISOL. It has been proposed that this stimulated the evolution of musical abilities, auditory working memory, and abilities to produce complex vocalizations and to mimic natural sounds. Since the human brain proficiently extracts information about objects and events from the sounds they produce, TUS, and mimicry of TUS, might have achieved an iconic function. The prevalence of sound symbolism in many extant languages supports this idea. Self-produced TUS activates multimodal brain processing (motor neurons, hearing, proprioception, touch, vision), and TUS stimulates primate audiovisual mirror neurons, which is likely to stimulate the development of association chains. Tool use and auditory gestures involve motor processing of the forelimbs, which is associated with the evolution of vertebrate vocal communication. The production, perception, and mimicry of TUS may have resulted in a limited number of vocalizations or protowords that were associated with tool use. A new way to communicate about tools, especially when out of sight, would have had selective advantage. A gradual change in acoustic properties and/or meaning could have resulted in arbitrariness and an expanded repertoire of words. Humans have been increasingly exposed to TUS over millions of years, coinciding with the period during which spoken language evolved. ISOL and tool-use-related sound are worth further exploration.

  14. Sounds scary? Lack of habituation following the presentation of novel sounds.

    Directory of Open Access Journals (Sweden)

    Tine A Biedenweg

Full Text Available BACKGROUND: Animals typically show less habituation to biologically meaningful sounds than to novel signals. We might therefore expect that acoustic deterrents should be based on natural sounds. METHODOLOGY: We investigated responses by western grey kangaroos (Macropus fuliginosus) towards playback of natural sounds (alarm foot stomps and Australian raven (Corvus coronoides) calls) and artificial sounds (faux snake hiss and bull whip crack). We then increased the rate of presentation to examine whether animals would habituate. Finally, we varied the frequency of playback to investigate optimal rates of delivery. PRINCIPAL FINDINGS: Nine behaviors clustered into five Principal Components. PC factors 1 and 2 (animals alert or looking, or hopping and moving out of area) accounted for 36% of variance. PC factor 3 (eating cessation, taking flight, movement out of area) accounted for 13% of variance. Factors 4 and 5 (relaxing, grooming and walking; 12% and 11% of variation, respectively) discontinued upon playback. The whip crack was most evocative; eating was reduced from 75% of time spent prior to playback to 6% following playback (post alarm stomp: 32%, raven call: 49%, hiss: 75%). Additionally, 24% of individuals took flight and moved out of the area (50 m radius) in response to the whip crack (foot stomp: 0%, raven call: 8% and 4%, hiss: 6%). Increasing the rate of presentation (12x/min for 2 min) caused 71% of animals to move out of the area. CONCLUSIONS/SIGNIFICANCE: The bull whip crack, an artificial sound, was as effective as the alarm stomp at eliciting aversive behaviors. Kangaroos did not fully habituate despite hearing the signal up to 20x/min. The highest rates of playback did not elicit the greatest responses, suggesting that 'more is not always better'. Ultimately, by utilizing both artificial and biological sounds, predictability may be masked or offset, so that habituation is delayed and more effective deterrents may be produced.

  15. Differential presence of anthropogenic compounds dissolved in the marine waters of Puget Sound, WA and Barkley Sound, BC.

    Science.gov (United States)

    Keil, Richard; Salemme, Keri; Forrest, Brittany; Neibauer, Jaqui; Logsdon, Miles

    2011-11-01

Organic compounds were evaluated in March 2010 at 22 stations in Barkley Sound, Vancouver Island Canada and at 66 locations in Puget Sound. Of 37 compounds, 15 were xenobiotics, 8 were determined to have an anthropogenic imprint over natural sources, and 13 were presumed to be of natural or mixed origin. The three most frequently detected compounds were salicylic acid, vanillin and thymol. The three most abundant compounds were diethylhexyl phthalate (DEHP), ethyl vanillin and benzaldehyde (∼600 ng L(-1) on average). Concentrations of xenobiotics were 10-100 times higher in Puget Sound relative to Barkley Sound. Three compound couplets are used to illustrate the influence of human activity on marine waters; vanillin and ethyl vanillin, salicylic acid and acetylsalicylic acid, and cinnamaldehyde and cinnamic acid. Ratios indicate that anthropogenic activities are the predominant source of these chemicals in Puget Sound. Published by Elsevier Ltd.

  16. 800 Fellows at CERN: Make your voice heard!

    CERN Multimedia

    Staff Association

    2018-01-01

    The financial and social conditions of the CERN personnel are determined in agreement between the CERN Management and the Staff Association (SA), the statutory body representing the personnel. The Staff Association is mandated to serve and defend the economic, social, professional and moral interests of its members and the entire CERN personnel. Fellows are members of the personnel employed by CERN (MPE), and should be listened to and heard during discussions that concern their financial and social conditions and, more widely, their working and employment conditions. The Staff Association wants to hear from you! At the end of 2017, 50 delegates were elected to the Staff Council, including four fellows. These delegates are your spokespersons within the Staff Association and, as such, they represent you in relations with the Management and the Member States. How can you make a difference? In 2015, during the latest five-yearly review, the aim of which was to review the financial and social conditions of the mem...

  17. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

The heart sound is one of the most important physiological signals. However, the process of acquiring a heart sound signal can be disturbed by many external factors. Because the heart sound is a weak signal, even faint external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. It is therefore essential to remove the noise mixed with the heart sound. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB is presented. The noisy heart sound signals are first transformed into the wavelet domain using MATLAB's signal processing functions and decomposed over multiple levels. Soft thresholding is then applied to the detail coefficients to eliminate noise, so that signal denoising is significantly improved. The denoised signals are reconstructed stepwise from the processed detail coefficients. Lastly, 50 Hz power-frequency and 35 Hz mechanical and electrical interference signals are eliminated using a notch filter.
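The denoising scheme this record describes (wavelet decomposition, soft thresholding of the detail coefficients, reconstruction) can be sketched in a few lines. The sketch below is a minimal NumPy version using a single-level Haar transform as a stand-in for the multi-level MATLAB wavelet analysis in the paper; the test signal, noise level and threshold are illustrative assumptions, not values from the study.

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar transform: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_reconstruct(a, d):
    """Invert the one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small (noise-like) ones vanish."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def denoise(signal, threshold):
    """Threshold only the detail coefficients, then reconstruct."""
    a, d = haar_decompose(signal)
    return haar_reconstruct(a, soft_threshold(d, threshold))

# Toy example: a slow "heart-sound-like" oscillation plus white noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
denoised = denoise(noisy, threshold=0.3)

# Denoising should reduce the mean squared error against the clean signal.
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

In practice one would iterate the decomposition over several levels (as the paper does) and pick the threshold from the estimated noise level rather than fixing it by hand.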

  18. Long-term exposure to noise impairs cortical sound processing and attention control.

    Science.gov (United States)

    Kujala, Teija; Shtyrov, Yury; Winkler, Istvan; Saher, Marieke; Tervaniemi, Mari; Sallinen, Mikael; Teder-Sälejärvi, Wolfgang; Alho, Kimmo; Reinikainen, Kalevi; Näätänen, Risto

    2004-11-01

    Long-term exposure to noise impairs human health, causing pathological changes in the inner ear as well as other anatomical and physiological deficits. Numerous individuals are daily exposed to excessive noise. However, there is a lack of systematic research on the effects of noise on cortical function. Here we report data showing that long-term exposure to noise has a persistent effect on central auditory processing and leads to concurrent behavioral deficits. We found that speech-sound discrimination was impaired in noise-exposed individuals, as indicated by behavioral responses and the mismatch negativity brain response. Furthermore, irrelevant sounds increased the distractibility of the noise-exposed subjects, which was shown by increased interference in task performance and aberrant brain responses. These results demonstrate that long-term exposure to noise has long-lasting detrimental effects on central auditory processing and attention control.

  19. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
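The peak-picking idea behind these sketches — keep only a handful of the largest time-frequency values and discard everything else — can be illustrated with a minimal NumPy sketch. The toy spectrogram and the number of retained peaks below are assumptions for illustration, not the representations or sparsity levels used in the study.

```python
import numpy as np

def sketch(spectrogram, n_peaks):
    """Keep only the n_peaks largest values of a time-frequency
    representation and zero everything else (crude peak-picking)."""
    flat = spectrogram.ravel()
    keep = np.argsort(flat)[-n_peaks:]   # indices of the largest values
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(spectrogram.shape)

# Toy spectrogram: 64 frequency bins x 100 time frames with a steady tone
# and a brief loud burst buried in low-level noise.
rng = np.random.default_rng(1)
S = 0.01 * rng.random((64, 100))
S[10, :] += 1.0        # steady tone across all frames
S[40, 50:60] += 2.0    # brief, louder burst

sparse = sketch(S, n_peaks=10)
assert np.count_nonzero(sparse) == 10   # only 10 "features" survive
assert sparse[40, 55] > 0               # the loudest event is among them
```

A real auditory sketch would pick peaks per time frame (so quieter events are not entirely crowded out) and then resynthesize a waveform from the sparse representation.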

  20. Representing Immigration Detainees: The Juxtaposition of Image and Sound in "Border Country"

    Directory of Open Access Journals (Sweden)

    Melanie Friend

    2010-05-01

    Full Text Available This paper discusses the four-year (2003-2007 research process towards my exhibition and publication "Border Country", which focuses on the experience of immigration detainees (appellant or "failed" asylum seekers in the UK's "immigration removal centres". I discuss my earlier exhibition "Homes and Gardens: Documenting the Invisible" which focused on the repression in Kosovo under the Milošević regime, and the difficulties of representing the "hidden violence" which led to the adoption of a particular sound/image structure for the exhibition. I discuss how I then chose to work with a similar sound/image framework for "Border Country" and the aesthetic and conceptual considerations involved. I discuss the decision to expand the focus of the exhibition from one individual detainee to eleven, and to omit the photographic portraits of detainees from the exhibition for ethical and conceptual reasons. I finally produced a juxtaposition of photographs of immigration removal centre landscapes and interiors (devoid of people with a soundtrack of oral testimonies. The voices of individual detainees could be heard at listening stations within the gallery spaces or on the publication's audio CD. Within this research process I also discuss my interview methodology and questions of power imbalance between photographer/artist and incarcerated asylum seekers. URN: urn:nbn:de:0114-fqs1002334

  1. Sounds like Team Spirit

    Science.gov (United States)

    Hoffman, Edward

    2002-01-01

    trying to improve on what they've done before. Second, success in any endeavor stems from people who know how to interpret a composition to sound beautiful when played in a different style. For Knowledge Sharing to work, it must be adapted, reinterpreted, shaped and played with at the centers. In this regard, we've been blessed with another crazy, passionate, inspired artist named Claire Smith. Claire has turned Ames Research Center in California into APPL-west. She is so good and committed to what she does that I just refer people to her whenever they have questions about implementing project management development at the field level. Finally, any great effort requires talented people working behind the scenes, the people who formulate a business approach and know how to manage the money so that the music gets heard. I have known many brilliant and creative people with a ton of ideas that never take off due to an inability to work the business. Again, the Knowledge Sharing team has been fortunate to have competent and passionate people, specifically Tony Maturo and his procurement team at Goddard Space Flight Center, to make sure the process is in place to support the effort. This kind of support is every bit as crucial as the activity itself, and the efforts and creativity that go into successful procurement and contracting is a vital ingredient of this successful team.

  2. Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments

    Science.gov (United States)

    Han, Wenjing; Coutinho, Eduardo; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan

    2016-01-01

Coping with scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce, leading to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation for sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores, and then delivers the candidates with lower scores to human annotators, and those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly less labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human labeled instances is achieved in both of the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances. PMID:27627768

  3. Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments.

    Science.gov (United States)

    Han, Wenjing; Coutinho, Eduardo; Ruan, Huabin; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan

    2016-01-01

Coping with scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce, leading to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation for sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores, and then delivers the candidates with lower scores to human annotators, and those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly less labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human labeled instances is achieved in both of the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances.
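The confidence-based routing at the core of this method — the machine auto-labels high-confidence instances and sends low-confidence ones to human annotators — can be sketched as follows. The threshold and the toy posterior probabilities are illustrative assumptions, not values from the paper.

```python
import numpy as np

def split_by_confidence(probabilities, threshold=0.9):
    """Route instances by classifier confidence: instances whose top-class
    probability meets the threshold are machine-labeled; the rest are
    flagged for human annotation."""
    confidence = probabilities.max(axis=1)     # top-class probability
    machine = confidence >= threshold          # True -> auto-label
    auto_labels = probabilities.argmax(axis=1) # machine's label choice
    return machine, auto_labels

# Toy posterior probabilities for 4 sound instances over 3 classes.
probs = np.array([
    [0.95, 0.03, 0.02],   # confident -> machine labels it
    [0.40, 0.35, 0.25],   # uncertain -> ask a human
    [0.10, 0.85, 0.05],   # uncertain -> ask a human
    [0.01, 0.01, 0.98],   # confident -> machine labels it
])
machine, labels = split_by_confidence(probs, threshold=0.9)
assert machine.tolist() == [True, False, False, True]
assert labels[0] == 0 and labels[3] == 2
```

In the full method this split is applied iteratively: the classifier is retrained on the newly labeled pool and the routing is repeated, which is what reduces the human-labeling effort.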

  4. Sound Heart: Spiritual Nursing Care Model from Religious Viewpoint.

    Science.gov (United States)

    Asadzandi, Minoo

    2017-12-01

    Different methods of epistemology create different philosophical views. None of the nursing theories have employed the revelational epistemology and the philosophical views of Abrahamic religions. According to Abrahamic religions, the universe and human being have been created based on God's affection. Human being should deserve the position of God's representative on earth after achieving all ethical merits. Humans have willpower to shape their destiny by choosing manner of their relationship with God, people, themselves and the whole universe. They can adopt the right behavior by giving a divine color to their thoughts and intentions and thus attain peace and serenity in their heart. Health means having a sound heart (calm spirit with a sense of hope and love, security and happiness) that is achievable through faith and piety. Moral vices lead to diseases. Human beings are able to purge their inside (heart) through establishing a relationship with God and then take actions to reform the outside world. The worlds are run by God's will based on prudence and mercy. All events happen with God's authorization, and human beings have to respond to them. Nurses should try to recognize the patient's spiritual response to illness that can appear as symptoms of an unsound heart (fear, sadness, disappointment, anger, jealousy, cruelty, grudge, suspicion, etc.) due to the pains caused by illness and then alleviate the patient's suffering by appropriate approaches. Nurses help the patient to achieve the sound heart by hope in divine mercy and love, and they help the patient see good in any evil and relieve their fear and sadness by viewing their illness positively and then attain the status of calm, satisfaction, peace and serenity in their heart and being content with the divine fate. By invitation to religious morality, the model leads the patients to spiritual health.

  5. Saúde ocupacional: considerações a respeito da perda auditiva induzida por ruído e da disfonia

    Directory of Open Access Journals (Sweden)

    Laura Corrêa de Barros

    2003-03-01

    Full Text Available This study provides some information about occupational health, more specifically about noise-induced hearing loss and voice disorders caused by the attempt to overcome background noise. Repeated exposures to excessive sound levels can lead to noise-induced hearing loss and voice disorders. Several studies have been conducted in order to suggest best means to control noise. Among the professionals involved in the hearing loss prevention are the industrial engineers and operations managers. This study also suggests that many of the workers that perform their job in a noisy place have to raise their voices to be heard over the sound environment, which could cause voice disorders. Therefore, some considerations have been made about the aspects of noise control projects and ways to prevent hearing loss and voice disorders in a noisy environment. Keywords: excessive sound levels, hearing loss, voice disorders.

  6. Sound modes in hot nuclear matter

    International Nuclear Information System (INIS)

    Kolomietz, V. M.; Shlomo, S.

    2001-01-01

The propagation of isoscalar and isovector sound modes in hot nuclear matter is considered. The approach is based on the collisional kinetic theory and takes into account temperature and memory effects. It is shown that the sound velocity and the attenuation coefficient are significantly influenced by the Fermi surface distortion (FSD). The corresponding influence is much stronger for the isoscalar mode than for the isovector one. The memory effects cause a nonmonotonous behavior of the attenuation coefficient as a function of the relaxation time, leading to a zero-to-first sound transition with increasing temperature. The mixing of the isoscalar and isovector sound modes in asymmetric nuclear matter is evaluated. The condition for the bulk instability and the instability growth rate in the presence of memory effects are studied. It is shown that both the FSD and the relaxation processes lead to a shift of the maximum of the instability growth rate to the longer-wavelength region.

  7. Neuromimetic Sound Representation for Percept Detection and Manipulation

    Directory of Open Access Journals (Sweden)

    Chi Taishih

    2005-01-01

    Full Text Available The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal sampled at . Work on bringing the algorithms into the real-time processing domain is ongoing.

  8. Realtime synthesized sword-sounds in Wii computer games

    DEFF Research Database (Denmark)

    Böttcher, Niels

    This paper presents the current work carried out on an interactive sword fighting game, developed for the Wii controller. The aim of the work is to develop highly interactive action-sound, which is closely mapped to the physical actions of the player. The interactive sword sound is developed using...... a combination of granular synthesis and subtractive synthesis simulating wind. The aim of the work is to test if more interactive sound can affect the way humans interact physically with their body, when playing games with controllers such as the Wii remote....

  9. Study on The Effectiveness of Egg Tray and Coir Fibre as A Sound Absorber

    Science.gov (United States)

    Kaamin, Masiri; Farah Atiqah Ahmad, Nor; Ngadiman, Norhayati; Kadir, Aslila Abdul; Razali, Siti Nooraiin Mohd; Mokhtar, Mardiha; Sahat, Suhaila

    2018-03-01

Sound or noise pollution has become one of the major issues for communities, especially those living in urban areas, and it affects daily human life. This excessive noise is mainly caused by machines, traffic and motor vehicles, as well as unwanted sounds coming from outside or even inside a building, such as loud music. The installation of sound absorption panels is therefore one way to reduce noise pollution inside a building. The selected material must be porous and hollow in order to absorb high-frequency sound. This study was conducted to evaluate the potential of egg trays and coir fibre as a sound absorption panel. Coir fibre has a good absorption coefficient, which makes it suitable as a sound absorption material that can replace traditional synthetic and wooden materials. The pyramid shape of the egg tray provides a large surface for uniform sound reflection. The study used a panel of size 1 m x 1 m with a thickness of 6 mm, consisting of an egg tray layer, a coir fibre layer and a fabric wrapping for aesthetic value. A room reverberation test was carried out to measure the reduction in reverberation time (RT). The results show that the reverberation time readings are at low frequencies, 125 Hz to 1600 Hz; within this range, the panel shortened the reverberation time from 5.63 s to 3.60 s. Hence, it can be concluded from this study that the selected materials have potential as a good sound absorption panel. A comparison is made with previous research that used egg trays and kapok as a sound absorption panel.

  10. Study on The Effectiveness of Egg Tray and Coir Fibre as A Sound Absorber

    Directory of Open Access Journals (Sweden)

    Kaamin Masiri

    2018-01-01

Full Text Available Sound or noise pollution has become one of the major issues for communities, especially those living in urban areas, and it affects daily human life. This excessive noise is mainly caused by machines, traffic and motor vehicles, as well as unwanted sounds coming from outside or even inside a building, such as loud music. The installation of sound absorption panels is therefore one way to reduce noise pollution inside a building. The selected material must be porous and hollow in order to absorb high-frequency sound. This study was conducted to evaluate the potential of egg trays and coir fibre as a sound absorption panel. Coir fibre has a good absorption coefficient, which makes it suitable as a sound absorption material that can replace traditional synthetic and wooden materials. The pyramid shape of the egg tray provides a large surface for uniform sound reflection. The study used a panel of size 1 m x 1 m with a thickness of 6 mm, consisting of an egg tray layer, a coir fibre layer and a fabric wrapping for aesthetic value. A room reverberation test was carried out to measure the reduction in reverberation time (RT). The results show that the reverberation time readings are at low frequencies, 125 Hz to 1600 Hz; within this range, the panel shortened the reverberation time from 5.63 s to 3.60 s. Hence, it can be concluded from this study that the selected materials have potential as a good sound absorption panel. A comparison is made with previous research that used egg trays and kapok as a sound absorption panel.
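The reverberation-time drop reported in this record can be related to added absorption via Sabine's formula, RT60 = 0.161 V / A (V in m³, A in m² sabins). The sketch below assumes a hypothetical 60 m³ test room, since the record does not state the room volume.

```python
# Sabine's formula: RT60 = 0.161 * V / A  (V in m^3, A in m^2 sabins).
def added_absorption(volume_m3, rt_before_s, rt_after_s):
    """Extra absorption area implied by a drop in reverberation time."""
    a_before = 0.161 * volume_m3 / rt_before_s   # absorption before the panel
    a_after = 0.161 * volume_m3 / rt_after_s     # absorption after the panel
    return a_after - a_before

# Hypothetical 60 m^3 room with the RT drop reported above (5.63 s -> 3.60 s):
delta_a = added_absorption(60.0, rt_before_s=5.63, rt_after_s=3.60)
assert delta_a > 0   # absorption increased, which is why the RT fell
# delta_a is about 0.97 m^2 sabins of added absorption for this assumed room.
```

Dividing the added absorption by the panel area (1 m²) would give a rough effective absorption coefficient for the panel, under the same assumed room volume.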

  11. Interactive physically-based sound simulation

    Science.gov (United States)

    Raghuvanshi, Nikunj

The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously-intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation.

  12. Directional sound radiation from substation transformers

    International Nuclear Information System (INIS)

    Maybee, N.

    2009-01-01

    This paper presented the results of a study in which acoustical measurements at two substations were analyzed to investigate the directional behaviour of typical arrays of two or three transformers. Substation transformers produce a characteristic humming sound that is caused primarily by vibration of the core at twice the frequency of the power supply. The humming noise radiates predominantly from the tank enclosing the core. The main components of the sound are harmonics of 120 Hz. Sound pressure level data were obtained for various directions and distances from the arrays, ranging from 0.5 m to over 100 m. The measured sound pressure levels of the transformer tones displayed substantial positive and negative excursions from the calculated average values for many distances and directions. The results support the concept that the directional effects are associated with constructive and destructive interference of tonal sound waves emanating from different parts of the array. Significant variations in the directional sound pattern can occur in the near field of a single transformer or an array, and the extent of the near field is significantly larger than the scale of the array. Based on typical dimensions for substation sites, the distance to the far field may extend well beyond the substation boundary and beyond typical setbacks to the closest dwellings. As such, the directional sound radiation produced by transformer arrays introduces additional uncertainty in the prediction of substation sound levels at dwellings within a few hundred meters of a substation site. 4 refs., 4 figs.
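
    The constructive/destructive interference the study invokes can be illustrated with a toy model that treats each transformer tank as a coherent monopole radiating the 120 Hz hum. The geometry and listener positions below are hypothetical, chosen only to show how the summed level swings with direction.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s
F = 120.0  # fundamental transformer hum, Hz

def tonal_level_db(listener, sources, f=F):
    """Coherent sum of monopoles at one frequency: each source
    contributes amplitude 1/r with phase k*r, so the total level
    varies with direction as path differences change."""
    k = 2 * np.pi * f / C
    p = 0j
    for s in sources:
        r = np.linalg.norm(np.subtract(listener, s))
        p += np.exp(-1j * k * r) / r
    return 20 * np.log10(abs(p))

tanks = [(0.0, 0.0), (5.0, 0.0)]                # two tanks 5 m apart
on_axis = tonal_level_db((2.5, 50.0), tanks)    # equal paths: in phase
off_axis = tonal_level_db((40.0, 30.0), tanks)  # ~3.9 m path difference
```

    With a 120 Hz wavelength of about 2.9 m, a path difference of a metre or two is enough to move a receiver between reinforcement and partial cancellation, which is the mechanism behind the measured excursions from the average levels.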

  13. Human-caused Indo-Pacific warm pool expansion.

    Science.gov (United States)

    Weller, Evan; Min, Seung-Ki; Cai, Wenju; Zwiers, Francis W; Kim, Yeon-Hee; Lee, Donghyun

    2016-07-01

    The Indo-Pacific warm pool (IPWP) has warmed and grown substantially during the past century. The IPWP is Earth's largest region of warm sea surface temperatures (SSTs), has the highest rainfall, and is fundamental to global atmospheric circulation and hydrological cycle. The region has also experienced the world's highest rates of sea-level rise in recent decades, indicating large increases in ocean heat content and leading to substantial impacts on small island states in the region. Previous studies have considered mechanisms for the basin-scale ocean warming, but not the causes of the observed IPWP expansion, where expansion in the Indian Ocean has far exceeded that in the Pacific Ocean. We identify human and natural contributions to the observed IPWP changes since the 1950s by comparing observations with climate model simulations using an optimal fingerprinting technique. Greenhouse gas forcing is found to be the dominant cause of the observed increases in IPWP intensity and size, whereas natural fluctuations associated with the Pacific Decadal Oscillation have played a smaller yet significant role. Further, we show that the shape and impact of human-induced IPWP growth could be asymmetric between the Indian and Pacific basins, the causes of which remain uncertain. Human-induced changes in the IPWP have important implications for understanding and projecting related changes in monsoonal rainfall, and frequency or intensity of tropical storms, which have profound socioeconomic consequences.

  14. The meaning of city noises: Investigating sound quality in Paris (France)

    Science.gov (United States)

    Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie

    2004-05-01

    The sound quality of Paris (France) was investigated by using field inquiries in actual environments (open questionnaires) and using recordings under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and of mathematical analyses of similarity judgments. Results show that auditory judgments mainly rely on source identification. The appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces has been demonstrated, in relation to pleasantness judgments: soundscapes with human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This suggests that both quantitative and qualitative descriptions are needed to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on the inherent properties of the acoustic signal.

  15. ANALYSIS OF SOUND PRESSURE LEVEL (SPL) AND LAYOUT OF ENGINES IN THE FACTORY

    Directory of Open Access Journals (Sweden)

    Wijianto Wijianto

    2017-01-01

    Full Text Available Modeling the layout of engines in a factory is very useful for predicting the sound pressure level (in dB) that occurs inside the building, in order to avoid noise-induced hearing damage to employees. The objective of this research is to determine the sound pressure level occurring in a factory containing a boiler, a diesel engine, a turbine, a motor and a gear box, in a building 40 m long, 35 m wide and 10 m high. MATLAB analysis shows that the highest SPL is 104.7 dB and the lowest 93.5 dB; this range is dangerous for human hearing. To avoid hearing damage in this area, employees must use hearing protectors.
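
    The combination of several engines' noise at a single point follows the standard incoherent summation of sound pressure levels. A minimal sketch; the machine levels below are illustrative, not the paper's data.

```python
import math

def combine_spl(levels_db):
    """Incoherent dB summation: L_total = 10*log10(sum 10^(Li/10))."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# individual levels of five machines measured at the same point
machines = [98.0, 96.0, 95.0, 93.0, 90.0]
total = combine_spl(machines)      # dominated by the loudest machines
pair = combine_spl([90.0, 90.0])   # two equal sources add 3 dB
```

    Because the summation is logarithmic, quieting the loudest one or two machines reduces the total far more than treating the quieter ones.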

  16. Preventing marine accidents caused by technology-induced human error

    OpenAIRE

    Bielić, Toni; Hasanspahić, Nermin; Čulin, Jelena

    2017-01-01

    The objective of embedding technology on board ships, to improve safety, is not fully accomplished. The paper studies marine accidents caused by human error resulting from improper human-technology interaction. The aim of the paper is to propose measures to prevent reoccurrence of such accidents. This study analyses the marine accident reports issued by Marine Accidents Investigation Branch covering the period from 2012 to 2014. The factors that caused these accidents are examined and categor...

  17. Thinking soap But Speaking ‘oaps’. The Sound Preparation Period: Backward Calculation From Utterance to Muscle Innervation

    Directory of Open Access Journals (Sweden)

    Nora Wiedenmann

    2010-04-01

    Full Text Available

    In this article's model of speech and of speech errors, dyscoordinations, and disorders, the time-course from the muscle innervation impetuses to the utterance of sounds as intended for canonical speech sound sequences is calculated backward. This time-course is shown as the sum of all the known physiological durations of speech sounds and speech gestures that are necessary to produce an utterance. The model introduces two internal clocks, based on positive or negative factors, representing certain physiologically based time-courses during the sound preparation period (Lautvorspann). The use of these internal clocks shows that speech gestures, like other motor activities, work according to a simple serialization principle: under non-default conditions, alterations of the time-courses may cause speech errors of sound serialization, dyscoordinations of sounds as observed during first language acquisition, or speech disorders as pathological cases. These alterations of the time-course are modelled by varying the two internal-clock factors. The calculation of time-courses uses as default values the sound durations of the context-dependent Munich PHONDAT Database of Spoken German (see Appendix 4). As a new approach, this calculation agrees mathematically with Linear Programming / Operations Research. This work gives strong support to the fairly old suspicion (of 1908) of the famous Austrian speech error scientist Meringer [15], namely that one mostly thinks and articulates in a different serialization than is audible from one's uttered sound sequences.

  18. Selective attention to sound location or pitch studied with event-related brain potentials and magnetic fields.

    Science.gov (United States)

    Degerman, Alexander; Rinne, Teemu; Särkkä, Anna-Kaisa; Salmi, Juha; Alho, Kimmo

    2008-06-01

    Event-related brain potentials (ERPs) and magnetic fields (ERFs) were used to compare brain activity associated with selective attention to sound location or pitch in humans. Sixteen healthy adults participated in the ERP experiment, and 11 adults in the ERF experiment. In different conditions, the participants focused their attention on a designated sound location or pitch, or pictures presented on a screen, in order to detect target sounds or pictures among the attended stimuli. In the Attend Location condition, the location of sounds varied randomly (left or right), while their pitch (high or low) was kept constant. In the Attend Pitch condition, sounds of varying pitch (high or low) were presented at a constant location (left or right). Consistent with previous ERP results, selective attention to either sound feature produced a negative difference (Nd) between ERPs to attended and unattended sounds. In addition, ERPs showed a more posterior scalp distribution for the location-related Nd than for the pitch-related Nd, suggesting partially different generators for these Nds. The ERF source analyses found no source distribution differences between the pitch-related Ndm (the magnetic counterpart of the Nd) and location-related Ndm in the superior temporal cortex (STC), where the main sources of the Ndm effects are thought to be located. Thus, the ERP scalp distribution differences between the location-related and pitch-related Nd effects may have been caused by activity of areas outside the STC, perhaps in the inferior parietal regions.

  19. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  20. Locating and classification of structure-borne sound occurrence using wavelet transformation

    International Nuclear Information System (INIS)

    Winterstein, Martin; Thurnreiter, Martina

    2011-01-01

    For the surveillance of nuclear facilities with respect to detached or loose parts within the pressure boundary, structure-borne sound detector systems are used. The impact of a loose part on the wall transfers energy to the wall, which is measured as a so-called singular sound event. The run-time differences of the sound signals allow a rough localization of the loose part. The authors performed a finite-element-based simulation of structure-borne sound measurements using real geometries. New findings on sound wave propagation, signal analysis and processing, neural networks and hidden Markov models were considered. Using the wavelet transformation, it is possible to improve the localization of structure-borne sound events.
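
    The run-time-difference localization mentioned above can be sketched as a cross-correlation estimate of the arrival-time difference between two sensors. The one-dimensional synthetic example below (sample rate, wave speed and signals are illustrative) places an impact relative to the midpoint of a sensor pair:

```python
import numpy as np

def locate_midpoint_offset(sig_a, sig_b, sr, c):
    """Distance of an impact from the midpoint between two sensors,
    estimated from the cross-correlation peak of their structure-borne
    sound signals. Positive result: the impact is on sensor B's side
    (the signal reaches B first)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)  # samples by which A lags B
    dt = lag / sr                             # run-time difference in s
    return 0.5 * c * dt

sr, c = 100_000, 3000.0           # Hz; rough structure-borne wave speed, m/s
b = np.zeros(1024); b[100] = 1.0  # impulse as seen at sensor B
a = np.roll(b, 40)                # same impulse, 40 samples later at A
offset = locate_midpoint_offset(a, b, sr, c)
```

    Here the 0.4 ms run-time difference corresponds to a 1.2 m path difference, i.e. an impact 0.6 m from the midpoint on sensor B's side; real systems refine such estimates with wavelet-domain onset detection, as the abstract describes.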

  1. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  2. On the role of sound in the strong Langmuir turbulence

    International Nuclear Information System (INIS)

    Malkin, V.M.

    1989-01-01

    The main directions in refining the theory of strong Langmuir turbulence that arise from the necessity of accounting for sound waves in plasma are presented. In particular, the effect of conversion of short-wave modulations into Langmuir waves induced by sound waves is briefly described. 8 refs

  3. Human Syngamosis as an Uncommon Cause of Chronic Cough

    Directory of Open Access Journals (Sweden)

    Pulcherio, Janaína Oliveira Bentivi

    2013-09-01

    Full Text Available Introduction: Chronic cough may represent a diagnostic challenge. Chronic parasitism of the upper airways is an unusual cause. Objective: To describe a case of human syngamosis as an uncommon cause of dry cough. Case Report: An endoscopic exam performed in a woman suffering from chronic cough revealed a Y-shaped worm in the larynx, identified as Syngamus laryngeus. Discussion: This nematode parasitizes the upper respiratory tract of many animals, including humans. The diagnosis is made by examination of the worm expelled by cough or by endoscopy. Endoscopic exam is easy to perform and is essential in diagnosing causes of chronic cough, even uncommon entities. Removal is the only effective treatment.

  4. Why 'piss' is ruder than 'pee'? The role of sound in affective meaning making.

    Directory of Open Access Journals (Sweden)

    Arash Aryani

    Full Text Available Most language users agree that some words sound harsh (e.g. grotesque) whereas others sound soft and pleasing (e.g. lagoon). While this prominent feature of human language has always been creatively deployed in art and poetry, it is still largely unknown whether the sound of a word in itself makes any contribution to the word's meaning as perceived and interpreted by the listener. In a large-scale lexicon analysis, we focused on the affective substrates of words' meaning (i.e. affective meaning) and words' sound (i.e. affective sound); both being measured on a two-dimensional space of valence (ranging from pleasant to unpleasant) and arousal (ranging from calm to excited). We tested the hypothesis that the sound of a word possesses affective iconic characteristics that can implicitly influence listeners when evaluating the affective meaning of that word. The results show that a significant portion of the variance in affective meaning ratings of printed words depends on a number of spectral and temporal acoustic features extracted from these words after converting them to their spoken form (study1). In order to test the affective nature of this effect, we independently assessed the affective sound of these words using two different methods: through direct rating (study2a), and through acoustic models that we implemented based on pseudoword materials (study2b). In line with our hypothesis, the estimated contribution of words' sound to ratings of words' affective meaning was indeed associated with the affective sound of these words; with a stronger effect for arousal than for valence. Further analyses revealed crucial phonetic features potentially causing the effect of sound on meaning: For instance, words with short vowels, voiceless consonants, and hissing sibilants (as in 'piss') feel more arousing and negative. Our findings suggest that the process of meaning making is not solely determined by arbitrary mappings between formal aspects of words and

  5. Why 'piss' is ruder than 'pee'? The role of sound in affective meaning making.

    Science.gov (United States)

    Aryani, Arash; Conrad, Markus; Schmidtke, David; Jacobs, Arthur

    2018-01-01

    Most language users agree that some words sound harsh (e.g. grotesque) whereas others sound soft and pleasing (e.g. lagoon). While this prominent feature of human language has always been creatively deployed in art and poetry, it is still largely unknown whether the sound of a word in itself makes any contribution to the word's meaning as perceived and interpreted by the listener. In a large-scale lexicon analysis, we focused on the affective substrates of words' meaning (i.e. affective meaning) and words' sound (i.e. affective sound); both being measured on a two-dimensional space of valence (ranging from pleasant to unpleasant) and arousal (ranging from calm to excited). We tested the hypothesis that the sound of a word possesses affective iconic characteristics that can implicitly influence listeners when evaluating the affective meaning of that word. The results show that a significant portion of the variance in affective meaning ratings of printed words depends on a number of spectral and temporal acoustic features extracted from these words after converting them to their spoken form (study1). In order to test the affective nature of this effect, we independently assessed the affective sound of these words using two different methods: through direct rating (study2a), and through acoustic models that we implemented based on pseudoword materials (study2b). In line with our hypothesis, the estimated contribution of words' sound to ratings of words' affective meaning was indeed associated with the affective sound of these words; with a stronger effect for arousal than for valence. Further analyses revealed crucial phonetic features potentially causing the effect of sound on meaning: For instance, words with short vowels, voiceless consonants, and hissing sibilants (as in 'piss') feel more arousing and negative. Our findings suggest that the process of meaning making is not solely determined by arbitrary mappings between formal aspects of words and concepts they

  6. The Harley effect : Internal and external factors that facilitate positive experiences with product sounds

    NARCIS (Netherlands)

    Ozcan Vieira, E.

    2014-01-01

    Everyday activities are laden with emotional experiences involving sound. Our interactions with products (shavers, hairdryers, electric drills) often cause sounds that are typically unpleasant to the ear. Yet, we may get excited with the sound of an accelerating Harley Davidson because the rumbling

  7. LHC? Of course we’ve heard of the LHC!

    CERN Multimedia

    2009-01-01

    Well, more or less. After its first outing in Meyrin (see last Bulletin issue), our street poll hits the streets of Divonne-les-Bains and the corridors of the University of Geneva. While many have heard of the LHC, the raison d’être of this "scientific whatsit" often remains a mystery. On first questioning, the "man-in-the-street" always pleads ignorance. "Lausanne Hockey Club?" The acronym LHC is not yet imprinted on people’s minds. "Erm, Left-Handed thingamajig?" But as soon as we mention the word "CERN", the accelerator pops straight into people’s minds. Variously referred to as "the circle" or "the ring", it makes you wonder whether people would have been so aware of the LHC if it had been shaped like a square. Size is another thing people remember: "It’s the world’s biggest. Up to now…" As for its purpose, well that’s another kettle of fish...

  8. Human-specific HERV-K insertion causes genomic variations in the human genome.

    Directory of Open Access Journals (Sweden)

    Wonseok Shin

    Full Text Available Human endogenous retrovirus (HERV) sequences account for about 8% of the human genome. Through comparative genomics and literature mining, we identified a total of 29 human-specific HERV-K insertions. We characterized them focusing on their structure and flanking sequence. The results showed that four of the human-specific HERV-K insertions deleted human genomic sequences via non-classical insertion mechanisms. Interestingly, two of the human-specific HERV-K insertion loci contained two HERV-K internals and three LTR elements, a pattern which could be explained by LTR-LTR ectopic recombination or template switching. In addition, we conducted a polymorphic test and observed that twelve out of the 29 elements are polymorphic in the human population. In conclusion, human-specific HERV-K elements have inserted into the human genome since the divergence of human and chimpanzee, causing human genomic changes. Thus, we believe that human-specific HERV-K activity has contributed to the genomic divergence between humans and chimpanzees, as well as within the human population.

  9. Modulation of the sound pressure level by the treatment of polymer diaphragms through ion implantation method

    International Nuclear Information System (INIS)

    Yeo, Sunmog; Park, Jaewon; Lee, Hojae

    2010-01-01

    We present two different surface modification treatments, ion implantation and ion beam mixing, and show that the surface modifications caused by these treatments are useful tools to modulate the sound pressure level. Ion implantation on various polymer diaphragms causes an increase in the resonant frequency, so that the sound pressure level is lowered at low frequencies. On the contrary, a Cu or Fe2O3 coating applied by an ion beam mixing method causes a decrease in the resonant frequency, resulting in a high sound pressure level at low frequencies. We discuss the physical reasons for the change in the sound pressure level due to the ion-implantation methods.

  10. Bubbles that Change the Speed of Sound

    Science.gov (United States)

    Planinšič, Gorazd; Etkina, Eugenia

    2012-11-01

    The influence of bubbles on sound has long attracted the attention of physicists. In his 1920 book Sir William Bragg described sound absorption caused by foam in a glass of beer tapped by a spoon. Frank S. Crawford described and analyzed the change in the pitch of sound in a similar experiment and named the phenomenon the "hot chocolate effect."2 In this paper we describe a simple and robust experiment that allows an easy audio and visual demonstration of the same effect (unfortunately without the chocolate) and offers several possibilities for student investigations. In addition to the demonstration of the above effect, the experiments described below provide an excellent opportunity for students to devise and test explanations with simple equipment.
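
    The pitch drop in these experiments has a classical first-order explanation in Wood's equation: a small gas fraction barely changes the mixture's density but greatly increases its compressibility, so the sound speed collapses below that of either pure medium. A sketch, using textbook property values rather than the authors' measurements:

```python
import math

def wood_speed(phi, rho_l=1000.0, c_l=1480.0, rho_g=1.2, c_g=343.0):
    """Wood's equation: sound speed of a bubbly liquid from the
    volume-averaged density and compressibility of both phases.
    phi is the gas volume fraction."""
    k_l = rho_l * c_l ** 2   # bulk moduli K = rho * c^2
    k_g = rho_g * c_g ** 2
    rho = (1 - phi) * rho_l + phi * rho_g
    k = 1.0 / ((1 - phi) / k_l + phi / k_g)
    return math.sqrt(k / rho)

c_water = wood_speed(0.0)    # 1480 m/s: pure water
c_bubbly = wood_speed(0.01)  # ~119 m/s: below both air and water
```

    This frequency-independent estimate ignores bubble resonance, but it captures why tapping a glass of freshly stirred hot chocolate yields such a dramatically lowered pitch that rises as the bubbles escape.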

  11. Whistleblowing Need not Occur if Internal Voices Are Heard: From Deaf Effect to Hearer Courage

    Science.gov (United States)

    Cleary, Sonja R.; Doyle, Kerrie E.

    2016-01-01

    Whistleblowing by health professionals is an infrequent and extraordinary event and need not occur if internal voices are heard. Mannion and Davies’ editorial on "Cultures of Silence and Cultures of Voice: The Role of Whistleblowing in Healthcare Organisations" asks whether whistleblowing ameliorates or exacerbates the ‘deaf effect’ prevalent in healthcare organisations. This commentary argues that the focus should remain on internal processes and hearer courage. PMID:26673652

  12. A massive haemothorax as an unusual complication of infective endocarditis caused by Streptococcus sanguinis.

    Science.gov (United States)

    Kim, Kyoung Jin; Lee, Kang Won; Choi, Ju Hee; Sohn, Jang Wook; Kim, Min Ja; Yoon, Young Kyung

    2016-08-01

    Infective endocarditis involving the tricuspid valve is an uncommon condition, and a consequent haemothorax associated with pulmonary embolism is extremely rare. Particularly, there are no guidelines for the management of this complication. We describe a rare case of pulmonary embolism and infarction followed by a haemothorax due to infective endocarditis of the tricuspid valve caused by Streptococcus sanguinis. A 25-year-old man with a ventricular septal defect (VSD) presented with fever. On physical examination, his body temperature was 38.8 °C, and a grade III holosystolic murmur was heard. A chest X-ray did not reveal any specific findings. A transoesophageal echocardiogram showed a perimembranous VSD and echogenic material attached to the tricuspid valve. All blood samples drawn from three different sites yielded growth of pan-susceptible S. sanguinis in culture bottles. On day 12 of hospitalization, the patient complained of pleuritic chest pain without fever. Physical examination revealed reduced breathing sounds and dullness in the lower left thorax. On his chest computed tomography scan, pleural effusion with focal infarction and pulmonary embolism were noted on the left lower lung. Thoracentesis indicated the presence of a haemothorax. Our case was successfully treated using antibiotic therapy alone with adjunctive chest tube insertion, rather than with anticoagulation therapy for pulmonary embolism or cardiac surgery. When treating infective endocarditis caused by S. sanguinis, clinicians should include haemothorax in the differential diagnosis of patients complaining of sudden chest pain.

  13. Social and environmental sustainability in large-scale coastal zones: Taking an issue-based approach to the implementation of the Prince William Sound sustainable human use framework

    Science.gov (United States)

    Dale J. Blahna; Aaron Poe; Courtney Brown; Clare M. Ryan; H. Randy Gimblett

    2017-01-01

    Following the grounding of the Exxon Valdez in 1989, a sustainable human use framework (human use framework) for Prince William Sound (PWS), AK was developed by the Chugach National Forest after concerns emerged about the social and environmental impacts of expanding human use due to cleanup activities and increased recreation visitation. A practical, issue-based...

  14. Estimating the probability that the Taser directly causes human ventricular fibrillation.

    Science.gov (United States)

    Sun, H; Haemmerich, D; Rahko, P S; Webster, J G

    2010-04-01

    This paper describes the first methodology and results for estimating the order of probability for Tasers directly causing human ventricular fibrillation (VF). The probability of an X26 Taser causing human VF was estimated using: (1) current density near the human heart estimated by using 3D finite-element (FE) models; (2) prior data of the maximum dart-to-heart distances that caused VF in pigs; (3) minimum skin-to-heart distances measured in erect humans by echocardiography; and (4) dart landing distribution estimated from police reports. The estimated mean probability of human VF was 0.001 for data from a pig having a chest wall resected to the ribs and 0.000006 for data from a pig with no resection when inserting a blunt probe. The VF probability for a given dart location decreased with the dart-to-heart horizontal distance (radius) on the skin surface.

  15. Decoding the neural signatures of emotions expressed through sound.

    Science.gov (United States)

    Sachs, Matthew E; Habibi, Assal; Damasio, Antonio; Kaplan, Jonas T

    2018-03-01

    Effective social functioning relies in part on the ability to identify emotions from auditory stimuli and respond appropriately. Previous studies have uncovered brain regions engaged by the affective information conveyed by sound. But some of the acoustical properties of sounds that express certain emotions vary remarkably with the instrument used to produce them, for example the human voice or a violin. Do these brain regions respond in the same way to different emotions regardless of the sound source? To address this question, we had participants (N = 38, 20 females) listen to brief audio excerpts produced by the violin, clarinet, and human voice, each conveying one of three target emotions-happiness, sadness, and fear-while brain activity was measured with fMRI. We used multivoxel pattern analysis to test whether emotion-specific neural responses to the voice could predict emotion-specific neural responses to musical instruments and vice-versa. A whole-brain searchlight analysis revealed that patterns of activity within the primary and secondary auditory cortex, posterior insula, and parietal operculum were predictive of the affective content of sound both within and across instruments. Furthermore, classification accuracy within the anterior insula was correlated with behavioral measures of empathy. The findings suggest that these brain regions carry emotion-specific patterns that generalize across sounds with different acoustical properties. Also, individuals with greater empathic ability have more distinct neural patterns related to perceiving emotions. These results extend previous knowledge regarding how the human brain extracts emotional meaning from auditory stimuli and enables us to understand and connect with others effectively. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time domain solution of the KZK equation.

  17. Have You Heard of Schistosomiasis? Knowledge, Attitudes and Practices in Nampula Province, Mozambique.

    Science.gov (United States)

    Rassi, Christian; Kajungu, Dan; Martin, Sandrine; Arroz, Jorge; Tallant, Jamie; Zegers de Beyl, Celine; Counihan, Helen; Newell, James N; Phillips, Anna; Whitton, Jane; Muloliwa, Artur Manuel; Graham, Kirstie

    2016-03-01

    Schistosomiasis is a parasitic disease which affects almost 300 million people worldwide each year. It is highly endemic in Mozambique. Prevention and control of schistosomiasis relies mainly on mass drug administration (MDA), as well as adoption of basic sanitation practices. Individual and community perceptions of schistosomiasis are likely to have a significant effect on prevention and control efforts. In order to establish a baseline to evaluate a community engagement intervention with a focus on schistosomiasis, a survey to determine knowledge, attitudes and practices relating to the disease was conducted. A representative cross-sectional household survey was carried out in four districts of Nampula province, Mozambique. Interviews were conducted in a total of 791 households, using a structured questionnaire. While awareness of schistosomiasis was high (91%), correct knowledge of how it is acquired (18%), transmitted (26%) and prevented (13%) was low among those who had heard of the disease. Misconceptions, such as the belief that schistosomiasis is transmitted through sexual contact (27%), were common. Only about a third of those who were aware of the disease stated that they practiced a protective behaviour and only a minority of those (39%) reported an effective behaviour. Despite several rounds of MDA for schistosomiasis in the recent past, only a small minority of households with children reported that at least one of them had received a drug to treat the disease (9%). Poor knowledge of the causes of schistosomiasis and how to prevent it, coupled with persisting misconceptions, continue to pose barriers to effective disease prevention and control. To achieve high levels of uptake of MDA and adoption of protective behaviours, it will be essential to engage individuals and communities, improving their understanding of the causes and symptoms of schistosomiasis, recommended prevention mechanisms and the rationale behind MDA.

  18. Analysis of the acoustic sound in MRI

    Energy Technology Data Exchange (ETDEWEB)

    Wada, Tetsuro; Hara, Akira; Kusakari, Jun; Yoshioka, Hiroshi; Niitsu, Mamoru; Itai, Yuji [Tsukuba Univ., Ibaraki (Japan). Inst. of Clinical Medicine; Ase, Yuji

    1999-04-01

The noise level and power spectra of the acoustic noise to which subjects are exposed during Magnetic Resonance Imaging (MRI) examinations with an MRI scanner (Philips Gyroscan 1.5 T) were measured at the position of the human auricle. The overall noise levels on T1-weighted and T2-weighted images with Spin Echo were 105 dB and 98 dB, respectively. The overall noise level on T2-weighted images with Turbo Spin Echo was 110 dB. Fourier analysis revealed energy peaks ranging from 225 to 325 Hz and a steep high-frequency cutoff for each pulse sequence. The MRI noise was not likely to cause a permanent threshold shift. However, because of the inter-subject variation in susceptibility to acoustic trauma, and to allay patient anxiety, ear protectors were recommended for all patients during MRI examinations. (author)
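The spectral measurement described above follows a standard power-spectrum analysis. A minimal sketch, using a synthetic 300 Hz tone in noise as a stand-in for an MRI noise recording (the sampling rate and signal are assumptions, not the study's data):

```python
import numpy as np

fs = 8000          # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)

# Synthetic stand-in for an MRI noise recording: a dominant 300 Hz
# component (within the 225-325 Hz band reported above) plus
# broadband noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 300 * t) + 0.1 * rng.standard_normal(t.size)

# Power spectrum via the real FFT, as in the Fourier analysis above.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

peak_hz = freqs[np.argmax(spectrum)]
print(round(peak_hz))  # dominant component near 300 Hz
```

With a 1 s window the FFT bins are 1 Hz apart, so the peak lands squarely in the 225-325 Hz band the study reports.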

  19. Metagenomic profiling of microbial composition and antibiotic resistance determinants in Puget Sound.

    Science.gov (United States)

    Port, Jesse A; Wallace, James C; Griffith, William C; Faustman, Elaine M

    2012-01-01

    Human-health relevant impacts on marine ecosystems are increasing on both spatial and temporal scales. Traditional indicators for environmental health monitoring and microbial risk assessment have relied primarily on single species analyses and have provided only limited spatial and temporal information. More high-throughput, broad-scale approaches to evaluate these impacts are therefore needed to provide a platform for informing public health. This study uses shotgun metagenomics to survey the taxonomic composition and antibiotic resistance determinant content of surface water bacterial communities in the Puget Sound estuary. Metagenomic DNA was collected at six sites in Puget Sound in addition to one wastewater treatment plant (WWTP) that discharges into the Sound and pyrosequenced. A total of ~550 Mbp (1.4 million reads) were obtained, 22 Mbp of which could be assembled into contigs. While the taxonomic and resistance determinant profiles across the open Sound samples were similar, unique signatures were identified when comparing these profiles across the open Sound, a nearshore marina and WWTP effluent. The open Sound was dominated by α-Proteobacteria (in particular Rhodobacterales sp.), γ-Proteobacteria and Bacteroidetes while the marina and effluent had increased abundances of Actinobacteria, β-Proteobacteria and Firmicutes. There was a significant increase in the antibiotic resistance gene signal from the open Sound to marina to WWTP effluent, suggestive of a potential link to human impacts. Mobile genetic elements associated with environmental and pathogenic bacteria were also differentially abundant across the samples. This study is the first comparative metagenomic survey of Puget Sound and provides baseline data for further assessments of community composition and antibiotic resistance determinants in the environment using next generation sequencing technologies. In addition, these genomic signals of potential human impact can be used to guide initial

  20. Metagenomic profiling of microbial composition and antibiotic resistance determinants in Puget Sound.

    Directory of Open Access Journals (Sweden)

    Jesse A Port

Full Text Available Human-health relevant impacts on marine ecosystems are increasing on both spatial and temporal scales. Traditional indicators for environmental health monitoring and microbial risk assessment have relied primarily on single species analyses and have provided only limited spatial and temporal information. More high-throughput, broad-scale approaches to evaluate these impacts are therefore needed to provide a platform for informing public health. This study uses shotgun metagenomics to survey the taxonomic composition and antibiotic resistance determinant content of surface water bacterial communities in the Puget Sound estuary. Metagenomic DNA was collected at six sites in Puget Sound in addition to one wastewater treatment plant (WWTP) that discharges into the Sound and pyrosequenced. A total of ~550 Mbp (1.4 million reads) were obtained, 22 Mbp of which could be assembled into contigs. While the taxonomic and resistance determinant profiles across the open Sound samples were similar, unique signatures were identified when comparing these profiles across the open Sound, a nearshore marina and WWTP effluent. The open Sound was dominated by α-Proteobacteria (in particular Rhodobacterales sp.), γ-Proteobacteria and Bacteroidetes while the marina and effluent had increased abundances of Actinobacteria, β-Proteobacteria and Firmicutes. There was a significant increase in the antibiotic resistance gene signal from the open Sound to marina to WWTP effluent, suggestive of a potential link to human impacts. Mobile genetic elements associated with environmental and pathogenic bacteria were also differentially abundant across the samples. This study is the first comparative metagenomic survey of Puget Sound and provides baseline data for further assessments of community composition and antibiotic resistance determinants in the environment using next generation sequencing technologies. In addition, these genomic signals of potential human impact can be used

  1. Effects of sounds of locomotion on speech perception

    Directory of Open Access Journals (Sweden)

    Matz Larsson

    2015-01-01

Full Text Available Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in pace or walking out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal (the "just follow conversation" or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.
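The reported 2.7 dB median improvement can be read as simple signal-to-noise arithmetic at the JFC threshold. A minimal sketch using the median thresholds quoted above (the 50 dBA masker level is just one of the tested conditions):

```python
# The JFC threshold is the speech level needed to just follow
# conversation against a fixed masker level.
masker_dba = 50.0  # one of the tested masker levels

jfc_sync = 38.5    # median JFC threshold, synchronized walking (dBA)
jfc_unsync = 41.2  # median JFC threshold, unsynchronized walking (dBA)

# SNR at threshold = speech level - masker level (in dB).
snr_sync = jfc_sync - masker_dba
snr_unsync = jfc_unsync - masker_dba

# A lower threshold SNR means speech remains intelligible at a worse
# signal-to-noise ratio, i.e., less masking by synchronized footsteps.
improvement = snr_unsync - snr_sync
print(round(improvement, 1))  # 2.7 dB median SNR improvement
```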

  2. Hear where we are sound, ecology, and sense of place

    CERN Document Server

    Stocker, Michael

    2013-01-01

    Throughout history, hearing and sound perception have been typically framed in the context of how sound conveys information and how that information influences the listener. Hear Where We Are inverts this premise and examines how humans and other hearing animals use sound to establish acoustical relationships with their surroundings. This simple inversion reveals a panoply of possibilities by which we can re-evaluate how hearing animals use, produce, and perceive sound. Nuance in vocalizations become signals of enticement or boundary setting; silence becomes a field ripe in auditory possibilities; predator/prey relationships are infused with acoustic deception, and sounds that have been considered territorial cues become the fabric of cooperative acoustical communities. This inversion also expands the context of sound perception into a larger perspective that centers on biological adaptation within acoustic habitats. Here, the rapid synchronized flight patterns of flocking birds and the tight maneuvering of s...

  3. Knockdown of Dyslexia-Gene Dcdc2 Interferes with Speech Sound Discrimination in Continuous Streams.

    Science.gov (United States)

    Centanni, Tracy Michelle; Booker, Anne B; Chen, Fuyi; Sloan, Andrew M; Carraway, Ryan S; Rennaker, Robert L; LoTurco, Joseph J; Kilgard, Michael P

    2016-04-27

Dyslexia is the most common developmental language disorder and is marked by deficits in reading and phonological awareness. One theory of dyslexia suggests that the phonological awareness deficit is due to abnormal auditory processing of speech sounds. Variants in DCDC2 and several other neural migration genes are associated with dyslexia and may contribute to auditory processing deficits. In the current study, we tested the hypothesis that RNAi suppression of Dcdc2 in rats causes abnormal cortical responses to sound and impaired speech sound discrimination. Rats were subjected in utero to RNA interference targeting of the gene Dcdc2 or a scrambled sequence. Primary auditory cortex (A1) responses were acquired from 11 rats (5 with Dcdc2 RNAi; DC-) before any behavioral training. A separate group of 8 rats (3 DC-) were trained on a variety of speech sound discrimination tasks, and auditory cortex responses were acquired following training. Dcdc2 RNAi nearly eliminated the ability of rats to identify specific speech sounds from a continuous train of speech sounds but did not impair performance during discrimination of isolated speech sounds. The neural responses to speech sounds in A1 were not degraded as a function of presentation rate before training. These results suggest that A1 is not directly involved in the impaired speech discrimination caused by Dcdc2 RNAi. This result contrasts with earlier results using Kiaa0319 RNAi and suggests that different dyslexia genes may cause different deficits in the speech processing circuitry, which may explain differential responses to therapy. Although dyslexia is diagnosed through reading difficulty, there is a great deal of variation in the phenotypes of these individuals. The underlying neural and genetic mechanisms causing these differences are still widely debated. In the current study, we demonstrate that suppression of a candidate dyslexia gene causes deficits on tasks of rapid stimulus processing

  4. Sound for Health

    CERN Multimedia

    CERN. Geneva

    2016-01-01

From astronomy to biomedical sciences: music and sound as tools for scientific investigation. Music and science are probably two of the most intrinsically linked disciplines in the spectrum of human knowledge. Science and technology have revolutionised the way artists work, interact, and create. The impact of innovative materials, new communication media, more powerful computers, and faster networks on the creative process is evident: we can all become artists in the digital era. What is less known is that the arts, and music in particular, are having a profound impact on the way scientists operate and think. From the early experiments by Kepler to modern data sonification applications in medicine, sound and music are playing an increasingly crucial role in supporting science and driving innovation. In this talk, Dr. Domenico Vicinanza will highlight the complementarity and the natural synergy between music and science, with specific reference to biomedical sciences. Dr. Vicinanza will take t...

  5. Long-Lasting Sound-Evoked Afterdischarge in the Auditory Midbrain.

    Science.gov (United States)

    Ono, Munenori; Bishop, Deborah C; Oliver, Douglas L

    2016-02-12

    Different forms of plasticity are known to play a critical role in the processing of information about sound. Here, we report a novel neural plastic response in the inferior colliculus, an auditory center in the midbrain of the auditory pathway. A vigorous, long-lasting sound-evoked afterdischarge (LSA) is seen in a subpopulation of both glutamatergic and GABAergic neurons in the central nucleus of the inferior colliculus of normal hearing mice. These neurons were identified with single unit recordings and optogenetics in vivo. The LSA can continue for up to several minutes after the offset of the sound. LSA is induced by long-lasting, or repetitive short-duration, innocuous sounds. Neurons with LSA showed less adaptation than the neurons without LSA. The mechanisms that cause this neural behavior are unknown but may be a function of intrinsic mechanisms or the microcircuitry of the inferior colliculus. Since LSA produces long-lasting firing in the absence of sound, it may be relevant to temporary or chronic tinnitus or to some other aftereffect of long-duration sound.

  6. Effect of thermal-treatment sequence on sound absorbing and mechanical properties of porous sound-absorbing/thermal-insulating composites

    Directory of Open Access Journals (Sweden)

    Huang Chen-Hung

    2016-01-01

Full Text Available Due to recent rapid commercial and industrial development, mechanical equipment has proliferated in factories, and its operation generates noise that disturbs people at home. Beyond factory noise, noise from neighbourhoods, transportation and jobsite construction also degrades quality of life. This study addresses the preparation technique and property evaluation of porous sound-absorbing/thermal-insulating composites. Hollow three-dimensional crimp PET fibers blended with low-melting PET fibers were fabricated into hollow PET/low-melting PET nonwovens via opening, blending, carding, lapping and needle-bonding processes. The nonwovens were then laminated into sound-absorbing/thermal-insulating composites, varying the sequence of needle-bonding and thermal treatment. The optimal thermal-treatment sequence was determined from tensile strength, tearing strength, sound absorption coefficient and thermal conductivity coefficient tests of the porous composites.

  7. Sound classification of dwellings – A diversity of national schemes in Europe

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Sound classification schemes for dwellings exist in ten countries in Europe, typically prepared and published as national standards. The schemes define quality classes intended to reflect different levels of acoustical comfort. The main criteria concern airborne and impact sound insulation between...... dwellings, facade sound insulation and installation noise. This paper presents the sound classification schemes in Europe and compares the class criteria for sound insulation between dwellings. The schemes have been implemented and revised gradually since the early 1990s. However, due to lack...... constructions fulfilling different classes. The current variety of descriptors and classes also causes trade barriers. Thus, there is a need to harmonize characteristics of the schemes, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing...

  8. Regional circulation around Heard and McDonald Islands and through the Fawn Trough, central Kerguelen Plateau

    Science.gov (United States)

    van Wijk, Esmee M.; Rintoul, Stephen R.; Ronai, Belinda M.; Williams, Guy D.

    2010-05-01

    The fine-scale circulation around the Heard and McDonald Islands and through the Fawn Trough, Kerguelen Plateau, is described using data from three high-resolution CTD sections, Argo floats and satellite maps of chlorophyll a, sea surface temperature (SST) and absolute sea surface height (SSH). We confirm that the Polar Front (PF) is split into two branches over the Kerguelen Plateau, with the NPF crossing the north-eastern limits of our survey carrying 25 Sv to the southeast. The SPF was associated with a strong eastward-flowing jet carrying 12 Sv of baroclinic transport through the deepest part of Fawn Trough (relative to the bottom). As the section was terminated midway through the trough this estimate is very likely to be a lower bound for the total transport. We demonstrate that the SPF contributes to the Fawn Trough Current identified by previous studies. After exiting the Fawn Trough, the SPF crossed Chun Spur and continued as a strong north-westward flowing jet along the eastern flank of the Kerguelen Plateau before turning offshore between 50°S and 51.5°S. Measured bottom water temperatures suggest a deep water connection between the northern and southern parts of the eastern Kerguelen Plateau indicating that the deep western boundary current continues at least as far north as 50.5°S. Analysis of satellite altimetry derived SSH streamlines demonstrates a southward shift of both the northern and southern branches of the Polar Front from 1994 to 2004. In the direct vicinity of the Heard and McDonald islands, cool waters of southern origin flow along the Heard Island slope and through the Eastern Trough bringing cold Winter Water (WW) onto the plateau. Complex topography funnels flow through canyons, deepens the mixed layer and increases productivity, resulting in this area being the preferred foraging region for a number of satellite-tracked land-based predators.

  9. Using a Sound Field to Reduce the Risks of Bird-Strike: An Experimental Approach.

    Science.gov (United States)

    Swaddle, John P; Ingrassia, Nicole M

    2017-07-01

Each year, billions of birds collide with large human-made structures, such as buildings, towers, and turbines, causing substantial mortality. Such bird-strike, which is projected to increase, poses risks to populations of birds and causes significant economic costs to many industries. Mitigation technologies have been deployed in an attempt to reduce bird-strike, but have been met with limited success. One reason for bird-strike may be that birds fail to pay adequate attention to the space directly in front of them when in level, cruising flight. A warning signal projected in front of a potential strike surface might attract visual attention and reduce the risks of collision. We tested this idea in captive zebra finches (Taeniopygia guttata) that were trained to fly down a long corridor and through an open wooden frame. Once birds were trained, they each experienced three treatments at unpredictable times and in a randomized order: a loud sound field projected immediately in front of the open wooden frame; a mist net (i.e., a benign strike surface) placed inside the wooden frame; and both the loud sound and the mist net. We found that birds slowed their flight approximately 20% more when the sound field was projected in front of the mist net compared with when the mist net was presented alone. This reduction in velocity would equate to a substantial reduction in the force of any collision. In addition to slowing down, birds increased the angle of attack of their body and tail, potentially allowing for more maneuverable flight. Concomitantly, the only cases where birds avoided the mist net occurred in the sound-augmented treatment. Interestingly, the sound field by itself did not demonstrably alter flight. Although our study was conducted in a limited setting, the alterations of flight associated with our sound field have implications for reducing bird-strike in nature and we encourage researchers to test our ideas in field trials. © The Author 2017. Published by
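The link between a ~20% velocity reduction and a "substantial reduction in the force of any collision" can be made concrete via kinetic-energy scaling; a small illustrative calculation, not taken from the paper's data analysis:

```python
# Kinetic energy scales with the square of velocity (E = 0.5 * m * v^2),
# so a modest slowdown yields a disproportionate drop in impact energy.
v_reduction = 0.20
energy_ratio = (1.0 - v_reduction) ** 2   # 0.8^2 = 0.64

print(f"{(1.0 - energy_ratio):.0%}")  # ~36% less kinetic energy at impact
```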

  10. Low-cost compact ECG with graphic LCD and phonocardiogram system design.

    Science.gov (United States)

    Kara, Sadik; Kemaloğlu, Semra; Kirbaş, Samil

    2006-06-01

    Till today, many different ECG devices are made in developing countries. In this study, low cost, small size, portable LCD screen ECG device, and phonocardiograph were designed. With designed system, heart sounds that take synchronously with ECG signal are heard as sensitive. Improved system consist three units; Unit 1, ECG circuit, filter and amplifier structure. Unit 2, heart sound acquisition circuit. Unit 3, microcontroller, graphic LCD and ECG signal sending unit to computer. Our system can be used easily in different departments of the hospital, health institution and clinics, village clinic and also in houses because of its small size structure and other benefits. In this way, it is possible that to see ECG signal and hear heart sounds as synchronously and sensitively. In conclusion, heart sounds are heard on the part of both doctor and patient because sounds are given to environment with a tiny speaker. Thus, the patient knows and hears heart sounds him/herself and is acquainted by doctor about healthy condition.

  11. The sounds of a murder

    Science.gov (United States)

    Peppin, Richard J.

    2003-10-01

    Often engineers and lawyers cannot communicate, in spite of repeated attempts. The lawyer has an idea and wants the engineer to prove it in front of a jury. As examples: a quiet, or briefly loud source must be shown to cause hearing damage, or a construction project in a backyard must be shown to be nonannoying. Often it is a no brainer, either way. But the testimony must be given! In this paper, I discuss a sad case. A young woman and her baby daughter were murdered. A witness claimed she heard something in the dead of night. If so, it was further evidence of guilt of the accused. If not, it was evidence of the lack of credibility of the witness and helped show innocence. I present the results of a forensic investigation of a very brutal murder based on acoustics of the victims' screams, the structure housing the murder, and the witness. The results of the investigation attempted to help the case.

  12. Music to My Eyes: Cross-Modal Interactions in the Perception of Emotions in Musical Performance

    Science.gov (United States)

    Vines, Bradley W.; Krumhansl, Carol L.; Wanderley, Marcelo M.; Dalca, Ioana M.; Levitin, Daniel J.

    2011-01-01

    We investigate non-verbal communication through expressive body movement and musical sound, to reveal higher cognitive processes involved in the integration of emotion from multiple sensory modalities. Participants heard, saw, or both heard and saw recordings of a Stravinsky solo clarinet piece, performed with three distinct expressive styles:…

  13. Analysis of failure of voice production by a sound-producing voice prosthesis

    NARCIS (Netherlands)

    van der Torn, M.; van Gogh, C.D.L.; Verdonck-de Leeuw, I M; Festen, J.M.; Mahieu, H.F.

    OBJECTIVE: To analyse the cause of failing voice production by a sound-producing voice prosthesis (SPVP). METHODS: The functioning of a prototype SPVP is described in a female laryngectomee before and after its sound-producing mechanism was impeded by tracheal phlegm. This assessment included:

  14. Open access and the humanities contexts, controversies and the future

    CERN Document Server

    Eve, Martin Paul

    2014-01-01

    If you work in a university, you are almost certain to have heard the term 'open access' in the past couple of years. You may also have heard either that it is the utopian answer to all the problems of research dissemination or perhaps that it marks the beginning of an apocalyptic new era of 'pay-to-say' publishing. In this book, Martin Paul Eve sets out the histories, contexts and controversies for open access, specifically in the humanities. Broaching practical elements alongside economic histories, open licensing, monographs and funder policies, this book is a must-read for both those new to ideas about open-access scholarly communications and those with an already keen interest in the latest developments for the humanities.

  15. Validating a perceptual distraction model in a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

This paper focuses on validating a perceptual distraction model, which aims to predict a user's perceived distraction caused by audio-on-audio interference, e.g., two competing audio sources within the same listening space. Originally, the distraction model was trained with music-on-music stimuli...... using a simple loudspeaker setup, consisting of only two loudspeakers, one for the target sound source and the other for the interfering sound source. Recently, the model was successfully validated in a complex personal sound-zone system with speech-on-music stimuli. A second round of validations was...... conducted by physically altering the sound-zone system and running a set of new listening experiments utilizing two sound zones within the sound-zone system. The model was thus validated using a different sound-zone system with both speech-on-music and music-on-speech stimuli sets. Preliminary results show...

  16. Sound localization under perturbed binaural hearing.

    NARCIS (Netherlands)

    Wanrooij, M.M. van; Opstal, A.J. van

    2007-01-01

    This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband

  17. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound...... that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology....

  18. Testing Cosmology with Cosmic Sound Waves

    CERN Document Server

    Corasaniti, Pier Stefano

    2008-01-01

WMAP observations have accurately determined the position of the first two peaks and dips in the CMB temperature power spectrum. These encode information on the ratio of the distance to the last scattering surface to the sound horizon at decoupling. However pre-recombination processes can contaminate this distance information. In order to assess the amplitude of these effects we use the WMAP data and evaluate the relative differences of the CMB peaks and dips multipoles. We find that the position of the first peak is largely displaced with respect to the expected position of the sound horizon scale at decoupling. In contrast the relative spacings of the higher extrema are statistically consistent with those expected from perfect harmonic oscillations. This provides evidence for a scale dependent phase shift of the CMB oscillations which is caused by gravitational driving forces affecting the propagation of sound waves before recombination. By accounting for these effects we have performed an MCMC likelihoo...

  19. Sonosemantics

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2013-01-01

    The purpose of this positioning paper is to propose a new definitional framework of sound, sonosemantics. The need for this is apparent for a number of reasons. Physiological conditions such as some forms of subjective tinnitus require no sound waves for sound to be heard yet all current mainstream...

  20. Spatial avoidance to experimental increase of intermittent and continuous sound in two captive harbour porpoises.

    Science.gov (United States)

    Kok, Annebelle C M; Engelberts, J Pamela; Kastelein, Ronald A; Helder-Hoek, Lean; Van de Voorde, Shirley; Visser, Fleur; Slabbekoorn, Hans

    2018-02-01

    The continuing rise in underwater sound levels in the oceans leads to disturbance of marine life. It is thought that one of the main impacts of sound exposure is the alteration of foraging behaviour of marine species, for example by deterring animals from a prey location, or by distracting them while they are trying to catch prey. So far, only limited knowledge is available on both mechanisms in the same species. The harbour porpoise (Phocoena phocoena) is a relatively small marine mammal that could quickly suffer fitness consequences from a reduction of foraging success. To investigate effects of anthropogenic sound on their foraging efficiency, we tested whether experimentally elevated sound levels would deter two captive harbour porpoises from a noisy pool into a quiet pool (Experiment 1) and reduce their prey-search performance, measured as prey-search time in the noisy pool (Experiment 2). Furthermore, we tested the influence of the temporal structure and amplitude of the sound on the avoidance response of both animals. Both individuals avoided the pool with elevated sound levels, but they did not show a change in search time for prey when trying to find a fish hidden in one of three cages. The combination of temporal structure and SPL caused variable patterns. When the sound was intermittent, increased SPL caused increased avoidance times. When the sound was continuous, avoidance was equal for all SPLs above a threshold of 100 dB re 1 μPa. Hence, we found no evidence for an effect of sound exposure on search efficiency, but sounds of different temporal patterns did cause spatial avoidance with distinct dose-response patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.
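The sound pressure levels quoted above use the underwater reference pressure of 1 μPa. A minimal sketch of the dB conversion (the helper function is illustrative, not from the study):

```python
import math

P_REF_UPA = 1.0  # underwater reference pressure: 1 micropascal

def spl_db_re_1upa(pressure_upa: float) -> float:
    """Sound pressure level in dB re 1 uPa."""
    return 20.0 * math.log10(pressure_upa / P_REF_UPA)

# The continuous-sound avoidance threshold reported above,
# 100 dB re 1 uPa, corresponds to a pressure of 10^5 uPa (0.1 Pa):
print(spl_db_re_1upa(1e5))  # 100.0
```

Note that underwater acoustics uses 1 μPa as the reference, whereas airborne levels (such as the dBA values in other records here) are referenced to 20 μPa, so the two scales are not directly comparable.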

  1. Sound absorption study on acoustic panel from kapok fiber and egg tray

    Science.gov (United States)

    Kaamin, Masiri; Mahir, Nurul Syazwani Mohd; Kadir, Aslila Abd; Hamid, Nor Baizura; Mokhtar, Mardiha; Ngadiman, Norhayati

    2017-12-01

Noise is sound, especially sound that is loud, unpleasant, or disruptive. Noise levels can be reduced using sound absorption panels. Panels currently on the market use synthetic fibers that can harm consumers' health, which has drawn attention to natural fibers as sound-absorbing materials. This study therefore investigated the potential of a sound absorption panel made from egg trays and kapok fibers. The panel was subjected to an impedance tube test to obtain its sound absorption coefficient (SAC). The results showed good sound absorption at low frequencies from 0 Hz up to 900 Hz, where the maximum absorption coefficient was 0.950, while the maximum absorption at high frequencies was 0.799. The material's noise reduction coefficient (NRC) of 0.57 indicates that it is highly absorbent. In addition, a reverberation room test was carried out to obtain the reverberation time (RT) in seconds. Overall, the panel performed well at low frequencies between 0 Hz and 1500 Hz: in that range, the maximum reverberation time with the panel was 3.784 seconds, compared with 5.798 seconds for an empty room. This study indicates that kapok fiber and egg trays have potential as cheap, environmentally friendly materials for absorbing sound at low frequencies.
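The reported NRC can be sketched with the conventional calculation: the mean of the sound absorption coefficients at 250, 500, 1000 and 2000 Hz, rounded to the nearest 0.05. The coefficients below are illustrative assumptions, not the paper's measurements:

```python
# Illustrative sound absorption coefficients (SAC) at the four
# standard NRC frequencies; these are NOT the paper's data.
sac = {250: 0.45, 500: 0.60, 1000: 0.62, 2000: 0.65}

mean_sac = sum(sac.values()) / len(sac)   # 0.58 for these values
nrc = round(mean_sac / 0.05) * 0.05       # round to nearest 0.05

print(round(nrc, 2))  # 0.6 for these illustrative coefficients
```

An NRC around 0.6 would be consistent with the paper's reported value of 0.57 and with the "very absorbing" characterization.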

  2. Seafloor environments in the Long Island Sound estuarine system

    Science.gov (United States)

    Knebel, H.J.; Signell, R.P.; Rendigs, R. R.; Poppe, L.J.; List, J.H.

    1999-01-01

Four categories of modern seafloor sedimentary environments have been identified and mapped across the large, glaciated, topographically complex Long Island Sound estuary by means of an extensive regional set of sidescan sonographs, bottom samples, and video-camera observations and supplemental marine-geologic and modeled physical-oceanographic data. (1) Environments of erosion or nondeposition contain sediments which range from boulder fields to gravelly coarse-to-medium sands and appear on the sonographs either as patterns with isolated reflections (caused by outcrops of glacial drift and bedrock) or as patterns of strong backscatter (caused by coarse lag deposits). Areas of erosion or nondeposition were found across the rugged seafloor at the eastern entrance of the Sound and atop bathymetric highs and within constricted depressions in other parts of the basin. (2) Environments of bedload transport contain mostly coarse-to-fine sand with only small amounts of mud and are depicted by sonograph patterns of sand ribbons and sand waves. Areas of bedload transport were found primarily in the eastern Sound where bottom currents have sculptured the surface of a Holocene marine delta and are moving these sediments toward the WSW into the estuary. (3) Environments of sediment sorting and reworking comprise variable amounts of fine sand and mud and are characterized either by patterns of moderate backscatter or by patterns with patches of moderate-to-weak backscatter that reflect a combination of erosion and deposition. Areas of sediment sorting and reworking were found around the periphery of the zone of bedload transport in the eastern Sound and along the southern nearshore margin. They also are located atop low knolls, on the flanks of shoal complexes, and within segments of the axial depression in the western Sound. (4) Environments of deposition are blanketed by muds and muddy fine sands that produce patterns of uniformly weak backscatter. Depositional areas occupy

  3. Sex-Biased Sound Symbolism in English-Language First Names

    Science.gov (United States)

    Pitcher, Benjamin J.; Mesoudi, Alex; McElligott, Alan G.

    2013-01-01

    Sexual selection has resulted in sex-based size dimorphism in many mammals, including humans. In Western societies, average to taller stature men and comparatively shorter, slimmer women have higher reproductive success and are typically considered more attractive. This size dimorphism also extends to vocalisations in many species, again including humans, with larger individuals exhibiting lower formant frequencies than smaller individuals. Further, across many languages there are associations between phonemes and the expression of size (e.g. large /a, o/, small /i, e/), consistent with the frequency-size relationship in vocalisations. We suggest that naming preferences are a product of this frequency-size relationship, driving male names to sound larger and female names smaller, through sound symbolism. In a 10-year dataset of the most popular British, Australian and American names we show that male names are significantly more likely to contain larger sounding phonemes (e.g. “Thomas”), while female names are significantly more likely to contain smaller phonemes (e.g. “Emily”). The desire of parents to have comparatively larger, more masculine sons, and smaller, more feminine daughters, and the increased social success that accompanies more sex-stereotyped names, is likely to be driving English-language first names to exploit sound symbolism of size in line with sexual body size dimorphism. PMID:23755148

  4. A Two-Level Sound Classification Platform for Environmental Monitoring

    Directory of Open Access Journals (Sweden)

    Stelios A. Mitilineos

    2018-01-01

    Full Text Available STORM is an ongoing European research project that aims at developing an integrated platform for monitoring, protecting, and managing cultural heritage sites through technical and organizational innovation. Part of the scheduled preventive actions for the protection of cultural heritage is the development of wireless acoustic sensor networks (WASNs) that will be used for assessing the impact of human-generated activities as well as for monitoring potentially hazardous environmental phenomena. Collected sound samples will be forwarded to a central server where they will be automatically classified in a hierarchical manner; anthropogenic and environmental activity will be monitored, and stakeholders will be alerted in the case of potential malevolent behavior or natural phenomena like excess rainfall, fire, gale, high tides, and waves. Herein, we present an integrated platform that includes sound sample denoising using wavelets, feature extraction from sound samples, Gaussian mixture modeling of these features, and a powerful two-layer neural network for automatic classification. We contribute to previous work by extending the proposed classification platform to perform low-level classification too, i.e., classify sounds into further subclasses that include airplane, car, and pistol sounds for the anthropogenic sound class; bird, dog, and snake sounds for the biophysical sound class; and fire, waterfall, and gale for the geophysical sound class. Classification results exhibit outstanding accuracy in both high-level and low-level classification, thus demonstrating the feasibility of the proposed approach.

  5. Inferring Human Activity Recognition with Ambient Sound on Wireless Sensor Nodes.

    Science.gov (United States)

    Salomons, Etto L; Havinga, Paul J M; van Leeuwen, Henk

    2016-09-27

    A wireless sensor network that consists of nodes with a sound sensor can be used to obtain context awareness in home environments. However, the limited processing power of wireless nodes offers a challenge when extracting features from the signal, and subsequently, classifying the source. Although multiple papers can be found on different methods of sound classification, none of these are aimed at limited hardware or take the efficiency of the algorithms into account. In this paper, we compare and evaluate several classification methods on a real sensor platform using different feature types and classifiers, in order to find an approach that results in a good classifier that can run on limited hardware. To be as realistic as possible, we trained our classifiers using sound waves from many different sources. We conclude that despite the fact that the classifiers are often of low quality due to the highly restricted hardware resources, sufficient performance can be achieved when (1) the window length for our classifiers is increased, and (2) if we apply a two-step approach that uses a refined classification after a global classification has been performed.
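
    As a rough illustration of the two-step approach described above (a refined classification applied after a global classification), the control flow can be sketched as follows. The classifier functions, feature names, and thresholds here are hypothetical stand-ins, not the paper's trained models.

```python
# Minimal sketch of a two-step (global, then refined) classifier.
# All classifiers and features below are illustrative assumptions.

def two_step_classify(features, global_clf, refined_clfs):
    """Return (coarse, fine) labels for one feature vector."""
    coarse = global_clf(features)          # step 1: global classification
    fine = refined_clfs[coarse](features)  # step 2: refine within the coarse class
    return coarse, fine

# Toy stand-ins for trained classifiers, keyed on simple features.
global_clf = lambda f: "loud" if f["energy"] > 0.5 else "quiet"
refined = {
    "loud": lambda f: "machine" if f["flatness"] > 0.5 else "voice",
    "quiet": lambda f: "background",
}

print(two_step_classify({"energy": 0.8, "flatness": 0.2}, global_clf, refined))
# → ('loud', 'voice')
```

    The design point is that the cheap global pass narrows the search space, so the refined classifiers can stay small — relevant on the restricted hardware the abstract targets.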

  6. Inferring Human Activity Recognition with Ambient Sound on Wireless Sensor Nodes

    Directory of Open Access Journals (Sweden)

    Etto L. Salomons

    2016-09-01

    Full Text Available A wireless sensor network that consists of nodes with a sound sensor can be used to obtain context awareness in home environments. However, the limited processing power of wireless nodes offers a challenge when extracting features from the signal, and subsequently, classifying the source. Although multiple papers can be found on different methods of sound classification, none of these are aimed at limited hardware or take the efficiency of the algorithms into account. In this paper, we compare and evaluate several classification methods on a real sensor platform using different feature types and classifiers, in order to find an approach that results in a good classifier that can run on limited hardware. To be as realistic as possible, we trained our classifiers using sound waves from many different sources. We conclude that despite the fact that the classifiers are often of low quality due to the highly restricted hardware resources, sufficient performance can be achieved when (1) the window length for our classifiers is increased, and (2) if we apply a two-step approach that uses a refined classification after a global classification has been performed.

  7. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

    The method presented in this thesis combines ultrasound techniques with magnetic-resonance tomography (MRT). In absorbing media, an ultrasonic wave generates a static force in the direction of sound propagation. At sound intensities of a few W/cm² and sound frequencies in the lower MHz range, this force produces a tissue shift in the micrometre range. The tissue shift depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence from Siemens Healthcare AG was modified so that it measures the tissue shift (indirectly), encodes it as grey values, and presents it as a 2D image. From the grey values, the path of the sound beam in the tissue can be visualized, so that sound obstacles (changes in sound impedance) can also be detected. From the recorded MRT images, spatial changes in the tissue parameters sound absorption and elasticity can be detected. This thesis presents measurements that demonstrate the feasibility and future potential of this method, especially for breast-cancer diagnostics. [de

  8. Automatic Bowel Motility Evaluation Technique for Noncontact Sound Recordings

    Directory of Open Access Journals (Sweden)

    Ryunosuke Sato

    2018-06-01

    Full Text Available Information on bowel motility can be obtained via magnetic resonance imaging (MRI) and X-ray imaging. However, these approaches require expensive medical instruments and are unsuitable for frequent monitoring. Bowel sounds (BS) can be conveniently obtained using electronic stethoscopes and have recently been employed for the evaluation of bowel motility. More recently, our group proposed a novel method to evaluate bowel motility on the basis of BS acquired using a noncontact microphone. However, the method required manually detecting BS in the sound recordings, and manual segmentation is inconvenient and time consuming. To address this issue, herein, we propose a new method to automatically evaluate bowel motility for noncontact sound recordings. Using simulations for the sound recordings obtained from 20 human participants, we showed that the proposed method achieves an accuracy of approximately 90% in automatic bowel sound detection when acoustic feature power-normalized cepstral coefficients are used as inputs to artificial neural networks. Furthermore, we showed that bowel motility can be evaluated based on the three acoustic features in the time domain extracted by our method: BS per minute, signal-to-noise ratio, and sound-to-sound interval. The proposed method has the potential to contribute towards the development of noncontact evaluation methods for bowel motility.
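
    The three time-domain features named above can be sketched as follows. The event timestamps and power values are illustrative, and the SNR definition used here (event-to-background power ratio) is an assumption rather than the paper's exact formulation.

```python
import math

# Hedged sketch: given timestamps (seconds) of detected bowel-sound events,
# compute BS per minute, SNR, and mean sound-to-sound interval.

def bowel_motility_features(event_times, duration_s, signal_power, noise_power):
    bs_per_minute = 60.0 * len(event_times) / duration_s
    intervals = [b - a for a, b in zip(event_times, event_times[1:])]
    mean_interval_s = sum(intervals) / len(intervals) if intervals else float("inf")
    snr_db = 10.0 * math.log10(signal_power / noise_power)  # power ratio in dB
    return bs_per_minute, snr_db, mean_interval_s

# Four events in a one-minute recording (illustrative values):
bpm, snr_db, interval = bowel_motility_features(
    [2.0, 5.0, 9.0, 10.0], duration_s=60.0, signal_power=4e-3, noise_power=1e-3)
print(f"{bpm:.1f} BS/min, {snr_db:.1f} dB SNR, {interval:.2f} s mean interval")
# → 4.0 BS/min, 6.0 dB SNR, 2.67 s mean interval
```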

  9. [Prediction model of human-caused fire occurrence in the boreal forest of northern China].

    Science.gov (United States)

    Guo, Fu-tao; Su, Zhang-wen; Wang, Guang-yu; Wang, Qiang; Sun, Long; Yang, Ting-ting

    2015-07-01

    The Chinese boreal forest is an important forest resource in China. However, it has been suffering serious disturbances from forest fires, which were caused equally by natural disasters (e.g., lightning) and human activities. The literature on human-caused fires indicates that climate, topography, vegetation, and human infrastructure are significant factors influencing the occurrence and spread of human-caused fires, but studies of human-caused fires in the boreal forest of northern China remain limited. This paper applied spatial analysis tools in ArcGIS 10.0 and a logistic regression model to investigate the driving factors of human-caused fires. Our data included the geographic coordinates of human-caused fires, climate factors during 1974-2009, topographic information, and a forest map. The results indicated that distance to railway (x1) and average relative humidity (x2) significantly affected the occurrence of human-caused fire in the study area. The logistic model for predicting the fire occurrence probability was formulated as P = 1/[1 + e^-(3.026 - 0.00011x1 - 0.047x2)], with an accuracy rate of 80%. The model was used to predict monthly fire occurrence during the fire season of 2015 based on the HADCM2 future weather data. The prediction results showed that the high risk of human-caused fire occurrence was concentrated in April, May, June, and August, with April and May at higher risk than the other months. According to the spatial distribution of fire occurrence probability, the high fire risk zones were mainly in the west and southwest of Tahe, where the major railways are located.
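
    As a worked illustration, the abstract's logistic model can be evaluated directly. This is a minimal sketch: the coefficients come from the reported formula, but the input units (metres for distance to railway, percent for relative humidity) and the example values are assumptions, not taken from the paper.

```python
import math

def fire_probability(dist_to_railway_m: float, avg_rel_humidity_pct: float) -> float:
    """Logistic model from the abstract:
    P = 1 / (1 + e^-(3.026 - 0.00011*x1 - 0.047*x2))."""
    z = 3.026 - 0.00011 * dist_to_railway_m - 0.047 * avg_rel_humidity_pct
    return 1.0 / (1.0 + math.exp(-z))

# Predicted risk falls with distance from railways and with higher humidity.
p_near_dry = fire_probability(500, 30)     # close to a railway, dry air
p_far_humid = fire_probability(20000, 80)  # far from a railway, humid air
assert p_near_dry > p_far_humid
```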

  10. Trial-to-Trial Carryover in Auditory Short-Term Memory

    Science.gov (United States)

    Visscher, Kristina M.; Kahana, Michael J.; Sekuler, Robert

    2009-01-01

    Using a short-term recognition memory task, the authors evaluated the carryover across trials of 2 types of auditory information: the characteristics of individual study sounds (item information) and the relationships between the study sounds (study set homogeneity). On each trial, subjects heard 2 successive broadband study sounds and then…

  11. Human parechovirus causes encephalitis with white matter injury in Neonates

    NARCIS (Netherlands)

    Verboon-Maciolek, Malgorzata A.; Groenendaal, Floris; Hahn, Cecil D.; Hellmann, Jonathan; van Loon, Anton M.; Boivin, Guy; de Vries, Linda S.

    Objective: To assess the role of human parechoviruses (HPeVs) as a cause of neonatal cerebral infection and to report neuroimaging findings of newborn infants with encephalitis caused by HPeVs. Methods: Clinical presentation, cranial ultrasonography, magnetic resonance imaging (MRI) findings, and

  12. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of laminated structures and handy ...

  13. Numerical Model of the Human Cardiovascular System-Korotkoff Sounds Simulation

    Czech Academy of Sciences Publication Activity Database

    Maršík, František; Převorovská, Světlana; Brož, Z.; Štembera, V.

    Vol.4, č. 2 (2004), s. 193-199 ISSN 1432-9077 R&D Projects: GA ČR GA106/03/1073 Institutional research plan: CEZ:AV0Z2076919 Keywords : cardiovascular system * Korotkoff sounds * numerical simulation Subject RIV: BK - Fluid Dynamics

  14. Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time

    OpenAIRE

    Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation ...

  15. Loss of urban forest canopy and the related effects on soundscape and human directed attention

    Science.gov (United States)

    Laverne, Robert James Paul

    The specific questions addressed in this research are: Will the loss of trees in residential neighborhoods result in a change to the local soundscape? The investigation of this question leads to a related inquiry: Do the sounds of the environment in which a person is present affect their directed attention? An invasive insect pest, the Emerald Ash Borer (Agrilus planipennis), is killing millions of ash trees (genus Fraxinus) throughout North America. As the loss of tree canopy occurs, urban ecosystems change (including higher summer temperatures, more stormwater runoff, and poorer air quality), causing associated changes to human physical and mental health. Previous studies suggest that conditions in urban environments can result in chronic stress in humans and fatigue to directed attention, which is the ability to focus on tasks and to pay attention. Access to nature in cities can help refresh directed attention. The sights and sounds associated with parks, open spaces, and trees can serve as beneficial counterbalances to the irritating conditions associated with cities. This research examines changes to the quantity and quality of sounds in Arlington Heights, Illinois. A series of before-and-after sound recordings were gathered as trees died and were removed between 2013 and 2015. Comparison of recordings using the Raven sound analysis program revealed significant differences in some, but not all, measures of sound attributes as tree canopy decreased. In general, more human-produced mechanical sounds (anthrophony) and fewer sounds associated with weather (geophony) were detected. Changes in sounds associated with animals (biophony) varied seasonally. Monitoring changes in the proportions of anthrophony, biophony and geophony can provide insight into changes in biodiversity, environmental health, and quality of life for humans.
Before-tree-removal and after-tree-removal sound recordings served as the independent variable for randomly-assigned human volunteers as

  16. Prediction model for sound transmission from machinery in buildings: feasible approaches and problems to be solved

    NARCIS (Netherlands)

    Gerretsen, E.

    2000-01-01

    Prediction models for the airborne and impact sound transmission in buildings have recently been established (EN 12354- 1&2:1999). However, these models do not cover technical installations and machinery as a source of sound in buildings. Yet these can cause unacceptable sound levels and it is

  17. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust, but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...

  18. People's experiences of noise from wind power plants; Maenniskors upplevelser av ljud fraan vindkraftverk

    Energy Technology Data Exchange (ETDEWEB)

    Pedersen, Eja; Persson Waye, Kerstin (Goeteborg Univ., Sahlgrenska Academy, Goeteborg (Sweden). Inst. of Medicine. Dept. of Public Health and Community Medicine); Forssen, Jens (Chalmers Univ. of Technology, Goeteborg (Sweden). Applied Acoustics)

    2009-04-15

    The erection of wind turbines is preceded by an Environmental Impact Assessment, which involves an estimation of the impact of wind turbines on people living nearby. One impact to be assessed is sound. It is important to generate scientifically based knowledge of how the sound will be perceived, in order to ensure that the sound from wind turbines does not have an adverse health effect on residents in the area. The objective of a joint analysis of the results from two field studies was to show the relationship between sound levels from wind turbines at the dwelling and the perception of the sound, as well as to describe factors influencing this relationship. The objective of a diary study, in which the participants reported how often they were home and, if so, whether they were outdoors, was to describe how often the sound from wind turbines is heard and under which meteorological conditions. The results of long-term sound measurements were compared with calculated values applying different models in order to study the accuracy of the sound propagation model used today. Another aim was to see if variations in meteorological factors influenced the sound propagation to such a degree that they should be included in the calculations of sound levels. The joint analyses of the two annoyance studies confirm and strengthen previously reported data. The proportion that noticed wind turbine sound, as well as the proportion that were annoyed by the sound, increased with increasing sound levels. The probability of being annoyed by the noise was larger if the turbines were visible from the dwelling and for people living in an agricultural landscape, whereas differences in terrain had no impact. The only association between sound level and health-related variables other than annoyance was that of being disturbed in sleep. Participants in the diary study more often reported that they could hear sound from the wind turbines when the electrical power increased, i.e. the

  19. Effects of lung elasticity on the sound propagation in the lung

    International Nuclear Information System (INIS)

    Yoneda, Takahiro; Wada, Shigeo; Nakamura, Masanori; Horii, Noriaki; Mizushima, Koichiro

    2011-01-01

    Sound propagation in the lung was simulated to gain insight into its acoustic properties. A thorax model consisting of lung parenchyma, thoracic bones, trachea, and other tissues was made from human CT images. The acoustic nature of the lung parenchyma and bones was expressed with the Biot model of poroelastic material, whereas the trachea and tissues were modeled with gas and an elastic material. A point sound source of white noise was placed in the first bifurcation of the trachea. The sound propagation in the thorax model was simulated in the frequency domain. The results demonstrated significant attenuation of sound, especially at frequencies above 1,000 Hz. Simulations with a stiffened lung demonstrated suppression of the sound attenuation at higher frequencies observed in the normal lung. These results indicate that the normal lung has the nature of a low-pass filter, and stiffening helps sound at higher frequencies propagate without attenuation. (author)

  20. 76 FR 48179 - Notice of Inventory Completion: Slater Museum of Natural History, University of Puget Sound...

    Science.gov (United States)

    2011-08-08

    ... Museum of Natural History, University of Puget Sound, Tacoma, WA AGENCY: National Park Service, Interior. ACTION: Notice. SUMMARY: The Slater Museum of Natural History, University of Puget Sound has completed an... contact the Slater Museum of Natural History, University of Puget Sound. Disposition of the human remain...

  1. Technology management for environmentally sound and sustainable development

    International Nuclear Information System (INIS)

    Zaidi, S.M.J.

    1992-01-01

    With the evolutionary change in the production activities of human societies, the concept of development has also been changing. In recent years the emphasis has been on environmentally sound and sustainable development, which can be achieved through the judicious use of technology. Technology, as a resource transformer, has emerged as the most important factor that can contribute to economic growth. But technology is not an independent, autonomous force; it is only an instrument that must be used carefully, properly, and appropriately, which necessitates technology management. (author)

  2. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption indicates the transformation of sound energy into heat. It is, for instance, employed to design the acoustics in rooms. The noise emitted by machinery and plants shall be reduced before arriving at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components at the walls with well-defined absorption characteristics, which are adjusted for corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise intensive environments into the neighbourhood.

  3. In vivo measurement of mechanical properties of human long bone by using sonic sound

    Energy Technology Data Exchange (ETDEWEB)

    Hossain, M. Jayed, E-mail: zed.hossain06@gmail.com; Rahman, M. Moshiur, E-mail: razib-121@yahoo.com; Alam, Morshed [Department of Mechanical Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000 (Bangladesh)

    2016-07-12

    Vibration analysis has been evaluated as a non-invasive technique for the in vivo assessment of bone mechanical properties. The relation between resonant frequencies, long-bone geometry, and mechanical properties can be obtained by vibration analysis. In vivo measurements were performed on the human ulna, modeled as a simple beam, with an experimental technique and associated apparatus. The resonant frequency of the ulna was obtained by Fast Fourier Transform (FFT) analysis of the vibration response of a piezoelectric accelerometer. Both the elastic modulus and the speed of sound were inferred from the resonant frequency. Measurement error in the improved experimental setup was comparable with previous work. The in vivo determination of bone elastic response has potential value in screening programs for metabolic bone disease, early detection of osteoporosis, and evaluation of the skeletal effects of various therapeutic modalities.
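
    The inference of elastic modulus and sound speed from a resonant frequency rests on standard beam theory: for a uniform beam, the n-th bending resonance is f_n = (λ_n² / 2πL²) √(EI/ρA), which can be inverted for E. The sketch below assumes a free-free uniform beam (first-mode constant λ ≈ 4.730) and illustrative ulna-like geometry; none of these are the paper's measured values.

```python
import math

def elastic_modulus_from_resonance(f_hz, L, I, rho, A, lam=4.730):
    """Invert the uniform-beam formula for E (lam = 4.730: first free-free mode)."""
    return (2.0 * math.pi * f_hz * L**2 / lam**2) ** 2 * rho * A / I

def sound_speed(E, rho):
    """Longitudinal sound speed c = sqrt(E / rho)."""
    return math.sqrt(E / rho)

# Illustrative ulna-like values: 0.25 m length, small annular cross-section.
E = elastic_modulus_from_resonance(f_hz=400.0, L=0.25, I=4e-10, rho=1800.0, A=8e-5)
print(f"E ~ {E / 1e9:.1f} GPa, c ~ {sound_speed(E, 1800.0):.0f} m/s")
# → E ~ 17.7 GPa, c ~ 3140 m/s  (plausible magnitudes for cortical bone)
```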

  4. Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model

    Science.gov (United States)

    Marsh, John E.; Campbell, Tom A.

    2016-01-01

    The rostral brainstem receives both “bottom-up” input from the ascending auditory system and “top-down” descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks. Wave V decreases in amplitude with increases in the visually presented memory load. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: The sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. Attentional selectivity is crucial when the signal heard is degraded or masked: e

  5. Operator performance and annunciation sounds

    International Nuclear Information System (INIS)

    Patterson, B.K.; Bradley, M.T.; Artiss, W.G.

    1997-01-01

    This paper discusses the audible component of annunciation found in typical operating power stations. The purpose of the audible alarm is stated, and the psychological elements involved in human processing of alarm sounds are explored. Psychological problems with audible annunciation are noted. Simple and more complex improvements to existing systems are described. A modern alarm system is suggested for retrofits or new plant designs. (author)

  6. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine the auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are determined to be the same or different. Rhesus macaques seem to have relatively poor short-term memory with auditory stimuli, and we examine whether particular sound types are more favorable for memory performance. Experiment 1 suggests that memory performance with vocalization sound types (particularly monkey vocalizations) is significantly better than with non-vocalization sound types, and male monkeys outperform female monkeys overall. Experiment 2, controlling for the number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, for species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting that the biological and/or ethological meaning of these sounds is more effective for auditory memory. 2009 Elsevier B.V.

  7. Noise detection in heart sound recordings.

    Science.gov (United States)

    Zia, Mohammad K; Griffel, Benjamin; Fridman, Vladimir; Saponieri, Cesare; Semmlow, John L

    2011-01-01

    Coronary artery disease (CAD) is the leading cause of death in the United States. Although progression of CAD can be controlled using drugs and diet, it is usually detected in advanced stages when invasive treatment is required. Current methods to detect CAD are invasive and/or costly, hence not suitable as a regular screening tool to detect CAD in early stages. Currently, we are developing a noninvasive and cost-effective system to detect CAD using the acoustic approach. This method identifies sounds generated by turbulent flow through partially narrowed coronary arteries to detect CAD. The limiting factor of this method is sensitivity to noises commonly encountered in the clinical setting. Because the CAD sounds are faint, these noises can easily obscure the CAD sounds and make detection impossible. In this paper, we propose a method to detect and eliminate noise encountered in the clinical setting using a reference channel. We show that our method is effective in detecting noise, which is essential to the success of the acoustic approach.
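
    The reference-channel idea can be sketched minimally: windows in which a second, chest-remote microphone shows high energy are flagged as noise-contaminated and excluded from analysis. The simple short-window energy threshold below is an illustrative assumption, not the paper's actual detection algorithm.

```python
# Hedged sketch of reference-channel noise detection: flag windows where the
# ambient (reference) microphone is energetic, so faint CAD sounds recorded
# at the chest are only analyzed in clean segments. Threshold is illustrative.

def flag_noisy_windows(reference, window, threshold):
    """Return indices of windows whose mean reference-channel energy exceeds threshold."""
    flags = []
    for i in range(0, len(reference) - window + 1, window):
        energy = sum(x * x for x in reference[i:i + window]) / window
        if energy > threshold:
            flags.append(i // window)
    return flags

# Synthetic reference signal: quiet, then a burst of ambient noise, then quiet.
ref = [0.01] * 100 + [0.5] * 100 + [0.01] * 100
print(flag_noisy_windows(ref, window=50, threshold=0.01))  # → [2, 3]
```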

  8. Interactive Sonification of Spontaneous Movement of Children - Cross-modal Mapping and the Perception of Body Movement Qualities through Sound

    Directory of Open Access Journals (Sweden)

    Emma Frid

    2016-11-01

    Full Text Available In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude, and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion-capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data, and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children’s spontaneous movement in terms of energy, smoothness, and directness indices. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g. expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g. energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a cross

  9. First human systemic infection caused by Spiroplasma.

    Science.gov (United States)

    Aquilino, Ana; Masiá, Mar; López, Pilar; Galiana, Antonio J; Tovar, Juan; Andrés, María; Gutiérrez, Félix

    2015-02-01

    Spiroplasma species are organisms that normally colonize plants and insects. We describe the first case of human systemic infection caused by Spiroplasma bacteria in a patient with hypogammaglobulinemia undergoing treatment with biological disease-modifying antirheumatic agents. Spiroplasma turonicum was identified through molecular methods in several blood cultures. The infection was successfully treated with doxycycline plus levofloxacin. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  10. Challenges to communicate risks of human-caused earthquakes

    Science.gov (United States)

    Klose, C. D.

    2014-12-01

    The awareness of natural hazards has been up-trending in recent years. In particular, this is true for earthquakes, which increase in frequency and magnitude in regions that normally do not experience seismic activity. In fact, one of the major concerns for many communities and businesses is that humans today seem to cause earthquakes due to large-scale shale gas production, dewatering and flooding of mines and deep geothermal power production. Accordingly, without opposing any of these technologies it should be a priority of earth scientists who are researching natural hazards to communicate earthquake risks. This presentation discusses the challenges that earth scientists are facing to properly communicate earthquake risks, in light of the fact that human-caused earthquakes are an environmental change affecting only some communities and businesses. Communication channels may range from research papers, books and class room lectures to outreach events and programs, popular media events or even social media networks.

  11. Sound induced activity in voice sensitive cortex predicts voice memory ability

    Directory of Open Access Journals (Sweden)

    Rebecca eWatson

    2012-04-01

    Full Text Available The ‘temporal voice areas’ (TVAs; Belin et al., 2000) of the human brain show greater neuronal activity in response to human voices than to other categories of nonvocal sounds. However, a direct link between TVA activity and voice perception behaviour has not yet been established. Here we show that a functional magnetic resonance imaging (fMRI) measure of activity in the TVAs predicts individual performance at a separately administered voice memory test. This relation holds when general sound memory ability is taken into account. These findings provide the first evidence that the TVAs are specifically involved in voice cognition.

  12. A rare cause of acute flaccid paralysis: Human coronaviruses

    OpenAIRE

    Turgay, Cokyaman; Emine, Tekin; Ozlem, Koken; Muhammet, S. Paksu; Haydar, A. Tasdemir

    2015-01-01

    Acute flaccid paralysis (AFP) is a life-threatening clinical entity characterized by weakness of the whole-body musculature, often accompanied by respiratory and bulbar paralysis. The most common cause is Guillain-Barré syndrome, but infections, spinal cord diseases, neuromuscular diseases such as myasthenia gravis, drugs and toxins, periodic hypokalemic paralysis, electrolyte disturbances, and botulism should be considered in the differential diagnosis. Human coronaviruses (HCoVs) cause common ...

  13. Measurement of sound velocity profiles in fluids for process monitoring

    International Nuclear Information System (INIS)

    Wolf, M; Kühnicke, E; Lenz, M; Bock, M

    2012-01-01

    In ultrasonic measurements, the time of flight to the object interface is often the only information that is analysed. Conventionally it is only possible to determine distances or sound velocities if the other value is known. The current paper deals with a novel method to measure the sound propagation path length and the sound velocity in media with moving scattering particles simultaneously. Since the focal position also depends on sound velocity, it can be used as a second parameter. Via calibration curves it is possible to determine the focal position and sound velocity from the measured time of flight to the focus, which is correlated to the maximum of averaged echo signal amplitude. To move focal position along the acoustic axis, an annular array is used. This allows measuring sound velocity locally resolved without any previous knowledge of the acoustic media and without a reference reflector. In previous publications the functional efficiency of this method was shown for media with constant velocities. In this work the accuracy of these measurements is improved. Furthermore first measurements and simulations are introduced for non-homogeneous media. Therefore an experimental set-up was created to generate a linear temperature gradient, which also causes a gradient of sound velocity.
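
The calibration-curve step described above can be sketched as a table lookup: given the measured time of flight to the maximum of the averaged echo amplitude (the focus), both the local sound velocity and the focal position are interpolated from previously recorded calibration data. All numbers below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical calibration table for an annular array (illustrative values):
# time of flight to the focus in µs vs. sound velocity in m/s and focal
# distance in mm, recorded beforehand in reference media.
tof_us = np.array([38.0, 40.0, 42.0, 44.0, 46.0])
vel_ms = np.array([1580.0, 1540.0, 1500.0, 1460.0, 1420.0])
focus_mm = np.array([30.5, 30.8, 31.1, 31.4, 31.7])

def velocity_and_focus(measured_tof_us):
    """Interpolate both calibration curves at the measured time of flight."""
    v = np.interp(measured_tof_us, tof_us, vel_ms)
    d = np.interp(measured_tof_us, tof_us, focus_mm)
    return v, d

# A single simultaneous estimate of sound velocity and propagation path length,
# without prior knowledge of the medium or a reference reflector.
v, d = velocity_and_focus(43.0)  # measured time of flight: 43 µs
```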

  14. Is Democratization a Sound Strategy for Combating Fundamentalist Islam

    National Research Council Canada - National Science Library

    Johnson, Anthony J

    2008-01-01

    .... This paper examines the premise that "universal human rights", as the basis for democracy, is compatible with Islamic culture and is therefore a sound strategy for combating the spread of "Islamic...

  15. Human error as the root cause of severe accidents at nuclear reactors

    International Nuclear Information System (INIS)

    Kovács Zoltán; Rýdzi, Stanislav

    2017-01-01

    A root cause is a factor inducing an undesirable event. It is feasible for root causes to be eliminated through technological process improvements. Human error was the root cause of all severe accidents at nuclear power plants. The TMI accident was caused by a series of human errors. The Chernobyl disaster occurred after a badly performed test of the turbogenerator at a reactor with design deficiencies; in addition, the operators ignored the safety principles and disabled the safety systems. At Fukushima the tsunami risk was underestimated and the project failed to consider the specific issues of the site. The paper describes the severe accidents and points out the human errors that caused them. Also, provisions that might have prevented those severe accidents are suggested. The fact that each severe accident occurred on a different type of reactor is relevant – no severe accident ever occurred twice at the same reactor type. The lessons learnt from the severe accidents and the safety measures implemented on reactor units all over the world seem to be effective. (orig.)

  16. A system for heart sounds classification.

    Directory of Open Access Journals (Sweden)

    Grzegorz Redlarski

    Full Text Available The future of quick and efficient disease diagnosis lies in the development of reliable non-invasive methods. As for cardiac diseases - one of the major causes of death around the globe - a concept of an electronic stethoscope equipped with an automatic heart tone identification system appears to be the best solution. Thanks to the advancement in technology, the quality of phonocardiography signals is no longer an issue. However, appropriate algorithms for auto-diagnosis systems of heart diseases that could be capable of distinguishing most of the known pathological states have not yet been developed. The main issue is the non-stationary character of phonocardiography signals as well as the wide range of distinguishable pathological heart sounds. In this paper a new heart sound classification technique, which might find use in medical diagnostic systems, is presented. It is shown that by combining Linear Predictive Coding coefficients, used for feature extraction, with a classifier built upon combining a Support Vector Machine and the Modified Cuckoo Search algorithm, an improvement in performance of the diagnostic system, in terms of accuracy, complexity and range of distinguishable heart sounds, can be made. The developed system achieved accuracy above 93% for all considered cases, including simultaneous identification of twelve different heart sound classes. The respective system is compared with four different major classification methods, proving its reliability.
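
The feature-extraction half of such a pipeline can be sketched as follows. The paper pairs Linear Predictive Coding (LPC) features with an SVM tuned by Modified Cuckoo Search; the sketch below keeps only the LPC step (via the Levinson-Durbin recursion) and replaces the classifier with a toy nearest-centroid rule on two synthetic "heart sound" classes. The signals and the classifier are stand-ins, not the paper's actual system.

```python
import numpy as np

def lpc(x, order):
    """LPC coefficients a[1:] of A(z) = 1 + a1*z^-1 + ... via Levinson-Durbin."""
    n = len(x)
    r = np.array([x[: n - i] @ x[i:] for i in range(order + 1)])  # autocorrelation
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err   # reflection coefficient
        a_prev = a.copy()
        a[1:i + 1] = a_prev[1:i + 1] + k * a_prev[i - 1::-1]
        err *= 1.0 - k * k
    return a[1:]

rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1 / 2000)  # 1 s at a 2 kHz sampling rate

def beat(murmur):
    """Synthetic stand-in: a low-frequency tone, plus broadband 'murmur' noise."""
    x = np.sin(2 * np.pi * 50 * t) + 0.01 * rng.standard_normal(t.size)
    if murmur:
        x = x + 0.5 * rng.standard_normal(t.size)
    return x

# 20 "normal" and 20 "murmur" examples, each reduced to 8 LPC coefficients.
X = np.array([lpc(beat(murmur=lbl == 1), order=8) for lbl in range(2) for _ in range(20)])
y = np.repeat([0, 1], 20)

# Toy nearest-centroid classifier in LPC-coefficient space.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(axis=2), axis=1)
accuracy = (pred == y).mean()
```

Even this crude rule separates the two synthetic classes, because LPC coefficients compactly summarize the spectral envelope that distinguishes tonal from noise-like heart sounds.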

  17. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...

  18. Emotional sounds modulate early neural processing of emotional pictures

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2013-10-01

    Full Text Available In our natural environment, emotional information is conveyed by converging visual and auditory information; multimodal integration is of utmost importance. In the laboratory, however, emotion researchers have mostly focused on the examination of unimodal stimuli. Few existing studies on multimodal emotion processing have focused on human communication such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials these pictures were paired with pleasant, unpleasant and neutral sounds. Sound presentation started 500 ms before picture onset and each stimulus presentation lasted for 2s. EEG was recorded from 64 channels and ERP analyses focused on the picture onset. In addition, valence, and arousal ratings were obtained. Previous findings for the neural processing of emotional pictures were replicated. Specifically, unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP, independent of the accompanying sound valence. For audiovisual stimulation, increased parietal P100 and P200 were found in response to all pictures which were accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced parietal P100 and P200 compared to pairings with congruent sounds. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and, therefore, provide an avenue by which multimodal experience may enhance perception.

  19. Low frequency sound field control in rectangular listening rooms using CABS (Controlled Acoustic Bass System) will also reduce sound transmission to neighbor rooms

    DEFF Research Database (Denmark)

    Nielsen, Sofus Birkedal; Celestinos, Adrian

    2011-01-01

    Sound reproduction often takes place in small and medium-sized rectangular rooms. As rectangular rooms have three pairs of parallel walls, the reflections, especially at low frequencies, will cause up to 30 dB spatial variation of the sound pressure level in the room. This takes place not only at resonance frequencies, but more or less at all frequencies. A time-based room correction system named CABS (Controlled Acoustic Bass System) has been developed and is able to create a homogeneous sound field in the whole room at low frequencies by proper placement of multiple loudspeakers. A normal setup … from the rear wall, thereby leaving only the plane wave in the room. With a room size of (7.8 x 4.1 x 2.8) m it is possible to prevent modal frequencies up to 100 Hz. An investigation has shown that the sound transmitted to a neighbour room will also be reduced if CABS is used. The principle…

  20. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  1. The sound of arousal in music is context-dependent.

    Science.gov (United States)

    Blumstein, Daniel T; Bryant, Gregory A; Kaye, Peter

    2012-10-23

    Humans, and many non-human animals, produce and respond to harsh, unpredictable, nonlinear sounds when alarmed, possibly because these are produced when acoustic production systems (vocal cords and syrinxes) are overblown in stressful, dangerous situations. Humans can simulate nonlinearities in music and soundtracks through technological manipulation. Recent work found that film soundtracks from different genres differentially contain such sounds. We designed two experiments to determine specifically how simulated nonlinearities in soundtracks influence perceptions of arousal and valence. Subjects were presented with emotionally neutral musical exemplars that had neither noise nor abrupt frequency transitions, or versions of these musical exemplars with noise or abrupt frequency upshifts or downshifts experimentally added. In a second experiment, these acoustic exemplars were paired with benign videos. Judgements of both arousal and valence were altered by the addition of these simulated nonlinearities in the first, music-only, experiment. In the second, multi-modal, experiment, valence (but not arousal) decreased with the addition of noise or frequency downshifts. Thus, the presence of a video image suppressed the ability of simulated nonlinearities to modify arousal. This is the first study examining how nonlinear simulations in music affect emotional judgements. These results demonstrate that the perception of potentially fearful or arousing sounds is influenced by the perceptual context and that the addition of a visual modality can antagonistically suppress the response to an acoustic stimulus.
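
One of the manipulations described above, adding noise to an emotionally neutral exemplar, can be sketched as mixing white noise at a prescribed signal-to-noise ratio (the frequency up/downshifts are omitted here). The 440 Hz tone and the 10 dB SNR are arbitrary stand-ins, not values from the study.

```python
import numpy as np

def add_noise(signal, snr_db, rng):
    """Mix white noise into a signal at a prescribed signal-to-noise ratio (dB)."""
    noise_power = np.mean(signal ** 2) / (10 ** (snr_db / 10))
    return signal + rng.standard_normal(signal.size) * np.sqrt(noise_power)

fs = 44100
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)  # stand-in for a neutral musical exemplar
noisy = add_noise(tone, snr_db=10.0, rng=np.random.default_rng(1))
```

Lowering `snr_db` makes the added "nonlinearity" more prominent, which is the dimension the listening experiments manipulate.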

  2. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down.

  3. Validating a perceptual distraction model using a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict a user's perceived distraction caused by audio-on-audio interference. Originally, the distraction model was trained with music targets and interferers using a simple loudspeaker setup consisting of only two … sound zones within the sound-zone system, thus validating the model using a different sound-zone system with both speech-on-music and music-on-speech stimuli sets. The results show that the model performance is equally good in both zones, i.e., with both speech-on-music and music-on-speech stimuli…

  4. Operator performance and annunciation sounds

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, B K; Bradley, M T; Artiss, W G [Human Factors Practical, Dipper Harbour, NB (Canada)]

    1998-12-31

    This paper discusses the audible component of annunciation found in typical operating power stations. The purpose of the audible alarm is stated and the psychological elements involved in the human processing of alarm sounds is explored. Psychological problems with audible annunciation are noted. Simple and more complex improvements to existing systems are described. A modern alarm system is suggested for retrofits or new plant designs. (author) 3 refs.

  5. Effect of a sound wave on the stability of an argon discharge

    International Nuclear Information System (INIS)

    Galechyan, G.A.; Karapetyan, D.M.; Tavakalyan, L.B.

    1992-01-01

    The effect of a sound wave on the stability of the positive column of an argon discharge has been studied experimentally in the range of pressures from 40 to 180 torr and discharge currents from 40 to 110 mA in a tube with an interior diameter of 9.8 cm. It is shown that, depending on the intensity of the sound wave and the discharge parameters, sound can cause the positive column either to contract or to leave the contracted state. The electric field strength has been measured as a function of the sound intensity. An analogy between the effect of sound and that of longitudinal pumping of the gas on the argon discharge parameters has been established. The radial temperature of the gas has been studied in an argon discharge as a function of the sound intensity for different gas pressures. A direct relationship has been established between the sign of the detector effect produced by a sound wave in a discharge and the processes of contraction and filamentation of a discharge. 11 refs., 4 figs., 1 tab

  6. Third sound in mixtures of helium-3 and helium-4

    International Nuclear Information System (INIS)

    Downs, J.L.

    1975-01-01

    Third sound (surface wave) velocities have been measured at temperatures of 1.205, 1.400, and 1.601 K in thin adsorbed films of 3He–4He mixtures of four concentrations. The molar concentrations of the overall mixtures, including both the film and vapor phases, were 20.254 percent, 39.907 percent, 64.968 percent, and 84.686 percent. The results of these measurements are generally consistent with a new theory of third sound in mixtures, in which the changes in velocity from that in the case of pure 4He are shown to result from two factors. A decrease in the superfluid density in the mixture, which is enhanced by an increase in the superfluid healing length, tends to cause a reduction in the velocity, which is sometimes dominant for very thin films. An increase in the restoring force resulting from osmotic pressure in the mixture (in addition to Van der Waals forces) causes an increase in the velocity, which is dominant for thicker films. Other characteristics of third sound in mixtures are an increase in the onset thickness and an increase in the attenuation from those observed in pure 4He. New measurements of third sound velocities in films of pure 4He have also been made, with emphasis on very thin films near the onset thickness. The onset of third sound was seen to occur at less than the maximum velocity, and dispersion has been observed in very thin films which is qualitatively in agreement with theory.

  7. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Full Text Available Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  8. Kelp and Eelgrass in Puget Sound

    Science.gov (United States)

    2007-05-01

    …as sea lettuce (Ulva spp.) will overgrow eelgrass. Excessive nutrients also can cause overgrowth by epiphytes on the blades, blocking light…

  9. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

    This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situated-ness of sound. Existing discourses on “spatial sound” privilege…

  10. Replacing the Orchestra? - The Discernibility of Sample Library and Live Orchestra Sounds.

    Directory of Open Access Journals (Sweden)

    Reinhard Kopiez

    Full Text Available Recently, musical sounds from pre-recorded orchestra sample libraries (OSL have become indispensable in music production for the stage or popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. It could be shown that the entire sample of listeners (N = 602) on average identified the correct sound source at 72.5%. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing, simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resembled the majority of music consumers, only achieved 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons.

  11. The effect of brain lesions on sound localization in complex acoustic environments.

    Science.gov (United States)

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  12. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
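
The paper's model is a spiking network, but the binaural cue it exploits, location-specific timing between the two ears, can be illustrated at the signal level with a plain cross-correlation estimate of the interaural time difference (ITD). The delay, signal, and sampling rate below are synthetic stand-ins, not the paper's head-related-transfer-function setup.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
true_delay = 12                    # samples (~0.27 ms, within the human ITD range)
src = rng.standard_normal(4096)    # unknown broadband source signal
left = src
right = np.roll(src, true_delay)   # one ear receives a delayed copy

# Cross-correlate the two "monaural" signals and pick the lag of best agreement.
max_lag = 40
lags = np.arange(-max_lag, max_lag + 1)
corr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
itd_samples = lags[int(np.argmax(corr))]
itd_ms = itd_samples / fs * 1000   # estimated interaural time difference
```

Note that the recovered lag depends only on the geometry (the delay), not on the particular source signal, which is the source-independence property the spiking model builds on.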

  13. Airborne and impact sound transmission in super-light structures

    DEFF Research Database (Denmark)

    Christensen, Jacob Ellehauge; Hertz, Kristian Dahl; Brunskog, Jonas

    2011-01-01

    …light-aggregate concrete. A super-light deck element is developed. It is intended to be lighter than traditional deck structures without compromising the acoustic performance. It is primarily the airborne sound insulation which is of interest, as the requirements for the impact sound insulation can to a higher degree be fulfilled by external means such as floorings. The acoustical performance of the slab element is enhanced by several factors. Load-carrying internal arches stiffen the element. This causes a decrease in the modal density, which is further improved by the element being lighter. These parameters also…

  14. Application of grey incidence analysis to connection between human errors and root cause

    International Nuclear Information System (INIS)

    Ren Yinxiang; Yu Ren; Zhou Gang; Chen Dengke

    2008-01-01

    By introducing grey incidence analysis, the relative importance of root causes underlying human errors was researched in this paper. On the basis of WANO statistical data and grey incidence analysis, lack of alternate examination, bad basic operation, shortage of theoretical knowledge, lax organization and management, and deficient regulations were identified as the root causes with the strongest influence on human errors. Finally, ways to reduce human errors are discussed. (authors)
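
Grey incidence (relational) analysis scores how closely each candidate factor's data series tracks a reference series. A minimal sketch of Deng's grey relational grade follows, using the common resolution coefficient ρ = 0.5; the numeric series are made up for illustration (the paper's WANO data are not reproduced), and the usual pre-normalization of sequences is omitted.

```python
import numpy as np

def grey_relational_grades(reference, comparisons, rho=0.5):
    """Deng's grey relational grade of each comparison series w.r.t. the reference."""
    x0 = np.asarray(reference, dtype=float)
    xs = np.asarray(comparisons, dtype=float)
    delta = np.abs(xs - x0)                     # pointwise deviations
    d_min, d_max = delta.min(), delta.max()     # global extreme deviations
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean(axis=1)                   # average coefficient = grade

# Hypothetical example: a human-error-rate series (reference) vs. three
# candidate root-cause indicator series.
ref = [0.8, 0.9, 1.0, 0.7]
cand = [[0.7, 0.9, 0.9, 0.6],   # closely tracks the reference
        [0.1, 0.3, 0.2, 0.1],   # weakly related
        [0.8, 0.9, 1.0, 0.7]]   # identical to the reference
grades = grey_relational_grades(ref, cand)
```

Higher grades indicate stronger incidence; ranking the candidate factors by grade is how the relatively important root causes are singled out.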

  15. Rainforests as concert halls for birds: Are reverberations improving sound transmission of long song elements?

    DEFF Research Database (Denmark)

    Nemeth, Erwin; Dabelsteen, Torben; Pedersen, Simon Boel

    2006-01-01

    In forests, reverberations have probably detrimental and beneficial effects on avian communication. They constrain signal discrimination by masking fast repetitive sounds and they improve signal detection by elongating sounds. This ambivalence of reflections for animal signals in forests is similar to the influence of reverberations on speech or music in indoor sound transmission. Since comparisons of the sound fields of forests and concert halls have demonstrated that reflections can contribute, in both environments, a considerable part of the energy of a received sound, it is here assumed that reverberations … that longer sounds are less attenuated. The results indicate that the higher sound pressure level is caused by superimposing reflections. It is suggested that this beneficial effect of reverberations explains interspecific birdsong differences in element length. Transmission paths with stronger reverberations…

  16. Context effects on processing widely deviant sounds in newborn infants

    Directory of Open Access Journals (Sweden)

    Gábor Péter Háden

    2013-09-01

    Full Text Available Detecting and orienting towards sounds carrying new information is a crucial feature of the human brain that supports adaptation to the environment. Rare, acoustically widely deviant sounds presented amongst frequent tones elicit large event-related brain potentials (ERPs) in neonates. Here we tested whether these discriminative ERP responses reflect only the activation of fresh afferent neuronal populations (i.e., neuronal circuits not affected by the tones) or whether they also index the processing of the contextual mismatch between the rare and the frequent sounds. In two separate experiments, we presented sleeping newborns with 150 different environmental sounds and the same number of white-noise bursts. Both sounds served either as deviants in an oddball paradigm with a tone as the frequent standard stimulus (Novel/Noise deviant), or as the standard stimulus with the tone as deviant (Novel/Noise standard), or they were delivered alone with the same timing as the deviants in the oddball condition (Novel/Noise alone). Whereas the noise deviants elicited responses similar to the same sound presented alone, the responses elicited by environmental sounds in the corresponding conditions differed morphologically from each other. Thus, whereas the ERP response to the noise sounds can be explained by the different refractory state of stimulus-specific neuronal populations, the ERP response to environmental sounds indicated context-sensitive processing. These results provide evidence for an innate tendency toward context-dependent auditory processing as well as a basis for the different developmental trajectories of processing acoustic deviance and contextual novelty.

  17. Behavioral response of manatees to variations in environmental sound levels

    Science.gov (United States)

    Miksis-Olds, Jennifer L.; Wagner, Tyler

    2011-01-01

    Florida manatees (Trichechus manatus latirostris) inhabit coastal regions because they feed on the aquatic vegetation that grows in shallow waters, which are the same areas where human activities are greatest. Noise produced from anthropogenic and natural sources has the potential to affect these animals by eliciting responses ranging from mild behavioral changes to extreme aversion. Sound levels were calculated from recordings made throughout behavioral observation periods. An information theoretic approach was used to investigate the relationship between behavior patterns and sound level. Results indicated that elevated sound levels affect manatee activity and are a function of behavioral state. The proportion of time manatees spent feeding and milling changed in response to sound level. When ambient sound levels were highest, more time was spent in the directed, goal-oriented behavior of feeding, whereas less time was spent engaged in undirected behavior such as milling. This work illustrates how shifts in activity of individual manatees may be useful parameters for identifying impacts of noise on manatees and might inform population level effects.

  18. Male infertility and its causes in human.

    Science.gov (United States)

    Miyamoto, Toshinobu; Tsujimura, Akira; Miyagawa, Yasushi; Koh, Eitetsu; Namiki, Mikio; Sengoku, Kazuo

    2012-01-01

    Infertility is one of the most serious social problems facing advanced nations. In general, approximately half of all cases of infertility are caused by factors related to the male partner. To date, various treatments have been developed for male infertility and are steadily producing results. However, there is no effective treatment for patients with nonobstructive azoospermia, in which there is an absence of mature sperm in the testes. Although evidence suggests that many patients with male infertility have a genetic predisposition to the condition, the cause has not been elucidated in the vast majority of cases. This paper discusses the environmental factors considered likely to be involved in male infertility and the genes that have been clearly shown to be involved in male infertility in humans, including our recent findings.

  19. Male Infertility and Its Causes in Human

    Directory of Open Access Journals (Sweden)

    Toshinobu Miyamoto

    2012-01-01

    Full Text Available Infertility is one of the most serious social problems facing advanced nations. In general, approximately half of all cases of infertility are caused by factors related to the male partner. To date, various treatments have been developed for male infertility and are steadily producing results. However, there is no effective treatment for patients with nonobstructive azoospermia, in which there is an absence of mature sperm in the testes. Although evidence suggests that many patients with male infertility have a genetic predisposition to the condition, the cause has not been elucidated in the vast majority of cases. This paper discusses the environmental factors considered likely to be involved in male infertility and the genes that have been clearly shown to be involved in male infertility in humans, including our recent findings.

  20. Gaze Duration Biases for Colours in Combination with Dissonant and Consonant Sounds: A Comparative Eye-Tracking Study with Orangutans.

    Science.gov (United States)

    Mühlenbeck, Cordelia; Liebal, Katja; Pritsch, Carla; Jacobsen, Thomas

    2015-01-01

    Research on colour preferences in humans and non-human primates suggests similar patterns of biases for and avoidance of specific colours, indicating that these colours are connected to a psychological reaction. Similarly, in the acoustic domain, approach reactions to consonant sounds (considered as positive) and avoidance reactions to dissonant sounds (considered as negative) have been found in human adults and children, and it has been demonstrated that non-human primates are able to discriminate between consonant and dissonant sounds. Yet it remains unclear whether the visual and acoustic approach-avoidance patterns remain consistent when both types of stimuli are combined, how they relate to and influence each other, and whether these are similar for humans and other primates. Therefore, to investigate whether gaze duration biases for colours are similar across primates and whether reactions to consonant and dissonant sounds cumulate with reactions to specific colours, we conducted an eye-tracking study in which we compared humans with one species of great apes, the orangutans. We presented four different colours either in isolation or in combination with consonant and dissonant sounds. We hypothesised that the viewing time for specific colours should be influenced by dissonant sounds and that previously existing avoidance behaviours with regard to colours should be intensified, reflecting their association with negative acoustic information. The results showed that the humans had constant gaze durations which were independent of the auditory stimulus, with a clear avoidance of yellow. In contrast, the orangutans did not show any clear gaze duration bias or avoidance of colours, and they were also not influenced by the auditory stimuli. In conclusion, our findings only partially support the previously identified pattern of biases for and avoidance of specific colours in humans and do not confirm such a pattern for orangutans.

  1. Characteristics and prediction of sound level in extra-large spaces

    OpenAIRE

    Wang, C.; Ma, H.; Wu, Y.; Kang, J.

    2018-01-01

    This paper aims to examine sound fields in extra-large spaces, which are defined in this paper as spaces used by people, with a volume larger than approximately 125,000 m³ and an absorption coefficient less than 0.7. In such spaces, inhomogeneous reverberant energy caused by uneven early reflections with increasing volume has a significant effect on sound fields. Measurements were conducted in four spaces to examine the attenuation of the total and reverberant energy with increasing source-receiv...

  2. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  3. [Analysis of the heart sound with arrhythmia based on nonlinear chaos theory].

    Science.gov (United States)

    Ding, Xiaorong; Guo, Xingming; Zhong, Lisha; Xiao, Shouzhong

    2012-10-01

    In this paper, a new method based on nonlinear chaos theory was proposed to study arrhythmia, combining the correlation dimension and the largest Lyapunov exponent; these two parameters were computed and analyzed for 30 normal heart sound recordings and 30 recordings with arrhythmia. The results showed that both parameters were higher for heart sounds with arrhythmia than for normal heart sounds, and the difference between the two kinds of heart sounds was significant. This is probably due to the irregularity of arrhythmia, which reduces predictability and makes the signal more complex than a normal heart sound. Therefore, the correlation dimension and the largest Lyapunov exponent can be used to analyze arrhythmia and to extract its features.
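    The abstract gives no implementation details; the following is a minimal Grassberger-Procaccia-style sketch of a correlation-dimension estimate (the delay-embedding parameters, function names, and radius choices are assumptions for illustration, not the paper's method):

    ```python
    import numpy as np

    def delay_embed(x, dim, tau):
        """Time-delay embedding of a 1-D series into dim-dimensional vectors."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    def correlation_dimension(x, dim=4, tau=5, max_points=500):
        """Grassberger-Procaccia estimate: slope of log C(r) versus log r,
        where C(r) is the fraction of embedded point pairs closer than r."""
        emb = delay_embed(np.asarray(x, dtype=float), dim, tau)
        emb = emb[::max(1, len(emb) // max_points)]   # subsample for speed
        dmat = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        dists = dmat[np.triu_indices_from(dmat, k=1)]
        # fit the slope over a mid-range of radii (5th to 50th percentile)
        radii = np.logspace(np.log10(np.percentile(dists, 5)),
                            np.log10(np.percentile(dists, 50)), 8)
        c = np.array([np.mean(dists <= r) for r in radii])
        slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
        return slope
    ```

    A periodic signal yields an estimate near 1, while broadband noise embedded in four dimensions yields a markedly higher value, mirroring the paper's finding that the more irregular arrhythmic heart sounds score higher on such complexity measures.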

  4. Spatial filtering of audible sound with acoustic landscapes

    Science.gov (United States)

    Wang, Shuping; Tao, Jiancheng; Qiu, Xiaojun; Cheng, Jianchun

    2017-07-01

    Acoustic metasurfaces manipulate waves with specially designed structures and achieve properties that natural materials cannot offer. Similar surfaces work in the audio frequency range as well and lead to marvelous acoustic phenomena that can be perceived by human ears. Intrigued by the famous Maoshan Bugle phenomenon, we investigate large-scale metasurfaces consisting of periodic steps with sizes comparable to audio wavelengths, in both the time and space domains. We propose a theoretical method to calculate the scattered sound field and find that periodic corrugated surfaces work as spatial filters, and that the frequency-selective character can only be observed on the same side as the incident wave. The Maoshan Bugle phenomenon can be well explained with this method. Finally, we demonstrate that the proposed method can be used to design acoustical landscapes, which transform impulsive sound into famous trumpet solos or other melodious sounds.

  5. Awareness of human papillomavirus among women attending a well woman clinic.

    Science.gov (United States)

    Waller, J; McCaffery, K; Forrest, S; Szarewski, A; Cadman, L; Wardle, J

    2003-08-01

    To assess the level and accuracy of public understanding of human papillomavirus (HPV) in the United Kingdom. Women attending a well woman clinic were asked to complete a questionnaire assessing HPV awareness and specific knowledge about the virus. Questionnaires were completed by 1032 women, of whom 30% had heard of HPV. Older women, non-smokers, and those with a history of candida, genital warts, or an abnormal smear result were more likely to have heard of HPV. Even among those who had heard of HPV, knowledge was generally poor, and fewer than half were aware of the link with cervical cancer. There was also confusion about whether condoms or oral contraceptives could protect against HPV infection. In this relatively well educated sample, awareness and knowledge of HPV were poor. Public education is urgently needed so that women participating in cervical cancer screening are fully informed about the meaning of their results, especially if HPV testing is soon to be introduced.

  6. The science and politics of human well-being: a case study in cocreating indicators for Puget Sound restoration

    Directory of Open Access Journals (Sweden)

    Kelly Biedenweg

    2017-09-01

    Full Text Available Across scientific fields, there have been calls to improve the integration of scientific knowledge in policy making. Particularly since the publication of the Millennium Ecosystem Assessment, these calls increasingly refer to data on human well-being related to the natural environment. However, policy decisions involve selective uptake of information across communities with different preferences and decision-making processes. Additionally, researchers face the fact that there are important trade-offs in producing knowledge that is simultaneously credible, legitimate, socially relevant, and socially just. We present a study that developed human well-being indicators for Washington State's Puget Sound ecosystem recovery agency over 3 years. Stakeholders, decision makers, and social scientists were engaged in the identification, modification, and prioritization of well-being indicators that were adopted by the agency for tracking progress toward ecosystem recovery and strategic planning. After substantial literature review, interviews, workshops, and indicator ranking exercises, 15 indicators were broadly accepted and important to all audiences. Although the scientists, decision makers, and stakeholders used different criteria to identify and prioritize indicators, they all agreed that indicators associated with each of 6 broad domains (social, cultural, psychological, physical, economic, and governance) were critical to assess the holistic concept of well-being related to ecosystem restoration. Decision makers preferred indicators that mirrored stakeholder preferences, whereas social scientists preferred only a subset. The Puget Sound indicator development process provides an example for identifying, selecting, and monitoring diverse concepts of well-being related to environmental restoration in a way that promotes recognition, participation, and a fair distribution of environmental benefits across the region.

  7. An intelligent artificial throat with sound-sensing ability based on laser induced graphene

    Science.gov (United States)

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-02-01

    Traditional sound sources and sound detectors are usually independent, discrete devices in the human hearing range. To minimize device size and integrate it with wearable electronics, there is an urgent need to realize the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist the disabled, because simple throat vibrations such as hums, coughs, and screams with different intensities or frequencies from a mute person can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantages of one-step fabrication, high efficiency, excellent flexibility, and low cost, and it will open up practical applications in voice control, wearable electronics, and many other areas.

  8. Automatic Sound Generation for Spherical Objects Hitting Straight Beams Based on Physical Models.

    Science.gov (United States)

    Rauterberg, M.; And Others

    Sounds are the result of one or several interactions between one or several objects at a certain place and in a certain environment; the attributes of every interaction influence the generated sound. The following factors influence users in human/computer interaction: the organization of the learning environment, the content of the learning tasks,…

  9. Low frequency sound field enhancement system for rectangular rooms, using multiple loudspeakers

    DEFF Research Database (Denmark)

    Celestinos, Adrian

    2007-01-01

    The scope of this PhD dissertation is within the performance of loudspeakers in rooms at low frequencies. The research concentrates on the improvement of the sound level distribution in rooms produced by loudspeakers at low frequencies. The work focuses on seeing the problem acoustically...... and solving it in the time domain. Loudspeakers are the last link in the sound reproduction chain, and they are typically placed in small or medium size rooms. When low frequency sound is radiated by a loudspeaker the sound level distribution along the room presents large deviations. This is due...... to the multiple reflection of sound at the rigid walls of the room. This may cause level differences of up to 20 dB in the room. Some of these deviations are associated with the standing waves, resonances or anti resonances of the room. The understanding of the problem is accomplished by analyzing the behavior...
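    The standing waves behind these level deviations occur at the eigenfrequencies of a rigid-walled rectangular room, f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²). A small sketch (the classic textbook formula, not code from the dissertation) that lists the lowest modes:

    ```python
    import itertools
    import math

    def room_modes(lx, ly, lz, c=343.0, n_max=3):
        """Eigenfrequencies (Hz) of a rigid-walled rectangular room of
        dimensions lx x ly x lz metres, for mode indices up to n_max."""
        modes = []
        for nx, ny, nz in itertools.product(range(n_max + 1), repeat=3):
            if (nx, ny, nz) == (0, 0, 0):
                continue  # skip the trivial zero-frequency "mode"
            f = 0.5 * c * math.sqrt((nx / lx) ** 2 + (ny / ly) ** 2
                                    + (nz / lz) ** 2)
            modes.append(((nx, ny, nz), f))
        return sorted(modes, key=lambda m: m[1])
    ```

    For a 5 m x 4 m x 3 m room, the lowest axial mode (1,0,0) falls at 343/(2·5) ≈ 34.3 Hz; the clustering of such modes at low frequencies is what produces the large level differences between positions in the room.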

  10. Mixing on the Heard Island Plateau during HEOBI

    Science.gov (United States)

    Robertson, R.

    2016-12-01

    On the plateau near Heard and McDonald Islands, the water column was nearly always well mixed. Typically, temperature differences between the surface and the bottom (100-200 m) were less than 0.2 °C and often less than 0.1 °C. Surface stratification developed through insolation, and deep stratification primarily through a combination of upwelling from canyons and over the edge of the plateau and tidal advection. This stratification was primarily removed by a combination of wind and tidal mixing. Persistent winds of 30 knots mixed the upper 20-50 m. Strong wind events of 40-60 knots mixed the water column to 100-200 m depth, which over the plateau was often the entire water column. Benthic tidal friction mixed the bottom 30-50 m. Although the water column was unstratified at the two plume sites intensively investigated, tidal velocities were baroclinic, probably due to topographic controls. Tidal advection changed the bottom temperatures by 0.5 °C within 8 hours, more than doubling the prior stratification. Wind mixing quickly homogenized the water column, so the surface often reflected the deeper upwelling and advective events. Although acoustic plumes with bubbles were observed in the water column, there was no evidence of geothermal vents or geothermal influence on temperatures. Mixing by bubbles rising in the water column was indistinguishable from the wind and tidal mixing, although the strongest upward vertical velocities were observed at the sites of these acoustic/bubble plumes.

  11. A further test of relevance of ASEL and CSEL in the determination of the rating sound level for shooting sounds

    NARCIS (Netherlands)

    Vos, J.

    1998-01-01

    In a previous study on the annoyance caused by shooting sounds [Proceedings Internoise '96, Vol. 5, 2231-2236], it was shown that an almost perfect prediction of the annoyance, as rated indoors with the windows closed, was obtained on the basis of the weighted sum of the outdoor A-weighted and

  12. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity

    Science.gov (United States)

    Warlaumont, Anne S.; Finnegan, Megan K.

    2016-01-01

    At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant’s nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model’s frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one’s own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop

  13. Memory for product sounds: the effect of sound and label type.

    Science.gov (United States)

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between auditory, visual, and the semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative for memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinders the memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that the memory performances for product sounds are task-dependent.

  14. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false In Prince William Sound: Prince William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST....1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  15. Interactive Sonification of Spontaneous Movement of Children-Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound.

    Science.gov (United States)

    Frid, Emma; Bresin, Roberto; Alborno, Paolo; Elblaus, Ludvig

    2016-01-01

    In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children's spontaneous movement in terms of energy-, smoothness- and directness-index. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a
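    The abstract names energy- and directness-indices without defining them; the sketch below shows two plausible trajectory descriptors of this kind (hypothetical definitions for illustration, not necessarily those used in the study):

    ```python
    import numpy as np

    def movement_indices(xy, dt):
        """Two simple descriptors of a 2-D head trajectory sampled at
        interval dt:
        - energy: mean squared speed
        - directness: net displacement / total path length (1.0 = straight)
        """
        xy = np.asarray(xy, dtype=float)
        v = np.diff(xy, axis=0) / dt                 # frame-to-frame velocity
        speed = np.linalg.norm(v, axis=1)
        energy = float(np.mean(speed ** 2))
        path = float(np.sum(speed) * dt)             # total path length
        net = float(np.linalg.norm(xy[-1] - xy[0]))  # net displacement
        directness = net / path if path > 0 else 0.0
        return energy, directness
    ```

    A perfectly straight, constant-speed trajectory gives directness 1.0, while meandering movement drives it toward 0.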

  17. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  18. Awareness of Cervical Cancer Causes and Pre-determinants of Likelihood to Screen among Women in Haiti

    Science.gov (United States)

    McCarthy, Schatzi H.; Walmer, Kathy A.; Boggan, Joel C.; Gichane, Margaret W.; Calo, William A.; Beauvais, Harry A.; Brewer, Noel T.

    2017-01-01

    Objectives Cervical cancer is the leading cause of cancer deaths among women in Haiti. Given this high disease burden, we sought to better understand women’s knowledge of its causes and the socio-demographic and health correlates of cervical cancer screening. Methods Participants were 410 adult women presenting at clinics in Léogâne and Port-au-Prince, Haiti. We used bivariate and multivariate logistic regression to identify correlates of Pap smear receipt. Results Only 29% of respondents had heard of human papillomavirus (HPV), while 98% were aware of cervical cancer. Of those aware of cervical cancer, 12% believed sexually transmitted infections (STIs) cause it, and only 4% identified HPV infection as the cause. Women with a previous STI were more likely to have had a Pap smear (34% vs. 71%, OR=3.45; 95% CI: 1.57–7.59). Screening was also more likely among women who were older than age 39, better educated, and employed (all p<.05). Almost all women (97%) were willing to undergo cervical cancer screening. Conclusions This sample of Haitian women had limited awareness of HPV and cervical cancer causes; but when provided with health information, they saw the benefits of cancer screening. Future initiatives should provide health education messages, with efforts targeting young and at-risk women. PMID:27906806

  19. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: First, the filtered responses should generate...... an acoustic separation between the control regions. Secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...

  20. Can road traffic mask sound from wind turbines? Response to wind turbine sound at different levels of road traffic sound

    International Nuclear Information System (INIS)

    Pedersen, Eja; Berg, Frits van den; Bakker, Roel; Bouma, Jelte

    2010-01-01

    Wind turbines are favoured in the switch-over to renewable energy. Suitable sites for further developments could be difficult to find, as the sound emitted from the rotor blades calls for a sufficient distance to residents to avoid negative effects. The aim of this study was to explore whether road traffic sound could mask wind turbine sound or, in contrast, increase annoyance due to wind turbine noise. Annoyance with road traffic and wind turbine noise was measured in the WINDFARMperception survey in the Netherlands in 2007 (n=725) and related to calculated levels of sound. The presence of road traffic sound did not in general decrease annoyance with wind turbine noise, except when levels of wind turbine sound were moderate (35-40 dB(A) Lden) and the road traffic sound level exceeded that level by at least 20 dB(A). Annoyance with both noises was intercorrelated, but this correlation was probably due to the influence of individual factors. Furthermore, visibility of and attitude towards wind turbines were significantly related to noise annoyance of modern wind turbines. The results can be used for the selection of suitable sites, possibly favouring already noise-exposed areas if wind turbine sound levels are sufficiently low.
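    The masking condition reported by the survey can be summarized as a small predicate (a hypothetical helper for illustration; the thresholds are taken directly from the abstract):

    ```python
    def traffic_masks_turbine(turbine_lden, traffic_lden):
        """True when, per the survey, road traffic sound reduced annoyance
        with wind turbine sound: moderate turbine levels (35-40 dB(A) Lden)
        and a traffic level at least 20 dB(A) above the turbine level."""
        moderate_turbine = 35.0 <= turbine_lden <= 40.0
        traffic_dominant = traffic_lden >= turbine_lden + 20.0
        return moderate_turbine and traffic_dominant
    ```

    For example, a turbine level of 37 dB(A) Lden beside a road at 60 dB(A) satisfies the condition, whereas the same turbine beside a 50 dB(A) road does not.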

  1. Adaptive sound speed correction for abdominal ultrasonography: preliminary results

    Science.gov (United States)

    Jin, Sungmin; Kang, Jeeun; Song, Tai-Kyung; Yoo, Yangmo

    2013-03-01

    Ultrasonography plays a critical role in assessing abdominal disorders due to its noninvasive, real-time, low-cost, and deep-penetrating capabilities. However, for imaging obese patients with a thick fat layer, it is challenging to achieve appropriate image quality with a conventional beamforming (CON) method due to phase aberration caused by the difference between sound speeds (e.g., 1580 and 1450 m/s for liver and fat, respectively). For this, various sound speed correction (SSC) methods that estimate the accumulated sound speed for a region of interest (ROI) have been previously proposed. However, with the SSC methods, the improvement in image quality was limited to a specific depth of ROI. In this paper, we present the adaptive sound speed correction (ASSC) method, which can enhance the image quality at all depths by using sound speeds estimated from two different depths in the lower layer. Since these accumulated sound speeds contain the respective contributions of the layers, an optimal sound speed for each depth can be estimated by solving contribution equations. To evaluate the proposed method, a phantom study was conducted with pre-beamformed radio-frequency (RF) data acquired with a SonixTouch research package (Ultrasonix Corp., Canada) with linear and convex probes from a gel pad-stacked tissue-mimicking phantom (Parker Lab. Inc., USA and Model 539, ATS, USA) whose sound speeds are 1610 and 1450 m/s, respectively. In the study, compared to the CON and SSC methods, the ASSC method showed improved spatial resolution and information entropy contrast (IEC) for convex and linear array transducers, respectively. These results indicate that the ASSC method can be applied to enhance image quality when imaging obese patients in abdominal ultrasonography.
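    As a toy version of the contribution equations, assume a two-layer medium in which each accumulated (depth-averaged) speed estimate c̄ᵢ at depth dᵢ implies a travel time tᵢ = dᵢ/c̄ᵢ; the deeper layer's speed then follows from the travel-time difference. This is a hypothetical sketch, since the paper's actual equations are not given in the abstract:

    ```python
    def deeper_layer_speed(d1, c_avg1, d2, c_avg2):
        """Recover the sound speed of the layer between depths d1 and d2
        from two accumulated speed estimates c_avg1 (surface to d1) and
        c_avg2 (surface to d2): extra distance over extra travel time."""
        t1 = d1 / c_avg1
        t2 = d2 / c_avg2
        return (d2 - d1) / (t2 - t1)
    ```

    With a 2 cm fat layer at 1450 m/s over liver at 1580 m/s, the accumulated speed measured at 5 cm depth is about 1525 m/s, and the function recovers the 1580 m/s liver speed exactly from the two accumulated estimates.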

  2. Structure-borne sound structural vibrations and sound radiation at audio frequencies

    CERN Document Server

    Cremer, L; Petersson, Björn AT

    2005-01-01

    "Structure-Borne Sound" is a thorough introduction to structural vibrations with emphasis on audio frequencies and the associated radiation of sound. The book presents in-depth discussions of fundamental principles and basic problems, in order to enable the reader to understand and solve his own problems. It includes chapters dealing with measurement and generation of vibrations and sound, various types of structural wave motion, structural damping and its effects, impedances and vibration responses of the important types of structures, as well as with attenuation of vibrations, and sound radi

  3. Songbirds use pulse tone register in two voices to generate low-frequency sound

    DEFF Research Database (Denmark)

    Jensen, Kenneth Kragh; Cooper, Brenton G.; Larsen, Ole Næsbye

    2007-01-01

    , the syrinx, is unknown. We present the first high-speed video records of the intact syrinx during induced phonation. The syrinx of anaesthetized crows shows a vibration pattern of the labia similar to that of the human vocal fry register. Acoustic pulses result from short opening of the labia, and pulse...... generation alternates between the left and right sound sources. Spontaneously calling crows can also generate similar pulse characteristics with only one sound generator. Airflow recordings in zebra finches and starlings show that pulse tone sounds can be generated unilaterally, synchronously...

  4. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  5. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  6. Sound stream segregation: a neuromorphic approach to solve the "cocktail party problem" in real-time.

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and
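
The correlation-based grouping at the heart of the temporal-coherence principle can be illustrated in a few lines. This is a toy sketch with synthetic envelopes, not the FPGA implementation; the 0.5 threshold and the four-channel layout are illustrative assumptions.

```python
import numpy as np

# Temporal-coherence stream segregation, reduced to its core idea:
# channels whose envelopes are positively correlated with the attended
# channel are assigned to the target stream; the rest form the background.
fs = 200
t = np.arange(0, 2, 1 / fs)
env_a = 1 + np.sin(2 * np.pi * 4 * t)   # source A: 4 Hz envelope modulation
env_b = 1 + np.sin(2 * np.pi * 7 * t)   # source B: 7 Hz envelope modulation

# Four frequency channels: two driven by source A, two by source B
channels = np.vstack([env_a, 0.8 * env_a, env_b, 1.2 * env_b])

def segregate(channels, attended):
    """Return a boolean mask of channels coherent with the attended channel."""
    corr = np.corrcoef(channels)      # pairwise envelope correlations
    return corr[attended] > 0.5       # threshold is an illustrative choice

mask = segregate(channels, attended=0)
print(mask)  # channels 0 and 1 form one stream; 2 and 3 the other
```

Summing the masked channels would reconstruct the target stream, which is the role of the mask-generation and reconstruction stages described in the abstract.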

  7. The influence of neonatal intensive care unit design on sound level.

    Science.gov (United States)

    Chen, Hsin-Li; Chen, Chao-Huei; Wu, Chih-Chao; Huang, Hsiu-Jung; Wang, Teh-Ming; Hsu, Chia-Chi

    2009-12-01

    Excessive noise in nurseries has been found to cause adverse effects in infants, especially preterm infants in neonatal intensive care units (NICUs). The NICU design may influence the background sound level. We compared the sound level in two differently designed spaces in one NICU. We hypothesized that the sound level in an enclosed space would be quieter than in an open space. Sound levels were measured continuously 24 hours a day in two separate spaces at the same time, one enclosed and one open. Sound-level meters were placed near beds in each room. Sound levels were expressed as decibels, A-weighted (dBA) and presented as hourly L(eq), L(max), L(10), and L(90). The hourly L(eq) in the open space (50.8-57.2 dB) was greater than that of the enclosed space (45.9-51.7 dB), with a difference of 0.4-10.4 dB and a mean difference of 4.5 dB (p<0.0001). The hourly L(10), L(90), and L(max) in the open space also exceeded those in the enclosed space (p<0.0001). The sound level measured in the enclosed space was lower than that in the open space. The design of bed space should be taken into consideration when building a new NICU. Besides the design of NICU architecture, continuous monitoring of sound level in the NICU is important to maintain a quiet environment.
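
The hourly metrics reported here are standard energy-averaged and exceedance statistics. As a hedged sketch (the study's exact averaging windows are not given in the abstract, and the sample values below are hypothetical), L(eq) and L(n) can be computed from sampled dBA levels as:

```python
import numpy as np

def leq(levels_db):
    """Equivalent continuous level: energy average of dB samples."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

def exceedance_level(levels_db, percent):
    """L_n: the level exceeded n% of the time (e.g. L10, L90)."""
    return np.percentile(levels_db, 100 - percent)

samples = [50, 50, 50, 60]           # hypothetical dBA samples within one hour
print(round(leq(samples), 1))        # → 55.1: energy average sits nearer loud events
print(exceedance_level(samples, 90)) # → 50.0: L90, the background level
```

Because L(eq) averages energy rather than decibels, a few loud events dominate it, which is why both L(eq) and the exceedance levels L(10)/L(90) are reported in studies like this one.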

  8. Perception of acoustic scale and size in musical instrument sounds.

    Science.gov (United States)

    van Dinther, Ralph; Patterson, Roy D

    2006-10-01

    There is size information in natural sounds. For example, as humans grow in height, their vocal tracts increase in length, producing a predictable decrease in the formant frequencies of speech sounds. Recent studies have shown that listeners can make fine discriminations about which of two speakers has the longer vocal tract, supporting the view that the auditory system discriminates changes on the acoustic-scale dimension. Listeners can also recognize vowels scaled well beyond the range of vocal tracts normally experienced, indicating that perception is robust to changes in acoustic scale. This paper reports two perceptual experiments designed to extend research on acoustic scale and size perception to the domain of musical sounds: The first study shows that listeners can discriminate the scale of musical instrument sounds reliably, although not quite as well as for voices. The second experiment shows that listeners can recognize the family of an instrument sound which has been modified in pitch and scale beyond the range of normal experience. We conclude that processing of acoustic scale in music perception is very similar to processing of acoustic scale in speech perception.
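
The link between vocal tract length and formant frequencies mentioned in this abstract is often approximated with an idealized uniform-tube model. The constants below are textbook assumptions for illustration, not values from this study.

```python
# Idealized acoustic-scale model: the vocal tract as a quarter-wave
# resonator closed at the glottis, so formants scale inversely with length.
C = 350.0  # assumed speed of sound in warm, humid air (m/s)

def formant(n, tract_length_m):
    """n-th resonance (Hz) of a uniform tube closed at one end."""
    return (2 * n - 1) * C / (4 * tract_length_m)

adult = formant(1, 0.17)   # ~17 cm adult vocal tract → F1 near 515 Hz
child = formant(1, 0.11)   # shorter tract → formants shifted upward
print(adult < child)       # → True: growth lowers all formant frequencies
```

This inverse-length scaling is the "acoustic scale" dimension the experiments probe: rescaling all resonances by a common factor changes apparent source size while leaving the sound's family identity intact.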

  9. Interaction of streaming and attention in human auditory cortex.

    Science.gov (United States)

    Gutschalk, Alexander; Rupp, André; Dykstra, Andrew R

    2015-01-01

    Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused to the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.

  10. Sounding the Alert: Designing an Effective Voice for Earthquake Early Warning

    Science.gov (United States)

    Burkett, E. R.; Given, D. D.

    2015-12-01

    The USGS is working with partners to develop the ShakeAlert Earthquake Early Warning (EEW) system (http://pubs.usgs.gov/fs/2014/3083/) to protect life and property along the U.S. West Coast, where the highest national seismic hazard is concentrated. EEW sends an alert that shaking from an earthquake is on its way (in seconds to tens of seconds) to allow recipients or automated systems to take appropriate actions at their location to protect themselves and/or sensitive equipment. ShakeAlert is transitioning toward a production prototype phase in which test users might begin testing applications of the technology. While a subset of uses will be automated (e.g., opening fire house doors), other applications will alert individuals by radio or cellphone notifications and require behavioral decisions to protect themselves (e.g., "Drop, Cover, Hold On"). The project needs to select and move forward with a consistent alert sound to be widely and quickly recognized as an earthquake alert. In this study we combine EEW science and capabilities with an understanding of human behavior from the social and psychological sciences to provide insight toward the design of effective sounds to help best motivate proper action by alert recipients. We present a review of existing research and literature, compiled as considerations and recommendations for alert sound characteristics optimized for EEW. We do not yet address wording of an audible message about the earthquake (e.g., intensity and timing until arrival of shaking or possible actions), although it will be a future component to accompany the sound. We consider pitch(es), loudness, rhythm, tempo, duration, and harmony. Important behavioral responses to sound to take into account include that people respond to discordant sounds with anxiety, can be calmed by harmony and softness, and are innately alerted by loud and abrupt sounds, although levels high enough to be auditory stressors can negatively impact human judgment.

  11. Cognitive Bias for Learning Speech Sounds From a Continuous Signal Space Seems Nonlinguistic

    Directory of Open Access Journals (Sweden)

    Sabine van der Ham

    2015-10-01

    Full Text Available When learning language, humans have a tendency to produce more extreme distributions of speech sounds than those observed most frequently: In rapid, casual speech, vowel sounds are centralized, yet cross-linguistically, peripheral vowels occur almost universally. We investigate whether adults’ generalization behavior reveals selective pressure for communication when they learn skewed distributions of speech-like sounds from a continuous signal space. The domain-specific hypothesis predicts that the emergence of sound categories is driven by a cognitive bias to make these categories maximally distinct, resulting in more skewed distributions in participants’ reproductions. However, our participants showed more centered distributions, which goes against this hypothesis, indicating that there are no strong innate linguistic biases that affect learning these speech-like sounds. The centralization behavior can be explained by a lack of communicative pressure to maintain categories.

  12. Camera Traps Can Be Heard and Seen by Animals

    Science.gov (United States)

    Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used are considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable so it is important to understand why the animals are disturbed. We conducted laboratory based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species. PMID:25354356

  13. Camera traps can be heard and seen by animals.

    Directory of Open Access Journals (Sweden)

    Paul D Meek

    Full Text Available Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used are considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable so it is important to understand why the animals are disturbed. We conducted laboratory based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  14. Equality, Human Dignity and Minorities: A Social Democracy in Construction

    Directory of Open Access Journals (Sweden)

    Jacson Gross

    2015-12-01

    Full Text Available This article deals with equality, human dignity, and the need to build a social democracy. Taking broad concepts of equality as its foundation, it develops its argument through remarks on the dignity of individuals and of minorities, who often go unheard even within democratic settings, since democracy as the voice of the majority can hide the voices and demands of excluded sectors or groups. Minorities such as LGBT people, black people, and residents of the peripheral areas of large cities, among others, do not have their demands heard. From this idea, we seek a concept of social democracy that is broader than merely the voice of the majority.

  15. Calculation and reduction of the sound emissions of overhead power lines

    International Nuclear Information System (INIS)

    Straumann, U.

    2007-01-01

    In this dissertation, Ulrich Straumann of the Swiss Federal Institute of Technology in Zurich, Switzerland, discusses the reduction of sound emissions from overhead power lines. Corona-discharges occur during wet weather or when foggy or icing conditions prevail. Apart from these wide-band crackling noises, low-frequency, tonal emissions also occur. The CONOR (Corona Noise Reduction) project examined these emissions at a frequency of twice the mains frequency and looked for economically feasible solutions to the problems caused by them. The source of these emissions and the mechanisms causing them are discussed. Also, ways of calculating their strength are presented. The effects of varying cable geometry and construction are discussed, as are hydrophilic coatings that could be used to reduce sound emissions.

  16. The 2008 M7.9 Wenchuan earthquake - a human-caused event

    Science.gov (United States)

    Klose, C. D.

    2013-12-01

    A catalog of global human-caused earthquakes shows statistical evidence that the triggering of earthquakes by large-scale geoengineering activities depends on geological and tectonic constraints (in Klose 2013). Such geoengineering activities also include the filling of water reservoirs. This presentation illuminates mechanical and statistical aspects of the 2008 M7.9 Wenchuan earthquake in light of the hypothesis that it was NOT human-caused. However, available data suggest that the Wenchuan earthquake was triggered by the filling of the Zipingpu water reservoir 30 months prior to the mainshock. The reservoir spatially extended parallel and near to the main Beichuan fault zone in a highly stressed reverse fault regime. It is mechanically evident that reverse faults tend to be very trigger-sensitive due to mass shifts (static loads) that occur on the surface of the Earth's crust. These circumstances made a triggering of a seismic event of this magnitude at this location possible (in Klose 2008, 2012). The data show that the Wenchuan earthquake is not an outlier. From a statistical viewpoint, the earthquake falls into the upper range of the family of reverse fault earthquakes that were caused by humans worldwide.

  17. Selective attention to sound location or pitch studied with fMRI.

    Science.gov (United States)

    Degerman, Alexander; Rinne, Teemu; Salmi, Juha; Salonen, Oili; Alho, Kimmo

    2006-03-10

    We used 3-T functional magnetic resonance imaging to compare the brain mechanisms underlying selective attention to sound location and pitch. In different tasks, the subjects (N = 10) attended to a designated sound location or pitch or to pictures presented on the screen. In the Attend Location conditions, the sound location varied randomly (left or right), while the pitch was kept constant (high or low). In the Attend Pitch conditions, sounds of randomly varying pitch (high or low) were presented at a constant location (left or right). Both attention to location and attention to pitch produced enhanced activity (in comparison with activation caused by the same sounds when attention was focused on the pictures) in widespread areas of the superior temporal cortex. Attention to either sound feature also activated prefrontal and inferior parietal cortical regions. These activations were stronger during attention to location than during attention to pitch. Attention to location but not to pitch produced a significant increase of activation in the premotor/supplementary motor cortices of both hemispheres and in the right prefrontal cortex, while no area showed activity specifically related to attention to pitch. The present results suggest some differences in the attentional selection of sounds on the basis of their location and pitch consistent with the suggested auditory "what" and "where" processing streams.

  18. National Oceanic and Atmospheric Administration's Cetacean and Sound Mapping Effort: Continuing Forward with an Integrated Ocean Noise Strategy.

    Science.gov (United States)

    Harrison, Jolie; Ferguson, Megan; Gedamke, Jason; Hatch, Leila; Southall, Brandon; Van Parijs, Sofie

    2016-01-01

    To help manage chronic and cumulative impacts of human activities on marine mammals, the National Oceanic and Atmospheric Administration (NOAA) convened two working groups, the Underwater Sound Field Mapping Working Group (SoundMap) and the Cetacean Density and Distribution Mapping Working Group (CetMap); the overarching effort of both groups, referred to as CetSound, (1) mapped the predicted contribution of human sound sources to ocean noise and (2) provided region/time/species-specific cetacean density and distribution maps. Mapping products were presented at a symposium where future priorities were identified, including institutionalization/integration of the CetSound effort within NOAA-wide goals and programs, creation of forums and mechanisms for external input and funding, and expanded outreach/education. NOAA is subsequently developing an ocean noise strategy to articulate noise conservation goals and further identify science and management actions needed to support them.

  19. How do auditory cortex neurons represent communication sounds?

    Science.gov (United States)

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Sound-proof Sandwich Panel Design via Metamaterial Concept

    Science.gov (United States)

    Sui, Ni

    Sandwich panels consisting of hollow core cells and two face-sheets bonded on both sides have been widely used as lightweight and strong structures in practical engineering applications, but they have poor acoustic performance, especially in the low-frequency regime. Basic sound-proofing methods for sandwich panel design fall into two categories: sound insulation and sound absorption. Motivated by the metamaterial concept, this dissertation presents two sandwich panel designs that incur no weight or size penalty: a lightweight yet sound-proof honeycomb acoustic metamaterial that can be used as the core material of honeycomb sandwich panels to block sound and break the mass law, realizing minimum sound transmission; and a sandwich panel design based on coupled Helmholtz resonators that can achieve perfect sound absorption without sound reflection. Based on the honeycomb sandwich panel, the mechanical properties of the honeycomb core structure were studied first. By incorporating a thin membrane on top of each honeycomb core, the traditional honeycomb core turns into a honeycomb acoustic metamaterial. The basic theory for this kind of membrane-type acoustic metamaterial is demonstrated by a lumped model with an infinite periodic oscillator system, and the negative dynamic effective mass density of the clamped membrane is analyzed under the membrane resonance condition. The evanescent wave mode caused by negative dynamic effective mass density and impedance methods are utilized to interpret the physical phenomenon of honeycomb acoustic metamaterials at resonance. The honeycomb metamaterials can extraordinarily improve low-frequency sound transmission loss below the first resonant frequency of the membrane. The properties of the membrane, the tension of the membrane, and the number of attached membranes affect the sound transmission loss, as observed in numerical simulations and validated by experiments. The sandwich panel which incorporates the honeycomb metamaterial as
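
The "mass law" that the honeycomb metamaterial is said to break is the classical benchmark for panel transmission loss. Below is a hedged sketch of the common field-incidence form; the -47 dB constant is the standard engineering approximation, and the panel parameters are hypothetical.

```python
import math

def mass_law_tl(freq_hz, surface_density_kg_m2):
    """Field-incidence mass law: transmission loss of a limp panel (dB).

    TL ≈ 20·log10(f·m) − 47, with f in Hz and m in kg/m².
    """
    return 20 * math.log10(freq_hz * surface_density_kg_m2) - 47

# Doubling either frequency or surface density buys only ~6 dB, which is
# why lightweight panels perform poorly at low frequency and why
# membrane-type metamaterials that exceed this limit are attractive.
print(round(mass_law_tl(500, 10.0), 1))   # ≈ 27.0 dB
print(round(mass_law_tl(1000, 10.0), 1))  # ≈ 33.0 dB
```

A membrane-type metamaterial core aims to exceed this curve near the membrane resonance without adding the mass the law would otherwise demand.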

  1. 38 CFR 3.312 - Cause of death.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Cause of death. 3.312... Cause of death. (a) General. The death of a veteran will be considered as having been due to a service... contributory cause of death. The issue involved will be determined by exercise of sound judgment, without...

  2. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear, but our uses for light and sound go far beyond simply seeing a photo or hearing a song. Lasers, concentrated beams of light, are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  3. Locked-in syndrome caused by the pressure exerted by the sound gun

    Directory of Open Access Journals (Sweden)

    Ayse Belin Ozer

    2014-01-01

    Full Text Available A 19-year-old male patient who wounded himself with a gun in the cranial region had a Glasgow coma scale of 3E. At posttraumatic day 7, locked-in syndrome was considered upon detection of vertical eye movements, meaningful winks, and quadriplegia. Apart from the classical view, computed tomography (CT) and postmortem examination of the brain showed an infarct area in the cerebellum. However, the vertebrobasilar artery system was normal. In this case report, we would like to present that, unlike cases with ischemia, specific CT findings may not be evident in posttraumatic cases, and ischemia may occur in the cerebellum as a result of the pressure exerted by a sound gun.

  4. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material that reveals the form...... geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form....

  5. Small scale features of sound velocity structure in the northern Arabian sea during February - May 1974

    Digital Repository Service at National Institute of Oceanography (India)

    Somayajulu, Y.K.; Rao, L.V.G.; Varadachari, V.V.R.

    at intermediate depths (200-400 m), influence the sound velocity structure and cause formation of an upper sound channel in the northern Arabian Sea. The Persian Gulf waters spread as tongues at 1 or 2 more levels (up to a limited extent), besides the prominent...

  6. Sex-specific asymmetries in communication sound perception are not related to hand preference in an early primate

    Directory of Open Access Journals (Sweden)

    Scheumann Marina

    2008-01-01

    Full Text Available Abstract Background Left hemispheric dominance of language processing and handedness, previously thought to be unique to humans, is currently under debate. To gain an insight into the origin of lateralization in primates, we have studied gray mouse lemurs, suggested to represent the most ancestral primate condition. We explored potential functional asymmetries on the behavioral level by applying a combined handedness and auditory perception task. For testing handedness, we used a forced food-grasping task. For testing auditory perception, we adapted the head turn paradigm, originally established for exploring hemispheric specializations in conspecific sound processing in Old World monkeys, and exposed 38 subjects to control sounds and conspecific communication sounds of positive and negative emotional valence. Results The tested mouse lemur population did not show an asymmetry in hand preference or in orientation towards conspecific communication sounds. However, males, but not females, exhibited a significant right ear-left hemisphere bias when exposed to conspecific communication sounds of negative emotional valence. Orientation asymmetries were not related to hand preference. Conclusion Our results provide the first evidence for sex-specific asymmetries for conspecific communication sound perception in non-human primates. Furthermore, they suggest that hemispheric dominance for communication sound processing evolved before handedness and independently from each other.

  7. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    During the recording time of lung sound (LS) signals from the chest wall of a subject, there is always heart sound (HS) signal interfering with it. This obscures the features of lung sound signals and creates confusion on pathological states, if any, of the lungs. A novel method based on empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference from the desired lung sound signals. In this, the mixed signal is split into several components. Some of these components contain larger proportions of interfering signals like heart sound, environmental noise etc. and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart sound and lung sound. The proposed method is found to be superior in terms of time domain, frequency domain, and time-frequency domain representations and also in listening test performed by pulmonologist.
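
The sifting step at the core of EMD can be sketched as follows. This is a simplified, illustrative implementation (fixed iteration count, cubic-spline envelopes, synthetic signals), not the authors' exact algorithm; a production system would typically use a dedicated library such as PyEMD.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, n_iter=10):
    """Extract the first IMF of x with a fixed number of sifting passes."""
    h = x.copy()
    idx = np.arange(len(x))
    for _ in range(n_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break  # too few extrema to build spline envelopes
        upper = CubicSpline(maxima, h[maxima])(idx)
        lower = CubicSpline(minima, h[minima])(idx)
        h = h - (upper + lower) / 2  # subtract the local envelope mean
    return h

fs = 1000
t = np.arange(0, 1, 1 / fs)
heart_like = np.sin(2 * np.pi * 3 * t)        # slow, high-energy interference
lung_like = 0.5 * np.sin(2 * np.pi * 40 * t)  # faster component of interest
mixed = heart_like + lung_like

imf1 = sift(mixed)  # the fastest oscillation is extracted first
# The first IMF tracks the 40 Hz component far better than the 3 Hz one:
print(abs(np.corrcoef(imf1, lung_like)[0, 1]) >
      abs(np.corrcoef(imf1, heart_like)[0, 1]))
```

In the paper's setting, the components dominated by heart sound would be discarded and the remainder summed to reconstruct a cleaner lung sound signal.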

  8. Sustained Magnetic Responses in Temporal Cortex Reflect Instantaneous Significance of Approaching and Receding Sounds.

    Directory of Open Access Journals (Sweden)

    Dominik R Bach

    Full Text Available Rising sound intensity often signals an approaching sound source and can serve as a powerful warning cue, eliciting phasic attention, perception biases and emotional responses. How the evaluation of approaching sounds unfolds over time remains elusive. Here, we capitalised on the temporal resolution of magnetoencephalography (MEG) to investigate the dynamic encoding of approaching and receding sounds in humans. We compared magnetic responses to intensity envelopes of complex sounds to those of white noise sounds, in which intensity change is not perceived as approaching. Sustained magnetic fields over temporal sensors tracked intensity change in complex sounds in an approximately linear fashion, an effect not seen for intensity change in white noise sounds, or for overall intensity. Hence, these fields are likely to track approach/recession, but not the apparent (instantaneous) distance of the sound source, or its intensity as such. As a likely source of this activity, the bilateral inferior temporal gyrus and right temporo-parietal junction emerged. Our results indicate that discrete temporal cortical areas parametrically encode behavioural significance in moving sound sources where the signal unfolded in a manner reminiscent of evidence accumulation. This may help an understanding of how acoustic percepts are evaluated as behaviourally relevant, where our results highlight a crucial role of cortical areas.

  9. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  10. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  11. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  12. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  13. A noisy spring: the impact of globally rising underwater sound levels on fish.

    Science.gov (United States)

    Slabbekoorn, Hans; Bouton, Niels; van Opzeeland, Ilse; Coers, Aukje; ten Cate, Carel; Popper, Arthur N

    2010-07-01

    The underwater environment is filled with biotic and abiotic sounds, many of which can be important for the survival and reproduction of fish. Over the last century, human activities in and near the water have increasingly added artificial sounds to this environment. Very loud sounds of relatively short exposure, such as those produced during pile driving, can harm nearby fish. However, more moderate underwater noises of longer duration, such as those produced by vessels, could potentially impact much larger areas, and involve much larger numbers of fish. Here we call attention to the urgent need to study the role of sound in the lives of fish and to develop a better understanding of the ecological impact of anthropogenic noise. Copyright 2010 Elsevier Ltd. All rights reserved.

  14. Spatial distribution of human-caused forest fires in Galicia (NW Spain)

    Science.gov (United States)

    M. L. Chas-Amil; J. Touza; P. Prestemon

    2010-01-01

    It is crucial for fire prevention policies to assess the spatial patterns of human-started fires and their relationship with geographical and socioeconomic aspects. This study uses fire reports for the period 1988-2006 in Galicia, Spain, to analyze the spatial distribution of human-induced fire risk attending to causes and underlying motivations associated with fire...

  15. Perceptions and opinions regarding human papilloma virus vaccination among young women in Malaysia.

    Science.gov (United States)

    Al-Naggar, Redhwan Ahmed; Al-Jashamy, Karim; Chen, Robert

    2010-01-01

    The objective of this study is to explore the perceptions and opinions of young women about human papilloma virus (HPV) vaccination and associated barriers. This qualitative in-depth interview study was conducted in January 2010 with 30 university students from different faculties, i.e.: International Medical School (IMS), Faculty of Health and Life Sciences (FHLS), Faculty of Business Management and Professional Studies (FBMP) and Faculty of Information Sciences and Engineering (FISE) of the Management and Science University (MSU), Shah Alam, Malaysia. After consent was obtained from all participants, the interviewer wrote down the conversations during the interview sessions. The data obtained were classified into various categories and analyzed manually. The majority of participants, 25 (83%), had heard about cervical cancer, while 16 (53.3%) had never heard of HPV. Only five participants (17%) mentioned that HPV is the cause of cervical cancer. Ten participants (33.3%) did not know any causes. The majority, 16 (53.3%), did not know the mode of HPV transmission. The majority of participants, 22 (73.3%), mentioned that they had not been vaccinated against HPV. Out of 22, 16 (53.3%) agreed to be vaccinated in the future to protect themselves from cervical cancer and five (17%) participants mentioned they are not willing because of the uncertain safety of the available vaccines and their side effects. This study showed relatively poor knowledge about HPV and its vaccines, pointing to the urgency of educational campaigns aimed at students in the public and government universities to promote HPV vaccination among this highly eligible population.

  16. Analysis of the HVAC system's sound quality using the design of experiments

    International Nuclear Information System (INIS)

    Park, Sang Gil; Sim, Hyun Jin; Yoon, Ji Hyun; Jeong, Jae Eun; Choi, Byoung Jae; Oh, Jae Eung

    2009-01-01

    Human hearing is very sensitive to sound, so a subjective index of sound quality is required. Each sound-evaluation situation is characterised by Sound Quality (SQ) metrics. When the level of a single frequency band is substituted, the tendency across the whole frequency band cannot be observed during SQ evaluation. In this study, the Design of Experiments (DOE) is used to analyze noise from an automotive Heating, Ventilating, and Air Conditioning (HVAC) system. The frequency domain is divided into 12 equal bands, and the level of each band is increased or decreased based on the 'loud' and 'sharp' SQ attributes analyzed. Using DOE, the number of required experiments is effectively reduced, and the main result identifies, for each band, whether a change (an increase or decrease in sound pressure) or no change has the greatest effect on the identifiable characteristics of SQ in terms of 'loud' and 'sharp' sound. This enables selection of the objective frequency band. Through the results obtained, the sensitivity of SQ to physical level changes in an arbitrary frequency band can be determined.
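
The two-level screening design described in this abstract can be sketched minimally. This is an illustrative stand-in, not the paper's actual design or data: the 16-run design built from a Hadamard matrix, the synthetic "loudness" response, and the dominant band are all assumptions for demonstration.

```python
# Sketch of a two-level DOE screening over 12 frequency bands: each band
# is raised (+1) or lowered (-1) per run, and a 16-run orthogonal design
# replaces the full 2^12 sweep. Response values here are synthetic.
import numpy as np

# Sylvester construction of a 16x16 Hadamard matrix.
H = np.array([[1]])
for _ in range(4):
    H = np.kron(np.array([[1, 1], [1, -1]]), H)

design = H[:, 1:13]              # 16 runs x 12 band settings (+/-1)

true_effects = np.zeros(12)
true_effects[3] = 2.0            # pretend one band dominates loudness
rng = np.random.default_rng(0)
response = design @ true_effects + 0.1 * rng.standard_normal(16)

# Main effect of each band: mean response at +1 minus mean at -1.
effects = np.array([response[design[:, j] == 1].mean()
                    - response[design[:, j] == -1].mean()
                    for j in range(12)])
print("dominant band index:", int(np.argmax(np.abs(effects))))
```

Because the design columns are mutually orthogonal, each band's main effect can be estimated independently from only 16 runs instead of 4096.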

  17. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity...... and initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. A European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs...... 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24...

  18. Sounds of space: listening to the Sun-Earth connection

    Science.gov (United States)

    Craig, N.; Mendez, B.; Luhmann, J.; Sircar, I.

    2003-04-01

    NASA's STEREO/IMPACT Mission includes an Education and Public Outreach component that seeks to offer national programs for broad audiences highlighting the mission's solar and geo-space research. In an effort to make observations of the Sun more accessible and exciting for a general audience, we look for alternative ways to represent the data. Scientists most often represent data visually in images, graphs, and movies. However, any data can also be represented as sound audible to the human ear, a process known as sonification. We will present our plans for an exciting prototype program that converts the science results of solar energetic particle data to sound. We plan to make sounds, imagery, and data available to the public through the World Wide Web where they may create their own sonifications, as well as integrate this effort to a science museum kiosk format. The kiosk station would include information on the STEREO mission and monitors showing images of the Sun from each of STEREO's two satellites. Our goal is to incorporate 3D goggles and a headset into the kiosk, allowing visitors to see the current or archived images in 3D and hear stereo sounds resulting from sonification of the corresponding data. Ultimately, we hope to collaborate with composers and create musical works inspired by these sounds and related solar images.
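
The sonification process described above, mapping data values to audible pitch, can be sketched minimally. The function name, the frequency range, and the tone length below are illustrative assumptions, not the project's actual parameters.

```python
# Sketch of sonification: map each data sample (e.g. a particle flux
# reading) linearly onto a pitch range and synthesize a short sine tone,
# concatenating the tones into an audible sequence.
import numpy as np

def sonify(values, sr=8000, tone_s=0.2, f_lo=220.0, f_hi=880.0):
    """Map values linearly onto [f_lo, f_hi] Hz and render sine tones."""
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / (np.ptp(v) + 1e-12)   # rescale to [0, 1]
    freqs = f_lo + norm * (f_hi - f_lo)
    t = np.arange(int(sr * tone_s)) / sr
    wave = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
    return wave, freqs

wave, freqs = sonify([1.0, 5.0, 3.0])
print(freqs)
```

The resulting sample array can be written to a WAV file or streamed to a kiosk's audio output; stereo sonification would render one such channel per STEREO satellite.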

  19. Remembering that big things sound big: Sound symbolism and associative memory.

    Science.gov (United States)

    Preziosi, Melissa A; Coane, Jennifer H

    2017-01-01

    According to sound symbolism theory, individual sounds or clusters of sounds can convey meaning. To examine the role of sound symbolic effects on processing and memory for nonwords, we developed a novel set of 100 nonwords to convey largeness (nonwords containing plosive consonants and back vowels) and smallness (nonwords containing fricative consonants and front vowels). In Experiments 1A and 1B, participants rated the size of the 100 nonwords and provided definitions to them as if they were products. Nonwords composed of fricative/front vowels were rated as smaller than those composed of plosive/back vowels. In Experiment 2, participants studied sound symbolic congruent and incongruent nonword and participant-generated definition pairings. Definitions paired with nonwords that matched the size and participant-generated meanings were recalled better than those that did not match. When the participant-generated definitions were re-paired with other nonwords, this mnemonic advantage was reduced, although still reliable. In a final free association study, the possibility that plosive/back vowel and fricative/front vowel nonwords elicit sound symbolic size effects due to mediation from word neighbors was ruled out. Together, these results suggest that definitions that are sound symbolically congruent with a nonword are more memorable than incongruent definition-nonword pairings. This work has implications for the creation of brand names and how to create brand names that not only convey desired product characteristics, but also are memorable for consumers.

  20. An Anthropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, for a talk about anthropology of sound, sound studies, musical canons and ideology.

  1. Townet database - Evaluating the ecological health of Puget Sound's pelagic foodweb

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — To evaluate effects of human influence on the health of Puget Sound's pelagic ecosystems, we propose a sampling program across multiple oceanographic basins...

  2. Chrysomya bezziana, the Cause of Myiasis in Animals and Humans: Problems and Control

    Directory of Open Access Journals (Sweden)

    April H Wardhana

    2006-09-01

    Full Text Available Myiasis is an infestation of fly larvae (Diptera) into the living tissue of warm-blooded hosts, including humans. The disease is often found in tropical countries, particularly in communities of low socio-economic status. Of the many flies causing myiasis, Chrysomya bezziana is medically the most important agent because it is an obligate parasite and causes economic losses. Some myiasis cases in humans and animals in Indonesia are caused by C. bezziana larvae alone or by mixed infestation with Sarcophaga sp. Sulawesi, East Sumba, Lombok, Sumbawa, Papua and Java have been reported as myiasis-endemic areas. Myiasis in animals occurs after parturition (vulval myiasis), followed by umbilical myiasis in the calf, or through traumatic wounds, while myiasis in humans is caused by untreated fresh wounds or chronic wounds such as those of leprosy and diabetes. Natural orifices such as the nose, eyes, ears and mouth have also been reported as entry ports for the larvae. Clinical signs of myiasis are varied and non-specific, depending on the infested part of the body: fever, inflammation, pruritus, headache, vertigo, swelling and hypereosinophilia. Secondary bacterial infection can lead to serious complications. Treating myiasis in animals is simpler than in humans; surgical removal is often carried out on infested human tissue. Insecticides have been used to treat animal myiasis but have given rise to resistance. Myiasis in humans may be treated locally or systemically. Broad-spectrum antibiotics, or antibiotics matched to the culture and resistance status of the bacteria, are given for systemic treatment, while chloroform and turpentine in a 1:4 ratio are used for local treatment. Some essential oils have also been tested in the laboratory as alternative medicines for myiasis in both humans and animals.

  3. Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation
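
The temporal-coherence principle described in this abstract (channels whose envelopes are strongly positively correlated belong to the same stream) can be sketched as a toy illustration. This is not the authors' FPGA pipeline; the channel count, modulation rates, and threshold below are assumptions for demonstration.

```python
# Sketch of coherence-based stream segregation: group cochlear-like
# channels by the pairwise correlation of their amplitude envelopes,
# seeded by an "attention" channel.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
target_mod = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # 4 Hz envelope
noise_mod = 0.5 * (1 + np.sin(2 * np.pi * 7 * t))    # 7 Hz envelope

# Six channels: three carry the target envelope, three the background.
envelopes = np.vstack(
    [target_mod + 0.05 * rng.standard_normal(t.size) for _ in range(3)]
    + [noise_mod + 0.05 * rng.standard_normal(t.size) for _ in range(3)]
)

corr = np.corrcoef(envelopes)     # pairwise envelope correlations
attend = 0                        # attention picks a seed channel
mask = corr[attend] > 0.5         # coherent channels join its stream
print("channels in attended stream:", np.nonzero(mask)[0])
```

Channels coherently modulated with the attended channel are selected; in the full system this mask gates the channel outputs before reconstructing the target sound.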

  4. Sound stream segregation: a neuromorphic approach to solve the ‘cocktail party problem’ in real-time

    Directory of Open Access Journals (Sweden)

    Chetan Singh Thakur

    2015-09-01

    Full Text Available The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the 'cocktail party effect'. It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77 and 55 dB for simple tone, complex tone and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for

  5. Volcanism, Iron, and Phytoplankton in the Heard and McDonald Islands Region, Southern Indian Ocean

    Science.gov (United States)

    Coffin, M. F.; Arculus, R. J.; Bowie, A. R.; Chase, Z.; Robertson, R.; Trull, T. W.; Heobi in2016 v01 Shipboard Party, T.

    2016-12-01

    Phytoplankton supply approximately half of the oxygen in Earth's atmosphere, and iron supply limits the growth of phytoplankton in the anemic Southern Ocean. Situated entirely within the Indian Ocean sector of the Southern Ocean are Australia's only active subaerial volcanoes, Heard and McDonald islands (HIMI) on the central Kerguelen Plateau, a large igneous province. Widespread fields of submarine volcanoes, some of which may be active, extend for distances of up to several hundred kilometers from the islands. The predominantly eastward-flowing Antarctic Circumpolar Current sweeps across the central Kerguelen Plateau, and extensive blooms of phytoplankton are observed on the Plateau down-current of HIMI. The goal of RV Investigator voyage IN2016_V01, conducted in January/February 2016, is to test the hypothesis that hydrothermal fluids, which cool active submarine volcanoes in the HIMI region, ascend from the seafloor and fertilise surface waters with iron, thereby enhancing biological productivity beginning with phytoplankton. Significant initial shipboard results include: Documentation, for the first time, of the role of active HIMI and nearby submarine volcanoes in supplying iron to the Southern Ocean. Nearshore waters had elevated dissolved iron levels. Although biomass was not correspondingly elevated, fluorescence induction data indicated highly productive resident phytoplankton. Discovery of >200 acoustic plumes emanating from the seafloor and ascending up to tens of meters into the water column near HIMI. Deep tow camera footage shows bubbles rising from the seafloor in an acoustic plume field north of Heard Island. Mapping 1,000 km2 of uncharted seafloor around HIMI. Submarine volcanic edifices punctuate the adjacent seafloor and yielded iron-rich rocks similar to those found on HIMI. Acoustic plumes emanating from some of these features suggest active seafloor hydrothermal systems.

  6. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound...... from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  7. Performances of Student Activism: Sound, Silence, Gender, and Dis/ability

    Science.gov (United States)

    Pasque, Penny A.; Vargas, Juanita Gamez

    2014-01-01

    This chapter explores the various performances of activism by students through sound, silence, gender, and dis/ability and how these performances connect to social change efforts around issues such as human trafficking, homeless children, hunger, and children with varying abilities.

  8. Using electronic storybooks to support word learning in children with severe language impairments.

    Science.gov (United States)

    Smeets, Daisy J H; van Dijken, Marianne J; Bus, Adriana G

    2014-01-01

    Novel word learning is reported to be problematic for children with severe language impairments (SLI). In this study, we tested electronic storybooks as a tool to support vocabulary acquisition in SLI children. In Experiment 1, 29 kindergarten SLI children heard four e-books each four times: (a) two stories were presented as video books with motion pictures, music, and sounds, and (b) two stories included only static illustrations without music or sounds. Two other stories served as the control condition. Both static and video books were effective in increasing knowledge of unknown words, but static books were most effective. Experiment 2 was designed to examine which elements in video books interfere with word learning: video images or music or sounds. A total of 23 kindergarten SLI children heard 8 storybooks each four times: (a) two static stories without music or sounds, (b) two static stories with music or sounds, (c) two video stories without music or sounds, and (d) two video books with music or sounds. Video images and static illustrations were equally effective, but the presence of music or sounds moderated word learning. In children with severe SLI, background music interfered with learning. Problems with speech perception in noisy conditions may be an underlying factor of SLI and should be considered in selecting teaching aids and learning environments. © Hammill Institute on Disabilities 2012.

  9. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    Full Text Available The relationship between meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages we find commonalities among sound shapes for words referring to same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  10. SOUND LABOR RELATIONS AT ENTERPRISE LEVEL IN THAILAND

    Directory of Open Access Journals (Sweden)

    Vichai Thosuwonchinda

    2016-07-01

    Full Text Available The objective of this research was to study the pattern of sound labor relations in Thailand in order to reduce conflicts between employers and workers and create cooperation. The research was based on a qualitative approach, using in-depth interviews with 10 stakeholder groups of the Thai industrial relations system: employees of non-unionized companies at the shop-floor level, employees of non-unionized companies at the supervisor level, trade union leaders at the company level, trade union leaders at the national level, employers of non-unionized companies, employers' organization leaders, human resource managers, members of tripartite bodies, government officials and labor academics. The findings are presented in a model identifying five characteristics that enhance sound relations in Thailand: recognition between employer and workers, good communication, trust, information disclosure and workers' participation. It is suggested that all parties (employers, workers and the government) should take part in promoting sound labor relations. Employers have to acknowledge the labor union with a positive attitude, communicate well with workers, build trust, disclose information, create a culture of mutual benefit, and sincerely accept a system that includes workers' participation. Workers need a strong labor union and good, sincere representatives to ensure clear communication, trust and mutual benefit, and should seek win-win solutions to conflicts with the employer. The government has a supporting role: adjusting existing laws appropriately, creating policy for sound labor relations, and putting the idea of sound labor relations into practice.

  11. A numerical investigation of the influence of windscreens on measurement of sound intensity

    DEFF Research Database (Denmark)

    Juhl, Peter Møller; Jacobsen, Finn

    2006-01-01

    at low frequencies in strongly reactive sound fields. The theoretical part of this study was based on the assumption of a windscreen of infinite extent. In this paper windscreens of realistic size and shape are dealt with by means of a coupled boundary element model for the windscreen and the surrounding...... air. The error of the estimated intensity caused by the windscreen is calculated under a number of sound field conditions of varying reactivity. It is shown that the resulting error can be much larger than the intensity itself in a very reactive sound field. It is also shown that the shape and size...

  12. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    Full Text Available In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  13. Investigating the 'Ranchlands Hum': successes and frustrations

    Energy Technology Data Exchange (ETDEWEB)

    Patching, Richard [Patching Associates Acoustical Engineering Ltd (Canada)], email: rpatching@patchingassociates.com; Epstein, Marcia [University of Calgary (Canada)], email: epstein@ucalgary.ca

    2011-07-01

    It has been reported that for nearly two years, residents of the Ranchlands neighbourhood in Calgary have been complaining about an intermittent, low frequency humming noise. Researchers decided to investigate this phenomenon as part of their ongoing research on the effects of noise on health. Ethics and respect for the residents' privacy required that a voluntary questionnaire be distributed to Ranchlands residents to determine if the humming noise was real and what its possible sources might be. Sound level measurements were also made in the homes of willing residents. Statistical and technical results showed at least 2 hums around the 40 Hz frequency range. Further analysis is required, however, to exclude tinnitus as a possible reason for the hum heard by some residents. Investigations of the source of the humming sound detected two possible causes: an air conditioning system at a TELUS building, and a water treatment plant. Further studies should specify the correlation between possible causes and the Ranchlands hum.

  15. Blast noise classification with common sound level meter metrics.

    Science.gov (United States)

    Cvengros, Robert M; Valente, Dan; Nykaza, Edward T; Vipperman, Jeffrey S

    2012-08-01

    A common set of signal features measurable by a basic sound level meter are analyzed, and the quality of information carried in subsets of these features are examined for their ability to discriminate military blast and non-blast sounds. The analysis is based on over 120 000 human classified signals compiled from seven different datasets. The study implements linear and Gaussian radial basis function (RBF) support vector machines (SVM) to classify blast sounds. Using the orthogonal centroid dimension reduction technique, intuition is developed about the distribution of blast and non-blast feature vectors in high dimensional space. Recursive feature elimination (SVM-RFE) is then used to eliminate features containing redundant information and rank features according to their ability to separate blasts from non-blasts. Finally, the accuracy of the linear and RBF SVM classifiers is listed for each of the experiments in the dataset, and the weights are given for the linear SVM classifier.
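
The classification pipeline described in this abstract can be sketched with scikit-learn, whose `RFE` wrapper implements SVM-RFE when given a linear SVM. The synthetic data below is a stand-in for the study's 120 000 human-classified signals, and the feature count is an illustrative assumption, not the paper's actual metric set.

```python
# Sketch of SVM-RFE for blast vs. non-blast classification: fit a linear
# SVM, repeatedly drop the feature with the smallest weight magnitude,
# and keep a ranked subset of discriminative features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for sound-level-meter metrics (peak level, SEL, kurtosis, ...).
X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rfe = RFE(estimator=SVC(kernel="linear"), n_features_to_select=4)
rfe.fit(X_tr, y_tr)
print("feature ranking:", rfe.ranking_)       # 1 = selected features
print("test accuracy: %.2f" % rfe.score(X_te, y_te))
```

A Gaussian RBF SVM, as also used in the study, would be trained separately on the selected features, since RFE's weight-based ranking requires a linear kernel.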

  16. 78 FR 19299 - Notice of Inventory Completion: Slater Museum of Natural History, University of Puget Sound...

    Science.gov (United States)

    2013-03-29

    ... DEPARTMENT OF THE INTERIOR National Park Service [NPS-WASO-NAGPRA-12395; PPWOCRADN0-PCU00RP14.R50000] Notice of Inventory Completion: Slater Museum of Natural History, University of Puget Sound... History, University of Puget Sound, has completed an inventory of human remains in consultation with the...

  17. Smartphone-Based Real-time Assessment of Swallowing Ability From the Swallowing Sound

    Science.gov (United States)

    Ueno, Tomoyuki; Teramoto, Yohei; Nakai, Kei; Hidaka, Kikue; Ayuzawa, Satoshi; Eguchi, Kiyoshi; Matsumura, Akira; Suzuki, Kenji

    2015-01-01

    Dysphagia can cause serious challenges to both physical and mental health. Aspiration due to dysphagia is a major health risk that could cause pneumonia and even death. The videofluoroscopic swallow study (VFSS), which is considered the gold standard for the diagnosis of dysphagia, is not widely available, expensive and causes exposure to radiation. The screening tests used for dysphagia need to be carried out by trained staff, and the evaluations are usually non-quantifiable. This paper investigates the development of the Swallowscope, a smartphone-based device and a feasible real-time swallowing sound-processing algorithm for the automatic screening, quantitative evaluation, and the visualisation of swallowing ability. The device can be used during activities of daily life with minimal intervention, making it potentially more capable of capturing aspirations and risky swallow patterns through the continuous monitoring. It also consists of a cloud-based system for the server-side analyzing and automatic sharing of the swallowing sound. The real-time algorithm we developed for the detection of dry and water swallows is based on a template matching approach. We analyzed the wavelet transformation-based spectral characteristics and the temporal characteristics of simultaneous synchronised VFSS and swallowing sound recordings of 25% barium mixed 3-ml water swallows of 70 subjects and the dry or saliva swallowing sound of 15 healthy subjects to establish the parameters of the template. With this algorithm, we achieved an overall detection accuracy of 79.3% (standard error: 4.2%) for the 92 water swallows; and a precision of 83.7% (range: 66.6%–100%) and a recall of 93.9% (range: 72.7%–100%) for the 71 episodes of dry swallows. PMID:27170905
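
The template-matching detection described in this abstract can be sketched as a toy illustration: slide a stored swallow-sound template over the signal and flag offsets where the normalized cross-correlation is high. The template shape, signal, and threshold below are assumptions for demonstration; the paper's templates come from wavelet and VFSS analysis of real recordings.

```python
# Sketch of swallow detection by template matching on a sound envelope:
# compute the normalized cross-correlation of a template at each offset
# and report the best-matching position.
import numpy as np

def norm_xcorr(signal, template):
    """Normalized cross-correlation of the template at each offset."""
    n = len(template)
    tpl = (template - template.mean()) / (template.std() + 1e-12)
    out = np.empty(len(signal) - n + 1)
    for i in range(len(out)):
        win = signal[i:i + n]
        win = (win - win.mean()) / (win.std() + 1e-12)
        out[i] = float(win @ tpl) / n
    return out

template = np.hanning(50)                       # stand-in swallow envelope
signal = np.zeros(500)
signal[200:250] += np.hanning(50)               # embedded "swallow"
signal += 0.05 * np.random.default_rng(1).standard_normal(500)

scores = norm_xcorr(signal, template)
detected = int(np.argmax(scores))
print("swallow detected near sample", detected)
```

In a real-time setting the correlation would be computed incrementally over a sliding buffer, with a threshold on the score deciding whether a swallow event is logged.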

  19. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

    Few studies have dealt with students' preconceptions of sound. The current research employs Reiner et al.'s (2000) substance schema to reveal new insights about students' difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students' thinking, but also examines how students use the schema's properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more globally coherent consistencies among the properties that the students use to explain sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students' conceptions of sound do not seem to fit Reiner et al.'s schema in all respects. Our results also indicate that students' conceptualization of sound lacks internal consistency. Analyzing our results with respect to local and global coherence, we found that students' conception of sound is close to diSessa's "loosely connected, fragmented collection of ideas." The notion that sound is perceived only as a "sort of a material," we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion concerning the implications of the results for instruction.

  20. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

    The question of whether there is a natural connection between sound and meaning, or whether they are related only by convention, has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming more and more clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects such as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441

  1. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice- and sound-specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: a change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.
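The Exp. 1b manipulation, modulating a background sound's intensity envelope along that of the spoken word, can be approximated as follows. This is a sketch assuming a simple moving-RMS envelope; the study's exact envelope-extraction method is not given in the abstract.

```python
import numpy as np

def moving_rms(x, win):
    """Short-term RMS (intensity) envelope computed with a sliding window."""
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(x ** 2, kernel, mode="same"))

def modulate_noise(word, noise, win=256):
    """Scale a background noise by the word's intensity envelope so the two
    signals rise and fall together (the high-integrality condition)."""
    env = moving_rms(word, win)
    env = env / (env.max() + 1e-12)  # normalize so the noise level is preserved
    return noise[:len(word)] * env
```

Removing the `env` scaling (Experiment 2) leaves the noise merely co-occurring with the word rather than integral to it.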

  2. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

    Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for the analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking, with several examples shown. Various aspects of musical sound spectra, such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity, are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis are covered: "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment". Methods for timbre morphing are given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.
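Two of the spectral descriptors named above, spectral centroid and spectral flux, have compact standard formulations (the particular flux variant below, half-wave-rectified magnitude difference, is one common convention and is assumed here rather than taken from the book):

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Amplitude-weighted mean frequency of one analysis frame, in Hz."""
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * spec) / (np.sum(spec) + 1e-12))

def spectral_flux(prev_frame, frame):
    """Positive (half-wave rectified) change in magnitude spectrum
    between two consecutive frames."""
    a = np.abs(np.fft.rfft(prev_frame))
    b = np.abs(np.fft.rfft(frame))
    return float(np.sqrt(np.sum(np.maximum(b - a, 0.0) ** 2)))
```

For a pure tone the centroid sits at the tone's frequency; a brighter sound pushes it upward, which is why the centroid is a common correlate of perceived brightness.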

  3. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to discuss spatial aspects of recorded sound analytically, William Moylan's concept of 'sound stage' is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs ('The Lady in My Life' from 1982 and 'Scream' from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  4. Wind turbines: is there a human health risk?

    Science.gov (United States)

    Roberts, Jennifer D; Roberts, Mark A

    2013-04-01

    The term "Wind Turbine Syndrome" was coined in a recently self-published book, which hypothesized that a multitude of symptoms such as headache and dizziness resulted from wind turbines generating low frequency sound (LFS). The objective of this article is to provide a summary of the peer-reviewed literature on the research that has examined the relationship between human health effects and exposure to LFS and sound generated from the operation of wind turbines. At present, a specific health condition has not been documented in the peer-reviewed literature that has been classified as a disease caused by exposure to sound levels and frequencies generated by the operation of wind turbines. Communities are experiencing a heightened sense of annoyance and fear from the development and siting of wind turbine farms. High-quality research and effective risk communication can advance this course from one of panic to one of understanding and exemplification for other environmental advancements.

  5. Pre-attentive processing of spectrally complex sounds with asynchronous onsets: an event-related potential study with human subjects.

    Science.gov (United States)

    Tervaniemi, M; Schröger, E; Näätänen, R

    1997-05-23

    Neuronal mechanisms involved in the processing of complex sounds with asynchronous onsets were studied in reading subjects. The sound onset asynchrony (SOA) between the leading partial and the remaining complex tone was varied between 0 and 360 ms. Infrequently occurring deviant sounds (in which one out of 10 harmonics differed in pitch relative to the frequently occurring standard sound) elicited the mismatch negativity (MMN), a change-specific cortical event-related potential (ERP) component. This indicates that the pitch of the standard stimuli had been pre-attentively coded by sensory-memory traces. Moreover, when the complex-tone onset fell within the temporal integration window initiated by the leading-partial onset, the deviants elicited the N2b component, indicating that an involuntary attention switch toward the sound change occurred. In summary, the present results support the existence of a pre-perceptual integration mechanism of 100-200 ms duration and emphasize its importance in switching attention toward a stimulus change.

  6. Low frequency sound field control for loudspeakers in rectangular rooms using CABS (Controlled Acoustical Bass System)

    DEFF Research Database (Denmark)

    Nielsen, Sofus Birkedal; Celestinos, Adrian

    2010-01-01

    Rectangular rooms are the most common shape for sound reproduction, but at low frequencies the reflections from the boundaries of the room cause large spatial variations in the sound pressure level. Variations up to 30 dB are normal, not only at the room modes, but basically at all frequencies.... As sound propagates in time, it seems natural that the problems can best be analyzed and solved in the time domain. A time-based room correction system named CABS (Controlled Acoustical Bass System) has been developed for sound reproduction in rectangular listening rooms. It can control the sound...... sound field in the whole room, and short impulse response. In a standard listening room (180 m3) only 4 loudspeakers are needed, 2 more than a traditional stereo setup. CABS is controlled by a developed DSP system. The time-based approach might help with the understanding of sound field control...
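The time-domain idea behind CABS can be illustrated in one dimension. This is a simplified caricature inferred from the abstract, not the authors' DSP code: the assumption here is that rear loudspeakers emit the front signal delayed by the room-traversal time and polarity-inverted, so the wave arriving at the rear wall is cancelled instead of reflected.

```python
import numpy as np

# Assumed parameters for the 1-D sketch (not from the paper):
c = 343.0    # speed of sound, m/s
L = 5.0      # room length, m
sr = 8000    # sample rate, Hz
delay = int(round(L / c * sr))   # front-to-rear travel time, in samples

x = np.zeros(4000)
x[100] = 1.0                     # impulse emitted by the front loudspeaker

# Rear loudspeaker: same signal, delayed by the travel time and inverted.
rear = np.zeros_like(x)
rear[delay:] = -x[:-delay]

# The front wave incident at the rear wall is x delayed by the travel time;
# adding the rear-loudspeaker signal cancels it, so no reflection is launched.
incident_at_wall = np.zeros_like(x)
incident_at_wall[delay:] = x[:-delay]
residual = incident_at_wall + rear
```

In a real room the rear signal also needs level matching and the cancellation only holds below the frequency where the wavefront stops being planar, which is why CABS targets the low-frequency range.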

  7. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. It is an attempt to determine techniques of art analysis for approaches to the study of video games, including the aesthetics of their sounds. The article offers a range of research methods, considering video game scoring as a contemporary creative practice.

  8. Human-Wildlife Conflicts in Nepal: Patterns of Human Fatalities and Injuries Caused by Large Mammals.

    Science.gov (United States)

    Acharya, Krishna Prasad; Paudel, Prakash Kumar; Neupane, Prem Raj; Köhl, Michael

    2016-01-01

    Injury and death from wildlife attacks often result in people feeling violent resentment and hostility against the wildlife involved and, therefore, may undermine public support for conservation. Although Nepal, with rich biodiversity, is doing well in its conservation efforts, human-wildlife conflicts have been a major challenge in recent years. The lack of detailed information on the spatial and temporal patterns of human-wildlife conflicts at the national level impedes the development of effective conflict mitigation plans. We examined patterns of human injury and death caused by large mammals using data from attack events and their spatiotemporal dimensions collected from a national survey of data available in Nepal over five years (2010-2014). Data were analyzed using logistic regression and chi-square or Fisher's exact tests. The results show that Asiatic elephants and common leopards are most commonly involved in attacks on people in terms of attack frequency and fatalities. Although one-horned rhinoceros and bears had a higher frequency of attacks than Bengal tigers, tigers caused more fatalities than each of these two species. Attacks by elephants peaked in winter and most frequently occurred outside protected areas in human settlements. Leopard attacks occurred almost entirely outside protected areas, and a significantly greater number of attacks occurred in human settlements. Attacks by one-horned rhinoceros and tigers were higher in the winter, mainly in forests inside protected areas; similarly, attacks by bears occurred mostly within protected areas. We found that human settlements are increasingly becoming conflict hotspots, with burgeoning incidents involving elephants and leopards. We conclude that species-specific conservation strategies are urgently needed, particularly for leopards and elephants. The implications of our findings for minimizing conflicts and conserving these imperiled species are discussed.

  10. Not all carp are created equal: Impacts of broadband sound on common carp swimming behavior

    Science.gov (United States)

    Murchy, Kelsie; Vetter, Brooke J.; Brey, Marybeth; Amberg, Jon J.; Gaikowski, Mark; Mensinger, Allen F.

    2016-01-01

    Bighead carp (Hypophthalmichthys nobilis), silver carp (H. molitrix) (hereafter: bigheaded carps), and common carp (Cyprinus carpio) are invasive fish causing negative impacts throughout their North American range. To control their movements, non-physical barriers are being developed. Broadband sound (0.06 to 10 kHz) has shown potential as an acoustic deterrent for bigheaded carps, but the response of common carp to broadband sound has not been evaluated. Since common carp are ostariophysians, possessing Weberian ossicles similar to bigheaded carps, it is possible that sound can be used as an acoustic deterrent for all three species. Behavioral responses to a broadband sound were evaluated for common carp in an outdoor concrete pond. Common carp responded a median of 3.0 (1st Q: 1.0, 3rd Q: 6.0) consecutive times to the broadband sound, which was lower than the responses of silver carp and bighead carp to the same stimulus. The current study shows that common carp demonstrate an inconsistent negative phonotaxis response to a broadband sound, and seem to habituate to the sound quickly.

  11. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade an increased attention...... has been paid to, for instance, a category such as ‘sound art’ together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term as musical sound – a recurring example being ‘noise’....

  12. [Study for lung sound acquisition module based on ARM and Linux].

    Science.gov (United States)

    Lu, Qiang; Li, Wenfeng; Zhang, Xixue; Li, Junmin; Liu, Longqing

    2011-07-01

    An acquisition module with ARM and Linux at its core was developed. This paper presents the hardware configuration and the software design. It is shown that the module can extract human lung sounds reliably and effectively.

  13. Correlation Factors Describing Primary and Spatial Sensations of Sound Fields

    Science.gov (United States)

    ANDO, Y.

    2002-11-01

    The theory of subjective preference of the sound field in a concert hall is established based on a model of the human auditory-brain system. The model consists of the autocorrelation function (ACF) mechanism and the interaural crosscorrelation function (IACF) mechanism for signals arriving at the two ear entrances, together with the specialization of the human cerebral hemispheres. This theory can be developed to describe primary sensations such as pitch or missing fundamental, loudness, timbre and, in addition, duration sensation, which is introduced here as a fourth. These four primary sensations may be formulated by the temporal factors extracted from the ACF, associated with the left hemisphere; spatial sensations such as localization in the horizontal plane, apparent source width and subjective diffuseness are described by the spatial factors extracted from the IACF, associated with the right hemisphere. Any important subjective responses of sound fields may be described by both temporal and spatial factors.
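Both correlation mechanisms are straightforward to compute from a binaural recording. A sketch follows, with the IACC defined, as is conventional in this literature, as the maximum of the normalized interaural cross-correlation within ±1 ms of lag (the ±1 ms window is the usual choice, assumed here):

```python
import numpy as np

def normalized_acf(x, max_lag):
    """Normalized autocorrelation of one ear signal (source of the temporal factors)."""
    x = x - x.mean()
    denom = float(np.dot(x, x)) + 1e-12
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

def iacc(left, right, sr, max_ms=1.0):
    """Peak of the interaural cross-correlation within +/-max_ms of lag.
    Returns (IACC, tau), tau being the lag of the maximum in seconds."""
    max_lag = int(sr * max_ms / 1000.0)
    l = left - left.mean()
    r = right - right.mean()
    denom = np.sqrt(np.dot(l, l) * np.dot(r, r)) + 1e-12
    best, best_tau = -1.0, 0
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            v = np.dot(l[:len(l) - k], r[k:]) / denom
        else:
            v = np.dot(l[-k:], r[:len(r) + k]) / denom
        if v > best:
            best, best_tau = v, k
    return best, best_tau / sr
```

A high IACC with a nonzero tau corresponds to a compact, lateralized image; a low IACC corresponds to high subjective diffuseness.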

  14. A RARE CASE OF SINUS OF VALSALVA ANEURYSM PRESENTING WITH TRICUSPID STENOSIS AND RIGHT HEART FAILURE

    Directory of Open Access Journals (Sweden)

    P. V. R. S. Subrahmanya Sarma

    2017-12-01

    PRESENTATION OF CASE: A 48-year-old female patient presented with complaints of dyspnoea on exertion, with no history of orthopnoea or PND attacks. She reported easy fatigability and mild abdominal distension over the past 3 months. On clinical examination, she was moderately built and nourished. There was no pallor, cyanosis, clubbing, lymphadenopathy, oedema or icterus. Family history was not significant. She was conscious and coherent. Vitals were within normal limits, with a BP of 120/76 mmHg. She was found to have an elevated JVP up to the angle of the mandible with a prominent "A" wave. On palpation there were no palpable thrills; on auscultation a normal first heart sound and a normally split second heart sound were heard, with no added sounds or murmurs. The presence of free fluid in the abdomen was confirmed, and hepatomegaly was also noticed. Clinically, she was thought to have right heart failure. Her ECG showed that she was in atrial fibrillation with a controlled ventricular rate.

  15. Beneath sci-fi sound: primer, science fiction sound design, and American independent cinema

    OpenAIRE

    Johnston, Nessa

    2012-01-01

    Primer is a very low budget science-fiction film that deals with the subject of time travel; however, it looks and sounds quite distinctively different from other films associated with the genre. While Hollywood blockbuster sci-fi relies on “sound spectacle” as a key attraction, in contrast Primer sounds “lo-fi” and screen-centred, mixed to two channel stereo rather than the now industry-standard 5.1 surround sound. Although this is partly a consequence of the economics of its production, the...

  16. Experience with diagnosis of root causes of human performance problems in Indian nuclear power plants

    International Nuclear Information System (INIS)

    Bhattacharya, A.S.

    1997-01-01

    A low capacity factor, in any NPP, is a result of a high occurrence rate of significant events. A substantial portion of such occurrences is caused by inappropriate actions due to inadequate human performance. To improve human performance, we first need to evaluate it. This paper describes the essential elements of the first basic step in that context: diagnosis, or identification, of the fundamental causes of human performance problems in Indian NPPs. (author)

  17. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between...... dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity...... is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  18. Validation of an auditory sensory reinforcement paradigm: Campbell's monkeys (Cercopithecus campbelli) do not prefer consonant over dissonant sounds.

    Science.gov (United States)

    Koda, Hiroki; Basile, Muriel; Olivier, Marion; Remeuf, Kevin; Nagumo, Sumiharu; Blois-Heulin, Catherine; Lemasson, Alban

    2013-08-01

    The central position and universality of music in human societies raises the question of its phylogenetic origin. One of the most important properties of music involves harmonic musical intervals, in response to which humans show a spontaneous preference for consonant over dissonant sounds starting from early infancy. Comparative studies conducted with organisms at different levels of the primate lineage are needed to understand the evolutionary scenario under which this phenomenon emerged. Although previous research found no preference for consonance in a New World monkey species, the question remained open for Old World monkeys. We used an experimental paradigm based on a sensory reinforcement procedure to test auditory preferences for consonant sounds in Campbell's monkeys (Cercopithecus campbelli campbelli), an Old World monkey species. Although a systematic preference for soft (70 dB) over loud (90 dB) control white noise was found, Campbell's monkeys showed no preference for either consonant or dissonant sounds. The preference for soft white noise validates our noninvasive experimental paradigm, which can be easily reused in any captive facility to test for auditory preferences. This would suggest that the human preference for consonant sounds is not systematically shared with New and Old World monkeys. The sensitivity for harmonic musical intervals probably emerged very late in the primate lineage.

  19. SOUND-SPEED INVERSION OF THE SUN USING A NONLOCAL STATISTICAL CONVECTION THEORY

    International Nuclear Information System (INIS)

    Zhang Chunguang; Deng Licai; Xiong Darun; Christensen-Dalsgaard, Jørgen

    2012-01-01

    Helioseismic inversions reveal a major discrepancy in sound speed between the Sun and the standard solar model just below the base of the solar convection zone. We demonstrate that this discrepancy is caused by the inherent shortcomings of the local mixing-length theory adopted in the standard solar model. Using a self-consistent nonlocal convection theory, we construct an envelope model of the Sun for sound-speed inversion. Our solar model has a very smooth transition from the convective envelope to the radiative interior, and the convective energy flux changes sign crossing the boundaries of the convection zone. It shows evident improvement over the standard solar model, with a significant reduction in the discrepancy in sound speed between the Sun and local convection models.

  20. Békésy's contributions to our present understanding of sound conduction to the inner ear.

    Science.gov (United States)

    Puria, Sunil; Rosowski, John J

    2012-11-01

    In our daily lives we hear airborne sounds that travel primarily through the external and middle ear to the cochlear sensory epithelium. We also hear sounds that travel to the cochlea via a second sound-conduction route, bone conduction. This second pathway is excited by vibrations of the head and body that result from substrate vibrations, direct application of vibrational stimuli to the head or body, or vibrations induced by airborne sound. The sensation of bone-conducted sound is affected by the presence of the external and middle ear, but is not completely dependent upon their function. Measurements of the differential sensitivity of patients to airborne sound and direct vibration of the head are part of the routine battery of clinical tests used to separate conductive and sensorineural hearing losses. Georg von Békésy designed a careful set of experiments and pioneered many measurement techniques on human cadaver temporal bones, in physical models, and in human subjects to elucidate the basic mechanisms of air- and bone-conducted sound. Looking back one marvels at the sheer number of experiments he performed on sound conduction, mostly by himself without the aid of students or research associates. Békésy's work had a profound impact on the field of middle-ear mechanics and bone conduction fifty years ago when he received his Nobel Prize. Today many of Békésy's ideas continue to be investigated and extended, some have been supported by new evidence, some have been refuted, while others remain to be tested. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

    This article is an analysis of two sound art performances that took place June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors of this article suggest exploring the events through approaching them as 'situations' (Doherty 2009). With this approach it becomes possible to engage... and combine theories from several fields. Aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces: visual, auditory, performative, social, spatial and durational dimensions become...

  2. Fin whale sound reception mechanisms: skull vibration enables low-frequency hearing.

    Directory of Open Access Journals (Sweden)

    Ted W Cranford

    Hearing mechanisms in baleen whales (Mysticeti) are essentially unknown, but their vocalization frequencies overlap with anthropogenic sound sources. Synthetic audiograms were generated for a fin whale by applying finite element modeling tools to X-ray computed tomography (CT) scans. We CT scanned the head of a small fin whale (Balaenoptera physalus) in a scanner designed for solid-fuel rocket motors. Our computer (finite element) modeling toolkit allowed us to visualize what occurs when sounds interact with the anatomic geometry of the whale's head. Simulations reveal two mechanisms that excite both bony ear complexes: (1) the skull-vibration enabled bone conduction mechanism and (2) a pressure mechanism transmitted through soft tissues. Bone conduction is the predominant mechanism. The mass density of the bony ear complexes and their firmly embedded attachments to the skull are universal across the Mysticeti, suggesting that sound reception mechanisms are similar in all baleen whales. Interactions between incident sound waves and the skull cause deformations that induce motion in each bony ear complex, resulting in best hearing sensitivity for low-frequency sounds. This predominant low-frequency sensitivity has significant implications for assessing mysticete exposure levels to anthropogenic sounds. The din of man-made ocean noise has increased steadily over the past half century. Our results provide valuable data for U.S. regulatory agencies and concerned large-scale industrial users of the ocean environment. This study transforms our understanding of baleen whale hearing and provides a means to predict auditory sensitivity across a broad spectrum of sound frequencies.

  3. Between Precautionary Principle and "Sound Science": Distributing the Burdens of Proof

    NARCIS (Netherlands)

    Belt, v.d. H.; Gremmen, B.

    2002-01-01

    Opponents of biotechnology often invoke the Precautionary Principle to advance their cause, whereas biotech enthusiasts prefer to appeal to "sound science." Public authorities are still groping for a useful definition. A crucial issue in this debate is the distribution of the burden of proof among the

  4. Snoring classified: The Munich-Passau Snore Sound Corpus.

    Science.gov (United States)

    Janott, Christoph; Schmitt, Maximilian; Zhang, Yue; Qian, Kun; Pandit, Vedhas; Zhang, Zixing; Heiser, Clemens; Hohenhorst, Winfried; Herzog, Michael; Hemmert, Werner; Schuller, Björn

    2018-03-01

    Snoring can be excited in different locations within the upper airways during sleep. It was hypothesised that the excitation locations are correlated with distinct acoustic characteristics of the snoring noise. To verify this hypothesis, a database of snore sounds was developed, labelled with the location of sound excitation. Video and audio recordings taken during drug-induced sleep endoscopy (DISE) examinations at three medical centres were semi-automatically screened for snore events, which were subsequently classified by ENT experts into four classes based on the VOTE classification. The resulting dataset, containing 828 snore events from 219 subjects, was split into Train, Development, and Test sets. An SVM classifier was trained using low-level descriptors (LLDs) related to energy, spectral features, mel-frequency cepstral coefficients (MFCC), formants, voicing, harmonic-to-noise ratio (HNR), spectral harmonicity, pitch, and microprosodic features. An unweighted average recall (UAR) of 55.8% could be achieved using the full set of LLDs including formants. The best-performing subset is the MFCC-related set of LLDs. A strong difference in performance was observed between the permutations of the train, development, and test partitions, which may be caused by the relatively low number of subjects included in the smaller classes of the strongly unbalanced data set. A database of snoring sounds is presented, classified according to the sound excitation location based on objective criteria and verifiable video material. With the database, it could be demonstrated that machine classifiers can distinguish different excitation locations of snoring sounds in the upper airway based on acoustic parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
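The evaluation metric named above, unweighted average recall (UAR), weights each of the four VOTE classes equally regardless of how many samples it has, which matters for this strongly unbalanced corpus. A minimal sketch; the toy labels and predictions below are illustrative, not taken from the dataset:

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """UAR: the mean of per-class recalls, so small classes
    count as much as large ones."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# toy example with the four VOTE classes (V, O, T, E)
y_true = ["V", "V", "O", "T", "E", "E"]
y_pred = ["V", "O", "O", "T", "E", "V"]
print(round(unweighted_average_recall(y_true, y_pred), 3))  # → 0.75
```

Note that plain accuracy on the same toy data is 4/6 ≈ 0.667; UAR differs because each class recall is averaged without weighting.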

  5. Analysis of sound data streamed over the network

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2013-01-01

    Full Text Available In this paper we inspect the difference between an original sound recording and the signal captured after streaming the original recording over a network loaded with heavy traffic. Several kinds of failures occur in the captured recording, caused by network congestion. We try to find a method to evaluate the correctness of streamed audio. Usually there are metrics based on human perception of a signal, such as "the signal is clear, without audible failures", "the signal has some failures but is understandable", or "the signal is inarticulate". These approaches need to be statistically evaluated on a broad set of respondents, which is time and resource consuming. We instead propose metrics based on signal properties that allow us to compare the original and captured recordings. In this paper we use the algorithm called Dynamic Time Warping (Müller, 2007), commonly used for time series comparison. Some other time series exploration approaches can be found in (Fejfar, 2011) and (Fejfar, 2012). The data was acquired in our network laboratory, simulating network traffic by downloading files and streaming audio and video simultaneously. Our former experiment inspected Quality of Service (QoS) and its impact on failures of a received audio data stream. This experiment focuses on the comparison of the sound recordings rather than the network mechanism. We focus, in this paper, on a real-time audio stream such as a telephone call, where it is not possible to stream audio in advance to a "pool"; instead it is necessary to achieve as small a delay as possible (between speaker voice recording and listener voice replay). We use the RTP protocol for streaming audio.
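The Dynamic Time Warping comparison the authors cite (Müller, 2007) can be sketched in a few lines. This is the textbook dynamic-programming formulation on toy 1-D sequences, illustrative only and not the paper's actual pipeline:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences:
    the minimum cumulative |a_i - b_j| cost over all monotone
    alignments of the two time series."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# the second sequence is the first with one sample repeated,
# as a streamed copy with a stutter might be; DTW warps it to cost 0
print(dtw_distance([0, 1, 2, 1, 0], [0, 1, 1, 2, 1, 0]))  # → 0.0
```

This tolerance to local time stretching is what makes DTW attractive for comparing an original recording with a congestion-distorted stream, where samples may be delayed or repeated rather than altered.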

  6. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.

  7. Fluid Sounds

    DEFF Research Database (Denmark)

    Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970s, developed into various established research and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field, embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation...

  8. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.
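The "four-channel vocoder" degradation referred to above splits a signal into frequency bands whose envelopes then modulate band-limited noise. As a hedged sketch of just the band layout, assuming logarithmic spacing over a 200-7000 Hz analysis range (the abstract does not specify the corner frequencies; both are illustrative assumptions):

```python
import math

def vocoder_band_edges(n_channels=4, lo=200.0, hi=7000.0):
    """Logarithmically spaced band edges for an n-channel noise
    vocoder: each adjacent pair of edges defines one analysis band
    whose envelope would modulate band-limited noise."""
    ratio = (hi / lo) ** (1.0 / n_channels)
    return [lo * ratio ** k for k in range(n_channels + 1)]

edges = vocoder_band_edges()
print([round(e) for e in edges])  # → [200, 486, 1183, 2878, 7000]
```

With only four bands, spectral detail within each roughly octave-and-a-half-wide band is discarded, which is what makes the stimuli "spectrally degraded" while leaving temporal envelopes largely intact.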

  9. Acoustic resonators for the reduction of sound radiation and transmission

    NARCIS (Netherlands)

    Hannink, M.H.C.

    2007-01-01

    Noise is a frequently encountered problem in modern society. One of the environments where the presence of noise causes a deterioration in people’s comfort is in aircraft cabins. For modern aircraft flying at cruise conditions, the main sound source is the turbulent boundary layer around the

  10. The Sound of Data (a gentle introduction to sonification for historians

    Directory of Open Access Journals (Sweden)

    Shawn Graham

    2016-06-01

    Full Text Available ποίησις - fabrication, creation, production. I am too tired of seeing the past. There are any number of guides that will help you visualize that past which cannot be seen, but often we forget what a creative act visualization is. We are perhaps too tied to our screens, too much invested in 'seeing'. Let me hear something of the past instead. While there is a deep history and literature on archaeoacoustics and soundscapes that try to capture the sound of a place as it was (see for instance the Virtual St. Paul's or the work of Jeff Veitch on ancient Ostia), I am interested instead to 'sonify' what I have right now, the data themselves. I want to figure out a grammar for representing data in sound that is appropriate for history. Drucker famously reminds us that 'data' are not really things given, but rather things captured, things transformed: that is to say, 'capta'. In sonifying data, I literally perform the past in the present, and so the assumptions, the transformations, I make are foregrounded. The resulting aural experience is a literal 'deformance' (a portmanteau of 'deform' and 'perform') that makes us hear modern layers of the past in a new way. I want to hear the meaning of the past, but I know that I can't. Nevertheless, when I hear an instrument, I can imagine the physicality of the player playing it; in its echoes and resonances I can discern the physical space. I can feel the bass; I can move to the rhythm. The music engages my whole body, my whole imagination. Its associations with sounds, music, and tones I've heard before create a deep temporal experience, a system of embodied relationships between myself and the past. Visual? We have had visual representations of the past for so long, we have almost forgotten the artistic and performative aspect of those grammars of expression. In this tutorial, you will learn to make some noise from your data about the past. The meaning of that noise, well
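A minimal parameter-mapping sonification in the tutorial's spirit can be sketched as follows; the C-major MIDI pitch set and the linear scaling rule are illustrative choices, not the tutorial's exact recipe:

```python
def sonify(values, scale=(60, 62, 64, 65, 67, 69, 71, 72)):
    """Map a data series onto MIDI note numbers (here a C-major
    scale from middle C): each datum is linearly rescaled to an
    index into the pitch set, so higher values sound higher."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat series
    notes = []
    for v in values:
        idx = int((v - lo) / span * (len(scale) - 1))
        notes.append(scale[idx])
    return notes

# e.g. yearly counts from some historical dataset
print(sonify([3, 7, 1, 9, 5]))  # → [62, 69, 60, 72, 65]
```

Every choice here, the scale, the range, the quantization, is one of the "capta"-shaping transformations the author asks us to foreground: a minor scale or a wider register would make the same data tell a different story.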

  11. Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration

    OpenAIRE

    Park, Saebyul; Ban, Seonghoon; Hong, Dae Ryong; Yeo, Woon Seung

    2013-01-01

    SSN (Sound Surfing Network) is a performance system that provides a new musical experience by incorporating mobile phone-based spatial sound control into collaborative music performance. SSN enables both the performer and the audience to manipulate the spatial distribution of sound using the smartphones of the audience as a distributed speaker system. Proposing a new perspective on the social aspect of music appreciation, SSN will provide a new possibility for mobile music performances in the context of in...

  12. Do high sound pressure levels of crowing in roosters necessitate passive mechanisms for protection against self-vocalization?

    Science.gov (United States)

    Claes, Raf; Muyshondt, Pieter G G; Dirckx, Joris J J; Aerts, Peter

    2018-02-01

    High sound pressure levels (>120 dB) cause damage to or death of the hair cells of the inner ear, hence causing hearing loss. Vocalization differences are present between hens and roosters. Crowing in roosters is reported to produce sound pressure levels of 100 dB measured at a distance of 1 m. In this study we measured the sound pressure levels that exist at the entrance of the outer ear canal. We hypothesize that roosters may benefit from a passive protective mechanism, while hens do not require such a mechanism. Audio recordings at the level of the entrance of the outer ear canal of crowing roosters, made in this study, indeed show that a protective mechanism is needed, as sound pressure levels can reach amplitudes of 142.3 dB. Audio recordings made at varying distances from the crowing rooster show that at a distance of 0.5 m sound pressure levels already drop to 102 dB. Micro-CT scans of a rooster and a chicken head show that in roosters the auditory canal closes when the beak is opened. In hens the diameter of the auditory canal only narrows but does not close completely. A morphological difference between the sexes in the shape of a bursa-like slit in the outer ear canal causes the outer ear canal to close in roosters but not in hens. Copyright © 2017 Elsevier GmbH. All rights reserved.

  13. Dynamics of unstable sound waves in a non-equilibrium medium at the nonlinear stage

    Science.gov (United States)

    Khrapov, Sergey; Khoperskov, Alexander

    2018-03-01

    A new dispersion equation is obtained for a non-equilibrium medium using an exponential relaxation model of a vibrationally excited gas. We have investigated how the pump source and the heat removal depend on the thermodynamic parameters of the medium. The boundaries of the stability regions of sound waves in a non-equilibrium gas have been determined. The nonlinear stage of sound wave instability development in a vibrationally excited gas has been investigated with CSPH-TVD and MUSCL numerical schemes using the parallel technologies OpenMP-CUDA. We have obtained good agreement between the numerical simulation results and the linear perturbation dynamics at the initial stage of the sound wave growth caused by the instability. At the nonlinear stage, the sound wave amplitude reaches a maximum value, which leads to the formation of a system of shock waves.

  14. Acute prurigo simplex in humans caused by pigeon lice.

    Science.gov (United States)

    Stolf, Hamilton Ometto; Reis, Rejane d'Ávila; Espósito, Ana Cláudia Cavalcante; Haddad Júnior, Vidal

    2018-03-01

    Pigeon lice are insects that feed on feathers of these birds; their life cycle includes egg, nymph and adult and they may cause dermatoses in humans. Four persons of the same family, living in an urban area, presented with widespread intensely pruritic erythematous papules. A great number of lice were seen in their house, which moved from a nest of pigeons located on the condenser of the air-conditioning to the dormitory of one of the patients. Even in urban environments, dermatitis caused by parasites of birds is a possibility in cases of acute prurigo simplex. Pigeon lice are possible etiological agents of this kind of skin eruption, although they are often neglected, even by dermatologists.

  15. Sound Exposure of Symphony Orchestra Musicians

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Pedersen, Ellen Raben; Juhl, Peter Møller

    2011-01-01

    Background: Assessment of sound exposure by noise dosimetry can be challenging, especially when measuring the exposure of classical orchestra musicians, where sound originates from many different instruments. A new measurement method of bilateral sound exposure of classical musicians was developed and used to characterize sound exposure of the left and right ear simultaneously in two different symphony orchestras. Objectives: To measure binaural sound exposure of professional classical musicians and to identify possible exposure risk factors for specific musicians. Methods: Sound exposure was measured... Results: Musicians were exposed up to an LAeq8h of 92 dB, and a majority of musicians were exposed to sound levels exceeding ... dBA; their left ear was exposed 4.6 dB more than the right ear. Percussionists were exposed to high sound peaks (>115 dBC), but less continuous sound exposure was observed in this group.
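The LAeq8h figure quoted above is an 8-hour equivalent level: a measured LAeq over a shorter duration is normalized with the standard equal-energy rule LAeq,8h = LAeq,T + 10·log10(T/8 h). A sketch (the 3-hour rehearsal at 95 dB(A) is a hypothetical example, not a value from the study):

```python
import math

def laeq_8h(laeq_measured_db, exposure_hours):
    """Normalize a measured LAeq over `exposure_hours` to its
    8-hour equivalent: same acoustic energy spread over 8 h, so a
    shorter exposure at the same level gives a lower LAeq,8h."""
    return laeq_measured_db + 10.0 * math.log10(exposure_hours / 8.0)

# e.g. a 3-hour rehearsal at 95 dB(A)
print(round(laeq_8h(95.0, 3.0), 1))  # → 90.7
```

The rule also shows why orchestral exposure accumulates quickly: every doubling of playing time at the same level adds about 3 dB to the 8-hour equivalent.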

  16. Phonocardiography with a Smartphone

    Science.gov (United States)

    Thoms, Lars-Jochen; Colicchia, Giuseppe; Girwidz, Raimund

    2017-01-01

    When a stethoscope is placed on the chest over the heart, sounds coming from the heart can be directly heard. These sound vibrations can be captured through a microphone and the electrical signals from the transducer can be processed and plotted in a phonocardiogram. Students can easily use a microphone and smartphone to capture and analyse…
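One quantity students might extract from such a phonocardiogram is the heart rate, obtained by timing the S1 peaks in the signal envelope. A classroom-level sketch using simple threshold crossing; the synthetic pulse train and 100 Hz envelope sampling rate below are illustrative assumptions, not from the article, and a real recording would first be band-pass filtered:

```python
def heart_rate_bpm(samples, fs, threshold=0.5):
    """Estimate heart rate from a phonocardiogram envelope by
    detecting rising threshold crossings (taken as beat onsets)
    and averaging the inter-beat intervals."""
    beats = []
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            beats.append(i / fs)  # beat time in seconds
    if len(beats) < 2:
        return 0.0
    intervals = [b - a for a, b in zip(beats, beats[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# synthetic envelope: one sharp pulse every 0.8 s, i.e. 75 bpm
fs = 100
sig = [1.0 if (i % 80) == 10 else 0.0 for i in range(400)]
print(round(heart_rate_bpm(sig, fs)))  # → 75
```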

  17. Computerized Hammer Sounding Interpretation for Concrete Assessment with Online Machine Learning.

    Science.gov (United States)

    Ye, Jiaxing; Kobayashi, Takumi; Iwata, Masaya; Tsuda, Hiroshi; Murakawa, Masahiro

    2018-03-09

    Developing efficient Artificial Intelligence (AI)-enabled systems to substitute the human role in non-destructive testing is an emerging topic of considerable interest. In this study, we propose a novel hammering response analysis system using online machine learning, which aims at achieving near-human performance in assessment of concrete structures. Current computerized hammer sounding systems commonly employ lab-scale data to validate the models. In practice, however, the response signal patterns can be far more complicated due to varying geometric shapes and materials of structures. To deal with a large variety of unseen data, we propose a sequential treatment for response characterization. More specifically, the proposed system can adaptively update itself to approach human performance in hammering sounding data interpretation. To this end, a two-stage framework has been introduced, including feature extraction and the model updating scheme. Various state-of-the-art online learning algorithms have been reviewed and evaluated for the task. To conduct experimental validation, we collected 10,940 response instances from multiple inspection sites; each sample was annotated by human experts with healthy/defective condition labels. The results demonstrated that the proposed scheme achieved favorable assessment accuracy with high efficiency and low computation load.
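The "model updating scheme" described above is an online learner that incorporates each newly annotated hammering response as it arrives. As an illustrative stand-in for the algorithms the study evaluates (not their actual system), a mistake-driven perceptron on a toy two-feature stream:

```python
class OnlinePerceptron:
    """Minimal online linear classifier: each labelled sample can
    update the model immediately, so the system adapts to unseen
    response patterns without retraining from scratch."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if s >= 0 else -1  # 1 = healthy, -1 = defective

    def update(self, x, y):
        # mistake-driven rule: adjust only when the prediction is wrong
        if self.predict(x) != y:
            self.w = [wi + self.lr * y * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * y

# linearly separable toy stream: label follows the sign of feature 0
stream = [([1.0, 0.2], 1), ([-1.0, 0.1], -1),
          ([0.8, -0.5], 1), ([-0.7, 0.3], -1)]
model = OnlinePerceptron(n_features=2)
for _ in range(5):  # replay the stream a few times
    for x, y in stream:
        model.update(x, y)
print([model.predict(x) for x, _ in stream])  # → [1, -1, 1, -1]
```

The sequential treatment matters because, as the abstract notes, field responses vary with geometry and material; an online learner keeps approaching expert labelling without a costly batch retrain on all 10,940 instances.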

  18. Environmentally sound management of hazardous waste and hazardous recyclable materials

    International Nuclear Information System (INIS)

    Smyth, T.

    2002-01-01

    Environmentally sound management or ESM has been defined under the Basel Convention as 'taking all practicable steps to ensure that hazardous wastes and other wastes are managed in a manner which will protect human health and the environment against the adverse effects which may result from such wastes'. An initiative is underway to develop and implement a Canadian Environmentally Sound Management (ESM) regime for both hazardous wastes and hazardous recyclable materials. This ESM regime aims to assure equivalent minimum environmental protection across Canada while respecting regional differences. Cooperation and coordination between the federal government, provinces and territories is essential to the development and implementation of ESM systems since waste management is a shared jurisdiction in Canada. Federally, CEPA 1999 provides an opportunity to improve Environment Canada's ability to ensure that all exports and imports are managed in an environmentally sound manner. CEPA 1999 enabled Environment Canada to establish criteria for environmentally sound management (ESM) that can be applied by importers and exporters in seeking to ensure that wastes and recyclable materials they import or export will be treated in an environmentally sound manner. The ESM regime would include the development of ESM principles, criteria and guidelines relevant to Canada and a procedure for evaluating ESM. It would be developed in full consultation with stakeholders. The timeline for the development and implementation of the ESM regime is anticipated by about 2006. (author)

  19. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    Science.gov (United States)

    Wolf, Gail Marie

    2016-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…

  20. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

    The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using both physical and digital models at three distinct scales. The results suggest hyperboloid geometry, while difficult to fabricate, facilitates sound scattering.

  1. Comparative ecology of capsular Exophiala species causing disseminated infection in humans

    NARCIS (Netherlands)

    Song, Y. (Yinggai); Laureijssen-van de Sande, W.W.J. (Wendy W.J.); Moreno, L.F. (Leandro F.); van den Ende, B.G. (Bert Gerrits); Li, R. (Ruoyu); S. de Hoog (Sybren)

    2017-01-01

    Exophiala spinifera and Exophiala dermatitidis (Fungi: Chaetothyriales) are black yeast agents potentially causing disseminated infection in apparently healthy humans. They are the only Exophiala species producing extracellular polysaccharides around yeast cells. In order to gain

  2. Early age conductive hearing loss causes audiogenic seizure and hyperacusis behavior.

    Science.gov (United States)

    Sun, Wei; Manohar, Senthilvelan; Jayaram, Aditi; Kumaraguru, Anand; Fu, Qiang; Li, Ji; Allman, Brian

    2011-12-01

    Recent clinical reports found a high incidence of recurrent otitis media in children suffering from hyperacusis, a marked intolerance to an otherwise ordinary environmental sound. However, it is unclear whether the conductive hearing loss caused by otitis media at an early age will affect sound tolerance later in life. Thus, we have tested the effects of tympanic membrane (TM) damage at an early age on the development of sound perception in rats. Two weeks after the TM perforation, more than 80% of the rats showed audiogenic seizure (AGS) when exposed to loud sound (120 dB SPL white noise), even after the hearing loss had recovered. The TM-damaged rats also showed significantly enhanced acoustic startle responses compared to rats without TM damage. These results suggest that early age conductive hearing loss may cause impaired sound tolerance during development. In addition, the AGS can be suppressed by treatment with vigabatrin, an antiepileptic drug that inhibits the catabolism of GABA, given as acute injections (250 mg/kg) or oral intake (60 mg/kg/day for 7 days). c-Fos staining showed strong labelling in the inferior colliculus (IC) of the TM-damaged rats, but not the control rats, after exposure to loud sound, indicating hyper-excitability in the IC during AGS. These results indicate that early age conductive hearing loss can impair sound tolerance by reducing GABA inhibition in the IC, which may be related to the hyperacusis seen in children with otitis media. Published by Elsevier B.V.

  3. 77 FR 37318 - Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort...

    Science.gov (United States)

    2012-06-21

    ...-AA00 Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort... Coast Guard will enforce a Safety Zone for the Sound of Independence event in the Santa Rosa Sound, Fort... during the Sound of Independence. During the enforcement period, entry into, transiting or anchoring in...

  4. Crowing Sound Analysis of Gaga' Chicken; Local Chicken from South Sulawesi Indonesia

    OpenAIRE

    Aprilita Bugiwati, Sri Rachma; Ashari, Fachri

    2008-01-01

    Gaga' chicken is known as a local chicken of South Sulawesi, Indonesia, which has a unique, specific and distinctive crowing sound, especially at the ending of the crow, which resembles the character of human laughter, compared with the other types of singing chicken in the world. 287 Gaga' chickens in 3 districts at the centre of the breed's habitat were separated into 2 groups (163 birds of the Dangdut type and 124 birds of the Slow type) based on the speed...

  5. In situ mortality experiments with juvenile sea bass (Dicentrarchus labrax) in relation to impulsive sound levels caused by pile driving of windmill foundations.

    Directory of Open Access Journals (Sweden)

    Elisabeth Debusschere

    Full Text Available Impact assessments of offshore wind farm installations and operations on the marine fauna are performed in many countries. Yet, only limited quantitative data on the physiological impact of impulsive sounds on (juvenile) fishes during pile driving of offshore wind farm foundations are available. Our current knowledge on fish injury and mortality due to pile driving is mainly based on laboratory experiments, in which high-intensity pile driving sounds are generated inside acoustic chambers. To validate these lab results, an in situ field experiment was carried out on board a pile driving vessel. Juvenile European sea bass (Dicentrarchus labrax) of 68 and 115 days post hatching were exposed to pile-driving sounds as close as 45 m from the actual pile driving activity. Fish were exposed to strikes with a sound exposure level between 181 and 188 dB re 1 µPa².s. The number of strikes ranged from 1739 to 3067, resulting in a cumulative sound exposure level between 215 and 222 dB re 1 µPa².s. Control treatments consisted of fish not exposed to pile driving sounds. No differences in immediate mortality were found between exposed and control fish groups. Also, no differences were noted in the delayed mortality up to 14 days after exposure between both groups. Our in situ experiments largely confirm the mortality results of the lab experiments found in other studies.
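The cumulative levels reported above follow from equal-energy summation of the per-strike sound exposure levels, SEL_cum = SEL_single + 10·log10(n). Plugging in the abstract's extreme values reproduces its 215-222 dB re 1 µPa².s range (pairing the lowest per-strike level with the most strikes and vice versa is an illustrative choice):

```python
import math

def cumulative_sel(sel_single_db, n_strikes):
    """Cumulative sound exposure level for n strikes of equal
    single-strike SEL: energies add, so levels grow by
    10*log10(n) dB."""
    return sel_single_db + 10.0 * math.log10(n_strikes)

# per-strike SELs of 181-188 dB and 1739-3067 strikes, as reported
print(round(cumulative_sel(181.0, 3067), 1))  # → 215.9
print(round(cumulative_sel(188.0, 1739), 1))  # → 220.4
```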

  6. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang [Korea Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Dai Jin [Pohang University of Science and Technology, Pohang (Korea, Republic of)

    2011-07-15

    Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.
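The sound-localization half of such a fusion pipeline typically starts from the time delay between two microphones. A hedged sketch using brute-force cross-correlation on toy integer signals; production systems would instead use methods like GCC-PHAT on real array audio, and the signals below are illustrative:

```python
def tdoa_samples(left, right):
    """Estimate the inter-microphone delay (in samples) as the lag
    that maximises the cross-correlation of the two channels; the
    delay maps to a direction of arrival given mic spacing."""
    n = len(left)
    best_lag, best_score = 0, float("-inf")
    for d in range(-n + 1, n):
        score = sum(left[i] * right[i + d]
                    for i in range(n) if 0 <= i + d < n)
        if score > best_score:
            best_lag, best_score = d, score
    return best_lag

# the right channel is the left channel delayed by 3 samples,
# i.e. the sound reached the left microphone first
left = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
right = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]
print(tdoa_samples(left, right))  # → 3
```

The visual half of the fusion (face detection) then disambiguates and refines this coarse auditory bearing, which is why the two modalities are described as complementary.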

  7. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    International Nuclear Information System (INIS)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang; Kim, Dai Jin

    2011-01-01

    Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.

  8. Analysis of adventitious lung sounds originating from pulmonary tuberculosis.

    Science.gov (United States)

    Becker, K W; Scheffer, C; Blanckenberg, M M; Diacon, A H

    2013-01-01

    Tuberculosis is a common and potentially deadly infectious disease, usually affecting the respiratory system and causing the sound properties of symptomatic infected lungs to differ from non-infected lungs. Auscultation is often ruled out as a reliable diagnostic technique for TB due to the random distribution of the infection and the varying severity of damage to the lungs. However, advancements in signal processing techniques for respiratory sounds can improve the potential of auscultation far beyond the capabilities of the conventional mechanical stethoscope. Though computer-based signal analysis of respiratory sounds has produced a significant body of research, there have not been any recent investigations into the computer-aided analysis of lung sounds associated with pulmonary Tuberculosis (TB), despite the severity of the disease in many countries. In this paper, respiratory sounds were recorded from 14 locations around the posterior and anterior chest walls of healthy volunteers and patients infected with pulmonary TB. The most significant signal features in both the time and frequency domains associated with the presence of TB, were identified by using the statistical overlap factor (SOF). These features were then employed to train a neural network to automatically classify the auscultation recordings into their respective healthy or TB-origin categories. The neural network yielded a diagnostic accuracy of 73%, but it is believed that automated filtering of the noise in the clinics, more training samples and perhaps other signal processing methods can improve the results of future studies. This work demonstrates the potential of computer-aided auscultation as an aid for the diagnosis and treatment of TB.
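The feature-ranking step above uses a "statistical overlap factor" (SOF) whose exact formula is not reproduced in this abstract. As an illustrative stand-in, the closely related Fisher criterion scores how little the healthy and TB feature distributions overlap (large score = well-separated feature, a good candidate for the classifier):

```python
def fisher_score(healthy, tb):
    """Per-feature class-separability score (Fisher criterion):
    squared distance between class means divided by the summed
    within-class variances. Used here as a stand-in for the
    paper's SOF ranking, not its actual formula."""
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return (mean(healthy) - mean(tb)) ** 2 / (var(healthy) + var(tb) + 1e-12)

# toy values of one acoustic feature for the two groups
healthy = [0.9, 1.0, 1.1, 1.0]
tb = [1.9, 2.1, 2.0, 2.0]
print(fisher_score(healthy, tb) > 10)  # well-separated feature
```

Features ranked highest by such a criterion would then be the ones fed to the neural network classifier described in the abstract.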

  9. Temporal integration of sequential auditory events: silent period in sound pattern activates human planum temporale.

    Science.gov (United States)

    Mustovic, Henrietta; Scheffler, Klaus; Di Salle, Francesco; Esposito, Fabrizio; Neuhoff, John G; Hennig, Jürgen; Seifritz, Erich

    2003-09-01

    Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and deviant interspersed stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared to increased loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.

  10. Design of virtual three-dimensional instruments for sound control

    Science.gov (United States)

    Mulder, Axel Gezienus Elith

    An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real-time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors, an SGI Onyx and by extending a real-time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co-articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel. More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object

  11. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  12. Convergent and invariant object representations for sight, sound, and touch.

    Science.gov (United States)

    Man, Kingson; Damasio, Antonio; Meyer, Kaspar; Kaplan, Jonas T

    2015-09-01

    We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities (sight, sound, and touch) and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts. © 2015 Wiley Periodicals, Inc.
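
    The crossmodal classification procedure (train on patterns from one modality, test on the same objects presented in another) can be sketched with a toy nearest-centroid classifier; the 3-element "voxel patterns" below are invented for illustration:

    ```python
    # Nearest-centroid sketch of crossmodal classification: fit centroids on
    # responses to *seen* objects, then classify the response to a *heard* one.
    def centroid(vectors):
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(len(vectors[0]))]

    def classify(pattern, centroids):
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda label: sq_dist(pattern, centroids[label]))

    # Training patterns: hypothetical 3-voxel responses to seen objects.
    visual = {"cup":  [[1.0, 0.1, 0.2], [0.9, 0.2, 0.1]],
              "bell": [[0.1, 1.0, 0.9], [0.2, 0.9, 1.0]]}
    centroids = {label: centroid(v) for label, v in visual.items()}

    # Test pattern: hypothetical response to the heard bell in the same region.
    auditory_bell = [0.15, 0.95, 0.9]
    print(classify(auditory_bell, centroids))  # "bell" -> crossmodal invariance
    ```

    A region where this transfer succeeds above chance is one whose object representation is invariant to the modality of presentation.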

  13. How Pleasant Sounds Promote and Annoying Sounds Impede Health: A Cognitive Approach

    Directory of Open Access Journals (Sweden)

    Tjeerd C. Andringa

    2013-04-01

    This theoretical paper addresses the cognitive functions via which quiet and generally pleasurable sounds promote, and annoying sounds impede, health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of the perceiver can be understood in terms of core affect and motivation. This conceptual basis allows the formulation of a detailed cognitive model describing how sonic content, related to indicators of safety and danger, either allows full freedom over mind-states or forces the activation of a vigilance function with associated arousal. The model leads to a number of detailed predictions that can be used to provide existing soundscape approaches with a solid cognitive science foundation, which may lead to novel approaches to soundscape design. These will take into account that louder sounds typically contribute to distal situational awareness, while subtle environmental sounds provide proximal situational awareness. The role of safety indicators, mediated by proximal situational awareness and subtle sounds, should become more important in future soundscape research.

  14. Computerised respiratory sounds can differentiate smokers and non-smokers.

    Science.gov (United States)

    Oliveira, Ana; Sen, Ipek; Kahya, Yasemin P; Afreixo, Vera; Marques, Alda

    2017-06-01

    Cigarette smoking is often associated with the development of several respiratory diseases; however, if diagnosed early, the changes in the lung tissue caused by smoking may be reversible. Computerised respiratory sounds have been shown to be sensitive to changes within the lung tissue before any other measure, but it is unknown whether they can detect changes in the lungs of healthy smokers. This study investigated the differences between the computerised respiratory sounds of healthy smokers and non-smokers. Healthy smokers and non-smokers were recruited from a university campus. Respiratory sounds were recorded simultaneously at 6 chest locations (right and left anterior, lateral and posterior) using air-coupled electret microphones. Airflow (1.0-1.5 l/s) was recorded with a pneumotachograph. Breathing phases were detected using airflow signals and respiratory sounds with validated algorithms. Forty-four participants were enrolled: 18 smokers (mean age 26.2, SD = 7 years; mean FEV1 % predicted 104.7, SD = 9) and 26 non-smokers (mean age 25.9, SD = 3.7 years; mean FEV1 % predicted 96.8, SD = 20.2). Smokers presented significantly higher frequency at maximum sound intensity during inspiration (M = 117, SD = 16.2 Hz vs. M = 106.4, SD = 21.6 Hz; t(43) = -2.62, p = 0.0081, d_z = 0.55), lower expiratory sound intensities (maximum intensity: M = 48.2, SD = 3.8 dB vs. M = 50.9, SD = 3.2 dB; t(43) = 2.68, p = 0.001, d_z = -0.78; mean intensity: M = 31.2, SD = 3.6 dB vs. M = 33.7, SD = 3 dB; t(43) = 2.42, p = 0.001, d_z = 0.75) and a higher number of inspiratory crackles (median [interquartile range] 2.2 [1.7-3.7] vs. 1.5 [1.2-2.2], p = 0.081, U = 110, r = -0.41) than non-smokers. Significant differences between the computerised respiratory sounds of smokers and non-smokers have been found. Changes in respiratory sounds are often the earliest sign of disease. Thus, computerised respiratory sounds

  15. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    Science.gov (United States)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.

  16. Late summer distribution and stoichiometry of dissolved N, Si and P in the Southern Ocean near Heard and McDonald Islands on the Kerguelen Plateau

    Science.gov (United States)

    Chase, Z.; Bowie, A. R.; Blain, S.; Holmes, T.; Rayner, M.; Sherrin, K.; Tonnard, M.; Trull, T. W.

    2016-12-01

    The Kerguelen plateau in the Southern Indian Ocean is a naturally iron-fertilised region surrounded by iron-limited, High Nutrient Low Chlorophyll (HNLC) waters. The Heard Earth Ocean Biosphere Interaction (HEOBI) project sampled waters south of the Polar Front in the vicinity of Heard and McDonald Islands (HIMI) in January and February 2016. Fe-fertilised waters over the plateau generally exhibited high phytoplankton biomass and photosynthetic competency (as in previous studies and satellite observations), but interestingly, phytoplankton biomass was low near HIMI, though photosynthetic competency was high. In plateau waters away from HIMI, silicic acid (Si) concentrations were strongly depleted in surface waters, averaging 3 μM, while nitrate concentrations were close to 25 μM. Relative to the remnant winter water, this represents an average seasonal drawdown of 32 μM Si and only 8 μM nitrate. Though absolute drawdown was lower at an HNLC reference site south of Heard Island, the drawdown ratio was similarly high (ΔSi:ΔN ≈ 4-5). The average N:P drawdown ratio was 12, typical for a diatom-dominated system (Weber and Deutsch 2010). N:P drawdown was positively correlated with Si drawdown, perhaps indicative of an impact of Fe on both seasonal Si drawdown and diatom N:P uptake (Price 2005). In the well-mixed, shallow waters near the McDonald Islands, remineralization signatures were evident despite the apparent lack of nutrient drawdown or biomass accumulation. Mixed layers deeper than the euphotic zone are one mechanism that retains these remineralization signatures, and near the islands tidal mixing also contributes.
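
    The seasonal drawdown figures follow from a simple difference between remnant winter-water and summer surface concentrations. A sketch, with winter-water values back-calculated as assumptions from the drawdowns stated in the abstract:

    ```python
    # Seasonal drawdown = winter-water concentration - summer surface concentration.
    # Surface values (3 uM Si, 25 uM nitrate) are from the abstract; the winter
    # values are assumptions chosen so the stated drawdowns (32 and 8 uM) result.
    winter  = {"Si": 35.0, "N": 33.0}  # uM, assumed remnant winter water
    surface = {"Si": 3.0,  "N": 25.0}  # uM, observed summer surface

    drawdown = {k: winter[k] - surface[k] for k in winter}
    ratio = drawdown["Si"] / drawdown["N"]
    print(drawdown, ratio)  # {'Si': 32.0, 'N': 8.0} 4.0
    ```

    The resulting ΔSi:ΔN of 4 sits at the low end of the 4-5 range reported for the plateau, well above the ~1 expected for Fe-replete diatoms.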

  17. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.

  18. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is experimentally studied by using the time difference method. It is found that the sound velocities increase with increasing bubble diameter, and asymptotically approach the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to scattering by the bubble walls is described equivalently as the effect of an additional path length. This simplification reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that increasing the frequency markedly reduces the sound velocity, whereas the latter does not display a strong dependence on the solution concentration.
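
    The time difference method reduces to dividing a known propagation path length by the measured transit time. A minimal sketch with illustrative (not measured) numbers:

    ```python
    # Time difference method: sound speed from a known path length and the
    # measured transit time between two transducers. Values are illustrative.
    def sound_velocity(path_length_m, transit_time_s):
        return path_length_m / transit_time_s

    # A pulse crossing 0.10 m of foam in 2.0 ms travels at 50 m/s,
    # far below the ~343 m/s of air, as expected for a wet foam.
    v = sound_velocity(0.10, 2.0e-3)
    print(v)  # 50.0
    ```

    As the bubble diameter grows toward the reported 12.5 mm, the measured velocity would climb toward the free-air value.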

  19. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    Science.gov (United States)

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

    In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., a dog's BARKING) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., a drum's DRUMMING). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, both effects were additive; that is, the semantic interference effects were of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.

  20. Sensory suppression of brain responses to self-generated sounds is observed with and without the perception of agency.

    Science.gov (United States)

    Timm, Jana; Schönwiesner, Marc; Schröger, Erich; SanMiguel, Iria

    2016-07-01

    Stimuli caused by our own movements are given special treatment in the brain. Self-generated sounds evoke a smaller brain response than externally generated ones. This attenuated response may reflect a predictive mechanism that differentiates the sensory consequences of one's own actions from other sensory input. It may also relate to the feeling of being the agent of the movement and its effects, but little is known about how sensory suppression of brain responses to self-generated sounds is related to judgments of agency. To address this question, we recorded event-related potentials in response to sounds initiated by button presses. In one condition, participants perceived agency over the production of the sounds, whereas in another condition, participants experienced an illusory lack of agency caused by changes in the delay between actions and effects. We compared trials in which the timing of button press and sound was physically identical, but participants' agency judgment differed. Results show reduced amplitudes of the auditory N1 component in response to self-generated sounds irrespective of agency experience, whilst P2 effects correlate with the perception of agency. Our findings suggest that suppression of the auditory N1 component to self-generated sounds does not depend on adaptation to specific action-effect time delays and does not determine agency judgments; however, the suppression of the P2 component might relate more directly to the experience of agency. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. OMNIDIRECTIONAL SOUND SOURCE

    DEFF Research Database (Denmark)

    1996-01-01

    A sound source comprising a loudspeaker (6) and a hollow coupler (4) with an open inlet which communicates with and is closed by the loudspeaker (6) and an open outlet, said coupler (4) comprising rigid walls which cannot respond to the sound pressures produced by the loudspeaker (6). According...

  2. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews work carried out on the velocity of sound in liquid alkali metals. The experimental methods used to measure the sound velocity are described. Tables of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium are presented. A formula is given for alkali metals in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  3. Nonlinear generation of non-acoustic modes by low-frequency sound in a vibrationally relaxing gas

    International Nuclear Information System (INIS)

    Perelomova, A.

    2010-01-01

    Two dynamic equations referring to a weakly nonlinear and weakly dispersive flow of a gas in which molecular vibrational relaxation takes place, are derived. The first one governs an excess temperature associated with the thermal mode, and the second one describes variations in vibrational energy. Both quantities refer to non-wave types of gas motion. These variations are caused by the nonlinear transfer of acoustic energy into thermal mode and internal vibrational degrees of freedom of a relaxing gas. The final dynamic equations are instantaneous; they include a quadratic nonlinear acoustic source, reflecting the nonlinear character of interaction of low-frequency acoustic and non-acoustic motions of the fluid. All types of sound, periodic or aperiodic, may serve as an acoustic source of both phenomena. The low-frequency sound is considered in this study. Some conclusions about temporal behavior of non-acoustic modes caused by periodic and aperiodic sound are made. Under certain conditions, acoustic cooling takes place instead of heating. (author)

  4. Prejunctional inhibition of norepinephrine release caused by acetylcholine in the human saphenous vein

    International Nuclear Information System (INIS)

    Rorie, D.K.; Rusch, N.J.; Shepherd, J.T.; Vanhoutte, P.M.; Tyce, G.M.

    1981-01-01

    We performed experiments to determine whether or not acetylcholine exerts a prejunctional inhibitory effect on adrenergic neurotransmission in the human blood vessel wall. Rings of human greater saphenous veins were prepared 2 to 15 hours after death and mounted for isometric tension recording in organ chambers filled with Krebs-Ringer solution. Acetylcholine depressed contractile responses to electric activation of the sympathetic nerve endings significantly more than those to exogenous norepinephrine; the relaxations caused by the cholinergic transmitter were antagonized by atropine. Helical strips were incubated with [³H]norepinephrine and mounted for superfusion. Electric stimulation augmented the fractional release of labeled norepinephrine. Acetylcholine caused a depression of the evoked ³H release which was antagonized by atropine but not by hexamethonium. These experiments demonstrate that, as in animal cutaneous veins, there are prejunctional inhibitory muscarinic receptors on the adrenergic nerve endings in the human saphenous vein. By contrast, the human vein also contains postjunctional inhibitory muscarinic receptors.

  5. Product sounds : Fundamentals and application

    NARCIS (Netherlands)

    Ozcan-Vieira, E.

    2008-01-01

    Products are ubiquitous, so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about

  6. Can joint sound assess soft and hard endpoints of the Lachman test?: A preliminary study.

    Science.gov (United States)

    Hattori, Koji; Ogawa, Munehiro; Tanaka, Kazunori; Matsuya, Ayako; Uematsu, Kota; Tanaka, Yasuhito

    2016-05-12

    The Lachman test is considered to be a reliable physical examination for anterior cruciate ligament (ACL) injury. Patients with a damaged ACL demonstrate a soft endpoint feeling. However, examiners judge the soft and hard endpoints subjectively. The purpose of our study was to make the endpoint assessment of the Lachman test objective using joint auscultation. Human and porcine knee joints were examined. The knee joint sound produced during the Lachman test (Lachman sound) was analyzed by fast Fourier transformation. As quantitative indices of the Lachman sound, the peak sound, defined as the maximum relative amplitude (acoustic pressure), and its frequency were used. The mean Lachman peak sound for healthy volunteer knees was 86.9 ± 12.9 Hz in frequency and -40 ± 2.5 dB in acoustic pressure. The mean Lachman peak sound for intact porcine knees was 84.1 ± 9.4 Hz and -40.5 ± 1.7 dB. Porcine knees with ACL deficiency had a soft endpoint feeling during the Lachman test. The Lachman peak sounds of porcine knees with ACL deficiency were dispersed into four distinct groups, with center frequencies of around 40, 160, 450, and 1600 Hz. The Lachman peak sound was thus capable of objectively assessing the soft and hard endpoints of the Lachman test.
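
    The peak-sound indices (frequency and relative amplitude of the strongest spectral component) can be sketched with an FFT, here applied to a synthetic tone matching the reported healthy-knee means; the windowing and normalization choices are assumptions, not the authors' exact pipeline:

    ```python
    import numpy as np

    def peak_sound(signal, fs):
        """Return (frequency in Hz, level in dB re full scale) of the strongest
        spectral component, mirroring the 'Lachman peak sound' indices.
        Hann windowing and coherent-gain correction are assumed choices."""
        w = np.hanning(len(signal))
        spectrum = np.abs(np.fft.rfft(signal * w))
        k = int(np.argmax(spectrum))
        amp = 2 * spectrum[k] / w.sum()  # undo the window's coherent gain
        freq = np.fft.rfftfreq(len(signal), d=1.0 / fs)[k]
        return freq, 20 * np.log10(amp)

    # Synthetic "Lachman sound": an 87 Hz tone at amplitude 0.01 full scale,
    # mimicking the reported healthy-knee means (~87 Hz, ~-40 dB).
    fs = 1000
    t = np.arange(fs) / fs
    f_peak, db_peak = peak_sound(0.01 * np.sin(2 * np.pi * 87 * t), fs)
    print(f_peak, db_peak)  # ~87.0 Hz, ~-40 dB
    ```

    On a real recording, the four ACL-deficient clusters would appear as peak frequencies near 40, 160, 450, and 1600 Hz.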

  7. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    Science.gov (United States)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated by an infinite vibrating plate attenuates exponentially with increasing distance from the surface of the plate. In practice, however, a discontinuity of the sound field exists at the edge of a finite vibrating plate, which broadens the wavenumber spectrum. A sound wave then radiates beyond the evanescent sound field because of this broadening. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on window functions, which are utilized in the field of signal analysis to reduce broadening of the frequency spectrum. The optimization calculation is necessary to design a window function that suppresses sound radiation while securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. Therefore, we investigated a suitable method for calculating the sound pressure level in the far field to confirm how the distribution of the sound pressure level varies with window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinite far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by calculating the distribution of the sound pressure level at an infinite far field from the wavenumber spectrum on the vibrating plate. According to the result of comparing the distributions of the sound pressure level with and without the window function, it was confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was
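
    The effect the abstract describes (a window function reducing the broadening of the wavenumber spectrum caused by the plate edge) can be illustrated in one dimension by comparing a rectangular aperture with a Hann-shaped one; the aperture length, padding factor, and bin cutoff below are arbitrary illustrative choices:

    ```python
    import numpy as np

    # An abruptly truncated (rectangular) velocity profile has a broad
    # wavenumber spectrum; shaping the profile with a window narrows it,
    # keeping more energy in the low (evanescent) wavenumbers.
    def skirt_fraction(aperture, keep=8):
        """Fraction of spectral energy outside the lowest `keep` bins of the
        zero-padded wavenumber spectrum of a 1-D aperture."""
        s = np.abs(np.fft.rfft(aperture, 4 * len(aperture))) ** 2
        return float(s[keep:].sum() / s.sum())

    n = 256
    rect_frac = skirt_fraction(np.ones(n))      # sharp-edged plate
    hann_frac = skirt_fraction(np.hanning(n))   # window-shaped velocity profile
    print(rect_frac, hann_frac)  # the Hann aperture leaks far less energy
    ```

    The same trade-off the paper optimizes is visible here: the window suppresses high-wavenumber leakage (radiating components) at the cost of shrinking the strongly driven central region available for communication.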

  8. On sound and silence : Neurophysiological and behavioral consequences of acoustic trauma

    NARCIS (Netherlands)

    Heeringa, Amarins

    2015-01-01

    Next to elevated hearing thresholds, noise exposure can also cause ringing in the ears (tinnitus) and hyperacusis, a condition in which normal sounds are being perceived as too loud. At present there are no treatments available that consistently cure tinnitus and hyperacusis, partly because the

  9. 33 CFR 334.410 - Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Albemarle Sound, Pamlico Sound... AND RESTRICTED AREA REGULATIONS § 334.410 Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations. (a) Target areas—(1) North Landing River (Currituck Sound...

  10. Humans mimicking animals: A cortical hierarchy for human vocal communication sounds

    Science.gov (United States)

    Talkington, William J.; Rapuano, Kristina M.; Hitt, Laura; Frum, Chris A.; Lewis, James W.

    2012-01-01

    Numerous species possess cortical regions that are most sensitive to vocalizations produced by their own kind (conspecifics). In humans, the superior temporal sulci (STS) putatively represent homologous voice-sensitive areas of cortex. However, STS regions have recently been reported to represent auditory experience or “expertise” in general rather than showing exclusive sensitivity to human vocalizations per se. Using functional magnetic resonance imaging and a unique non-stereotypical category of complex human non-verbal vocalizations – human-mimicked versions of animal vocalizations – we found a cortical hierarchy in humans optimized for processing meaningful conspecific utterances. This left-lateralized hierarchy originated near primary auditory cortices and progressed into traditional speech-sensitive areas. These results suggest that the cortical regions supporting vocalization perception are initially organized by sensitivity to the human vocal tract in stages prior to the STS. Additionally, these findings have implications for the developmental time course of conspecific vocalization processing in humans as well as its evolutionary origins. PMID:22674283

  11. Real-Time Detection of Important Sounds with a Wearable Vibration Based Device for Hearing-Impaired People

    Directory of Open Access Journals (Sweden)

    Mete Yağanoğlu

    2018-04-01

    Hearing-impaired people do not hear indoor and outdoor environment sounds, which are important for them both at home and outside. By means of a wearable device that we have developed, a hearing-impaired person is informed of important sounds through vibrations, thereby understanding what kind of sound it is. Our system, which operates in real time, achieves a success rate of 98% when identifying a doorbell ringing, 99% for an alarm, 99% for a phone ringing, 91% for honking, 93% for brake sounds, 96% for dog sounds, 97% for human voices, and 96% for other sounds using the audio fingerprint method. An audio fingerprint is a brief summary of an audio file that perceptually summarizes a piece of audio content. In this study, our wearable device was tested 100 times a day for 100 days on five deaf persons and 50 persons with normal hearing whose ears were covered by earphones playing wind sounds. This study aims to improve the quality of life of deaf persons and offer them a more prosperous life. In the questionnaire, deaf participants rated the clarity of the system at 90%, its usefulness at 97%, and the likelihood of using the device again at 100%.
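
    An audio fingerprint in this spirit can be sketched as a compact, noise-tolerant summary matched by distance; the sign-of-band-energy-differences scheme and the band values below are illustrative assumptions, not the authors' method:

    ```python
    # Minimal audio-fingerprint sketch: summarize a sound as the sign pattern of
    # energy differences between neighbouring frequency bands, then match an
    # unknown sound to the stored print with the smallest Hamming distance.
    def fingerprint(band_energies):
        return tuple(1 if b > a else 0 for a, b in zip(band_energies, band_energies[1:]))

    # Hypothetical 5-band energy profiles for two known sound classes.
    known = {
        "doorbell": fingerprint([0.1, 0.8, 0.6, 0.9, 0.2]),
        "alarm":    fingerprint([0.9, 0.2, 0.8, 0.1, 0.7]),
    }

    def identify(band_energies):
        probe = fingerprint(band_energies)
        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))
        return min(known, key=lambda name: hamming(known[name], probe))

    # A slightly noisy doorbell-like spectrum still matches "doorbell",
    # since the sign pattern survives small energy perturbations.
    print(identify([0.15, 0.7, 0.65, 0.85, 0.25]))
    ```

    The matched class would then drive a distinct vibration pattern on the wearable.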

  12. Neural Correlates of Indicators of Sound Change in Cantonese: Evidence from Cortical and Subcortical Processes

    OpenAIRE

    Maggu, Akshay R.; Liu, Fang; Antoniou, Mark; Wong, Patrick C. M.

    2016-01-01

    Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language undergo changes over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, ...

  13. Online listening tests on sound insulation of walls

    DEFF Research Database (Denmark)

    Pedersen, Torben Holm; Antunes, Sonia; Rasmussen, Birgit

    2012-01-01

    As part of the COST Action TU0901 WG 2 activities a listening test was made on the annoyance potential of airborne noise from neighbours heard through walls. 22 assessors from 11 countries rated six simulated walls with four types of neighbour noise online at the assessor’s premises using the ISO...

  14. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used to simulate sound propagation. In this article, application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than the real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM to assess the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than its effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
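
    The excess sound level is a simple difference of levels at the same distance, which is why the LBM's artificial dissipation largely cancels out of it. A sketch with illustrative pressure amplitudes:

    ```python
    import math

    # Sound pressure level relative to the standard 20 micropascal reference.
    def level_db(p_pa, p_ref=2e-5):
        return 20 * math.log10(p_pa / p_ref)

    # Excess level = level at the receiver minus the free-field level at the
    # same distance. Amplitudes below are illustrative, e.g. behind a barrier.
    p_receiver, p_free = 0.01, 0.02
    excess = level_db(p_receiver) - level_db(p_free)
    print(round(excess, 2))  # -6.02, i.e. the barrier attenuates by ~6 dB
    ```

    Because both terms are attenuated by (approximately) the same numerical dissipation, their difference approximates the excess level of a non-dissipative atmosphere.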

  15. EPA Townetting CTD casts - Evaluating the ecological health of Puget Sound's pelagic foodweb

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — To evaluate effects of human influence on the health of Puget Sound's pelagic ecosystems, we propose a sampling program across multiple oceanographic basins...

  16. Sounding out the logo shot

    OpenAIRE

    Nicolai Jørgensgaard Graakjær

    2013-01-01

    This article focuses on how sound in combination with visuals (i.e. ‘branding by’) may possibly affect the signifying potentials (i.e. ‘branding effect’) of products and corporate brands (i.e. ‘branding of’) during logo shots in television commercials (i.e. ‘branding through’). This particular focus adds both to the understanding of sound in television commercials and to the understanding of sound brands. The article firstly presents a typology of sounds. Secondly, this typology is applied...

  17. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  18. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  19. SoleSound

    DEFF Research Database (Denmark)

    Zanotto, Damiano; Turchet, Luca; Boggs, Emily Marie

    2014-01-01

    This paper introduces the design of SoleSound, a wearable system designed to deliver ecological, audio-tactile, underfoot feedback. The device, which primarily targets clinical applications, uses an audio-tactile footstep synthesis engine informed by the readings of pressure and inertial sensors embedded in the footwear to integrate enhanced feedback modalities into the authors' previously developed instrumented footwear. The synthesis models currently implemented in the SoleSound simulate different ground surface interactions. Unlike similar devices, the system presented here is fully portable...

  20. Sound engineering for diesel engines; Sound Engineering an Dieselmotoren

    Energy Technology Data Exchange (ETDEWEB)

    Enderich, A.; Fischer, R. [MAHLE Filtersysteme GmbH, Stuttgart (Germany)

    2006-07-01

    The strong acceptance of vehicles powered by turbocharged diesel engines encourages several manufacturers to consider sporty diesel concepts. Suppressing unpleasant noise through extensive insulation measures is not adequate to satisfy sporty expectations: the acoustics cannot follow the engine's performance. This report documents that it is possible to give diesel-powered vehicles a sporty sound characteristic by using an advanced MAHLE motor-sound system with a pressure-resistant membrane and an integrated load-controlled flap. In this way, the specific acoustic disadvantages of the diesel engine, such as ''diesel knock'' or rough engine running, can be masked. However, a motor-sound system must not negate the original character of the diesel engine concept, but should accentuate its strong torque characteristic in the middle engine speed range. (orig.)

  1. Sonic mediations: body, sound, technology

    NARCIS (Netherlands)

    Birdsall, C.; Enns, A.

    2008-01-01

    Sonic Mediations: Body, Sound, Technology is a collection of original essays that represents an invaluable contribution to the burgeoning field of sound studies. While sound is often posited as having a bridging function, as a passive in-between, this volume invites readers to rethink the concept of

  2. System for actively reducing sound

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2005-01-01

    A system for actively reducing sound from a primary noise source, such as traffic noise, comprising: a loudspeaker connector for connecting to at least one loudspeaker for generating anti-sound for reducing said noisy sound; a microphone connector for connecting to at least a first microphone placed

  3. Artifact rejection of distortion product otoacoustic emissions measured after sound exposure

    DEFF Research Database (Denmark)

    Reuter, Karen; Ordoñez, Rodrigo Pizarro; de Toro, Miguel Angel Aranda

    2007-01-01

    In a previous study [3] distortion product otoacoustic emissions (DPOAEs) were measured both before and after a moderate sound exposure, which caused a reduction of DPOAE levels. After the exposure DPOAEs had often levels below the noise floor. In the present paper it is discussed, whether...

  4. Measuring the 'complexity'of sound

    Indian Academy of Sciences (India)

    Sounds in the natural environment form an important class of biologically relevant nonstationary signals. We propose a dynamic spectral measure to characterize the spectral dynamics of such non-stationary sound signals and classify them based on rate of change of spectral dynamics. We categorize sounds with slowly ...

  5. A Measure Based on Beamforming Power for Evaluation of Sound Field Reproduction Performance

    Directory of Open Access Journals (Sweden)

    Ji-Ho Chang

    2017-03-01

    Full Text Available This paper proposes a measure to evaluate sound field reproduction systems with an array of loudspeakers. The spatially-averaged squared error of the sound pressure between the desired and the reproduced field, namely the spatial error, has been widely used, which has considerable problems in two conditions. First, in non-anechoic conditions, room reflections substantially deteriorate the spatial error, although these room reflections affect human localization to a lesser degree. Second, for 2.5-dimensional reproduction of spherical waves, the spatial error increases consistently due to the difference in the amplitude decay rate, whereas the degradation of human localization performance is limited. The measure proposed in this study is based on the beamforming powers of the desired and the reproduced fields. Simulation and experimental results show that the proposed measure is less sensitive to room reflections and the amplitude decay than the spatial error, which is likely to agree better with the human perception of source localization.
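
    The two quantities contrasted in the abstract can be sketched numerically for a simple line array; the array geometry, frequency, and plain delay-and-sum beamformer below are illustrative assumptions, not the paper's actual setup:

    ```python
    import numpy as np

    def spatial_error(p_des, p_rep):
        """Spatially-averaged squared pressure error, normalized by the
        desired field energy (the conventional reproduction-error measure)."""
        return np.mean(np.abs(p_rep - p_des)**2) / np.mean(np.abs(p_des)**2)

    def beamforming_power(p, mic_x, k, angles):
        """Delay-and-sum power of field samples p at positions mic_x,
        steered over candidate plane-wave directions (radians)."""
        steer = np.exp(-1j * k * np.outer(np.sin(angles), mic_x))
        return np.abs(steer @ p)**2 / len(mic_x)**2

    # Hypothetical 8-mic line array, 0.1 m spacing, plane wave from 20 deg at 500 Hz
    c, f = 343.0, 500.0
    k = 2 * np.pi * f / c
    mic_x = np.arange(8) * 0.1
    angles = np.radians(np.linspace(-90, 90, 181))
    p_des = np.exp(1j * k * np.sin(np.radians(20.0)) * mic_x)

    err = spatial_error(p_des, 0.9 * p_des)  # 10% amplitude mismatch -> 1% energy error
    peak = angles[np.argmax(beamforming_power(p_des, mic_x, k, angles))]
    print(round(float(np.degrees(peak)), 1))  # near 20 for this matched case
    ```

    The sketch illustrates the paper's point: a uniform amplitude mismatch inflates the spatial error, while the beamforming power still peaks at the correct direction, which is what matters for localization.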

  6. Second sound in a two-dimensional Bose gas: From the weakly to the strongly interacting regime

    Science.gov (United States)

    Ota, Miki; Stringari, Sandro

    2018-03-01

    Using Landau's theory of two-fluid hydrodynamics, we investigate first and second sounds propagating in a two-dimensional (2D) Bose gas. We study the temperature and interaction dependence of both sound modes and show that their behavior exhibits a deep qualitative change as the gas evolves from the weakly interacting to the strongly interacting regime. Special emphasis is placed on the jump of both sounds at the Berezinskii-Kosterlitz-Thouless transition, caused by the discontinuity of the superfluid density. We find that the excitation of second sound through a density perturbation becomes weaker and weaker as the interaction strength increases as a consequence of the decrease in the thermal expansion coefficient. Our results could be relevant for future experiments on the propagation of sound on the Bose-Einstein condensate (BEC) side of the BCS-BEC crossover of a 2D superfluid Fermi gas.

  7. Controlling sound with acoustic metamaterials

    DEFF Research Database (Denmark)

    Cummer, Steven A.; Christensen, Johan; Alù, Andrea

    2016-01-01

    Acoustic metamaterials can manipulate and control sound waves in ways that are not possible in conventional materials. Metamaterials with zero, or even negative, refractive index for sound offer new possibilities for acoustic imaging and for the control of sound at subwavelength scales. The combination of transformation acoustics theory and highly anisotropic acoustic metamaterials enables precise control over the deformation of sound fields, which can be used, for example, to hide or cloak objects from incident acoustic energy. Active acoustic metamaterials use external control to create... -scale metamaterial structures and converting laboratory experiments into useful devices. In this Review, we outline the designs and properties of materials with unusual acoustic parameters (for example, negative refractive index), discuss examples of extreme manipulation of sound and, finally, provide an overview...

  8. Causes and Consequences of Sensory Hair Cell Damage and Recovery in Fishes.

    Science.gov (United States)

    Smith, Michael E; Monroe, J David

    2016-01-01

    Sensory hair cells are the mechanotransductive receptors that detect gravity, sound, and vibration in all vertebrates. Damage to these sensitive receptors often results in deficits in vestibular function and hearing. There are currently two main reasons for studying the process of hair cell loss in fishes. First, fishes, like other non-mammalian vertebrates, have the ability to regenerate hair cells that have been damaged or lost via exposure to ototoxic chemicals or acoustic overstimulation. Thus, they are used as a biomedical model to understand the process of hair cell death and regeneration and find therapeutics that treat or prevent human hearing loss. Secondly, scientists and governmental natural resource managers are concerned about the potential effects of intense anthropogenic sounds on aquatic organisms, including fishes. Dr. Arthur N. Popper and his students, postdocs and research associates have performed pioneering experiments in both of these lines of fish hearing research. This review will discuss the current knowledge regarding the causes and consequences of both lateral line and inner ear hair cell damage in teleost fishes.

  9. Sound intensity as a function of sound insulation partition

    OpenAIRE

    Cvetkovic , S.; Prascevic , R.

    1994-01-01

    In the modern engineering practice, the sound insulation of the partitions is the synthesis of the theory and of the experience acquired in the procedure of the field and of the laboratory measurement. The science and research public treat the sound insulation in the context of the emission and propagation of the acoustic energy in the media with the different acoustics impedance. In this paper, starting from the essence of physical concept of the intensity as the energy vector, the authors g...

  10. Aerodynamic sound of flow past an airfoil

    Science.gov (United States)

    Wang, Meng

    1995-01-01

    The long term objective of this project is to develop a computational method for predicting the noise of turbulence-airfoil interactions, particularly at the trailing edge. We seek to obtain the energy-containing features of the turbulent boundary layers and the near-wake using Navier-Stokes Simulation (LES or DNS), and then to calculate the far-field acoustic characteristics by means of acoustic analogy theories, using the simulation data as acoustic source functions. Two distinct types of noise can be emitted from airfoil trailing edges. The first, a tonal or narrowband sound caused by vortex shedding, is normally associated with blunt trailing edges, high angles of attack, or laminar flow airfoils. The second source is of broadband nature arising from the aeroacoustic scattering of turbulent eddies by the trailing edge. Due to its importance to airframe noise, rotor and propeller noise, etc., trailing edge noise has been the subject of extensive theoretical (e.g. Crighton & Leppington 1971; Howe 1978) as well as experimental investigations (e.g. Brooks & Hodgson 1981; Blake & Gershfeld 1988). A number of challenges exist concerning acoustic analogy based noise computations. These include the elimination of spurious sound caused by vortices crossing permeable computational boundaries in the wake, the treatment of noncompact source regions, and the accurate description of wave reflection by the solid surface and scattering near the edge. In addition, accurate turbulence statistics in the flow field are required for the evaluation of acoustic source functions. Major efforts to date have been focused on the first two challenges. To this end, a paradigm problem of laminar vortex shedding, generated by a two dimensional, uniform stream past a NACA0012 airfoil, is used to address the relevant numerical issues. Under the low Mach number approximation, the near-field flow quantities are obtained by solving the incompressible Navier-Stokes equations numerically at chord...

  11. Heart Sound Localization and Reduction in Tracheal Sounds by Gabor Time-Frequency Masking

    OpenAIRE

    SAATCI, Esra; Akan, Aydın

    2018-01-01

    Background and aim: Respiratory sounds, i.e. tracheal and lung sounds, have been of great interest due to their diagnostic values as well as the potential of their use in the estimation of the respiratory dynamics (mainly airflow). Thus the aim of the study is to present a new method to filter the heart sound interference from the tracheal sounds. Materials and methods: Tracheal sounds and airflow signals were collected by using an accelerometer from 10 healthy subjects. Tracheal sounds were then pr...
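
    Time-frequency masking of the kind named in the title can be approximated with a short-time Fourier transform and a simple mask; a minimal sketch (the 150 Hz cutoff, synthetic signals, and hard low-frequency mask are illustrative assumptions, not the authors' exact Gabor-based method):

    ```python
    import numpy as np
    from scipy.signal import stft, istft

    def mask_low_freq(x, fs, cutoff_hz=150.0, nperseg=256):
        """Crude time-frequency mask: zero all STFT bins below cutoff_hz,
        standing in for heart-sound suppression (heart sounds concentrate
        at low frequencies; tracheal sounds extend higher)."""
        f, t, Z = stft(x, fs=fs, nperseg=nperseg)
        Z[f < cutoff_hz, :] = 0.0            # suppress the heart-sound band
        _, x_filt = istft(Z, fs=fs, nperseg=nperseg)
        return x_filt[:len(x)]

    # Synthetic mixture: 50 Hz "heart" component + 400 Hz "tracheal" component
    fs = 4000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
    y = mask_low_freq(x, fs)
    # the 50 Hz component is strongly attenuated; the 400 Hz component remains
    ```

    A real implementation would gate the mask in time around detected heartbeats rather than removing the band everywhere, since tracheal sounds also carry some low-frequency energy.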

  12. 27 CFR 9.151 - Puget Sound.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Puget Sound. 9.151 Section... Sound. (a) Name. The name of the viticultural area described in this section is “Puget Sound.” (b) Approved maps. The appropriate maps for determining the boundary of the Puget Sound viticultural area are...

  13. How Pleasant Sounds Promote and Annoying Sounds Impede Health : A Cognitive Approach

    NARCIS (Netherlands)

    Andringa, Tjeerd C.; Lanser, J. Jolie L.

    2013-01-01

    This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of

  14. Of Sound Mind: Mental Distress and Sound in Twentieth-Century Media Culture

    NARCIS (Netherlands)

    Birdsall, C.; Siewert, S.

    2013-01-01

    This article seeks to specify the representation of mental disturbance in sound media during the twentieth century. It engages perspectives on societal and technological change across the twentieth century as crucial for aesthetic strategies developed in radio and sound film production. The analysis

  15. A Measure Based on Beamforming Power for Evaluation of Sound Field Reproduction Performance

    DEFF Research Database (Denmark)

    Chang, Ji-ho; Jeong, Cheol-Ho

    2017-01-01

    This paper proposes a measure to evaluate sound field reproduction systems with an array of loudspeakers. The spatially-averaged squared error of the sound pressure between the desired and the reproduced field, namely the spatial error, has been widely used, which has considerable problems in two conditions. First, in non-anechoic conditions, room reflections substantially deteriorate the spatial error, although these room reflections affect human localization to a lesser degree. Second, for 2.5-dimensional reproduction of spherical waves, the spatial error increases consistently due to the difference in the amplitude decay rate, whereas the degradation of human localization performance is limited. The measure proposed in this study is based on the beamforming powers of the desired and the reproduced fields. Simulation and experimental results show that the proposed measure is less sensitive...

  16. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

    Full Text Available OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and frequencies among normal hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between adjacent speakers in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p<0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  17. The effect of sound speed profile on shallow water shipping sound maps

    NARCIS (Netherlands)

    Sertlek, H.Ö.; Binnerts, B.; Ainslie, M.A.

    2016-01-01

    Sound mapping over large areas can be computationally expensive because of the large number of sources and large source-receiver separations involved. In order to facilitate computation, a simplifying assumption sometimes made is to neglect the sound speed gradient in shallow water. The accuracy of

  18. Sound wave transmission (image)

    Science.gov (United States)

    When sound waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain, where they are interpreted by the brain as sound. The hearing mechanisms within the inner ear can ...

  19. Does it matter if people think climate change is human caused?

    Directory of Open Access Journals (Sweden)

    Joel Hartter

    2018-04-01

    Full Text Available There is a growing consensus that climate is changing, but beliefs about the causal factors vary widely among the general public. Current research shows that such causal beliefs are strongly influenced by cultural, political, and identity-driven views. We examined the influence that local perceptions have on the acceptance of basic facts about climate change. We also examined the connection to wildfire by local people. Two recent telephone surveys found that 37% (in 2011) and 46% (in 2014) of eastern Oregon (USA) respondents accept the scientific consensus that human activities are now changing the climate. Although most do not agree with that consensus, large majorities (85–86%) do agree that climate is changing, whether by natural or human causes. Acceptance of anthropogenic climate change generally divides along political party lines, but acceptance of climate change more generally, and concerns about wildfire, transcend political divisions. Support for active forest management to reduce wildfire risks is strong in this region, and restoration treatments could be critical to the resilience of both communities and ecosystems. Although these immediate steps involve adaptations to a changing climate, they can be motivated without necessarily invoking human-caused climate change, a divisive concept among local landowners. Keywords: Climate change, Inland West, Public perception, Telephone survey, Wildfire, Working landscapes

  20. Bionic Modeling of Knowledge-Based Guidance in Automated Underwater Vehicles.

    Science.gov (United States)

    1987-06-24

    ...bugs and their foraging movements are heard by the sound of rustling leaves or rhythmic wing beats. ASYMMETRY OF EARS: The faces of owls have captured... sound source without moving. The barn owl has binaural and monaural cues as well as cues that operate in relative motion when either the target or the... owl moves. Table 1 lists the cues. Table 1. Sound Localization Parameters Used by the Barn Owl. BINAURAL PARAMETERS: 1. the...

  1. Sound & The Society

    DEFF Research Database (Denmark)

    Schulze, Holger

    2014-01-01

    How are those sounds you hear right now socially constructed and evaluated, how are they architecturally conceptualized, and how dependent on urban planning, industrial developments and political decisions are they really? How is your ability to hear intertwined with social interactions and their professional design? And how is listening and sounding a deeply social activity, constructing our way of living together in cities as well as in apartment houses? A radio feature with Nina Backmann, Jochen Bonz, Stefan Krebs, Esther Schelander & Holger Schulze.

  2. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    1. Introduction  2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium  3. Predicting the Acoustical Properties of Outdoor Ground Surfaces  4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models  5. Predicting Effects of Source Characteristics on Outdoor Sound  6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects  7. Influence of Source Motion on Ground Effect and Diffraction  8. Predicting Effects of Mixed Impedance Ground  9. Predicting the Performance of Outdoor Noise Barriers  10. Predicting Effects of Vegetation, Trees and Turbulence  11. Analytical Approximations including Ground Effect, Refraction and Turbulence  12. Prediction Schemes  13. Predicting Sound in an Urban Environment.

  3. DESIGN AND APPLICATION OF SENSOR FOR RECORDING SOUNDS OVER HUMAN EYE AND NOSE

    NARCIS (Netherlands)

    JOURNEE, HL; VANBRUGGEN, AC; VANDERMEER, JJ; DEJONGE, AB; MOOIJ, JJA

    The recording of sounds over the orbit of the eye has been found to be useful in the detection of intracranial aneurysms. A hydrophone for auscultation over the eye has been developed and is tested under controlled conditions. The tests consist of measurement over the eyes in three healthy...

  4. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as visual phenomena–commercial messages, as for instance banner ads that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content.

  5. The Aesthetic Experience of Sound

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2005-01-01

    The use of sound in (3D) computer games basically falls in two. Sound is used as an element in the design of the set and as a narrative. As set design sound stages the nature of the environment, it brings it to life. As a narrative it brings us information that we can choose to or perhaps need to react on. In an ecological understanding of hearing our detection of audible information affords us ways of responding to our environment. In my paper I will address both these ways of using sound in relation to computer games. Since a game player is responsible for the unfolding of the game, his exploration of the virtual space laid out before him is pertinent. In this mood of exploration sound is important and heavily contributing to the aesthetic of the experience.

  6. Principles of underwater sound

    National Research Council Canada - National Science Library

    Urick, Robert J

    1983-01-01

    ... the immediately useful help they need for sonar problem solving. Its coverage is broad-ranging from the basic concepts of sound in the sea to making performance predictions in such applications as depth sounding, fish finding, and submarine detection...

  7. Effects of musical expertise on oscillatory brain activity in response to emotional sounds.

    Science.gov (United States)

    Nolden, Sophie; Rigoulot, Simon; Jolicoeur, Pierre; Armony, Jorge L

    2017-08-01

    Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories as they found that musicians process more efficiently emotional musical and vocal sounds than non-musicians. However, the neural correlates of these modulations, especially their time course, are not very well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and expertise of participants. Electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG-oscillatory activity in the theta, alpha, beta, and gamma band was quantified and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and emotional content of sounds in frontal alpha. The results reflect musicians' expertise in recognition of emotion-conveying music, which seems to also generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on musical and vocal sounds processing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Sounding the field: recent works in sound studies.

    Science.gov (United States)

    Boon, Tim

    2015-09-01

    For sound studies, the publication of a 593-page handbook, not to mention the establishment of at least one society - the European Sound Studies Association - might seem to signify the emergence of a new academic discipline. Certainly, the books under consideration here, alongside many others, testify to an intensification of concern with the aural dimensions of culture. Some of this work comes from HPS and STS, some from musicology and cultural studies. But all of it should concern members of our disciplines, as it represents a long-overdue foregrounding of the aural in how we think about the intersections of science, technology and culture.

  9. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which `acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ, with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
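
    The time dilation described in the abstract follows the familiar form of the Lorentz factor, with c reinterpreted as the speed of sound; a small sketch (the 343 m/s default is an illustrative value for air at room temperature):

    ```python
    import math

    def lorentz_factor(v, c=343.0):
        """Sonic Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2),
        with c taken as the speed of sound rather than of light."""
        return 1.0 / math.sqrt(1.0 - (v / c)**2)

    # A clock chain moving at half the speed of sound ticks slower by gamma:
    print(round(lorentz_factor(171.5), 4))  # 1.1547
    ```

    As in special relativity, gamma diverges as v approaches c, here the speed of sound, which is why acoustic observers cannot detect their motion relative to the laboratory's preferred frame using sound clocks alone.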

  10. Non-Wovens as Sound Reducers

    Science.gov (United States)

    Belakova, D.; Seile, A.; Kukle, S.; Plamus, T.

    2018-04-01

    Within the present study, the effect of hemp (40 wt%) and polylactide (60 wt%), non-woven surface density, thickness and number of fibre web layers on the sound absorption coefficient and the sound transmission loss in the frequency range from 50 to 5000 Hz is analysed. The sound insulation properties of the experimental samples have been determined, compared to the ones in practical use, and the possible use of material has been defined. Non-woven materials are ideally suited for use in acoustic insulation products because the arrangement of fibres produces a porous material structure, which leads to a greater interaction between sound waves and fibre structure. Of all the tested samples (A, B and D), the non-woven variant B exceeded the surface density of sample A by 1.22 times and 1.15 times that of sample D. By placing non-wovens one above the other in 2 layers, it is possible to increase the absorption coefficient of the material, which depending on the frequency corresponds to C, D, and E sound absorption classes. Sample A demonstrates the best sound absorption of all the three samples in the frequency range from 250 to 2000 Hz. In the test frequency range from 50 to 5000 Hz, the sound transmission loss varies from 0.76 (Sample D at 63 Hz) to 3.90 (Sample B at 5000 Hz).
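
    The transmission loss figures quoted above (in dB) follow from the standard definition in terms of the power transmission coefficient; a minimal sketch (the example value is illustrative, not a measurement from the study):

    ```python
    import math

    def transmission_loss_db(tau):
        """Sound transmission loss in dB from the transmission coefficient tau,
        the ratio of transmitted to incident sound power: TL = 10*log10(1/tau)."""
        return 10.0 * math.log10(1.0 / tau)

    # Halving the transmitted power corresponds to about 3 dB of transmission loss:
    print(round(transmission_loss_db(0.5), 2))  # 3.01
    ```

    The modest TL values reported (below 4 dB) are typical for thin, porous non-wovens, which are better absorbers than barriers.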

  11. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," which is a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  12. Sound Synthesis and Evaluation of Interactive Footsteps and Environmental Sounds Rendering for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-01-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based ...... a soundscape significantly improves the recognition of the simulated environment....

  13. EPA2011 Microbial & nutrient database - Evaluating the ecological health of Puget Sound's pelagic foodweb

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — To evaluate effects of human influence on the health of Puget Sound's pelagic ecosystems, we propose a sampling program across multiple oceanographic basins...

  14. Sound of proteins

    DEFF Research Database (Denmark)

    2007-01-01

    In my group we work with Molecular Dynamics to model several different proteins and protein systems. We submit our modelled molecules to changes in temperature, changes in solvent composition and even external pulling forces. To analyze our simulation results we have so far used visual inspection...... and statistical analysis of the resulting molecular trajectories (as everybody else!). However, recently I started assigning a particular sound frequency to each amino acid in the protein, and by setting the amplitude of each frequency according to the movement amplitude we can "hear" whenever two amino acids...... example of a sound file was obtained using Steered Molecular Dynamics to stretch the neck region of the scallop myosin molecule (in rigor, PDB-id: 1SR6), in such a way as to cause a rotation of the myosin head. Myosin is the molecule responsible for producing the force during muscle contraction...

  15. Using therapeutic sound with progressive audiologic tinnitus management.

    Science.gov (United States)

    Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A

    2008-09-01

    Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound-broadly categorized as environmental sound, music, and speech-resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).

  16. Immersive Environments: Using Flow and Sound to Blur Inhabitant and Surroundings

    Science.gov (United States)

    Laverty, Luke

    Following in the footsteps of motif-reviving, aesthetically-focused Postmodern and deconstructivist architecture, purely computer-generated formalist contemporary architecture (i.e. blobitecture) has been reduced to vast, empty sculptural, and therefore purely ocularcentric, gestures for their own sake. Taking precedence over the deliberate relation to the people inhabiting them beyond scaleless visual stimulation, the forms become separated from and hostile toward their inhabitants; a boundary appears. This thesis calls for a reintroduction of human-centered design beyond Modern functionalism and ergonomics and Postmodern form and metaphor into architecture by exploring ecological psychology (specifically how one becomes attached to objects) and phenomenology (specifically sound) in an attempt to reach a contemporary human scale using the technology of today: the physiological mind. Psychologist Dr. Mihaly Csikszentmihalyi's concept of flow---when one becomes so mentally immersed within the current activity and immediate surroundings that the boundary between inhabitant and environment becomes transparent through a form of trance---is the embodiment of this thesis' goal, but it is limited to only specific moments throughout the day and is typically studied without regard to the environment. Physiologically, the area within the brain---the medial prefrontal cortex---stimulated during flow experiences is also stimulated by the synthesis of sound, memory, and emotion. By exploiting sound (a sense not typically focused on within phenomenology) as a form of constant nuance within the everyday productive dissonance, the engagement and complete concentration on one's own interpretation of this sensory input affords flow experiences and, therefore, a blurred boundary with one's environment. This thesis aims to answer the question: How does the built environment embody flow? The above concept will be illustrated within a ubiquitous building type---the everyday housing tower

  17. Instrument Identification in Polyphonic Music: Feature Weighting to Minimize Influence of Sound Overlaps

    Directory of Open Access Journals (Sweden)

    Goto Masataka

    2007-01-01

    Full Text Available We provide a new solution to the problem of feature variations caused by the overlapping of sounds in instrument identification in polyphonic music. When multiple instruments play simultaneously, partials (harmonic components) of their sounds overlap and interfere, which makes the acoustic features different from those of monophonic sounds. To cope with this, we weight features based on how much they are affected by overlapping. First, we quantitatively evaluate the influence of overlapping on each feature as the ratio of the within-class variance to the between-class variance in the distribution of training data obtained from polyphonic sounds. Then, we generate feature axes using a weighted mixture that minimizes the influence via linear discriminant analysis. In addition, we improve instrument identification using musical context. Experimental results showed that the recognition rates using both feature weighting and musical context were 84.1% for duo, 77.6% for trio, and 72.3% for quartet; those without using either were 53.4%, 49.6%, and 46.5%, respectively.
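The per-feature discriminability measure described above (a ratio of between-class to within-class variance, so that features distorted most by sound overlap carry the least weight) can be sketched as follows. This is a simplified per-feature version, not the authors' full LDA-based generation of weighted feature axes:

```python
import numpy as np

def discriminability_weights(X, y):
    """Weight each feature by its between-class / within-class variance
    ratio, computed from polyphonic training data. Features whose values
    are scattered by sound overlap (large within-class spread) receive
    small weights. X: (n_samples, n_features) array, y: class labels."""
    classes = np.unique(y)
    grand_mean = X.mean(axis=0)
    within = np.zeros(X.shape[1])
    between = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        class_mean = Xc.mean(axis=0)
        # scatter of samples around their own class mean
        within += ((Xc - class_mean) ** 2).sum(axis=0)
        # scatter of class means around the grand mean
        between += len(Xc) * (class_mean - grand_mean) ** 2
    return between / (within + 1e-12)  # small epsilon avoids div-by-zero
```

A feature that separates instruments cleanly despite overlap gets a large weight; a feature whose value is dominated by interference from co-occurring notes gets a weight near zero.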

  18. Letter-Sound Knowledge: Exploring Gender Differences in Children When They Start School Regarding Knowledge of Large Letters, Small Letters, Sound Large Letters, and Sound Small Letters

    Directory of Open Access Journals (Sweden)

    Hermundur Sigmundsson

    2017-09-01

    Full Text Available This study explored whether there is a gender difference in letter-sound knowledge when children start school. 485 children aged 5–6 years completed an assessment of letter-sound knowledge, i.e., large letters; sound of large letters; small letters; sound of small letters. The findings indicate a significant difference between girls and boys, in favor of the girls, on all four factors tested in this study. There is still no clear explanation for the basis of a presumed gender difference in letter-sound knowledge. A neuro-biological origin of the findings cannot be excluded; however, the fact that girls have probably been exposed to more language experience/stimulation than boys lends support to explanations derived from environmental aspects.

  19. [Drivers of human-caused fire occurrence and its variation trend under climate change in the Great Xing'an Mountains, Northeast China].

    Science.gov (United States)

    Li, Shun; Wu, Zhi Wei; Liang, Yu; He, Hong Shi

    2017-01-01

    The Great Xing'an Mountains are an important boreal forest region in China with a high frequency of fire occurrences. With climate change, this region may experience a substantial change in fire frequency. Building the relationship between the spatial pattern of human-caused fire occurrence and its influencing factors, and predicting the spatial patterns of human-caused fires under climate change scenarios, are important for fire management and carbon balance in boreal forests. We employed a spatial point pattern model to explore the relationship between the spatial pattern of human-caused fire occurrence and its influencing factors based on a database of historical fire records (1967-2006) in the Great Xing'an Mountains. The fire occurrence time was used as the dependent variable. Nine abiotic (annual temperature and precipitation, elevation, aspect, and slope), biotic (vegetation type), and human factors (distance to the nearest road, road density, and distance to the nearest settlement) were selected as explanatory variables. We substituted the climate scenario data (RCP 2.6 and RCP 8.5) for the current climate data to predict the future spatial patterns of human-caused fire occurrence in 2050. Our results showed that the point pattern process (PPP) model was an effective tool to predict the future relationship between fire occurrence and its spatial covariates. The climatic variables might significantly affect human-caused fire occurrence, while vegetation type, elevation and human variables were important predictors of human-caused fire occurrence. The human-caused fire occurrence probability was expected to increase in the south of the area, and the north and the area along the main roads would also become areas with high human-caused fire occurrence. The human-caused fire occurrence would increase by 72.2% under the RCP 2.6 scenario and by 166.7% under the RCP 8.5 scenario in 2050. Under climate change scenarios, the spatial patterns of human-caused fires were mainly
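The study fits a spatial point pattern process model over climatic, topographic, vegetation and human covariates. A loosely analogous sketch of how such covariates can map to an occurrence probability for a single grid cell through a logistic link is shown below; the variable names and coefficient values are hypothetical illustrations, not the fitted estimates from the paper:

```python
import math

def fire_occurrence_prob(temp_c, dist_to_road_km, elevation_m, coefs):
    """Toy logistic link from three covariates to a human-caused fire
    occurrence probability for one grid cell. `coefs` holds
    (intercept, b_temp, b_road, b_elev); all values are hypothetical."""
    eta = (coefs[0]
           + coefs[1] * temp_c           # warmer cells: higher risk
           + coefs[2] * dist_to_road_km  # farther from roads: lower risk
           + coefs[3] * elevation_m)     # higher terrain: lower risk
    return 1.0 / (1.0 + math.exp(-eta))  # logistic (inverse-logit) link
```

With signs chosen this way, warming a cell or bringing it closer to a road raises its predicted occurrence probability, mirroring the paper's finding that climate and road-related variables drive human-caused fire patterns.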

  20. XRD and FTIR crystallinity indices in sound human tooth enamel and synthetic hydroxyapatite.

    Science.gov (United States)

    Reyes-Gasga, José; Martínez-Piñeiro, Esmeralda L; Rodríguez-Álvarez, Galois; Tiznado-Orozco, Gaby E; García-García, Ramiro; Brès, Etienne F

    2013-12-01

    The crystallinity index (CI) is a measure of the percentage of crystalline material in a given sample and is also correlated to the degree of order within the crystals. In the literature, two ways are reported to measure the CI: X-ray diffraction and infrared spectroscopy. Although the CI determined by these techniques has been adopted in the field of archeology as a measure of structural order in bone, with the idea that it can help e.g. in the sequencing of bones in chronological and/or stratigraphic order, some debate remains about the reliability of the CI values. To investigate similarities and differences between the two techniques, the CI of sound human tooth enamel and synthetic hydroxyapatite (HAP) was measured in this work by X-ray diffraction (XRD) and Fourier Transform Infrared spectroscopy (FTIR), at room temperature and after heat treatment. Although the (CI)XRD index is related to the crystal structure of the samples and the (CI)FTIR index is related to the vibration modes of the molecular bonds, both indices showed similar qualitative behavior for heat-treated samples. At room temperature, the (CI)XRD value indicated that enamel is more crystalline than synthetic HAP, while (CI)FTIR indicated the opposite. Scanning (SEM) and transmission (TEM) electron microscopy images were also used to corroborate the measured CI values.
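One widely used FTIR crystallinity measure for apatites, often reported as (CI)FTIR in this literature, is the infrared splitting factor of the phosphate nu4 doublet. A minimal sketch, assuming that common definition (peak and valley heights measured above a shared baseline); it is one of several CI conventions, not necessarily the exact one used in this study:

```python
def infrared_splitting_factor(h565, h605, h_valley):
    """Infrared splitting factor: sum of the phosphate nu4 doublet peak
    heights (absorption bands near 565 and 605 cm^-1) divided by the
    height of the valley between them, all above a common baseline.
    Larger values indicate better-resolved bands, i.e. higher
    crystallinity / structural order."""
    return (h565 + h605) / h_valley
```

A well-crystallized sample shows deep separation between the two bands (small `h_valley`, large factor), whereas a poorly ordered apatite shows a shallow valley and a factor closer to the minimum.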