WorldWideScience

Sample records for sound producing systems

  1. Pacific and Atlantic herring produce burst pulse sounds.

    Science.gov (United States)

    Wilson, Ben; Batty, Robert S; Dill, Lawrence M

    2004-02-07

    The commercial importance of Pacific and Atlantic herring (Clupea pallasii and Clupea harengus) has ensured that much of their biology has received attention. However, their sound production remains poorly studied. We describe the sounds made by captive wild-caught herring. Pacific herring produce distinctive bursts of pulses, termed Fast Repetitive Tick (FRT) sounds. These trains of broadband pulses (1.7-22 kHz) lasted between 0.6 s and 7.6 s. Most were produced at night; feeding regime did not affect their frequency, and fish produced FRT sounds without direct access to the air. Digestive gas or gulped air transfer to the swim bladder, therefore, do not appear to be responsible for FRT sound generation. Atlantic herring also produce FRT sounds, and video analysis showed an association with bubble expulsion from the anal duct region (i.e. from the gut or swim bladder). To the best of the authors' knowledge, sound production by such means has not previously been described. The function(s) of these sounds are unknown, but as the per capita rates of sound production by fish at higher densities were greater, social mediation appears likely. These sounds may have consequences for our understanding of herring behaviour and the effects of noise pollution.

  2. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common for a physical system to resonate at a particular frequency, one that depends on physical parameters which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to use standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.
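
    The record describes tracking via positive feedback plus lock-in demodulation. A minimal software lock-in sketch (illustrative only, not the authors' implementation; the signal and filter parameters are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lock_in(signal, fs, f_ref, cutoff=10.0):
    """Software lock-in: demodulate `signal` at reference frequency f_ref.

    Returns amplitude and phase of the component at f_ref.
    Assumes `signal` is a 1-D array sampled at fs (Hz).
    """
    t = np.arange(len(signal)) / fs
    # Mix with in-phase and quadrature references
    i_mix = signal * np.cos(2 * np.pi * f_ref * t)
    q_mix = signal * np.sin(2 * np.pi * f_ref * t)
    # Low-pass filtering removes the 2*f_ref component, keeping the DC term
    b, a = butter(4, cutoff / (fs / 2))
    i_dc = filtfilt(b, a, i_mix)
    q_dc = filtfilt(b, a, q_mix)
    amplitude = 2 * np.sqrt(i_dc**2 + q_dc**2)
    phase = np.arctan2(q_dc, i_dc)
    return amplitude, phase

# Example: a 1 kHz "second sound" tone buried in noise
fs, f0 = 50_000, 1_000.0
t = np.arange(0, 1.0, 1 / fs)
x = 0.1 * np.sin(2 * np.pi * f0 * t) + 0.05 * np.random.randn(len(t))
amp, _ = lock_in(x, fs, f0)
print(f"recovered amplitude ~ {amp[len(amp)//2]:.3f}")  # about 0.1
```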

  3. Speed of sound in biodiesel produced by low power ultrasound

    Science.gov (United States)

    Oliveira, P. A.; Silva, R. M. B.; Morais, G. C.; Alvarenga, A. V.; Costa-Felix, R. P. B.

    2018-03-01

    Quality control of the biodiesel produced is an important issue for every manufacturer or retailer. The speed of sound is a property that influences the quality of the produced fuel. This work evaluates the speed of sound in biodiesel produced with the aid of low-power ultrasound at frequencies of 1 MHz and 3 MHz. The speed of sound was measured by the pulse-echo technique. The ultrasonic frequency used during the reaction affects the speed of sound in the biodiesel. The largest expanded uncertainty for the adjusted curve was 4.9 m·s⁻¹.
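
    The record measures the speed of sound by the pulse-echo technique. A minimal sketch of the underlying time-of-flight estimate (the path length, sampling rate, and cross-correlation approach are assumptions, not details from the record):

```python
import numpy as np

def speed_of_sound_pulse_echo(tx_pulse, rx_signal, fs, path_length_m):
    """Estimate speed of sound from a pulse-echo measurement.

    The echo travels to the reflector and back, so the acoustic path
    is 2 * path_length_m. Time of flight is taken as the lag of the
    cross-correlation peak between transmitted and received signals.
    """
    corr = np.correlate(rx_signal, tx_pulse, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(tx_pulse) - 1)
    time_of_flight = lag / fs
    return 2.0 * path_length_m / time_of_flight

# Synthetic example: echo delayed by 20 us over a 15 mm path (~1500 m/s)
fs = 50e6
pulse = np.sin(2 * np.pi * 1e6 * np.arange(0, 5e-6, 1 / fs))
rx = np.zeros(5000)
delay = int(round(20e-6 * fs))
rx[delay:delay + len(pulse)] = pulse
print(speed_of_sound_pulse_echo(pulse, rx, fs, 0.015))  # ~1500 m/s
```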

  4. Underwater sound produced by individual drop impacts and rainfall

    DEFF Research Database (Denmark)

    Pumphrey, Hugh C.; Crum, L. A.; Jensen, Leif Bjørnø

    1989-01-01

    An experimental study of the underwater sound produced by water drop impacts on the surface is described. It is found that sound may be produced in two ways: first when the drop strikes the surface and, second, when a bubble is created in the water. The first process occurs for every drop...

  5. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity

    Science.gov (United States)

    Warlaumont, Anne S.; Finnegan, Megan K.

    2016-01-01

    At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant’s nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model’s frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one’s own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop

  6. Observations of the sound producing organs in achelate lobster larvae

    Directory of Open Access Journals (Sweden)

    John A. Fornshell

    2017-06-01

    The Achelata, lobsters lacking claws and having a phyllosoma larva, are divided into two families, the Palinuridae or spiny lobsters and the Scyllaridae or slipper lobsters. Within the Palinuridae, two groups of adults were identified by Parker (1884): the Stridentes, which are capable of producing sounds, and the Silentes, which are not known to produce sounds. The Stridentes employ a file-like structure on the dorsal surface of the cephalon and a plectrum consisting of a series of ridges on the proximal segment of the second antenna to produce their sounds. All species of Achelata hatch as an unpigmented, thin phyllosoma larva. The phyllosoma larvae of the Stridentes have a presumptive file-like structure on the dorsal cephalon. A similar file-like structure is found on the cephalon of one species of Silentes, Palinurellus wienckki, and on some but not all of the phyllosoma larvae of the Scyllaridae. No presumptive plectrum is found on the second antenna of any of the phyllosoma larvae. The presence of a presumptive file-like structure on phyllosoma larvae of Silentes and Scyllaridae suggests that the ability to produce sounds may have been lost secondarily in the Silentes and Scyllaridae.

  7. Analysis of failure of voice production by a sound-producing voice prosthesis

    NARCIS (Netherlands)

    van der Torn, M.; van Gogh, C.D.L.; Verdonck-de Leeuw, I M; Festen, J.M.; Mahieu, H.F.

    OBJECTIVE: To analyse the cause of failing voice production by a sound-producing voice prosthesis (SPVP). METHODS: The functioning of a prototype SPVP is described in a female laryngectomee before and after its sound-producing mechanism was impeded by tracheal phlegm. This assessment included:

  8. Acoustic Performance of a Real-Time Three-Dimensional Sound-Reproduction System

    Science.gov (United States)

    Faller, Kenneth J., II; Rizzi, Stephen A.; Aumann, Aric R.

    2013-01-01

    The Exterior Effects Room (EER) is a 39-seat auditorium at the NASA Langley Research Center and was built to support psychoacoustic studies of aircraft community noise. The EER has a real-time simulation environment which includes a three-dimensional sound-reproduction system. This system requires real-time application of equalization filters to compensate for spectral coloration of the sound reproduction due to installation and room effects. This paper describes the efforts taken to develop the equalization filters for use in the real-time sound-reproduction system and the subsequent analysis of the system's acoustic performance. The acoustic performance of the compensated and uncompensated sound-reproduction system is assessed for its crossover performance, its performance under stationary and dynamic conditions, the maximum spatialized sound pressure level it can produce from a single virtual source, and for the spatial uniformity of a generated sound field. Additionally, application examples are given to illustrate the compensated sound-reproduction system performance using recorded aircraft flyovers.

  9. Analysis of Damped Mass-Spring Systems for Sound Synthesis

    Directory of Open Access Journals (Sweden)

    Don Morgan

    2009-01-01

    There are many ways of synthesizing sound on a computer. The method that we consider, called a mass-spring system, synthesizes sound by simulating the vibrations of a network of interconnected masses, springs, and dampers. Numerical methods are required to approximate the differential equation of a mass-spring system. The standard numerical method used in implementing mass-spring systems for use in sound synthesis is the symplectic Euler method. Implementers and users of mass-spring systems should be aware of the limitations of the numerical methods used; in particular we are interested in the stability and accuracy of the numerical methods used. We present an analysis of the symplectic Euler method that shows the conditions under which the method is stable and the accuracy of the decay rates and frequencies of the sounds produced.
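
    For reference, a minimal sketch of the symplectic (semi-implicit) Euler integrator described in the record, applied to a single damped mass-spring oscillator rendered as audio (parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def symplectic_euler_spring(f0_hz, decay_per_s, fs=44100, dur_s=1.0):
    """Synthesize the displacement of one damped mass-spring oscillator.

    Symplectic (semi-implicit) Euler: update velocity first, then use
    the *new* velocity to update position.
    """
    m = 1.0
    k = (2 * np.pi * f0_hz) ** 2 * m      # stiffness for the target frequency
    c = 2 * m * decay_per_s               # damping coefficient
    dt = 1.0 / fs
    x, v = 1.0, 0.0                       # initial displacement, velocity
    out = np.empty(int(fs * dur_s))
    for n in range(len(out)):
        a = (-k * x - c * v) / m          # acceleration from spring + damper
        v = v + dt * a                    # velocity update (explicit)
        x = x + dt * v                    # position update uses the new velocity
        out[n] = x
    return out

tone = symplectic_euler_spring(f0_hz=440.0, decay_per_s=4.0)
# Stability requires roughly dt < 2/omega0, i.e. f0 well below fs/pi.
```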

  10. Producing of Impedance Tube for Measurement of Acoustic Absorption Coefficient of Some Sound Absorber Materials

    Directory of Open Access Journals (Sweden)

    R. Golmohammadi

    2008-04-01

    Introduction & Objective: Noise is one of the most important harmful agents in the work environment. In spite of industrial improvements, exposure to noise above the permissible limit remains one of the health problems of workers. In Iran, exact information on the absorption coefficients of acoustic materials is not available. Iranian manufacturers do not have laboratories for measuring the sound absorbance of their products; therefore, the use of sound absorbers for noise control in industrial and non-industrial constructions is limited. The goal of this study was to design an impedance tube, based on the pressure method, for measurement of the sound absorption coefficient of acoustic materials. Materials & Methods: In this study, a measuring system and a calculation method for sound absorption, based on available equipment and relatively easy to use, were designed in accordance with ISO 10534-1. The measuring system consists of a heavy asbestos tube, a pure-tone sound generator, and a calibrated sound level meter, and it was used to measure some common sound-absorbing materials. Results: The sound absorption coefficients of 23 types of acoustic material available in Iran were tested. The reliability of the results was checked by repeating each measurement three times. The results showed that the standard deviation of the sound absorption coefficient of the studied materials was small. Conclusion: The present study provided the technology needed for designing and producing an impedance tube for determining the absorption coefficients of acoustic materials in Iran.
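
    The record describes a pressure-method (ISO 10534-1) impedance tube. A minimal sketch of the standing-wave-ratio calculation that such a tube enables (the example levels are invented, not measured values from the study):

```python
def absorption_coefficient_from_swr(p_max_db, p_min_db):
    """Normal-incidence absorption coefficient from a standing-wave measurement
    (ISO 10534-1 style). p_max_db and p_min_db are the SPL at a pressure
    maximum and an adjacent minimum inside the tube.
    """
    swr = 10 ** ((p_max_db - p_min_db) / 20.0)   # standing-wave ratio (pressure)
    r = (swr - 1.0) / (swr + 1.0)                # reflection coefficient magnitude
    return 1.0 - r ** 2                          # absorbed fraction of incident energy

print(absorption_coefficient_from_swr(94.0, 80.0))  # about 0.56 for a 14 dB ratio
```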

  11. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound

  12. Development of an Amplifier for Electronic Stethoscope System and Heart Sound Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. J.; Kang, D. K. [Chongju University, Chongju (Korea)

    2001-05-01

    A conventional stethoscope cannot store its stethoscopic sounds. A doctor therefore diagnoses a patient from the instantaneous stethoscopic sounds heard at the time and cannot recall the state of the patient's stethoscopic sounds at the next examination. This prevents accurate and objective diagnosis. If an electronic stethoscope that can store the stethoscopic sound is developed, auscultation will be greatly improved. This study describes an amplifier for an electronic stethoscope system that can extract heart sounds of a fetus as well as an adult and allows us to hear and record the sounds. Using the developed stethoscopic amplifier, clean heart sounds of fetus and adult can be heard in noisy environments, such as a consultation room of a university hospital or a university laboratory. Surprisingly, the heart sound of a 22-week fetus was heard through the developed electronic stethoscope. Pitch detection experiments using the detected heart sounds showed that the signal has distinct periodicity. It can be expected that the developed electronic stethoscope can substitute for conventional stethoscopes and that, if a proper analysis method for the stethoscopic signal is developed, a good electronic stethoscope system can be produced. (author). 17 refs., 6 figs.
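
    The record reports pitch-detection experiments showing distinct periodicity in the recorded heart sounds. A minimal autocorrelation-based periodicity estimate (an illustrative sketch; the sampling rate, search range, and synthetic test signal are assumptions):

```python
import numpy as np

def heart_rate_from_autocorrelation(sound, fs, min_bpm=40, max_bpm=200):
    """Estimate heart rate (beats per minute) from a heart-sound recording.

    Uses the lag of the autocorrelation peak within a physiologically
    plausible beat-interval range.
    """
    x = sound - np.mean(sound)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    lo = int(fs * 60.0 / max_bpm)                       # shortest interval
    hi = int(fs * 60.0 / min_bpm)                       # longest interval
    peak_lag = lo + np.argmax(ac[lo:hi])
    return 60.0 * fs / peak_lag

# Synthetic test: impulse train at ~72 bpm plus noise
fs = 1000
t = np.arange(0, 10, 1 / fs)
beats = (np.sin(2 * np.pi * 1.2 * t) > 0.999).astype(float)  # ~1.2 Hz events
x = beats + 0.01 * np.random.randn(len(t))
print(heart_rate_from_autocorrelation(x, fs))  # ~72 bpm
```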

  13. System for actively reducing sound

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2005-01-01

    A system for actively reducing sound from a primary noise source, such as traffic noise, comprising: a loudspeaker connector for connecting to at least one loudspeaker for generating anti-sound for reducing said noisy sound; a microphone connector for connecting to at least a first microphone placed

  14. Feasibility of an electronic stethoscope system for monitoring neonatal bowel sounds.

    Science.gov (United States)

    Dumas, Jasmine; Hill, Krista M; Adrezin, Ronald S; Alba, Jorge; Curry, Raquel; Campagna, Eric; Fernandes, Cecilia; Lamba, Vineet; Eisenfeld, Leonard

    2013-09-01

    system should also, theoretically, reduce risk of infection. Based on our research we concluded that while automatic assessment of bowel sounds is feasible over an extended period, there will be times when analysis is not possible. One limitation is noise interference. Our larger goals include producing a meaningful vital sign to characterize bowel sounds that can be produced in real-time, as well as providing automatic control for patient feeding pumps.

  15. Improving Sound Systems by Electrical Means

    OpenAIRE

    Schneider, Henrik; Andersen, Michael A. E.; Knott, Arnold

    2015-01-01

    The availability and flexibility of audio services on various digital platforms have created a high demand for a wide range of sound systems. The fundamental components of sound systems such as docking stations, sound bars and wireless mobile speakers consist of a power supply, amplifiers and transducers. For historical reasons, the design of each of these components is commonly handled separately, which limits the full performance potential of such systems. To state some exa...

  16. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals) and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter...... is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many...... of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...

  17. A system for heart sounds classification.

    Directory of Open Access Journals (Sweden)

    Grzegorz Redlarski

    The future of quick and efficient disease diagnosis lies in the development of reliable non-invasive methods. As for cardiac diseases - one of the major causes of death around the globe - a concept of an electronic stethoscope equipped with an automatic heart tone identification system appears to be the best solution. Thanks to the advancement in technology, the quality of phonocardiography signals is no longer an issue. However, appropriate algorithms for auto-diagnosis systems of heart diseases that could be capable of distinguishing most of the known pathological states have not yet been developed. The main issue is the non-stationary character of phonocardiography signals as well as the wide range of distinguishable pathological heart sounds. In this paper a new heart sound classification technique, which might find use in medical diagnostic systems, is presented. It is shown that by combining Linear Predictive Coding coefficients, used for feature extraction, with a classifier built by combining a Support Vector Machine and the Modified Cuckoo Search algorithm, an improvement in the performance of the diagnostic system, in terms of accuracy, complexity and range of distinguishable heart sounds, can be made. The developed system achieved accuracy above 93% for all considered cases, including simultaneous identification of twelve different heart sound classes. The system is compared with four major classification methods, proving its reliability.
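
    The record combines Linear Predictive Coding (LPC) features with a Support Vector Machine tuned by a Modified Cuckoo Search. A minimal sketch of the LPC-plus-SVM portion only (the cuckoo-search hyperparameter tuning is omitted; the librosa/scikit-learn usage, placeholder data, and parameter values are assumptions, not the authors' code):

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def lpc_features(signal, order=12):
    """Return LPC coefficients (excluding the leading 1.0) as a feature vector."""
    a = librosa.lpc(signal.astype(float), order=order)
    return a[1:]

# `recordings` is a list of (waveform, class label) pairs; random placeholder
# data stands in for real heart-sound recordings here.
rng = np.random.default_rng(0)
recordings = [(rng.standard_normal(4000), label) for label in [0, 1] * 20]

X = np.array([lpc_features(x) for x, _ in recordings])
y = np.array([label for _, label in recordings])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # C, gamma would be tuned by the search
clf.fit(X_tr, y_tr)
print("accuracy on held-out data:", clf.score(X_te, y_te))
```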

  18. Sound produced by an oscillating arc in a high-pressure gas

    Science.gov (United States)

    Popov, Fedor K.; Shneider, Mikhail N.

    2017-08-01

    We suggest a simple theory to describe the sound generated by small periodic perturbations of a cylindrical arc in a dense gas. Theoretical analysis was done within the framework of the non-self-consistent channel arc model and supplemented with time-dependent gas dynamic equations. It is shown that an arc with power amplitude oscillations on the order of several percent is a source of sound whose intensity is comparable with external ultrasound sources used in experiments to increase the yield of nanoparticles in the high pressure arc systems for nanoparticle synthesis.

  19. A data-assimilative ocean forecasting system for the Prince William Sound and an evaluation of its performance during Sound Predictions 2009

    Science.gov (United States)

    Farrara, John D.; Chao, Yi; Li, Zhijin; Wang, Xiaochun; Jin, Xin; Zhang, Hongchun; Li, Peggy; Vu, Quoc; Olsson, Peter Q.; Schoch, G. Carl; Halverson, Mark; Moline, Mark A.; Ohlmann, Carter; Johnson, Mark; McWilliams, James C.; Colas, Francois A.

    2013-07-01

    The development and implementation of a three-dimensional ocean modeling system for the Prince William Sound (PWS) is described. The system consists of a regional ocean model component (ROMS) forced by output from a regional atmospheric model component (the Weather Research and Forecasting Model, WRF). The ROMS ocean model component has a horizontal resolution of 1 km within PWS and utilizes a recently-developed multi-scale 3DVAR data assimilation methodology along with freshwater runoff from land obtained via real-time execution of a digital elevation model. During the Sound Predictions Field Experiment (July 19-August 3, 2009) the system was run in real-time to support operations and incorporated all available real-time streams of data. Nowcasts were produced every 6 h and a 48-h forecast was performed once a day. In addition, a sixteen-member ensemble of forecasts was executed on most days. All results were published at a web portal (http://ourocean.jpl.nasa.gov/PWS) in real time to support decision making. The performance of the system during Sound Predictions 2009 is evaluated. The ROMS results are first compared with the assimilated data as a consistency check. RMS differences of about 0.7°C were found between the ROMS temperatures and the observed vertical profiles of temperature that are assimilated. The ROMS salinities show greater discrepancies, tending to be too salty near the surface. The overall circulation patterns observed throughout the Sound are qualitatively reproduced, including the following evolution in time. During the first week of the experiment, the weather was quite stormy with strong southeasterly winds. This resulted in strong north to northwestward surface flow in much of the central PWS. Both the observed drifter trajectories and the ROMS nowcasts showed strong surface inflow into the Sound through the Hinchinbrook Entrance and strong generally northward to northwestward flow in the central Sound that was exiting through the Knight

  20. Multichannel sound reinforcement systems at work in a learning environment

    Science.gov (United States)

    Malek, John; Campbell, Colin

    2003-04-01

    Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.

  1. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  2. [A focused sound field measurement system by LabVIEW].

    Science.gov (United States)

    Jiang, Zhan; Bai, Jingfeng; Yu, Ying

    2014-05-01

    In this paper, according to the requirements of focused sound field measurement, a focused sound field measurement system was established based on the LabVIEW virtual instrument platform. The system can automatically search for the focus position of the sound field and adjust the scanning path according to the size of the focal region. Three-dimensional sound field scanning time was reduced from 888 hours with a uniform step to 9.25 hours with a variable step, improving the efficiency of focused sound field measurement. There is a certain deviation between the measurement results and the theoretical calculation results: the -6 dB width difference rate in the focal plane was 3.691%, and the -6 dB length difference rate along the beam axis was 12.937%.

  3. The dispersion-focalization theory of sound systems

    Science.gov (United States)

    Schwartz, Jean-Luc; Abry, Christian; Boë, Louis-Jean; Vallée, Nathalie; Ménard, Lucie

    2005-04-01

    The Dispersion-Focalization Theory states that sound systems in human languages are shaped by two major perceptual constraints: dispersion driving auditory contrast towards maximal or sufficient values [B. Lindblom, J. Phonetics 18, 135-152 (1990)] and focalization driving auditory spectra towards patterns with close neighboring formants. Dispersion is computed from the sum of the inverse squared inter-spectra distances in the (F1, F2, F3, F4) space, using a non-linear process based on the 3.5 Bark critical distance to estimate F2'. Focalization is based on the idea that close neighboring formants produce vowel spectra with marked peaks, easier to process and memorize in the auditory system. Evidence for increased stability of focal vowels in short-term memory was provided in a discrimination experiment on adult French subjects [J. L. Schwartz and P. Escudier, Speech Comm. 8, 235-259 (1989)]. A reanalysis of infant discrimination data shows that focalization could well be responsible for recurrent discrimination asymmetries [J. L. Schwartz et al., Speech Comm. (in press)]. Recent data about children's vowel production indicate that focalization seems to be part of the perceptual templates driving speech development. The Dispersion-Focalization Theory produces valid predictions for both vowel and consonant systems, in relation to available databases of human language inventories.
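
    The record defines dispersion as the sum of the inverse squared inter-spectra distances. A minimal sketch of that dispersion term for candidate vowel systems (the focalization term and the 3.5 Bark F2' estimation are not implemented; the formant values are illustrative):

```python
import numpy as np

def dispersion_energy(formants_bark):
    """Sum of inverse squared distances between all vowel pairs.

    formants_bark: array of shape (n_vowels, n_formants), e.g. (F1, F2') in Bark.
    Lower energy = better-dispersed (more contrastive) vowel system.
    """
    n = len(formants_bark)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d2 = np.sum((formants_bark[i] - formants_bark[j]) ** 2)
            energy += 1.0 / d2
    return energy

# Illustrative (F1, F2') values in Bark for /i a u/ versus a crowded system
i_a_u = np.array([[2.5, 13.5], [7.0, 10.5], [3.0, 6.0]])
crowded = np.array([[2.5, 13.5], [3.0, 12.5], [3.5, 11.5]])
print(dispersion_energy(i_a_u) < dispersion_energy(crowded))  # True
```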

  4. Portable system for auscultation and lung sound analysis.

    Science.gov (United States)

    Nabiev, Rustam; Glazova, Anna; Olyinik, Valery; Makarenkova, Anastasiia; Makarenkov, Anatolii; Rakhimov, Abdulvosid; Felländer-Tsai, Li

    2014-01-01

    A portable system for auscultation and lung sound analysis has been developed, including an original electronic stethoscope coupled with mobile devices and special algorithms for the automated analysis of pulmonary sound signals. It is planned that the developed system will be used for monitoring the health status of patients with various pulmonary diseases.

  5. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  6. Heart Sound Biometric System Based on Marginal Spectrum Analysis

    Science.gov (United States)

    Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin

    2013-01-01

    This work presents a heart sound biometric system based on marginal spectrum analysis, which is a new feature extraction technique for identification purposes. This heart sound identification system is comprised of signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of the optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients result in a significant increase in the recognition rate of 94.40% compared with that of the traditional Fourier spectrum (84.32%) based on a database of 280 heart sounds from 40 participants. PMID:23429515
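
    The record's feature extraction is based on the marginal spectrum (Hilbert-Huang analysis). A rough sketch of one way to compute it, assuming the PyEMD package for empirical mode decomposition; the decomposition settings and binning are assumptions, not the authors' method:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # assumed dependency: pip install EMD-signal

def marginal_spectrum(x, fs, n_bins=128):
    """Hilbert marginal spectrum: amplitude accumulated over time per frequency bin.

    Decompose with EMD, take the Hilbert transform of each IMF, then
    histogram instantaneous amplitude against instantaneous frequency.
    """
    imfs = EMD().emd(x)
    edges = np.linspace(0.0, fs / 2, n_bins + 1)
    spectrum = np.zeros(n_bins)
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)[:-1]
        inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
        valid = (inst_freq >= 0) & (inst_freq < fs / 2)
        idx = np.digitize(inst_freq[valid], edges) - 1
        np.add.at(spectrum, idx, amp[valid])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, spectrum

# Feature vector for one heart-sound recording `x` sampled at `fs`:
# freqs, ms = marginal_spectrum(x, fs)
```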

  7. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time domain solution of the KZK equation.

  8. Sound transmission reduction with intelligent panel systems

    Science.gov (United States)

    Fuller, Chris R.; Clark, Robert L.

    1992-01-01

    Experimental and theoretical investigations are performed of the use of intelligent panel systems to control the sound transmission and radiation. An intelligent structure is defined as a structural system with integrated actuators and sensors under the guidance of an adaptive, learning type controller. The system configuration is based on the Active Structural Acoustic Control (ASAC) concept where control inputs are applied directly to the structure to minimize an error quantity related to the radiated sound field. In this case multiple piezoelectric elements are employed as sensors. The importance of optimal shape and location is demonstrated to be of the same order of influence as increasing the number of channels of control.

  9. Validating a perceptual distraction model in a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict user’s perceived distraction caused by audio-on-audio interference, e.g., two competing audio sources within the same listening space. Originally, the distraction model was trained with music-on-music stimuli...... using a simple loudspeaker setup, consisting of only two loudspeakers, one for the target sound source and the other for the interfering sound source. Recently, the model was successfully validated in a complex personal sound-zone system with speech-on-music stimuli. Second round of validations were...... conducted by physically altering the sound-zone system and running a set of new listening experiments utilizing two sound zones within the sound-zone system. Thus, validating the model using a different sound-zone system with both speech-on-music and music-on-speech stimuli sets. Preliminary results show...

  10. Actual problems of geo-environment and sounding systems

    OpenAIRE

    Burakhovich, T. K.; Kobolev, V. P.

    2017-01-01

    The results of the conference "Actual problems of geo-environment and sounding systems" are presented. The conference was dedicated to the memory of the outstanding scientists Vladimir Nikolaevich Shuman and Sergei Nikolaevich Kulik, who made a great contribution to the theory, methodology and geological interpretation of deep electromagnetic sounding of the Earth.

  11. Air conducted and body conducted sound produced by own voice

    DEFF Research Database (Denmark)

    Hansen, Mie Østergaard

    1998-01-01

    When we speak, sound reaches our ears both through the air, from the mouth to the ear, and through our body, as vibrations. The ratio between the airborne and body-conducted sound has been studied in a pilot experiment where the airborne sound was eliminated by isolating the ear with a large...... attenuation box. The ratio was found to lie between -15 dB and -7 dB below 1 kHz, comparable with theoretical estimations. This work is part of a broader study of the occlusion effect, and the results provide important input data for modelling the sound pressure change between an open and an occluded ear canal....

  12. Controlled Acoustic Bass System (CABS) A Method to Achieve Uniform Sound Field Distribution at Low Frequencies in Rectangular Rooms

    DEFF Research Database (Denmark)

    Celestinos, Adrian; Nielsen, Sofus Birkedal

    2008-01-01

    The sound field produced by loudspeakers at low frequencies in small- and medium-size rectangular listening rooms is highly nonuniform due to the multiple reflections and diffractions of sound on the walls and different objects in the room. A new method, called controlled acoustic bass system (CA......-frequency range. CABS has been simulated and measured in two different standard listening rooms with satisfactory results....

  13. Improving Sound Systems by Electrical Means

    DEFF Research Database (Denmark)

    Schneider, Henrik

    to intelligent control and protection functionality and so on. In this work, different strategies towards improvement of sound systems by electrical means were investigated, considering the interfaces between each component and the performance of the full system. The strategies can be categorized by improvements...... reduction in the best case. This technology is very promising since it compensates for most distortion mechanisms of the transducer, such as non-linearities, production variation, wear and tear, temperature changes and so on. Furthermore, the accelerometer output can be used for protection purposes. The only...... of the bent copper foils to optimize the DC resistance. The DC resistance was reduced by 30 % compared to the starting point for a 10-turn toroidal inductor using this method. The combined work indicates that large sound system improvements are within reach by use of electrical means. Innovative solutions have...

  14. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The results showed that the patterning of L1 sounds occurred on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. Those patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  15. ANALYZING THE PATTERN OF L1 SOUNDS ON L2 SOUNDS PRODUCED BY JAVANESE STUDENTS OF STKIP PGRI JOMBANG

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The results showed that the patterning of L1 sounds occurred on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. Those patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  16. Sound as Popular Culture

    DEFF Research Database (Denmark)

    The wide-ranging texts in this book take as their premise the idea that sound is a subject through which popular culture can be analyzed in an innovative way. From an infant’s gurgles over a baby monitor to the roar of the crowd in a stadium to the sub-bass frequencies produced by sound systems...... in the disco era, sound—not necessarily aestheticized as music—is inextricably part of the many domains of popular culture. Expanding the view taken by many scholars of cultural studies, the contributors consider cultural practices concerning sound not merely as semiotic or signifying processes but as material......, physical, perceptual, and sensory processes that integrate a multitude of cultural traditions and forms of knowledge. The chapters discuss conceptual issues as well as terminologies and research methods; analyze historical and contemporary case studies of listening in various sound cultures; and consider...

  17. Intelligent Systems Approaches to Product Sound Quality Analysis

    Science.gov (United States)

    Pietila, Glenn M.

    As a product market becomes more competitive, consumers become more discriminating in the way in which they differentiate between engineered products. The consumer often makes a purchasing decision based on the sound emitted from the product during operation by using the sound to judge quality or annoyance. Therefore, in recent years, many sound quality analysis tools have been developed to evaluate the consumer preference as it relates to a product sound and to quantify this preference based on objective measurements. This understanding can be used to direct a product design process in order to help differentiate the product from competitive products or to establish an impression on consumers regarding a product's quality or robustness. The sound quality process is typically a statistical tool that is used to model subjective preference, or merit score, based on objective measurements, or metrics. In this way, new product developments can be evaluated in an objective manner without the laborious process of gathering a sample population of consumers for subjective studies each time. The most common model used today is the Multiple Linear Regression (MLR), although recently non-linear Artificial Neural Network (ANN) approaches are gaining popularity. This dissertation will review publicly available published literature and present additional intelligent systems approaches that can be used to improve on the current sound quality process. The focus of this work is to address shortcomings in the current paired comparison approach to sound quality analysis. This research will propose a framework for an adaptive jury analysis approach as an alternative to the current Bradley-Terry model. The adaptive jury framework uses statistical hypothesis testing to focus on sound pairings that are most interesting and is expected to address some of the restrictions required by the Bradley-Terry model. It will also provide a more amicable framework for an intelligent systems approach

  18. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  19. Imagination, Perceptual Engagement and Sound Mediation. Thinking Technologically-Produced Sound Through Simondon's Concept of the Image

    NARCIS (Netherlands)

    Paiuk, G.

    2018-01-01

    Applying French philosopher Gilbert Simondon’s concept of image to the domain of the sonorous, this article aims to tackle how imagination is constitutional in our grasp of sound, and how this grasp is informed by the protocols and affordances of technological tools of sound reproduction and

  20. Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.

    Science.gov (United States)

    Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang

    2007-01-01

    Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.

  1. Design of Wearable Breathing Sound Monitoring System for Real-Time Wheeze Detection

    Directory of Open Access Journals (Sweden)

    Shih-Hong Li

    2017-01-01

    In the clinic, the wheezing sound is usually considered an indicator symptom reflecting the degree of airway obstruction. Auscultation is the most common way to diagnose wheezing sounds, but it depends subjectively on the experience of the physician. Several previous studies attempted to extract the features of breathing sounds to detect wheezing sounds automatically. However, there is still a lack of suitable monitoring systems for real-time wheeze detection in daily life. In this study, a wearable and wireless breathing sound monitoring system for real-time wheeze detection was proposed. Moreover, a breathing sound analysis algorithm was designed to continuously extract and analyze the features of breathing sounds and to provide objective, quantitative information about breathing sounds to professional physicians. Here, normalized spectral integration (NSI) was also designed and applied to wheeze detection. The proposed algorithm requires only short-term breathing sound data and low computational complexity to perform real-time wheeze detection, and it is suitable for implementation in a commercial portable device with relatively low computing power and memory. The experimental results show that the proposed system provides accurate wheeze detection and might be a useful assisting tool for the analysis of breathing sounds in clinical diagnosis.
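
    The record introduces normalized spectral integration (NSI) only at a high level. The sketch below shows one plausible reading of such a band-energy ratio; the band limits, frame length, and threshold are assumptions, not the authors' values:

```python
import numpy as np
from scipy.signal import spectrogram

def nsi_wheeze_flags(breath_sound, fs, band=(100.0, 1000.0), threshold=0.5):
    """Flag time frames whose spectral energy is concentrated in the wheeze band.

    NSI here = (power integrated over `band`) / (total power), per frame.
    Wheezes, being narrow-band and tonal, push this ratio up.
    """
    f, t, Sxx = spectrogram(breath_sound, fs=fs, nperseg=1024, noverlap=512)
    in_band = (f >= band[0]) & (f <= band[1])
    band_power = Sxx[in_band].sum(axis=0)
    total_power = Sxx.sum(axis=0) + 1e-12       # avoid division by zero
    nsi = band_power / total_power
    return t, nsi, nsi > threshold

# Example: a 400 Hz tonal "wheeze" added to broadband breathing noise
fs = 8000
t = np.arange(0, 2, 1 / fs)
x = 0.2 * np.random.randn(len(t))
x[fs:] += 0.5 * np.sin(2 * np.pi * 400 * t[fs:])     # wheeze in the second half
frames, nsi, flags = nsi_wheeze_flags(x, fs)
print(f"flagged {flags.mean():.0%} of frames")        # mostly the second half
```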

  2. Seafloor environments in the Long Island Sound estuarine system

    Science.gov (United States)

    Knebel, H.J.; Signell, R.P.; Rendigs, R. R.; Poppe, L.J.; List, J.H.

    1999-01-01

    Four categories of modern seafloor sedimentary environments have been identified and mapped across the large, glaciated, topographically complex Long Island Sound estuary by means of an extensive regional set of sidescan sonographs, bottom samples, and video-camera observations and supplemental marine-geologic and modeled physical-oceanographic data. (1) Environments of erosion or nondeposition contain sediments which range from boulder fields to gravelly coarse-to-medium sands and appear on the sonographs either as patterns with isolated reflections (caused by outcrops of glacial drift and bedrock) or as patterns of strong backscatter (caused by coarse lag deposits). Areas of erosion or nondeposition were found across the rugged seafloor at the eastern entrance of the Sound and atop bathymetric highs and within constricted depressions in other parts of the basin. (2) Environments of bedload transport contain mostly coarse-to-fine sand with only small amounts of mud and are depicted by sonograph patterns of sand ribbons and sand waves. Areas of bedload transport were found primarily in the eastern Sound where bottom currents have sculptured the surface of a Holocene marine delta and are moving these sediments toward the WSW into the estuary. (3) Environments of sediment sorting and reworking comprise variable amounts of fine sand and mud and are characterized either by patterns of moderate backscatter or by patterns with patches of moderate-to-weak backscatter that reflect a combination of erosion and deposition. Areas of sediment sorting and reworking were found around the periphery of the zone of bedload transport in the eastern Sound and along the southern nearshore margin. They also are located atop low knolls, on the flanks of shoal complexes, and within segments of the axial depression in the western Sound. (4) Environments of deposition are blanketed by muds and muddy fine sands that produce patterns of uniformly weak backscatter. Depositional areas occupy

  3. 40 CFR 205.54-2 - Sound data acquisition system.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Sound data acquisition system. 205.54-2 Section 205.54-2 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) NOISE ABATEMENT PROGRAMS TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS Medium and Heavy Trucks § 205.54-2 Sound...

  4. Spatial aspects of sound quality - subjective assessment of sound reproduced by stereo and by multichannel systems

    DEFF Research Database (Denmark)

    Choisel, Sylvain

    the fidelity with which sound reproduction systems can re-create the desired stereo image, a laser pointing technique was developed to accurately collect subjects' responses in a localization task. This method is subsequently applied in an investigation of the effects of loudspeaker directivity...... on the perceived direction of panned sources. The second part of the thesis addresses the identification of auditory attributes which play a role in the perception of sound reproduced by multichannel systems. Short musical excerpts were presented in mono, stereo and several multichannel formats to evoke various...

  5. The natural horn as an efficient sound radiating system ...

    African Journals Online (AJOL)

    Results obtained showed that the locally made horns are efficient sound radiating systems and are therefore excellent for sound production in local musical renditions. These findings, in addition to the portability and low cost of the horns, qualify them to be highly recommended for use in music making and for other purposes ...

  6. Low frequency sound field enhancement system for rectangular rooms, using multiple loudspeakers

    DEFF Research Database (Denmark)

    Celestinos, Adrian

    2007-01-01

    The scope of this PhD dissertation is within the performance of loudspeakers in rooms at low frequencies. The research concentrates on the improvement of the sound level distribution in rooms produced by loudspeakers at low frequencies. The work focuses on seeing the problem acoustically...... and solving it in the time domain. Loudspeakers are the last link in the sound reproduction chain, and they are typically placed in small or medium size rooms. When low frequency sound is radiated by a loudspeaker the sound level distribution along the room presents large deviations. This is due...... to the multiple reflection of sound at the rigid walls of the room. This may cause level differences of up to 20 dB in the room. Some of these deviations are associated with the standing waves, resonances or anti resonances of the room. The understanding of the problem is accomplished by analyzing the behavior...

  7. Spindle vibration and sound field measurement using optical vibrometry

    OpenAIRE

    Tatar, Kourosh

    2008-01-01

    Mechanical systems often produce a considerable amount of vibration and noise. To be able to obtain a complete picture of the dynamic behaviour of these systems, vibration and sound measurements are of significant importance. Optical metrology is well-suited for non-intrusive measurements on complex objects. The development and use of remote non-contact vibration measurement methods for spindles are described, together with vibration measurements on thin-walled structures and sound field measuremen...

  8. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    Science.gov (United States)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated by an infinite vibrating plate attenuates exponentially with increasing distance from the surface of the vibrating plate. In practice, however, a discontinuity of the sound field exists at the edge of a finite vibrating plate, which broadens the wavenumber spectrum. A sound wave radiates beyond the evanescent sound field because of this broadening of the wavenumber spectrum. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on a window function of the kind used in signal analysis for reducing the broadening of the frequency spectrum. The optimization calculation is necessary for designing a window function suitable for suppressing sound radiation while securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. Therefore, we investigated a suitable method for calculating the sound pressure level in the far field to confirm how the distribution of the sound pressure level varies with the window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinite far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by the method used to calculate the distribution of the sound pressure level at an infinite far field using the wavenumber spectrum on the vibrating plate. According to the result of comparing the distributions of the sound pressure level in the cases with and without the window function, it was confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was
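
    The record's central idea is that tapering the plate's velocity distribution narrows the wavenumber spectrum and so reduces far-field radiation. A one-dimensional illustration comparing rectangular and Hann-tapered apertures (all geometry and numbers are invented for illustration):

```python
import numpy as np

# 1-D aperture: particle-velocity distribution along a 0.3 m plate
L, n = 0.3, 1024
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
carrier = np.cos(2 * np.pi * x / 0.02)          # 50 cycles/m spatial carrier

v_rect = carrier                                # abrupt edges (finite plate)
v_hann = carrier * np.hanning(n)                # tapered edges

# Wavenumber spectra (zero-padded FFT of the aperture distribution)
pad = 8 * n
k = 2 * np.pi * np.fft.rfftfreq(pad, d=dx)
V_rect = np.abs(np.fft.rfft(v_rect, pad))
V_hann = np.abs(np.fft.rfft(v_hann, pad))

# Components with |k| < omega/c radiate to the far field; the taper lowers
# the spectrum at small k (here the lowest tens of rad/m) by tens of dB.
low_k = k < 50.0
print("low-k leakage, rect vs hann (dB):",
      20 * np.log10(V_rect[low_k].max() / V_hann[low_k].max()))
```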

  9. Acoustic analysis of trill sounds.

    Science.gov (United States)

    Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri

    2012-04-01

    In this paper, the acoustic-phonetic characteristics of steady apical trills--trill sounds produced by the periodic vibration of the apex of the tongue--are studied. Signal processing methods, namely, zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect the effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are glottal epochs, strength of impulses at the glottal epochs, and instantaneous fundamental frequency of the glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of vocal tract system during the production of trill sounds. Qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help spotting trills in continuous speech are discussed.
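
    The record relies on zero-frequency filtering to extract glottal epochs. A compact sketch of that idea (cascaded zero-frequency resonators followed by repeated local-mean removal; the window length and number of passes are assumptions):

```python
import numpy as np

def zero_frequency_filter(speech, fs, mean_window_s=0.01, passes=3):
    """Return the zero-frequency-filtered signal and glottal epoch locations.

    1. Difference the signal to remove any DC offset.
    2. Pass it twice through a zero-frequency resonator (each is a double
       integrator), implemented here as four cumulative sums.
    3. Remove the growing trend by repeatedly subtracting a local mean
       computed over roughly one to two pitch periods.
    Positive-going zero crossings of the result mark glottal epochs.
    """
    x = np.diff(speech, prepend=speech[0])
    y = x.copy()
    for _ in range(4):                      # two resonators = 4 integrations
        y = np.cumsum(y)
    win = int(mean_window_s * fs) | 1       # odd-length window
    kernel = np.ones(win) / win
    for _ in range(passes):
        y = y - np.convolve(y, kernel, mode="same")
    epochs = np.where((y[:-1] < 0) & (y[1:] >= 0))[0]
    return y, epochs

# epochs_in_samples = zero_frequency_filter(speech_signal, fs)[1]
```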

  10. Investigating the relationship between pressure force and acoustic waveform in footstep sounds

    DEFF Research Database (Denmark)

    Grani, Francesco; Serafin, Stefania; Götzen, Amalia De

    2013-01-01

    In this paper we present an inquiry into the relationships between audio waveforms and ground reaction force in recorded footstep sounds. In an anechoic room, we recorded several footstep sounds produced while walking on creaking wood and gravel. The recordings were performed by using a pair of sandals embedded with six pressure sensors each. An investigation of the relationships between recorded force and footstep sounds is presented, together with several possible applications of the system.

  11. Human-inspired sound environment recognition system for assistive vehicles

    Science.gov (United States)

    González Vidal, Eduardo; Fredes Zarricueta, Ernesto; Auat Cheein, Fernando

    2015-02-01

    Objective. The human auditory system acquires environmental information under sound stimuli faster than visual or touch systems, which, in turn, allows for faster human responses to such stimuli. It also complements senses such as sight, where direct line-of-view is necessary to identify objects, in the environment recognition process. This work focuses on implementing human reaction to sound stimuli and environment recognition on assistive robotic devices, such as robotic wheelchairs or robotized cars. These vehicles need environment information to ensure safe navigation. Approach. In the field of environment recognition, range sensors (such as LiDAR and ultrasonic systems) and artificial vision devices are widely used; however, these sensors depend on environment constraints (such as lighting variability or color of objects), and sound can provide important information for the characterization of an environment. In this work, we propose a sound-based approach to enhance the environment recognition process, mainly for cases that compromise human integrity, according to the International Classification of Functioning (ICF). Our proposal is based on a neural network implementation that is able to classify up to 15 different environments, each selected according to the ICF considerations on environment factors in the community-based physical activities of people with disabilities. Main results. The accuracy rates in environment classification range from 84% to 93%. This classification is later used to constrain assistive vehicle navigation in order to protect the user during daily activities. This work also includes real-time outdoor experimentation (performed on an assistive vehicle) by seven volunteers with different disabilities (but without cognitive impairment and experienced in the use of wheelchairs), statistical validation, comparison with previously published work, and a discussion section where the pros and cons of our system are evaluated. Significance
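
    The record's classifier is a neural network over acoustic features of environmental sound. A minimal sketch with assumed MFCC features and a small multilayer perceptron (the actual features, network architecture, and data are not specified in the record):

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def environment_features(audio, fs, n_mfcc=13):
    """Summarize a clip as the mean and std of its MFCCs (assumed features;
    the record does not specify which acoustic features were used)."""
    mfcc = librosa.feature.mfcc(y=audio.astype(float), sr=fs, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder dataset: `clips` is a list of (waveform, environment_label) pairs.
rng = np.random.default_rng(0)
clips = [(rng.standard_normal(16000), label) for label in range(15) for _ in range(4)]

X = np.array([environment_features(x, 16000) for x, _ in clips])
y = np.array([label for _, label in clips])

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```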

  12. The Use of Sound Absorbing Shading Systems for the Attenuation of Noise on Building Façades. An Experimental Investigation

    Directory of Open Access Journals (Sweden)

    Nicolò Zuccherini Martello

    2015-12-01

    The problem of solar irradiation of building façades with large windows is often solved with the use of external shading devices, such as brise-soleil systems, but their potential acoustic effects on building façades are usually neglected. The purpose of this work is a preliminary assessment of the acoustic behaviour of brise-soleil systems and, furthermore, an evaluation of the possibility of improving their performance in terms of Sound Pressure Level (SPL) abatement over glass surfaces. The paper reports the results of a study on two portions of the same office building, with shading devices installed in front of large windows. Both airborne sound insulation measurements and SPL measurements over the glass surfaces of the windows were carried out to compare different situations, with or without louvers, and with sound-absorbing experimental louvers as well. Results show that the presence of the louvers can produce an increase in the SPL over the glass surface as a consequence of the reflection of sound. Results further show that sound-absorbing louvers improve the noise protection of the system, in terms of SPL reduction over glass surfaces, cancelling out the negative effect of the standard shading devices.

  13. Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.

    Science.gov (United States)

    Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael

    2014-04-01

    The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.
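
    The performance argument above rests on the spatial correlation and coherence of the diffuse-field excitation seen by the reference sensors. As a point of reference (not part of the paper), the coherence of an ideal diffuse field between two points a distance d apart is commonly modelled as sin(kd)/(kd); the sketch below evaluates that model for an assumed sensor spacing.

        # Sketch: ideal diffuse-field spatial coherence between two reference sensors,
        # using the classical model gamma(f, d) = sin(k d) / (k d) with k = 2*pi*f/c.
        import numpy as np

        c = 343.0                          # speed of sound in air, m/s
        d = 0.2                            # assumed sensor spacing, m
        f = np.linspace(50.0, 2000.0, 8)   # frequencies of interest, Hz
        k = 2 * np.pi * f / c
        gamma = np.sinc(k * d / np.pi)     # np.sinc(x) = sin(pi*x)/(pi*x)

        for fi, gi in zip(f, gamma):
            print(f"{fi:7.1f} Hz  coherence {gi:+.3f}")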

  14. OMNIDIRECTIONAL SOUND SOURCE

    DEFF Research Database (Denmark)

    1996-01-01

    A sound source comprising a loudspeaker (6) and a hollow coupler (4) with an open inlet which communicates with and is closed by the loudspeaker (6) and an open outlet, said coupler (4) comprising rigid walls which cannot respond to the sound pressures produced by the loudspeaker (6). According...

  15. Wearable Eating Habit Sensing System Using Internal Body Sound

    Science.gov (United States)

    Shuzo, Masaki; Komori, Shintaro; Takashima, Tomoko; Lopez, Guillaume; Tatsuta, Seiji; Yanagimoto, Shintaro; Warisawa, Shin'ichi; Delaunay, Jean-Jacques; Yamada, Ichiro

    Continuous monitoring of eating habits could be useful in preventing lifestyle diseases such as metabolic syndrome. Conventional methods consist of self-reporting and calculating mastication frequency based on the myoelectric potential of the masseter muscle. Both these methods are significant burdens for the user. We developed a non-invasive, wearable sensing system that can record eating habits over a long period of time in daily life. Our sensing system is composed of two bone conduction microphones placed in the ears that send internal body sound data to a portable IC recorder. Applying frequency spectrum analysis on the collected sound data, we could not only count the number of mastications during eating, but also accurately differentiate between eating, drinking, and speaking activities. This information can be used to evaluate the regularity of meals. Moreover, we were able to analyze sound features to classify the types of foods eaten by food texture.

  16. Interactive footstep sounds modulate the perceptual-motor aftereffect of treadmill walking

    DEFF Research Database (Denmark)

    Turchet, Luca; Camponogara, Ivan; Cesari, Paola

    2015-01-01

    and snow were provided by means of a system composed of headphones and shoes augmented with sensors. In a control condition, participants could hear their actual footstep sounds. Results showed an overall enhancement of the forward drift after treadmill walking independent of the sound perceived, while...... walking. Behavioral results confirmed those of a perceptual questionnaire, which showed that the snow sound was effective in producing strong pseudo-haptic illusions. Our results provide evidence that the walking in place aftereffect results from a recalibration of haptic, visuo-motor but also sound...

  17. 33 CFR 164.43 - Automatic Identification System Shipborne Equipment-Prince William Sound.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Automatic Identification System Shipborne Equipment-Prince William Sound. 164.43 Section 164.43 Navigation and Navigable Waters COAST GUARD... Automatic Identification System Shipborne Equipment—Prince William Sound. (a) Until December 31, 2004, each...

  18. Analysis of acoustic sound signal for ONB measurement

    International Nuclear Information System (INIS)

    Park, S. J.; Kim, H. I.; Han, K. Y.; Chai, H. T.; Park, C.

    2003-01-01

    The onset of nucleate boiling (ONB) was measured in a test fuel bundle composed of several fuel element simulators (FES) by analysing the acoustic sound signals. In order to measure ONB, a hydrophone, a pre-amplifier, and a data acquisition system to acquire and process the acoustic signal were prepared. The acoustic signal generated in the coolant is converted to a current signal by the microphone. When the signal is analyzed in the frequency domain, each sound signal can be identified according to the origin of its sound source. As the power is increased beyond a certain level, nucleate boiling starts. The frequent formation and collapse of the void bubbles produce a sound signal. By measuring this sound signal one can pinpoint the ONB. Since the signal characteristics are identical for different mass flow rates, this method is applicable for ascertaining ONB
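
    A simplified sketch of the signal path described above: the hydrophone signal is analyzed in the frequency domain and the energy in a band associated with bubble formation and collapse is tracked from one power step to the next, flagging ONB when it rises sharply. The band limits, threshold ratio, and synthetic data are illustrative assumptions, not values from the experiment.

        # Sketch: track acoustic energy in an assumed "boiling" band to flag ONB.
        import numpy as np
        from scipy.signal import welch

        def band_energy(x, fs, f_lo, f_hi):
            f, pxx = welch(x, fs=fs, nperseg=4096)
            sel = (f >= f_lo) & (f <= f_hi)
            return np.trapz(pxx[sel], f[sel])

        def detect_onb(records, fs, f_lo=5e3, f_hi=20e3, ratio=3.0):
            """records: list of (power_kW, hydrophone_signal) pairs in increasing power
            order. Returns the first power whose band energy exceeds `ratio` times
            that of the first (non-boiling) record, or None."""
            energies = [(p, band_energy(x, fs, f_lo, f_hi)) for p, x in records]
            baseline = energies[0][1]
            for p, e in energies:
                if e > ratio * baseline:
                    return p
            return None

        # Synthetic illustration: band energy jumps at the third "power step".
        fs = 100_000
        rng = np.random.default_rng(0)
        quiet = lambda: 0.1 * rng.standard_normal(fs)
        noisy = lambda: rng.standard_normal(fs)
        steps = [(10.0, quiet()), (20.0, quiet()), (30.0, noisy()), (40.0, noisy())]
        print("ONB flagged at power step:", detect_onb(steps, fs))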

  19. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to discuss analytically spatial aspects of recorded sound William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  20. Directional sound beam emission from a configurable compact multi-source system

    KAUST Repository

    Zhao, Jiajun

    2018-01-12

    We propose to achieve efficient emission of highly directional sound beams from multiple monopole sources embedded in a subwavelength enclosure. Without the enclosure, the emitted sound fields have an indistinguishable or omnidirectional radiation directivity in far fields. The strong directivity formed in the presence of the enclosure is attributed to interference of sources under degenerate Mie resonances in the enclosure of anisotropic property. Our numerical simulations of sound emission from the sources demonstrate the radiation of a highly directed sound beam of unidirectional or bidirectional patterns, depending on how the sources are configured inside the enclosure. Our scheme, if achieved, can solve the challenging problem of poor directivity of a subwavelength sound system, and can guide beam forming and collimation by miniaturized devices.

  1. Active low frequency sound field control in a listening room using CABS (Controlled Acoustic Bass System) will also reduce the sound transmitted to neighbour rooms

    DEFF Research Database (Denmark)

    Nielsen, Sofus Birkedal; Celestinos, Adrian

    2012-01-01

    Sound in rooms and transmission of sound between rooms give the biggest problems at low frequencies. Rooms with rectangular boundaries have strong resonance frequencies and will give large spatial variations in sound pressure level (SPL) in the source room, and an increase in SPL of 20 dB at a wall...... Bass System) is a time-based room correction system for reproduced sound using loudspeakers. The system can remove room modes at low frequencies by actively cancelling the reflection from the rear wall in a normal stereo setup. Measurements in a source room using CABS and in two neighbour rooms have...... shown a reduction in sound transmission of up to 10 dB at resonance frequencies and a reduction in broadband noise of 3 – 5 dB at frequencies up to 100 Hz. The ideas and understanding of the CABS system will also be given....

  2. System complexity and (im)possible sound changes

    NARCIS (Netherlands)

    Seinhorst, K.T.

    2016-01-01

    In the acquisition of phonological patterns, learners tend to considerably reduce the complexity of their input. This learning bias may also constrain the set of possible sound changes, which might be expected to contain only those changes that do not increase the complexity of the system. However,

  3. Validating a perceptual distraction model using a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict user's perceived distraction caused by audio-on-audio interference. Originally, the distraction model was trained with music targets and interferers using a simple loudspeaker setup, consisting of only two...... sound zones within the sound-zone system. Thus, validating the model using a different sound-zone system with both speech-on-music and music-on-speech stimuli sets. The results show that the model performance is equally good in both zones, i.e., with both speech- on-music and music-on-speech stimuli...

  4. Stridulatory sound-production and its function in females of the cicada Subpsaltria yangi.

    Directory of Open Access Journals (Sweden)

    Changqing Luo

    Acoustic behavior plays a crucial role in many aspects of cicada biology, such as reproduction and intrasexual competition. Although female sound production has been reported in some cicada species, the acoustic behavior of female cicadas has received little attention. In the cicada Subpsaltria yangi, the females possess a pair of unusually well-developed stridulatory organs. Here, sound production and its function in females of this remarkable cicada species were investigated. We revealed that the females could produce sounds by a stridulatory mechanism during pair formation, and the sounds were able to elicit both acoustic and phonotactic responses from males. In addition, the forewings would strike the body while performing stridulatory sound-producing movements, which generated impact sounds. Acoustic playback experiments indicated that the impact sounds played no role in the behavioral context of pair formation. This study provides the first experimental evidence that females of a cicada species can generate sounds by a stridulatory mechanism. We anticipate that our results will promote acoustic studies on females of other cicada species which also possess a stridulatory system.

  5. Beliefs in the population about cracking sounds produced during spinal manipulation.

    Science.gov (United States)

    Demoulin, Christophe; Baeri, Damien; Toussaint, Geoffrey; Cagnie, Barbara; Beernaert, Axel; Kaux, Jean-François; Vanderthommen, Marc

    2018-03-01

    To examine beliefs about cracking sounds heard during high-velocity low-amplitude (HVLA) thrust spinal manipulation in individuals with and without personal experience of this technique. We included 100 individuals. Among them, 60 had no history of spinal manipulation, including 40 who were asymptomatic with or without a past history of spinal pain and 20 who had nonspecific spinal pain. The remaining 40 patients had a history of spinal manipulation; among them, 20 were asymptomatic and 20 had spinal pain. Participants attended a one-on-one interview during which they completed a questionnaire about their history of spinal manipulation and their beliefs regarding sounds heard during spinal manipulation. Mean age was 43.5±15.4 years. The sounds were ascribed to vertebral repositioning by 49% of participants and to friction between two vertebrae by 23% of participants; only 9% of participants correctly ascribed the sound to the formation of a gas bubble in the joint. The sound was mistakenly considered to indicate successful spinal manipulation by 40% of participants. No differences in beliefs were found between the groups with and without a history of spinal manipulation. Certain beliefs have documented adverse effects. This study showed a high prevalence of unfounded beliefs regarding spinal manipulation. These beliefs deserve greater attention from healthcare providers, particularly those who practice spinal manipulation. Copyright © 2017 Société française de rhumatologie. Published by Elsevier SAS. All rights reserved.

  6. A neurally inspired musical instrument classification system based upon the sound onset.

    Science.gov (United States)

    Newton, Michael J; Smith, Leslie S

    2012-06-01

    Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
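
    The record above builds tone descriptors from the sound onset alone. A heavily simplified stand-in for that idea is sketched below: a coarse Butterworth band-pass filterbank replaces the gammatone filterbank, and a leaky-average onset emphasis replaces the spiking onset detectors; every parameter is an assumption, and the result is only meant to illustrate the shape of an "onset fingerprint" descriptor.

        # Simplified stand-in for an onset-fingerprint descriptor: band-pass
        # filterbank, envelope extraction, and a leaky-average comparison that
        # emphasises energy rises at the note onset.
        import numpy as np
        from scipy.signal import butter, sosfilt

        def onset_fingerprint(x, fs, bands=((100, 400), (400, 1600), (1600, 6400)),
                              tau=0.010, n_frames=20):
            # assumes fs >= 16 kHz and x covers at least n_frames * 5 ms of audio
            frame = int(0.005 * fs)                  # 5 ms analysis frames
            alpha = np.exp(-1.0 / (tau * fs))        # leaky-average coefficient
            descriptor = []
            for lo, hi in bands:
                sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
                env = np.abs(sosfilt(sos, x))
                smooth = np.zeros_like(env)
                for n in range(1, len(env)):
                    smooth[n] = alpha * smooth[n - 1] + (1 - alpha) * env[n]
                onset = np.maximum(env - smooth, 0.0)        # keep positive rises only
                frames = onset[:n_frames * frame].reshape(n_frames, frame).mean(axis=1)
                descriptor.append(frames)
            return np.concatenate(descriptor)        # fed to a classifier downstream

        fs = 16000
        tone = np.sin(2 * np.pi * 440.0 * np.arange(0, 0.5, 1.0 / fs))  # placeholder tone
        print(onset_fingerprint(tone, fs).shape)     # (60,) = 3 bands x 20 frames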

  7. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    Science.gov (United States)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to

  8. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  9. Vehicle engine sound design based on an active noise control system

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, M. [Siemens VDO Automotive, Auburn Hills, MI (United States)]

    2002-07-01

    A study has been carried out to identify the types of vehicle engine sounds that drivers prefer while driving at different locations and under different driving conditions. An active noise control system controlled the sound at the air intake orifice over the vehicle engine's first sixteen whole and half orders. The active noise control system was used to change the engine sound to quiet, harmonic, high harmonic, spectral shaped and growl. Videos were made of the roads traversed, binaural recordings of vehicle interior sounds, and vibrations of the vehicle floor pan. Jury tapes were made up for day driving, nighttime driving and driving in the rain during the day for each of the sites. Jurors used paired comparisons to evaluate the vehicle interior sounds while sitting in a vehicle simulator developed by Siemens VDO that replicated the videos of the road traversed, the binaural recordings of the vehicle interior sounds and the vibrations of the floor pan and seat. (orig.) [Translated from German] As part of a study, types of engine sounds were identified that drivers perceive as pleasant under various driving conditions. A system for active noise control at the intake air inlet, in the area of the air filter, modified the sound of the engine up to the 16.5th engine order by attenuating, amplifying and filtering the signal frequencies. During the drives, video recordings of the roads travelled, stereo recordings of the vehicle interior sounds and recordings of the vibration amplitudes of the vehicle floor were made, for daytime and nighttime drives and for daytime drives in the rain. To allow test subjects to assess the recorded sounds, a vehicle laboratory simulator with a driver's seat, screen, loudspeakers and mechanical excitation of the floor panel was built, in order to reproduce the recorded signals as realistically as possible. (orig.)

  10. Stochastic Signal Processing for Sound Environment System with Decibel Evaluation and Energy Observation

    Directory of Open Access Journals (Sweden)

    Akira Ikuta

    2014-01-01

    In a real sound environment system, a specific signal shows various types of probability distribution, and the observation data are usually contaminated by external noise (e.g., background noise) of a non-Gaussian distribution type. Furthermore, there potentially exist various nonlinear correlations in addition to the linear correlation between input and output time series. Consequently, the input and output relationship of the system in the real phenomenon often cannot be represented by a simple model using only the linear correlation and lower order statistics. In this study, complex sound environment systems that are difficult to analyze by the usual structural methods are considered. By introducing an estimation method for the system parameters reflecting the correlation information of the conditional probability distribution under the existence of external noise, a prediction method for the output response probability of sound environment systems is theoretically proposed in a form suitable for the additive property of the energy variable and for evaluation on the decibel scale. The effectiveness of the proposed stochastic signal processing method is experimentally confirmed by applying it to observed data from sound environment systems.
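
    The "additive property of the energy variable" invoked above is simply that acoustic energies add linearly while decibel levels do not, so predictions are combined on the energy scale and converted back to decibels. A small illustrative computation (values invented) is shown below.

        # Sketch: combine incoherent sound contributions on the energy scale,
        # then report the total on the decibel scale.
        import numpy as np

        def combine_levels_db(levels_db):
            """Total level of incoherent sources given their individual dB levels."""
            return 10.0 * np.log10(np.sum(10.0 ** (np.asarray(levels_db) / 10.0)))

        print(combine_levels_db([60.0, 60.0]))   # ~63.0 dB, not 120 dB
        print(combine_levels_db([70.0, 60.0]))   # ~70.4 dB; the quieter source adds little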

  11. Pectoral sound generation in the blue catfish Ictalurus furcatus.

    Science.gov (United States)

    Mohajer, Yasha; Ghahramani, Zachary; Fine, Michael L

    2015-03-01

    Catfishes produce pectoral stridulatory sounds by "jerk" movements that rub ridges on the dorsal process against the cleithrum. We recorded sound synchronized with high-speed video to investigate the hypothesis that blue catfish Ictalurus furcatus produce sounds by a slip-stick mechanism, previously described only in invertebrates. Blue catfish produce a variably paced series of sound pulses during abduction sweeps (pulsers) although some individuals (sliders) form longer duration sound units (slides) interspersed with pulses. Typical pulser sounds are evoked by short 1-2 ms movements with a rotation of 2°-3°. Jerks excite sounds that increase in amplitude after motion stops, suggesting constructive interference, which decays before the next jerk. Longer contact of the ridges produces a more steady-state sound in slides. Pulse pattern during stridulation is determined by pauses without movement: the spine moves during about 14 % of the abduction sweep in pulsers (~45 % in sliders) although movement appears continuous to the human eye. Spine rotation parameters do not predict pulse amplitude, but amplitude correlates with pause duration suggesting that force between the dorsal process and cleithrum increases with longer pauses. Sound production, stimulated by a series of rapid movements that set the pectoral girdle into resonance, is caused by a slip-stick mechanism.

  12. Fish protection at water intakes using a new signal development process and sound system

    International Nuclear Information System (INIS)

    Loeffelman, P.H.; Klinect, D.A.; Van Hassel, J.H.

    1991-01-01

    American Electric Power Company, Inc., is exploring the feasibility of using a patented signal development process and sound system to guide aquatic animals with underwater sound. Sounds from animals such as chinook salmon, steelhead trout, striped bass, freshwater drum, largemouth bass, and gizzard shad can be used to synthesize a new signal to stimulate the animal in the most sensitive portion of its hearing range. AEP's field tests during its research demonstrate that adult chinook salmon, steelhead trout and warmwater fish, and steelhead trout and chinook salmon smolts can be repelled with a properly tuned system. The signal development process and sound system are designed to be transportable and to use animals at the site to incorporate site-specific factors known to affect underwater sound, e.g., bottom shape and type, water current, and temperature. This paper reports that, because the overall goal of this research was to determine the feasibility of using sound to divert fish, it was essential that the approach use a signal development process which could be customized to animals and site conditions at any hydropower plant site

  13. Stage separation study of Nike-Black Brant V Sounding Rocket System

    Science.gov (United States)

    Ferragut, N. J.

    1976-01-01

    A new Sounding Rocket System has been developed. It consists of a Nike Booster and a Black Brant V Sustainer with slanted fins which extend beyond its nozzle exit plane. A cursory look was taken at the different factors which must be considered when studying a passive separation system, that is, a separation system without mechanical constraints in the axial direction which allows separation due to the differential drag accelerations between the Booster and the Sustainer. The equations of motion were derived for rigid body motions and exact solutions were obtained. The analysis developed could be applied to any other staging problem of a Sounding Rocket System.

  14. Analysis, Design and Implementation of an Embedded Realtime Sound Source Localization System Based on Beamforming Theory

    Directory of Open Access Journals (Sweden)

    Arko Djajadi

    2009-12-01

    This project is intended to analyze, design and implement a realtime sound source localization system using a mobile robot as the platform. The implemented system uses 2 microphones as the sensors, an Arduino Duemilanove microcontroller system with an ATMega328p as the microprocessor, two permanent magnet DC motors as the actuators for the mobile robot and a servo motor as the actuator to rotate the webcam toward the location of the sound source, and a laptop/PC as the simulation and display media. In order to achieve the objective of finding the position of a specific sound source, beamforming theory is applied to the system. Once the location of the sound source is detected and determined, either the mobile robot adjusts its position according to the direction of the sound source or only the webcam rotates in the direction of the incoming sound, simulating the use of this system in a video conference. The integrated system has been tested and the results show the system could localize, in realtime, a sound source placed randomly on a half-circle area (0°-180°) with a radius of 0.3 m - 3 m, assuming the system is at the center point of the circle. Due to the low ADC and processor speed, the best achievable angular resolution is still limited to 25°.
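
    A common way to estimate a bearing from two microphones, in the spirit of the system above, is to find the inter-channel delay by cross-correlation and convert it to an arrival angle; the sketch below does exactly that on a synthetic signal. The spacing, sample rate, and sign convention are assumptions, not the firmware of the robot described above.

        # Sketch: two-microphone bearing estimation via cross-correlation TDOA.
        import numpy as np

        def bearing_deg(left, right, fs, mic_spacing=0.15, c=343.0):
            n = len(left)
            corr = np.correlate(right - right.mean(), left - left.mean(), mode="full")
            lag = np.argmax(corr) - (n - 1)   # samples by which `right` lags `left`
            tau = lag / fs                    # seconds
            s = np.clip(c * tau / mic_spacing, -1.0, 1.0)
            return np.degrees(np.arcsin(s))   # 0 deg = broadside, + toward the left mic

        # Synthetic check: a source ~30 deg off broadside delays the right channel.
        fs, d = 48000, 0.15
        delay = int(round(fs * d * np.sin(np.radians(30.0)) / 343.0))
        sig = np.random.default_rng(1).standard_normal(4800)
        left, right = sig, np.roll(sig, delay)
        print(bearing_deg(left, right, fs, d))   # ~28-30 deg (one-sample delay resolution)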

  15. A telescopic cinema sound camera for observing high altitude aerospace vehicles

    Science.gov (United States)

    Slater, Dan

    2014-09-01

    Rockets and other high altitude aerospace vehicles produce interesting visual and aural phenomena that can be remotely observed from long distances. This paper describes a compact, passive and covert remote sensing system that can produce high resolution sound movies at >100 km viewing distances. The telescopic high resolution camera is capable of resolving and quantifying space launch vehicle dynamics including plume formation, staging events and payload fairing jettison. Flight vehicles produce sounds and vibrations that modulate the local electromagnetic environment. These audio frequency modulations can be remotely sensed by passive optical and radio wave detectors. Acousto-optic sensing methods were primarily used but an experimental radioacoustic sensor using passive micro-Doppler radar techniques was also tested. The synchronized combination of high resolution flight vehicle imagery with the associated vehicle sounds produces a cinema-like experience that is useful in both an aerospace engineering and a Hollywood film production context. Examples of visual, aural and radar observations of the first SpaceX Falcon 9 v1.1 rocket launch are shown and discussed.

  16. Low frequency sound field control for loudspeakers in rectangular rooms using CABS (Controlled Acoustical Bass System)

    DEFF Research Database (Denmark)

    Nielsen, Sofus Birkedal; Celestinos, Adrian

    2010-01-01

    Rectangular rooms are the most common shape for sound reproduction, but at low frequencies the reflections from the boundaries of the room cause large spatial variations in the sound pressure level. Variations up to 30 dB are normal, not only at the room modes, but basically at all frequencies...... As sound propagates in time, it seems natural that the problems can best be analyzed and solved in the time domain. A time based room correction system named CABS (Controlled Acoustical Bass System) has been developed for sound reproduction in rectangular listening rooms. It can control the sound...... sound field in the whole room, and short impulse response. In a standard listening room (180 m³) only 4 loudspeakers are needed, 2 more than a traditional stereo setup. CABS is controlled by a developed DSP system. The time based approach might help with the understanding of sound field control...
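
    The core of the CABS idea, as described in this and the related records, is that loudspeakers at the rear wall emit a delayed, inverted (and slightly attenuated) copy of the front low-frequency signal so that the plane wave is absorbed at the rear wall instead of being reflected. The sketch below only illustrates the timing of such a rear-speaker drive signal; the gain and room length are assumed values, not the published tuning.

        # Minimal sketch of a CABS-style rear-speaker drive signal: a delayed,
        # inverted copy of the front low-frequency signal, delayed by the travel
        # time of the plane wave along the room length.
        import numpy as np

        def cabs_rear_signal(front_lf, fs, room_length, c=343.0, gain=0.9):
            delay = int(round(room_length / c * fs))          # front wall -> rear wall
            rear = np.zeros_like(front_lf)
            rear[delay:] = -gain * front_lf[:len(front_lf) - delay]
            return rear

        fs = 1000                                  # low-frequency band only
        t = np.arange(0, 1.0, 1.0 / fs)
        front = np.sin(2 * np.pi * 50.0 * t)       # 50 Hz test tone
        rear = cabs_rear_signal(front, fs, room_length=7.8)
        print("rear-speaker delay: %.1f ms" % (7.8 / 343.0 * 1e3))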

  17. Sound dispersion in a spin-1 Ising system near the second-order phase transition point

    International Nuclear Information System (INIS)

    Erdem, Ryza; Keskin, Mustafa

    2003-01-01

    The sound dispersion relation is derived for a spin-1 Ising system and its behaviour near the second-order phase transition point, or critical point, is analyzed. The method used is a combination of the molecular field approximation and the Onsager theory of irreversible thermodynamics. If we assume a linear coupling of the sound wave with the order parameter fluctuations in the system, we find that the dispersion, which is the relative sound velocity change with frequency, behaves as ω⁰ε⁰, where ω is the sound frequency and ε the temperature distance from the critical point. In the ordered region, one also observes a frequency-dependent velocity or dispersion minimum which is shifted from the corresponding attenuation maxima. These phenomena are in good agreement with the calculations of sound velocity in other magnetic systems such as magnetic metals, magnetic insulators, and magnetic semiconductors

  18. Dynamic Analysis of Sounding Rocket Pneumatic System Revision

    Science.gov (United States)

    Armen, Jerald

    2010-01-01

    The recent fusion of decades of advancements in mathematical models, numerical algorithms and curve fitting techniques marked the beginning of a new era in the science of simulation. It is becoming indispensable to the study of rockets and aerospace analysis. In the pneumatic system, which is the main focus of this paper, particular emphasis is placed on the effects of compressible flow in the Attitude Control System of a sounding rocket.

  19. An experiment towards characterizing seahorse sound in a laboratory controlled environment

    Digital Repository Service at National Institute of Oceanography (India)

    Saran, A.K.; Sreepada, R.A.; Chakraborty, B.; Fernandes, W.A.; Srivastava, R.; Kuncolienker, D.S.; Gawde, G.

    There are many reports of sounds produced by seahorses (Hippocampus); however, little is known about the mechanism of sound production. Here, we investigate sounds produced by the seahorse during feeding. We attempt to understand and analyze...

  20. Potential use of feebate systems to foster environmentally sound urban waste management

    International Nuclear Information System (INIS)

    Puig-Ventosa, Ignasi

    2004-01-01

    Waste treatment facilities are often shared among different municipalities as a means of managing wastes more efficiently. Usually, management costs are assigned to each municipality depending on the size of the population or the total amount of waste produced, regardless of important environmental aspects such as per capita waste generation or achievements in composting or recycling. This paper presents a feebate (fee+rebate) system aimed at fostering urban waste reduction and recovery. The proposal suggests that municipalities achieving better results in their waste management performance (from an ecological viewpoint) be compensated with a rebate funded by a fee charged to those municipalities that are less environmentally sound. This is a dynamic and flexible instrument that would positively encourage municipalities to reduce waste whilst increasing the recycling

  1. Interactive physically-based sound simulation

    Science.gov (United States)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously-intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation

  2. Effects of sounds of locomotion on speech perception

    Directory of Open Access Journals (Sweden)

    Matz Larsson

    2015-01-01

    Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in pace or walking out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal (the "just follow conversation" or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.

  3. Novel sound phenomena in superfluid helium in aerogel and other impure superfluids

    International Nuclear Information System (INIS)

    Brusov, Peter; Brusov, Paul; Lawes, Gavin; Lee, Chong; Matsubara, Akira; Ishikawa, Osamu; Majumdar, Pinaki

    2003-01-01

    During the last decade new techniques for producing impure superfluids with unique properties have been developed. This new class of systems includes superfluid helium confined to aerogel, HeII with different impurities (D₂, N₂, Ne, Kr), superfluids in Vycor glasses, and watergel. These systems exhibit very unusual properties including unexpected acoustic features. We discuss the sound properties of these systems and show that sound phenomena in impure superfluids are modified from those in pure superfluids. We calculate the coupling between temperature and pressure oscillations for impure superfluids and for superfluid He in aerogel. We show that the coupling between these two sound modes is governed either by c∂ρ/∂c or by σρₐρₛ (for aerogel) rather than by the thermal expansion coefficient ∂ρ/∂T, which is enormously small in pure superfluids. This replacement plays a fundamental role in all sound phenomena in impure superfluids. It enhances the coupling between the two sound modes, which leads to the existence of such phenomena as the slow mode and heat pulse propagation with the velocity of first sound observed in superfluids in aerogel. This means that it is possible to observe in impure superfluids such unusual sound phenomena as slow pressure (density) waves and fast temperature (entropy) waves. The enhancement of the coupling between the two sound modes decreases the threshold values for nonlinear processes as compared to pure superfluids. Sound conversion, which has been observed in pure superfluids only by shock waves, should be observed at moderate sound amplitudes in impure superfluids. Cerenkov emission of second sound by first sound (which has never been observed in pure superfluids) could be observed in impure superfluids

  4. Low frequency sound field control in rectangular listening rooms using CABS (Controlled Acoustic Bass System) will also reduce sound transmission to neighbor rooms

    DEFF Research Database (Denmark)

    Nielsen, Sofus Birkedal; Celestinos, Adrian

    2011-01-01

    Sound reproduction often takes place in small and medium sized rectangular rooms. As rectangular rooms have 3 pairs of parallel walls, the reflections, especially at low frequencies, will cause spatial variations of up to 30 dB in the sound pressure level in the room. This will take place not only...... at resonance frequencies, but more or less at all frequencies. A time based room correction system named CABS (Controlled Acoustic Bass System) has been developed and is able to create a homogeneous sound field in the whole room at low frequencies by proper placement of multiple loudspeakers. A normal setup...... from the rear wall, thereby leaving only the plane wave in the room. With a room size of (7.8 x 4.1 x 2.8) m it is possible to suppress the room modes up to 100 Hz. An investigation has shown that the sound transmitted to a neighbour room will also be reduced if CABS is used. The principle...

  5. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  6. How do "mute" cicadas produce their calling songs?

    Directory of Open Access Journals (Sweden)

    Changqing Luo

    Insects have evolved a variety of structures and mechanisms to produce sounds, which are used for communication both within and between species. Among acoustic insects, cicada males are particularly known for their loud and diverse sounds, which function importantly in communication. The main method of sound production in cicadas is the tymbal mechanism, and a relatively small number of cicada species possess both tymbal and stridulatory organs. However, cicadas of the genus Karenia do not have any specialized sound-producing structures, so they are referred to as "mute". This denomination is quite misleading, as they indeed produce sounds. Here, we investigate the sound-producing mechanism and acoustic communication of the "mute" cicada, Karenia caelatata, and discover a new sound-production mechanism for cicadas: i.e., K. caelatata produces impact sounds by banging the forewing costa against the operculum. The temporal, frequency and amplitude characteristics of the impact sounds are described. Morphological studies and reflectance-based analyses reveal that the structures involved in sound production of K. caelatata (i.e., forewing, operculum, cruciform elevation, and wing-holding groove on the scutellum) are all morphologically modified. Acoustic playback experiments and behavioral observations suggest that the impact sounds of K. caelatata are used in intraspecific communication and function as calling songs. The new sound-production mechanism expands our knowledge of the diversity of acoustic signaling behavior in cicadas and further underscores the need for more bioacoustic studies on cicadas which lack the tymbal mechanism.

  7. How do "mute" cicadas produce their calling songs?

    Science.gov (United States)

    Luo, Changqing; Wei, Cong; Nansen, Christian

    2015-01-01

    Insects have evolved a variety of structures and mechanisms to produce sounds, which are used for communication both within and between species. Among acoustic insects, cicada males are particularly known for their loud and diverse sounds which function importantly in communication. The main method of sound production in cicadas is the tymbal mechanism, and a relatively small number of cicada species possess both tymbal and stridulatory organs. However, cicadas of the genus Karenia do not have any specialized sound-producing structures, so they are referred to as "mute". This denomination is quite misleading, as they indeed produce sounds. Here, we investigate the sound-producing mechanism and acoustic communication of the "mute" cicada, Karenia caelatata, and discover a new sound-production mechanism for cicadas: i.e., K. caelatata produces impact sounds by banging the forewing costa against the operculum. The temporal, frequency and amplitude characteristics of the impact sounds are described. Morphological studies and reflectance-based analyses reveal that the structures involved in sound production of K. caelatata (i.e., forewing, operculum, cruciform elevation, and wing-holding groove on scutellum) are all morphologically modified. Acoustic playback experiments and behavioral observations suggest that the impact sounds of K. caelatata are used in intraspecific communication and function as calling songs. The new sound-production mechanism expands our knowledge of the diversity of acoustic signaling behavior in cicadas and further underscores the need for more bioacoustic studies on cicadas which lack the tymbal mechanism.

  8. Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration

    OpenAIRE

    Park, Saebyul; Ban, Seonghoon; Hong, Dae Ryong; Yeo, Woon Seung

    2013-01-01

    SSN (Sound Surfing Network) is a performance system that provides a new musical experience by incorporating mobile phone-based spatial sound control into collaborative music performance. SSN enables both the performer and the audience to manipulate the spatial distribution of sound using the smartphones of the audience as a distributed speaker system. Proposing a new perspective on the social aspect of music appreciation, SSN will provide a new possibility for mobile music performances in the context of in...

  9. Analysis of the HVAC system's sound quality using the design of experiments

    International Nuclear Information System (INIS)

    Park, Sang Gil; Sim, Hyun Jin; Yoon, Ji Hyun; Jeong, Jae Eun; Choi, Byoung Jae; Oh, Jae Eung

    2009-01-01

    Human hearing is very sensitive to sound, so a subjective index of sound quality is required. Each sound evaluation situation is characterized by Sound Quality (SQ) metrics. When the level of a single frequency band is changed, the effect of that change on the SQ of the whole frequency range cannot easily be seen during SQ evaluation. In this study, the Design of Experiments (DOE) is used to analyze noise from an automotive Heating, Ventilating, and Air Conditioning (HVAC) system. The frequency domain is divided into 12 equal bands, the level of each band is either increased or decreased, and the resulting changes in the 'loud' and 'sharp' aspects of SQ are analyzed. By using DOE, the number of required tests is effectively reduced, and the main result is a solution for each band: whether a change in the band (an increase or decrease in sound pressure) or no change has the most effect on the identifiable characteristics of SQ in terms of 'loud' and 'sharp' sound. This enables the relevant frequency bands to be selected objectively. From the results obtained, the sensitivity of SQ to physical level changes in an arbitrary frequency band can be determined
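
    A compact way to screen twelve two-level frequency-band factors, in the spirit of the DOE described above, is a Plackett-Burman-style design built from a Hadamard matrix; the sketch below generates such a design and estimates main effects on a sound-quality score. The design size, the simulated response, and the dominant bands are all invented for illustration.

        # Sketch: 16-run two-level screening design for 12 frequency-band factors,
        # with main-effect estimation on an (invented) sound-quality score.
        import numpy as np
        from scipy.linalg import hadamard

        H = hadamard(16)                     # orthogonal +/-1 matrix, 16 runs
        design = H[:, 1:13]                  # 12 factors: +1 = band raised, -1 = lowered

        rng = np.random.default_rng(0)
        true_effects = np.zeros(12)
        true_effects[[2, 7]] = [1.5, -2.0]   # pretend bands 3 and 8 dominate the score
        scores = design @ true_effects + rng.normal(0.0, 0.3, size=16)

        # main effect of factor j = mean(score | +1) - mean(score | -1)
        effects = np.array([scores[design[:, j] > 0].mean()
                            - scores[design[:, j] < 0].mean() for j in range(12)])
        print(np.round(effects, 2))          # bands 3 and 8 stand out (about 2x the coefficients)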

  10. A new signal development process and sound system for diverting fish from water intakes

    International Nuclear Information System (INIS)

    Klinet, D.A.; Loeffelman, P.H.; van Hassel, J.H.

    1992-01-01

    This paper reports that American Electric Power Service Corporation has explored the feasibility of using a patented signal development process and underwater sound system to divert fish away from water intake areas. The effect of water intakes on fish is being closely scrutinized as hydropower projects are re-licensed. The overall goal of this four-year research project was to develop an underwater guidance system which is biologically effective, reliable and cost-effective compared to other proposed methods of diversion, such as physical screens. Because different fish species have various listening ranges, it was essential to the success of this experiment that the sound system have a great amount of flexibility. Assuming a fish's sounds are heard by the same kind of fish, it was necessary to develop a procedure and acquire instrumentation to properly analyze the sounds that the target fish species create to communicate and any artificial signals being generated for diversion

  11. An alternative respiratory sounds classification system utilizing artificial neural networks

    Directory of Open Access Journals (Sweden)

    Rami J Oweis

    2015-04-01

    Background: Computerized lung sound analysis involves recording lung sounds via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. Methods: This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. Results: The ANN was superior to the ANFIS system and returned better performance parameters. Its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters showed superiority to many recent approaches. Conclusions: The promising proposed method is an efficient, fast tool for the intended purpose, as manifested in the performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, it may be added that utilizing the autocorrelation function for feature extraction in such applications results in enhanced performance and avoids undesired computational complexity compared to other techniques.
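
    A minimal sketch of the feature-extraction idea described above: the normalized autocorrelation of each lung-sound segment is truncated to a fixed number of lags and fed to a small feed-forward network. The lag count, network size, and placeholder data are assumptions, and Python stands in for the MATLAB toolboxes used by the authors.

        # Sketch: normalized-autocorrelation features for respiratory-sound segments,
        # classified with a small feed-forward network.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def autocorr_features(x, n_lags=64):
            x = x - np.mean(x)
            r = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0 .. len(x)-1
            return r[:n_lags] / (r[0] + 1e-12)                 # normalized, fixed length

        # Placeholder segments and labels (e.g. 0 = normal, 1 = wheeze, 2 = crackle)
        rng = np.random.default_rng(0)
        segments = rng.standard_normal((30, 8000))
        labels = rng.integers(0, 3, size=30)

        X = np.array([autocorr_features(s) for s in segments])
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800,
                            random_state=0).fit(X, labels)
        print("training accuracy:", clf.score(X, labels))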

  12. Underwater Sound Propagation from Marine Pile Driving.

    Science.gov (United States)

    Reyff, James A

    2016-01-01

    Pile driving occurs in a variety of nearshore environments that typically have very shallow-water depths. The propagation of pile-driving sound in water is complex, where sound is directly radiated from the pile as well as through the ground substrate. Piles driven in the ground near water bodies can produce considerable underwater sound energy. This paper presents examples of sound propagation through shallow-water environments. Some of these examples illustrate the substantial variation in sound amplitude over time that can be critical to understand when computing an acoustic-based safety zone for aquatic species.

  13. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...

  14. Design of Efficient Sound Systems for Low Voltage Battery Driven Applications

    DEFF Research Database (Denmark)

    Iversen, Niels Elkjær; Oortgiesen, Rien; Knott, Arnold

    2016-01-01

    The efficiency of portable battery driven sound systems is crucial as it relates to both the playback time and the cost of the system. This paper presents design considerations when designing such systems. These include loudspeaker and amplifier design. Using a low resistance voice coil realized...

  15. Standard practice for construction of a stepped block and its use to estimate errors produced by speed-of-sound measurement systems for use on solids

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 This practice provides a means for evaluating both systematic and random errors for ultrasonic speed-of-sound measurement systems which are used for evaluating material characteristics associated with residual stress and which may also be used for nondestructive measurements of the dynamic elastic moduli of materials. Important features and construction details of a reference block crucial to these error evaluations are described. This practice can be used whenever the precision and bias of sound speed values are in question. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
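
    Not the ASTM procedure itself, but a sketch of the arithmetic such a stepped reference block enables: the pulse-echo sound speed at each step is c = 2h/t for step thickness h and round-trip time t, and the scatter of the per-step values gives a simple split between random and systematic error. All numbers below are invented.

        # Sketch: per-step pulse-echo sound-speed estimates from a stepped block,
        # with a simple random/systematic error split (all values illustrative).
        import numpy as np

        heights_mm = np.array([5.0, 10.0, 15.0, 20.0, 25.0])      # step thicknesses
        tof_us = np.array([1.695, 3.388, 5.085, 6.779, 8.475])    # round-trip times

        c_steps = 2.0 * (heights_mm * 1e-3) / (tof_us * 1e-6)     # m/s per step
        c_ref = 5900.0                                            # assumed reference value

        print("per-step c (m/s):", np.round(c_steps, 1))
        print("random error (std):  %.1f m/s" % c_steps.std(ddof=1))
        print("systematic error:    %+.1f m/s" % (c_steps.mean() - c_ref))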

  16. Offshore dredger sounds: Source levels, sound maps, and risk assessment

    NARCIS (Netherlands)

    Jong, C.A.F. de; Ainslie, M.A.; Heinis, F.; Janmaat, J.

    2016-01-01

    The underwater sound produced during construction of the Port of Rotterdam harbor extension (Maasvlakte 2) was measured, with emphasis on the contribution of the trailing suction hopper dredgers during their various activities: dredging, transport, and discharge of sediment. Measured source levels

  17. Visualizing Sound Directivity via Smartphone Sensors

    OpenAIRE

    Hawley, Scott H.; McClain Jr, Robert E.

    2017-01-01

    We present a fast, simple method for automated data acquisition and visualization of sound directivity, made convenient and accessible via a smartphone app, "Polar Pattern Plotter." The app synchronizes measurements of sound volume with the phone's angular orientation obtained from either compass, gyroscope or accelerometer sensors and produces a graph and exportable data file. It is generalizable to various sound sources and receivers via the use of an input-jack-adaptor to supplant the smar...
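
    Assuming the app exports simple (angle, level) pairs, a polar directivity plot of the kind described can be reproduced in a few lines; the placeholder data below stands in for an actual export and is not the app's documented format.

        # Sketch: polar plot of (angle, level) directivity data such as the app exports.
        import numpy as np
        import matplotlib.pyplot as plt

        # Placeholder data standing in for exported (angle_deg, level_db) pairs.
        angle_deg = np.arange(0, 360, 10)
        level_db = 60 + 6 * np.cos(np.radians(angle_deg))      # invented directional pattern

        theta = np.radians(angle_deg)
        ax = plt.subplot(projection="polar")
        ax.plot(theta, level_db - level_db.max(), marker="o")  # normalize to 0 dB at the peak
        ax.set_title("Directivity (dB re max)")
        plt.show()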

  18. Methodology for designing aircraft having optimal sound signatures

    NARCIS (Netherlands)

    Sahai, A.K.; Simons, D.G.

    2017-01-01

    This paper presents a methodology with which aircraft designs can be modified such that they produce optimal sound signatures on the ground. Optimal sound here means sound that is perceived as less annoying by residents living in airport vicinities. A novel design and

  19. Directional sound radiation from substation transformers

    International Nuclear Information System (INIS)

    Maybee, N.

    2009-01-01

    This paper presented the results of a study in which acoustical measurements at two substations were analyzed to investigate the directional behaviour of typical arrays having 2 or 3 transformers. Substation transformers produce a characteristic humming sound that is caused primarily by vibration of the core at twice the frequency of the power supply. The humming noise radiates predominantly from the tank enclosing the core. The main components of the sound are harmonics of 120 Hz. Sound pressure level data were obtained for various directions and distances from the arrays, ranging from 0.5 m to over 100 m. The measured sound pressure levels of the transformer tones displayed substantial positive and negative excursions from the calculated average values for many distances and directions. The results support the concept that the directional effects are associated with constructive and destructive interference of tonal sound waves emanating from different parts of the array. Significant variations in the directional sound pattern can occur in the near field of a single transformer or an array, and the extent of the near field is significantly larger than the scale of the array. Based on typical dimensions for substation sites, the distance to the far field may be much beyond the substation boundary and beyond typical setbacks to the closest dwellings. As such, the directional sound radiation produced by transformer arrays introduces additional uncertainty in the prediction of substation sound levels at dwellings within a few hundred meters of a substation site. 4 refs., 4 figs.
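
    The constructive and destructive interference to which the study attributes these directional effects can be illustrated with a toy model: coherent 120 Hz point sources at the transformer positions, summed at receivers around the array. The geometry, source strengths, and receiver distance below are invented, not the measured substation layout.

        # Toy model: interference of coherent 120 Hz point sources representing a
        # 3-transformer array, evaluated on a circle of receivers (geometry invented).
        import numpy as np

        c, f = 343.0, 120.0
        k = 2 * np.pi * f / c
        sources = np.array([[-4.0, 0.0], [0.0, 0.0], [4.0, 0.0]])   # source positions, m

        def relative_level_db(receiver):
            p = sum(np.exp(1j * k * np.linalg.norm(receiver - s)) / np.linalg.norm(receiver - s)
                    for s in sources)
            return 20.0 * np.log10(abs(p))

        for a in range(0, 360, 15):
            r = 50.0 * np.array([np.cos(np.radians(a)), np.sin(np.radians(a))])
            print(f"{a:3d} deg: {relative_level_db(r):6.1f} dB (relative)")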

  20. Sound & The Senses

    DEFF Research Database (Denmark)

    Schulze, Holger

    2012-01-01

    How are those sounds you hear right now technically generated and post-produced, how are they aesthetically conceptualized and how culturally dependent are they really? How is your ability to hear intertwined with all the other senses and their cultural, biographical and technological constructio...... over time? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Jonathan Sterne, AGF a.k.a Antye Greie, Jens Gerrit Papenburg & Holger Schulze.

  1. Classification of Normal Subjects and Pulmonary Function Disease Patients using Tracheal Respiratory Sound Detection System

    Energy Technology Data Exchange (ETDEWEB)

    Im, Jae Joong; Yi, Young Ju; Jeon, Young Ju [Chonbuk National University (Korea)

    2000-04-01

    A new auscultation system for the detection of breath sounds from the trachea was developed in house. A small microphone (Panasonic pin microphone) was encapsulated in a housing for resonant effect, and hardware for the sound detection was fabricated. Pulmonary function test results were compared with the parameters extracted from the frequency spectrum of the breath sounds obtained with the developed system. Results showed that the peak frequency and the relative ratio of the integrated values of the low (80-400 Hz) and high (400-800 Hz) frequency ranges revealed significant differences. The developed system could be used to distinguish normal subjects from patients with pulmonary disease. (author). 13 refs., 9 figs.
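
    The two spectral parameters named above, the peak frequency and the ratio of integrated power in the 80-400 Hz and 400-800 Hz bands, can be computed from a tracheal-sound recording roughly as sketched below; the PSD settings and the placeholder recording are assumptions, not the hardware or analysis of this record.

        # Sketch: peak frequency and low/high band power ratio of a tracheal
        # breath-sound recording (band limits 80-400 Hz and 400-800 Hz as above).
        import numpy as np
        from scipy.signal import welch

        def tracheal_parameters(x, fs):
            f, pxx = welch(x, fs=fs, nperseg=2048)
            def band(lo, hi):
                sel = (f >= lo) & (f <= hi)
                return np.trapz(pxx[sel], f[sel])
            peak_hz = f[np.argmax(pxx)]
            return peak_hz, band(80, 400) / (band(400, 800) + 1e-12)

        fs = 8000
        x = np.random.default_rng(0).standard_normal(10 * fs)   # placeholder recording
        peak_hz, low_high_ratio = tracheal_parameters(x, fs)
        print(f"peak frequency: {peak_hz:.0f} Hz, low/high ratio: {low_high_ratio:.2f}")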

  2. Comparison of RASS temperature profiles with other tropospheric soundings

    International Nuclear Information System (INIS)

    Bonino, G.; Lombardini, P.P.; Trivero, P.

    1980-01-01

    The vertical temperature profile of the lower troposphere can be measured with a radio-acoustic sounding system (RASS). A comparison of the thermal profiles measured with the RASS and with traditional methods shows a) the ability of RASS to produce vertical thermal profiles over an altitude range of 170 to 1000 m with temperature accuracy and height discrimination comparable to conventional soundings, b) the advantages of remote sensing offered by the new sounder, and c) the applicability of RASS both to assessing the evolution of thermodynamic conditions in the PBL and to sensing conditions conducive to high concentrations of air pollutants at ground level. (author)
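
    A RASS derives the temperature profile from the speed of sound tracked at each height; for dry air the standard approximation is c ≈ 20.05·√T with T in kelvin, so T ≈ (c/20.05)². The snippet below applies that relation to an assumed sound-speed profile; the heights and speeds are invented.

        # Sketch: convert RASS-tracked sound speeds to air temperature using the
        # dry-air approximation c ~= 20.05 * sqrt(T[K]).
        import numpy as np

        heights_m = np.array([170, 300, 500, 750, 1000])
        c_ms = np.array([343.8, 343.2, 342.4, 341.5, 340.6])   # assumed tracked sound speeds

        T_celsius = (c_ms / 20.05) ** 2 - 273.15
        for h, t in zip(heights_m, T_celsius):
            print(f"{h:5d} m: {t:5.1f} deg C")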

  3. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  4. Audibility of individual reflections in a complete sound field, III

    DEFF Research Database (Denmark)

    Bech, Søren

    1996-01-01

    This paper reports on the influence of individual reflections on the auditory localization of a loudspeaker in a small room. The sound field produced by a single loudspeaker positioned in a normal listening room has been simulated using an electroacoustic setup. The setup models the direct sound......-independent absorption coefficients of the room surfaces, and (2) a loudspeaker with directivity according to a standard two-way system and absorption coefficients according to real materials. The results have shown that subjects can distinguish reliably between timbre and localization, that the spectrum level above 2 k...

  5. How Do “Mute” Cicadas Produce Their Calling Songs?

    Science.gov (United States)

    Luo, Changqing; Wei, Cong; Nansen, Christian

    2015-01-01

    Insects have evolved a variety of structures and mechanisms to produce sounds, which are used for communication both within and between species. Among acoustic insects, cicada males are particularly known for their loud and diverse sounds which play an important role in communication. The main method of sound production in cicadas is the tymbal mechanism, and a relatively small number of cicada species possess both tymbal and stridulatory organs. However, cicadas of the genus Karenia do not have any specialized sound-producing structures, so they are referred to as “mute”. This denomination is quite misleading, as they indeed produce sounds. Here, we investigate the sound-producing mechanism and acoustic communication of the “mute” cicada, Karenia caelatata, and discover a new sound-production mechanism for cicadas: i.e., K. caelatata produces impact sounds by banging the forewing costa against the operculum. The temporal, frequency and amplitude characteristics of the impact sounds are described. Morphological studies and reflectance-based analyses reveal that the structures involved in sound production of K. caelatata (i.e., forewing, operculum, cruciform elevation, and wing-holding groove on scutellum) are all morphologically modified. Acoustic playback experiments and behavioral observations suggest that the impact sounds of K. caelatata are used in intraspecific communication and function as calling songs. The new sound-production mechanism expands our knowledge on the diversity of acoustic signaling behavior in cicadas and further underscores the need for more bioacoustic studies on cicadas that lack the tymbal mechanism. PMID:25714608

  6. Advances in the development of an integrated data assimilation and sounding system

    International Nuclear Information System (INIS)

    Dabberdt, W.F.; Parsons, D.; Kuo, Y.H.; Dudhia, J.; Guo, Y.R.; Van Baelen, J.; Martin, C.; Oncley, S.

    1994-01-01

    The Integrated Data Assimilation and Sounding System (IDASS) provides continuous high-resolution tropospheric profiles. The measurement system (Integrated Sounding System, or ISS) is developed around a suite of in situ and active and passive remote sensors. Observations from ISS networks provide a high-resolution description of atmospheric structure on the mesoscale. Measurements are coupled with a state-of-the-art mesoscale modeling system. The mesoscale data assimilation scheme is the Newtonian nudging technique. In the mesoscale data assimilation process, observations of wind, temperature, and humidity are used to nudge or relax the time-dependent model variables to the observed values. The end product is a highly resolved four-dimensional meteorological data set (including three components of wind, temperature, humidity, cloud water, and integrated moisture)
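
    The Newtonian nudging (relaxation) step can be written compactly; the sketch below is a generic illustration of the technique rather than the IDASS implementation, and the observation weighting and relaxation timescale are assumed values.

```python
import numpy as np

def nudge(model_state, observed, weight, tau, dt):
    """One Newtonian-relaxation step: pull model variables toward observations.

    model_state, observed : arrays of the same shape (e.g. wind, T, humidity)
    weight : 0..1 spatial/temporal weighting of the observation influence
    tau    : relaxation timescale in seconds (assumed here)
    dt     : model time step in seconds
    """
    tendency = weight * (observed - model_state) / tau
    return model_state + dt * tendency

# Example: relax a temperature field toward a sounding over one time step.
T_model = np.full(10, 285.0)                 # K
T_obs = np.linspace(284.0, 287.0, 10)        # K
print(nudge(T_model, T_obs, weight=0.8, tau=3600.0, dt=60.0))
```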

  7. How to take absorptive surfaces into account when designing outdoor sound reinforcement systems

    DEFF Research Database (Denmark)

    Rasmussen, Karsten bo

    1996-01-01

    When sound reinforcement systems are used outdoors, absorptive surfaces are usually present along the propagation path of the sound. This may lead to a very significant colouration of the spectrum received by the audience. The colouration depends on the location and directivity of the loudspeaker......, the nature of the absorptive surface (eg grass) and the location of the audience. It is discussed how this effect may be calculated and numerical examples are shown. The results show a significant colouration and attenuation of the sound due to grass-covered surfaces....
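
    The colouration mechanism can be approximated as two-path interference between the direct sound and a ground reflection; the sketch below assumes a single, frequency-independent reflection coefficient, which is a strong simplification of the grass impedance treated in the paper.

```python
import numpy as np

def comb_response(freqs, h_src, h_rec, dist, refl=-0.6, c=343.0):
    """Relative level of direct + ground-reflected paths (flat, assumed reflection coefficient)."""
    r_direct = np.hypot(dist, h_src - h_rec)
    r_reflect = np.hypot(dist, h_src + h_rec)        # image source below the ground plane
    k = 2 * np.pi * freqs / c
    p = np.exp(-1j * k * r_direct) / r_direct + refl * np.exp(-1j * k * r_reflect) / r_reflect
    return 20 * np.log10(np.abs(p) * r_direct)       # dB relative to the direct path alone

# Assumed geometry: loudspeaker 4 m high, listener ears at 1.5 m, 30 m away.
freqs = np.array([125, 250, 500, 1000, 2000, 4000], dtype=float)
print(comb_response(freqs, h_src=4.0, h_rec=1.5, dist=30.0))
```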

  8. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    Science.gov (United States)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.

  9. An integrated experimental and computational approach to material selection for sound proof thermally insulated enclosure of a power generation system

    Science.gov (United States)

    Waheed, R.; Tarar, W.; Saeed, H. A.

    2016-08-01

    Sound proof canopies for diesel power generators are fabricated with a layer of sound absorbing material applied to all the inner walls. The physical properties of the majority of commercially available sound proofing materials reveal that a material with a high sound absorption coefficient has very low thermal conductivity. Consequently, a good sound absorbing material is also a good heat insulator. In this research it has been found through various experiments that ordinary sound proofing materials tend to raise the inside temperature of the sound proof enclosure in certain turbo engines by capturing the heat produced by the engine and not allowing it to be transferred to the atmosphere. The same phenomenon is studied by creating a finite element model of the sound proof enclosure and performing steady-state and transient thermal analyses. The prospects of using aluminium foam as a sound proofing material have been studied, and it is found that the inside temperature of the sound proof enclosure can be cut down to the safe working temperature of the power generator engine without compromising sound proofing.

  10. SoleSound

    DEFF Research Database (Denmark)

    Zanotto, Damiano; Turchet, Luca; Boggs, Emily Marie

    2014-01-01

    This paper introduces the design of SoleSound, a wearable system designed to deliver ecological, audio-tactile, underfoot feedback. The device, which primarily targets clinical applications, uses an audio-tactile footstep synthesis engine informed by the readings of pressure and inertial sensors...... embedded in the footwear to integrate enhanced feedback modalities into the authors' previously developed instrumented footwear. The synthesis models currently implemented in the SoleSound simulate different ground surface interactions. Unlike similar devices, the system presented here is fully portable...

  11. Concurrent Acoustic Activation of the Medial Olivocochlear System Modifies the After-Effects of Intense Low-Frequency Sound on the Human Inner Ear.

    Science.gov (United States)

    Kugler, Kathrin; Wiegrebe, Lutz; Gürkov, Robert; Krause, Eike; Drexl, Markus

    2015-12-01

    Human hearing is rather insensitive to very low frequencies (i.e. below 100 Hz). Despite this insensitivity, low-frequency sound can cause oscillating changes of cochlear gain in inner ear regions processing even much higher frequencies. These alterations outlast the duration of the low-frequency stimulation by several minutes, for which the term 'bounce phenomenon' has been coined. Previously, we have shown that the bounce can be traced by monitoring frequency and level changes of spontaneous otoacoustic emissions (SOAEs) over time. It has been suggested elsewhere that large receptor potentials elicited by low-frequency stimulation produce a net Ca(2+) influx and associated gain decrease in outer hair cells. The bounce presumably reflects an underdamped, homeostatic readjustment of increased Ca(2+) concentrations and related gain changes after low-frequency sound offset. Here, we test this hypothesis by activating the medial olivocochlear efferent system during presentation of the bounce-evoking low-frequency (LF) sound. The efferent system is known to modulate outer hair cell Ca(2+) concentrations and receptor potentials, and therefore, it should modulate the characteristics of the bounce phenomenon. We show that simultaneous presentation of contralateral broadband noise (100 Hz-8 kHz, 65 and 70 dB SPL, 90 s, activating the efferent system) and ipsilateral low-frequency sound (30 Hz, 120 dB SPL, 90 s, inducing the bounce) affects the characteristics of bouncing SOAEs recorded after low-frequency sound offset. Specifically, the decay time constant of the SOAE level changes is shorter, and the transient SOAE suppression is less pronounced. Moreover, the number of new, transient SOAEs, as they are seen during the bounce, is reduced. Taken together, activation of the medial olivocochlear system during induction of the bounce phenomenon with low-frequency sound results in changed characteristics of the bounce phenomenon. Thus, our data provide experimental support

  12. Sound Synthesis and Evaluation of Interactive Footsteps and Environmental Sounds Rendering for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-01-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based ...... a soundscape significantly improves the recognition of the simulated environment....

  13. Numerical Model of the Human Cardiovascular System-Korotkoff Sounds Simulation

    Czech Academy of Sciences Publication Activity Database

    Maršík, František; Převorovská, Světlana; Brož, Z.; Štembera, V.

    Vol. 4, No. 2 (2004), pp. 193-199. ISSN 1432-9077. R&D Projects: GA ČR GA106/03/1073. Institutional research plan: CEZ:AV0Z2076919. Keywords: cardiovascular system * Korotkoff sounds * numerical simulation. Subject RIV: BK - Fluid Dynamics

  14. MECHANICAL HEART-VALVE PROSTHESES - SOUND LEVEL AND RELATED COMPLAINTS

    NARCIS (Netherlands)

    LAURENS, RRP; WIT, HP; EBELS, T

    In a randomised study, we investigated the sound production of mechanical heart valve prostheses and the complaints related to this sound. The CarboMedics, Bjork-Shiley monostrut and StJude Medical prostheses were compared. A-weighted levels of the pulse-like sound produced by the prosthesis were

  15. Interface for Barge-in Free Spoken Dialogue System Based on Sound Field Reproduction and Microphone Array

    Directory of Open Access Journals (Sweden)

    Hinamoto Yoichi

    2007-01-01

    Full Text Available A barge-in free spoken dialogue interface using sound field control and microphone array is proposed. In the conventional spoken dialogue system using an acoustic echo canceller, it is indispensable to estimate a room transfer function, especially when the transfer function is changed by various interferences. However, the estimation is difficult when the user and the system speak simultaneously. To resolve the problem, we propose a sound field control technique to prevent the response sound from being observed. Combined with a microphone array, the proposed method can achieve high elimination performance with no adaptive process. The efficacy of the proposed interface is ascertained in the experiments on the basis of sound elimination and speech recognition.

  16. Design and qualification of an UHV system for operation on sounding rockets

    Energy Technology Data Exchange (ETDEWEB)

    Grosse, Jens, E-mail: jens.grosse@dlr.de; Braxmaier, Claus [Center of Applied Space Technology and Microgravity (ZARM), University of Bremen, Bremen, 28359, Germany and German Aerospace Center (DLR) Bremen, Bremen, 28359 (Germany); Seidel, Stephan Tobias; Becker, Dennis; Lachmann, Maike Diana [Institute of Quantum Optics, Leibniz University Hanover, Hanover, 30167 (Germany); Scharringhausen, Marco [German Aerospace Center (DLR) Bremen, Bremen, 28359 (Germany); Rasel, Ernst Maria [Institute of Quantum Optics, Leibniz University Hanover, Hanover, 30167, Bremen (Germany)

    2016-05-15

    The sounding rocket mission MAIUS-1 has the objective of creating the first Bose–Einstein condensate in space; therefore, its scientific payload is a complete cold atom experiment built to be launched on a VSB-30 sounding rocket. An essential part of the setup is an ultrahigh vacuum system needed in order to sufficiently suppress interactions of the cooled atoms with the residual background gas. Contrary to vacuum systems on missions aboard satellites or the International Space Station, the required vacuum environment has to be reached within 47 s after motor burn-out. This paper contains a detailed description of the MAIUS-1 vacuum system, as well as a description of its qualification process for the operation under vibrational loads of up to 8.1 g RMS (where RMS is root mean square). Even though a pressure rise dependent on the level of vibration was observed, the design presented herein is capable of regaining a pressure of below 5 × 10⁻¹⁰ mbar in less than 40 s when tested at 5.4 g RMS. To the authors' best knowledge, it is the first UHV system qualified for operation on a sounding rocket.

  17. Using therapeutic sound with progressive audiologic tinnitus management.

    Science.gov (United States)

    Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A

    2008-09-01

    Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound, broadly categorized as environmental sound, music, and speech, resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).

  18. Interactively Evolving Compositional Sound Synthesis Networks

    DEFF Research Database (Denmark)

    Jónsson, Björn Þór; Hoover, Amy K.; Risi, Sebastian

    2015-01-01

    the space of potential sounds that can be generated through such compositional sound synthesis networks (CSSNs). To study the effect of evolution on subjective appreciation, participants in a listener study ranked evolved timbres by personal preference, resulting in preferences skewed toward the first......While the success of electronic music often relies on the uniqueness and quality of selected timbres, many musicians struggle with complicated and expensive equipment and techniques to create their desired sounds. Instead, this paper presents a technique for producing novel timbres that are evolved...

  19. Characterizing large river sounds: Providing context for understanding the environmental effects of noise produced by hydrokinetic turbines.

    Science.gov (United States)

    Bevelhimer, Mark S; Deng, Z Daniel; Scherelis, Constantin

    2016-01-01

    Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step toward understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with a 60 hp outboard motor to an 18-unit barge train being pushed upstream by a tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that the spherical model more closely approximates the observed sound attenuation.
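
    The spherical versus cylindrical comparison rests on the standard spreading-loss formulas, 20 log10(r) for spherical and 10 log10(r) for cylindrical spreading; a minimal sketch follows, where the source level and reference distance are assumed, illustrative values only.

```python
import math

def received_level(source_level_db, r, model="spherical", r_ref=1.0):
    """Propagated level under simple spreading; source_level_db is at r_ref metres."""
    if model == "spherical":
        tl = 20 * math.log10(r / r_ref)
    elif model == "cylindrical":
        tl = 10 * math.log10(r / r_ref)
    else:
        raise ValueError("model must be 'spherical' or 'cylindrical'")
    return source_level_db - tl

# Hypothetical 170 dB re 1 uPa source measured at 1 m, predicted at 100 m:
for m in ("spherical", "cylindrical"):
    print(m, round(received_level(170.0, 100.0, m), 1), "dB re 1 uPa")
```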

  20. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with the sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material reveal the form...... geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form....

  1. Photoacoustic Sounds from Meteors.

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, Richard E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tencer, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sweatt, William C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hogan, Roy E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Spurny, Pavel [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic)

    2015-03-01

    High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer’s ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  2. Lung and Heart Sounds Analysis: State-of-the-Art and Future Trends.

    Science.gov (United States)

    Padilla-Ortiz, Ana L; Ibarra, David

    2018-01-01

    Lung sounds, which include all sounds that are produced during the mechanism of respiration, may be classified into normal breath sounds and adventitious sounds. Normal breath sounds occur when no respiratory problems exist, whereas adventitious lung sounds (wheeze, rhonchi, crackle, etc.) are usually associated with certain pulmonary pathologies. Heart and lung sounds that are heard using a stethoscope are the result of mechanical interactions that indicate operation of cardiac and respiratory systems, respectively. In this article, we review the research conducted during the last six years on lung and heart sounds, instrumentation and data sources (sensors and databases), technological advances, and perspectives in processing and data analysis. Our review suggests that chronic obstructive pulmonary disease (COPD) and asthma are the most common respiratory diseases reported on in the literature; related diseases that are less analyzed include chronic bronchitis, idiopathic pulmonary fibrosis, congestive heart failure, and parenchymal pathology. Some new findings regarding the methodologies associated with advances in the electronic stethoscope have been presented for the auscultatory heart sound signaling process, including analysis and clarification of resulting sounds to create a diagnosis based on a quantifiable medical assessment. The availability of automatic interpretation of high precision of heart and lung sounds opens interesting possibilities for cardiovascular diagnosis as well as potential for intelligent diagnosis of heart and lung diseases.

  3. Toward Inverse Control of Physics-Based Sound Synthesis

    Science.gov (United States)

    Pfalz, A.; Berdahl, E.

    2017-05-01

    Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.
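
    A minimal sketch of the kind of sequence-to-sequence mapping described, written with PyTorch: synthesizer output features in, estimated gesture signal out. The feature dimensionality, network size, and synthetic training data are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class InverseController(nn.Module):
    """Map a sequence of synthesizer output features to the gesture that produced them."""
    def __init__(self, n_sound_features=64, n_gesture=1, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_sound_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_gesture)

    def forward(self, x):                  # x: (batch, time, n_sound_features)
        h, _ = self.lstm(x)
        return self.out(h)                 # (batch, time, n_gesture)

# Toy training loop on random data (stand-in for recorded synthesizer output).
model = InverseController()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
sound = torch.randn(8, 100, 64)            # batch of 8 sequences, 100 frames each
gesture = torch.randn(8, 100, 1)           # target control (gesture) signal
for _ in range(10):
    optim.zero_grad()
    loss = loss_fn(model(sound), gesture)
    loss.backward()
    optim.step()
```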

  4. Seismic and Biological Sources of Ambient Ocean Sound

    Science.gov (United States)

    Freeman, Simon Eric

    Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional `image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. A two-dimensional `map' of sound production over a coral reef in the Hawaiian Islands was obtained using two-dimensional hydrophone array in the third case. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed
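
    The azimuthal estimates mentioned above are typically obtained with delay-and-sum beamforming; the far-field sketch below is a generic illustration for a linear hydrophone array, with the array geometry, sampling rate, and sound speed all assumed.

```python
import numpy as np

def delay_and_sum(signals, mic_x, angle_deg, fs, c=1500.0):
    """Steer a linear array toward angle_deg (broadside = 0) and return beam power.

    signals : (n_mics, n_samples) array of hydrophone recordings
    mic_x   : hydrophone positions along the array axis, metres
    """
    delays = mic_x * np.sin(np.radians(angle_deg)) / c       # seconds per channel
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    phased = spectra * np.exp(2j * np.pi * freqs * delays[:, None])  # align channels
    beam = np.fft.irfft(phased.sum(axis=0), n)
    return float(np.mean(beam ** 2))

# Scan a toy 4-element array for the loudest direction (random stand-in data).
fs, mic_x = 48000, np.array([0.0, 0.5, 1.0, 1.5])
signals = np.random.randn(4, fs)
powers = {a: delay_and_sum(signals, mic_x, a, fs) for a in range(-90, 91, 10)}
print(max(powers, key=powers.get), "degrees")
```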

  5. Perceptual sensitivity to spectral properties of earlier sounds during speech categorization.

    Science.gov (United States)

    Stilp, Christian E; Assgari, Ashley A

    2018-02-28

    Speech perception is heavily influenced by surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, this can produce spectral contrast effects (SCEs) that bias perception of later sounds. For example, when context sounds have more energy in low-F 1 frequency regions, listeners report more high-F 1 responses to a target vowel, and vice versa. SCEs have been reported using various approaches for a wide range of stimuli, but most often, large spectral peaks were added to the context to bias speech categorization. This obscures the lower limit of perceptual sensitivity to spectral properties of earlier sounds, i.e., when SCEs begin to bias speech categorization. Listeners categorized vowels (/ɪ/-/ɛ/, Experiment 1) or consonants (/d/-/g/, Experiment 2) following a context sentence with little spectral amplification (+1 to +4 dB) in frequency regions known to produce SCEs. In both experiments, +3 and +4 dB amplification in key frequency regions of the context produced SCEs, but lesser amplification was insufficient to bias performance. This establishes a lower limit of perceptual sensitivity where spectral differences across sounds can bias subsequent speech categorization. These results are consistent with proposed adaptation-based mechanisms that potentially underlie SCEs in auditory perception. Recent sounds can change what speech sounds we hear later. This can occur when the average frequency composition of earlier sounds differs from that of later sounds, biasing how they are perceived. These "spectral contrast effects" are widely observed when sounds' frequency compositions differ substantially. We reveal the lower limit of these effects, as +3 dB amplification of key frequency regions in earlier sounds was enough to bias categorization of the following vowel or consonant sound. Speech categorization being biased by very small spectral differences across sounds suggests that spectral contrast effects occur

  6. Understanding Animal Detection of Precursor Earthquake Sounds.

    Science.gov (United States)

    Garstang, Michael; Kelley, Michael C

    2017-08-31

    We use recent research to provide an explanation of how animals might detect earthquakes before they occur. While the intrinsic value of such warnings is immense, we show that the complexity of the process may result in inconsistent responses of animals to the possible precursor signal. Using the results of our research, we describe a logical but complex sequence of geophysical events triggered by precursor earthquake crustal movements that ultimately result in a sound signal detectable by animals. The sound heard by animals occurs only when metal or other surfaces (glass) respond to vibrations produced by electric currents induced by distortions of the earth's electric fields caused by the crustal movements. A combination of existing measurement systems combined with more careful monitoring of animal response could nevertheless be of value, particularly in remote locations.

  7. Digital servo control of random sound test excitation. [in reverberant acoustic chamber

    Science.gov (United States)

    Nakich, R. B. (Inventor)

    1974-01-01

    A digital servocontrol system for random noise excitation of a test object in a reverberant acoustic chamber employs a plurality of sensors spaced in the sound field to produce signals in separate channels which are decorrelated and averaged. The average signal is divided into a plurality of adjacent frequency bands cyclically sampled by a time division multiplex system, converted into digital form, and compared to a predetermined spectrum value stored in digital form. The results of the comparisons are used to control a time-shared up-down counter to develop gain control signals for the respective frequency bands in the spectrum of random sound energy picked up by the microphones.
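
    The band-by-band servo logic described (compare each measured band level with a stored reference spectrum, then step the band gain up or down) can be sketched as follows; the band edges, target levels, step size, and tolerance are assumptions for illustration.

```python
import numpy as np

def update_band_gains(avg_signal, fs, band_edges, target_db, gains, step_db=0.5, tol_db=1.0):
    """One servo cycle: measure each band of the averaged microphone signal and
    nudge its gain toward the stored reference spectrum (up-down counter analogue)."""
    spectrum = np.abs(np.fft.rfft(avg_signal)) ** 2
    freqs = np.fft.rfftfreq(len(avg_signal), 1.0 / fs)
    for i, (lo, hi) in enumerate(band_edges):
        band_power = spectrum[(freqs >= lo) & (freqs < hi)].sum()
        level_db = 10 * np.log10(band_power + 1e-12)
        if level_db < target_db[i] - tol_db:
            gains[i] += step_db        # count up
        elif level_db > target_db[i] + tol_db:
            gains[i] -= step_db        # count down
    return gains

# Example with three bands and a flat target spectrum (all values assumed).
fs = 8000
avg = np.random.randn(fs)                       # decorrelated, averaged microphone signal
bands = [(125, 250), (250, 500), (500, 1000)]
gains = update_band_gains(avg, fs, bands, target_db=[30.0, 30.0, 30.0], gains=[0.0, 0.0, 0.0])
print(gains)
```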

  8. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    Science.gov (United States)

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face various hardships with shopping, reading, finding objects, etc. Therefore, we developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the blind user through the earphone. The user is able to recognize the type, motion state and location of the objects of interest with the help of SoundView. Compared with other visual assistance techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.

  9. Modelling of horn-type loudspeakers for outdoor sound reinforcement systems

    DEFF Research Database (Denmark)

    Schuhmacher, Andreas; Rasmussen, Karsten Bo

    1999-01-01

    -type loudspeakers is made. The agreement between measured and calculated results is very good provided that a sufficient number of modes is included in the simulation. Simulation models of this kind represent one of the first steps towards a CAD tool for outdoor sound reinforcement systems....

  10. Temperature dependence of sound velocity in yttrium ferrite

    International Nuclear Information System (INIS)

    L'vov, V.A.

    1979-01-01

    The effect of the phonon-magnon and phonon-phonon interactions on the temperature dependence of the longitudinal sound velocity in yttrium ferrite is considered. It is shown that at low temperatures four-particle phonon-magnon processes make the dominant contribution to the renormalization of the sound velocity. At higher temperatures the temperature dependence of the sound velocity is mainly determined by phonon-phonon processes.

  11. Memory for product sounds: the effect of sound and label type.

    Science.gov (United States)

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between auditory, visual, and the semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative for memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinders the memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that the memory performances for product sounds are task-dependent.

  12. Offshore dredger sound: source levels, sound maps and risk assessment (abstract)

    NARCIS (Netherlands)

    Jong, C.A.F. de; Ainslie, M.A.; Heinis, F.; Janmaat, J.

    2013-01-01

    The Port of Rotterdam is expanding to meet the growing demand to accommodate large cargo vessels. One of the licensing conditions was the monitoring of the underwater sound produced during its construction, with an emphasis on the establishment of acoustic source levels of the Trailing Suction

  13. A sound worth saving: acoustic characteristics of a massive fish spawning aggregation.

    Science.gov (United States)

    Erisman, Brad E; Rowell, Timothy J

    2017-12-01

    Group choruses of marine animals can produce extraordinarily loud sounds that markedly elevate levels of the ambient soundscape. We investigated sound production in the Gulf corvina (Cynoscion othonopterus), a soniferous marine fish with a unique reproductive behaviour threatened by overfishing, to compare with sounds produced by other marine animals. We coupled echosounder and hydrophone surveys to estimate the magnitude of the aggregation and sounds produced during spawning. We characterized individual calls and documented changes in the soundscape generated by the presence of as many as 1.5 million corvina within a spawning aggregation spanning distances up to 27 km. We show that calls by male corvina represent the loudest sounds recorded in a marine fish, and the spatio-temporal magnitude of their collective choruses is among the loudest animal sounds recorded in aquatic environments. While this wildlife spectacle is at great risk of disappearing due to overfishing, regional conservation efforts are focused on other endangered marine animals. © 2017 The Author(s).

  14. Sound engineering for diesel engines; Sound Engineering an Dieselmotoren

    Energy Technology Data Exchange (ETDEWEB)

    Enderich, A.; Fischer, R. [MAHLE Filtersysteme GmbH, Stuttgart (Germany)

    2006-07-01

    The strong acceptance of vehicles powered by turbo-charged diesel engines encourages several manufacturers to think about sporty diesel concepts. The approach of suppressing unpleasant noise through extensive insulation measures is not adequate to satisfy sporty needs: the acoustics cannot follow the engine's performance. This report documents that it is possible to give diesel-powered vehicles a sporty sound characteristic by using an advanced MAHLE motor-sound-system with a pressure-resistant membrane and an integrated load-controlled flap. With this, the specific acoustic disadvantages of the diesel engine, such as 'diesel knock' or rough engine running, can be masked. However, a motor-sound-system must not negate the original character of the diesel engine concept, but rather accentuate its strong torque characteristic in the middle engine speed range. (orig.)

  15. Rainsticks: Integrating Culture, Folklore, and the Physics of Sound

    Science.gov (United States)

    Moseley, Christine; Fies, Carmen

    2007-01-01

    The purpose of this activity is for students to build a rainstick out of materials in their own environment and imitate the sound of rain while investigating the physical principles of sound. Students will be able to relate the sound produced by an instrument to the type and quantity of materials used in its construction.

  16. Nuclear sound

    International Nuclear Information System (INIS)

    Wambach, J.

    1991-01-01

    Nuclei, like more familiar mechanical systems, undergo simple vibrational motion. Among these vibrations, sound modes are of particular interest since they reveal important information on the effective interactions among the constituents and, through extrapolation, on the bulk behaviour of nuclear and neutron matter. Sound wave propagation in nuclei shows strong quantum effects familiar from other quantum systems. Microscopic theory suggests that the restoring forces are caused by the complex structure of the many-Fermion wavefunction and, in some cases, have no classical analogue. The damping of the vibrational amplitude is strongly influenced by phase coherence among the particles participating in the motion. (author)

  17. Infra-sound cancellation and mitigation in wind turbines

    Science.gov (United States)

    Boretti, Albert; Ordys, Andrew; Al Zubaidy, Sarim

    2018-03-01

    The infra-sound spectra recorded inside homes located even several kilometres from wind turbine installations are characterized by large pressure fluctuations in the low frequency range. There is a significant body of literature suggesting that inaudible sounds at low frequency are sensed by humans and affect wellbeing through different mechanisms. These mechanisms include amplitude modulation of heard sounds, stimulation of subconscious pathways, causing endolymphatic hydrops, and possibly potentiating noise-induced hearing loss. We suggest the study of active infra-sound cancellation and mitigation to address the low frequency noise issues. Loudspeakers generate pressure wave components of the same amplitude and frequency as, but opposite phase to, the recorded infra-sound. They also produce pressure wave components within the audible range that reduce the perception of the residual infra-sound.
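
    For a single tone, the stated cancellation principle (a component of equal amplitude and frequency but opposite phase) reduces to the sketch below; the tone frequency, amplitude, and sampling rate are assumed, and a practical system would have to track amplitude and phase adaptively.

```python
import numpy as np

fs = 1000                          # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
f0 = 8.0                           # an infra-sound tone, Hz (assumed)

infra = 1.0 * np.sin(2 * np.pi * f0 * t)               # measured pressure component
anti = 1.0 * np.sin(2 * np.pi * f0 * t + np.pi)        # same amplitude, opposite phase
residual = infra + anti

print("residual RMS:", np.sqrt(np.mean(residual ** 2)))  # ~0 for a perfect match

# A 5 % amplitude mismatch leaves a clearly non-zero residual:
residual_err = infra + 0.95 * np.sin(2 * np.pi * f0 * t + np.pi)
print("residual RMS with 5% mismatch:", round(np.sqrt(np.mean(residual_err ** 2)), 3))
```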

  18. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Elena Geangu

    2015-04-01

    Full Text Available Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds’ processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds.

  19. Human amygdala activation by the sound produced during dental treatment: A fMRI study

    Directory of Open Access Journals (Sweden)

    Jen-Fang Yu

    2015-01-01

    Full Text Available During dental treatments, patients may experience negative emotions associated with the procedure. This study was conducted with the aim of using functional magnetic resonance imaging (fMRI) to visualize cerebral cortical stimulation among dental patients in response to auditory stimuli produced by ultrasonic scaling and power suction equipment. Subjects (n = 7) aged 23-35 years were recruited for this study. All were right-handed and underwent clinical pure-tone audiometry testing to confirm a normal hearing threshold below 20 dB hearing level (HL). As part of the study, subjects initially underwent a dental calculus removal treatment. During the treatment, subjects were exposed to ultrasonic auditory stimuli originating from the scaling handpiece and salivary suction instruments. After dental treatment, subjects were imaged with fMRI while being exposed to recordings of the noise from the same dental instruments so that cerebral cortical stimulation in response to aversive auditory stimulation could be observed. The independent-sample confirmatory t-test was used. Subjects showed stimulation in the amygdala and prefrontal cortex, indicating that the ultrasonic auditory stimuli elicited an unpleasant response. Patients experienced unpleasant sensations caused by contact stimuli during the treatment procedure. In addition, this study demonstrated that aversive auditory stimuli such as sounds from the ultrasonic scaling handpiece also cause aversive emotions. This was indicated by observed stimulation of the auditory cortex as well as the amygdala, showing that noise from the ultrasonic scaling handpiece was perceived as an aversive auditory stimulus by the subjects. Subjects can thus experience unpleasant sensations caused by the sound of the ultrasonic scaling handpiece alone.

  20. Sound source measurement by using a passive sound insulation and a statistical approach

    Science.gov (United States)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

    This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves, at low frequencies, the sound insulation provided by the passive system alone. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method and on the measurement error related to its application are reported as well.

  1. The effects of a sound-field amplification system on managerial time in middle school physical education settings.

    Science.gov (United States)

    Ryan, Stu

    2009-04-01

    The focus of this research effort was to examine the effect of a sound-field amplification system on managerial time in the beginning of class in a physical education setting. A multiple baseline design across participants was used to measure change in the managerial time of 2 middle school female physical education teachers using a portable sound-field amplification system. Managerial time is defined as the cumulative amount of time that students spend on organizational, transitional, and nonsubject matter tasks in a lesson. The findings showed that the amount of managerial time at the beginning of class clearly decreased when the teacher used sound-field amplification feedback to physical education students. Findings indicate an immediate need for administrators to determine the most appropriate, cost-effective procedure to support sound-field amplification systems in existing physical education settings.

  2. Alternative Paths to Hearing (A Conjecture. Photonic and Tactile Hearing Systems Displaying the Frequency Spectrum of Sound

    Directory of Open Access Journals (Sweden)

    E. H. Hara

    2006-01-01

    Full Text Available In this article, the hearing process is considered from a system engineering perspective. For those with total hearing loss, a cochlear implant is the only direct remedy. It first acts as a spectrum analyser and then electronically stimulates the neurons in the cochlea with a number of electrodes. Each electrode carries information on a separate frequency band (i.e., the spectrum) of the original sound signal. The neurons then relay the signals in a parallel manner to the section of the brain where sound signals are processed. Photonic and tactile hearing systems displaying the spectrum of sound are proposed as alternative paths to the section of the brain that processes sound. In view of the plasticity of the brain, which can rewire itself, the following conjectures are offered. After a certain period of training, a person without the ability to hear should be able to decipher the patterns of photonic or tactile displays of the sound spectrum and learn to ‘hear’. This is very similar to the case of a blind person learning to ‘read’ by recognizing the patterns created by the series of bumps as their fingers scan the Braille writing. The conjectures are yet to be tested. Designs of photonic and tactile systems displaying the sound spectrum are outlined.
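
    The front end shared by the cochlear implant and the proposed photonic or tactile displays, splitting the sound into a small number of frequency bands and presenting each band's energy, can be sketched as follows; the number of bands and their logarithmic spacing are assumptions.

```python
import numpy as np

def band_energies(x, fs, n_bands=8, f_lo=100.0, f_hi=8000.0):
    """Energy per logarithmically spaced band: the 'pattern' a photonic or
    tactile display (or implant electrode array) would present."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Usage with a 440 Hz test tone: most of the energy lands in one band.
fs = 16000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
print(np.round(10 * np.log10(band_energies(x, fs) + 1e-12), 1))
```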

  3. Perception of acoustic scale and size in musical instrument sounds.

    Science.gov (United States)

    van Dinther, Ralph; Patterson, Roy D

    2006-10-01

    There is size information in natural sounds. For example, as humans grow in height, their vocal tracts increase in length, producing a predictable decrease in the formant frequencies of speech sounds. Recent studies have shown that listeners can make fine discriminations about which of two speakers has the longer vocal tract, supporting the view that the auditory system discriminates changes on the acoustic-scale dimension. Listeners can also recognize vowels scaled well beyond the range of vocal tracts normally experienced, indicating that perception is robust to changes in acoustic scale. This paper reports two perceptual experiments designed to extend research on acoustic scale and size perception to the domain of musical sounds: The first study shows that listeners can discriminate the scale of musical instrument sounds reliably, although not quite as well as for voices. The second experiment shows that listeners can recognize the family of an instrument sound which has been modified in pitch and scale beyond the range of normal experience. We conclude that processing of acoustic scale in music perception is very similar to processing of acoustic scale in speech perception.

  4. THE MODULATED SOUNDS MADE BY THE TSETSE FLY ...

    African Journals Online (AJOL)

    Tsetse flies produce modulated sounds, variously described as singing, buzzing, squeaking or pinging. The calls are closely related to the vital functions of the community namely hunting, feeding, mating and larviposition. The ecological significance of this faculty, therefore, needs further investigation. The flight sounds ...

  5. Exploring Sound-Motion Textures in drum set performance

    DEFF Research Database (Denmark)

    Godøy, Rolf Inge; song, minho; Dahl, Sofia

    2017-01-01

    A musical texture, be that of an ensemble or of a solo instrumentalist, may be perceived as combinations of both simultaneous and sequential sound events. However, we believe that also sensations of the corresponding sound-producing events (e.g. hitting, stroking, bowing, blowing) contribute

  6. Assessment of the health effects of low-frequency sounds and infra-sounds from wind farms. ANSES Opinion. Collective expertise report

    International Nuclear Information System (INIS)

    Lepoutre, Philippe; Avan, Paul; Cheveigne, Alain de; Ecotiere, David; Evrard, Anne-Sophie; Hours, Martine; Lelong, Joel; Moati, Frederique; Michaud, David; Toppila, Esko; Beugnet, Laurent; Bounouh, Alexandre; Feltin, Nicolas; Campo, Pierre; Dore, Jean-Francois; Ducimetiere, Pierre; Douki, Thierry; Flahaut, Emmanuel; Gaffet, Eric; Lafaye, Murielle; Martinsons, Christophe; Mouneyrac, Catherine; Ndagijimana, Fabien; Soyez, Alain; Yardin, Catherine; Cadene, Anthony; Merckel, Olivier; Niaudet, Aurelie; Cadene, Anthony; Saddoki, Sophia; Debuire, Brigitte; Genet, Roger

    2017-03-01

    The French Agency for Food, Environmental and Occupational Health and Safety (ANSES) reiterates that wind turbines emit infra-sounds (sound below 20 Hz) and low-frequency sounds. There are also other sources of infra-sound emissions that can be natural (wind in particular) or anthropogenic (heavy-goods vehicles, heat pumps, etc.). The measurement campaigns undertaken during the expert appraisal enabled these emissions from three wind farms to be characterised. In general, only very high intensities of infra-sound can be heard or perceived by humans. At the minimum distance (of 500 metres) separating homes from wind farm sites set out by the regulations, the infra-sounds produced by wind turbines do not exceed hearing thresholds. Therefore, the disturbance related to audible noise potentially felt by people around wind farms mainly relates to frequencies above 50 Hz. The expert appraisal showed that mechanisms for health effects grouped under the term 'vibro-acoustic disease', reported in certain publications, have no serious scientific basis. There have been very few scientific studies on the potential health effects of infra-sounds and low frequencies produced by wind turbines. The review of these experimental and epidemiological data did not find any adequate scientific arguments for the occurrence of health effects related to exposure to noise from wind turbines, other than disturbance related to audible noise and a nocebo effect, which can help explain the occurrence of stress-related symptoms experienced by residents living near wind farms. However, recently acquired knowledge on the physiology of the cochlea-vestibular system has revealed physiological effects in animals induced by exposure to high-intensity infra-sounds. These effects, while plausible in humans, have yet to be demonstrated for exposure to levels comparable to those observed in residents living near wind farms. Moreover, the connection between these physiological effects and the occurrence of

  7. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Throughout several audio and audio-video tests we have compared both Foley and real sounds originated by an identical action. The main purpose was to evaluate if sound effects...

  8. Research of Control System and Fault Diagnosis of the Sound-absorbing Board Production Line

    Directory of Open Access Journals (Sweden)

    Yanjun Xiao

    2014-08-01

    Full Text Available The Programmable Logic Controller (PLC) is the core of the control system of the sound-absorbing board production line, and the design of fault diagnosis is an essential module of that line. The article discusses the application of the PLC in the control system of the production line and designs methods for graded handling and prevention of faults that make use of the PLC's logic functions. The method is readily extensible and provides useful guidance for fault diagnosis in other automation equipment.

  9. Low frequency sound field enhancement system for rectangular rooms using multiple low frequency loudspeakers

    DEFF Research Database (Denmark)

    Celestinos, Adrian; Nielsen, Sofus Birkedal

    2006-01-01

    an enhancement system with extra loudspeakers the sound pressure level distribution along the listening area presents a significant improvement in the subwoofer frequency range. The system is simulated and implemented on the three different rooms and finally verified by measurements on the real rooms.......Rectangular rooms have strong influence on the low frequency performance of loudspeakers. Simulations of three different room sizes have been carried out using finite-difference time-domain method (FDTD) in order to predict the behaviour of the sound field at low frequencies. By using...
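
    A one-dimensional version of the FDTD scheme gives a feel for the method used in the paper; the grid spacing, time step, rigid boundaries, and source frequency below are assumptions, and an actual room simulation would be three-dimensional.

```python
import numpy as np

c, dx = 343.0, 0.05                 # sound speed (m/s), grid spacing (m)
dt = dx / c                         # time step at the 1-D stability limit
nx, steps = 200, 400                # 10 m duct, 400 updates

p = np.zeros(nx)                    # pressure at cell centres
u = np.zeros(nx + 1)                # particle velocity at cell faces
rho = 1.21                          # air density, kg/m^3

for n in range(steps):
    # Staggered leapfrog updates (rigid ends: u[0] = u[-1] = 0).
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    p -= dt * rho * c**2 / dx * (u[1:] - u[:-1])
    p[nx // 4] += np.sin(2 * np.pi * 50.0 * n * dt)   # 50 Hz source term

print("peak pressure after", steps, "steps:", round(float(np.abs(p).max()), 3))
```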

  10. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12-158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long-term authority to import/export natural gas from/to...

  11. Non-Wovens as Sound Reducers

    Science.gov (United States)

    Belakova, D.; Seile, A.; Kukle, S.; Plamus, T.

    2018-04-01

    Within the present study, the effect of hemp (40 wt%) and polylactide (60 wt%), non-woven surface density, thickness and number of fibre web layers on the sound absorption coefficient and the sound transmission loss in the frequency range from 50 to 5000 Hz is analysed. The sound insulation properties of the experimental samples have been determined, compared to those in practical use, and the possible uses of the material have been defined. Non-woven materials are ideally suited for use in acoustic insulation products because the arrangement of fibres produces a porous material structure, which leads to a greater interaction between sound waves and fibre structure. Of all the tested samples (A, B and D), non-woven variant B had a surface density 1.22 times that of sample A and 1.15 times that of sample D. By placing non-wovens one above the other in 2 layers, it is possible to increase the absorption coefficient of the material, which depending on the frequency corresponds to C, D, and E sound absorption classes. Sample A demonstrates the best sound absorption of all the three samples in the frequency range from 250 to 2000 Hz. In the test frequency range from 50 to 5000 Hz, the sound transmission loss varies from 0.76 (Sample D at 63 Hz) to 3.90 (Sample B at 5000 Hz).

  12. COMBINED EFFECT OF THE AIRBORNE AND IMPACT NOISE PRODUCED ONTO THE SOUND INSULATION OF INSERTED FLOORS OF RESIDENTIAL BUILDINGS: THEORETICAL ASPECTS

    Directory of Open Access Journals (Sweden)

    Saltykov Ivan Petrovich

    2012-10-01

    Full Text Available The indoor environment of residential buildings is a complex system consisting of diverse though related elements. An optimal correlation of indoor-space parameters translates into a balanced and harmonious living environment, free from stimulating or irritating factors that interfere with work or relaxation. The author has selected three principal factors of the indoor environment: heat, daylight and sound. The research has revealed a strong link between these factors. Noise pollution of residential houses is taken into account through the introduction of the airborne insulation index and the impact sound index underneath the inserted floor. The findings of theoretical research and experiments have demonstrated a strong functional relationship between airborne and impact sound values.

  13. Proximal mechanisms for sound production in male Pacific walruses

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Reichmuth, Colleen

    2012-01-01

    The songs of male walruses during the breeding season have been noted to have some of the most unusual characteristics observed among mammalian sounds. In contrast to the more guttural vocalizations of most other carnivores, their acoustic displays have impulsive and metallic features more similar to those found in industrial workplaces than in nature. The patterned knocks and bells that comprise male songs are not thought to be true vocalizations, but rather sounds produced with structures other than the vocal tract and larynx. To determine how male walruses produce and emit these sounds, we examined the anatomical origins of knocking and bell sounds and gained a mechanistic understanding of how these sounds are generated within the body and transmitted to the environment. These pathways are illustrated with acoustic and video data and considered with respect to the unique biology of this species.

  14. Tool-use-associated sound in the evolution of language.

    Science.gov (United States)

    Larsson, Matz

    2015-09-01

    Proponents of the motor theory of language evolution have primarily focused on the visual domain and communication through observation of movements. In the present paper, it is hypothesized that the production and perception of sound, particularly of incidental sound of locomotion (ISOL) and tool-use sound (TUS), also contributed. Human bipedalism resulted in rhythmic and more predictable ISOL. It has been proposed that this stimulated the evolution of musical abilities, auditory working memory, and abilities to produce complex vocalizations and to mimic natural sounds. Since the human brain proficiently extracts information about objects and events from the sounds they produce, TUS, and mimicry of TUS, might have achieved an iconic function. The prevalence of sound symbolism in many extant languages supports this idea. Self-produced TUS activates multimodal brain processing (motor neurons, hearing, proprioception, touch, vision), and TUS stimulates primate audiovisual mirror neurons, which is likely to stimulate the development of association chains. Tool use and auditory gestures involve motor processing of the forelimbs, which is associated with the evolution of vertebrate vocal communication. The production, perception, and mimicry of TUS may have resulted in a limited number of vocalizations or protowords that were associated with tool use. A new way to communicate about tools, especially when out of sight, would have had selective advantage. A gradual change in acoustic properties and/or meaning could have resulted in arbitrariness and an expanded repertoire of words. Humans have been increasingly exposed to TUS over millions of years, coinciding with the period during which spoken language evolved. ISOL and tool-use-related sound are worth further exploration.

  15. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  16. Statistical Analysis for Subjective and Objective Evaluations of Dental Drill Sounds.

    Directory of Open Access Journals (Sweden)

    Tomomi Yamada

    Full Text Available The sound produced by a dental air turbine handpiece (dental drill) can markedly influence the sound environment in a dental clinic. Indeed, many patients report that the sound of a dental drill elicits an unpleasant feeling. Although several manufacturers have attempted to reduce the sound pressure levels produced by dental drills during idling based on ISO 14457, the sound emitted by such drills under active drilling conditions may negatively influence the dental clinic sound environment. The physical metrics related to the unpleasant impressions associated with dental drill sounds have not been determined. In the present study, psychological measurements of dental drill sounds were conducted with the aim of facilitating improvement of the sound environment at dental clinics. Specifically, we examined the impressions elicited by the sounds of 12 types of dental drills in idling and drilling conditions using a semantic differential. The analysis revealed that the impressions of dental drill sounds varied considerably between idling and drilling conditions and among the examined drills. This finding suggests that measuring the sound of a dental drill in idling conditions alone may be insufficient for evaluating the effects of the sound. We related the results of the psychological evaluations to those of measurements of the physical metrics of equivalent continuous A-weighted sound pressure level (LAeq) and sharpness. Factor analysis indicated that impressions of the dental drill sounds consisted of two factors: "metallic and unpleasant" and "powerful". LAeq had a strong relationship with "powerful impression", calculated sharpness was positively related to "metallic impression", and "unpleasant impression" was predicted by the combination of both LAeq and calculated sharpness. The present analyses indicate that, in addition to a reduction in sound pressure level, refining the frequency components of dental drill sounds is important for creating a

  17. Design, development and test of the gearbox condition monitoring system using sound signal processing

    Directory of Open Access Journals (Sweden)

    M Zamani

    2016-09-01

    Full Text Available Introduction One of the ways used to minimize the cost of maintenance and repair of rotating industrial equipment is condition monitoring using acoustic analysis. One of the most important requirements in the application of industrial equipment is reliability. Each dynamic, electrical, hydraulic or thermal system has certain characteristics which indicate the normal condition of the machine during operation. Any change in these characteristics can be a sign of a problem in the machine. The aim of condition monitoring is to determine the system condition by measuring these characteristic signals and to use this information to predict system failure. There are many methods for condition monitoring of different systems, but sound analysis is widely accepted and used as a method for investigating the condition of rotating machines. The aim of this research was to design and construct the gearbox under consideration and to use the data obtained in the frequency and time spectra for sound analysis and fault diagnosis. Materials and Methods This research was conducted at the biosystems mechanical engineering workshop of Aboureihan College, University of Tehran, on February 15, 2015. In this research, in order to investigate the diagnosis procedure and the gearbox condition, a system was designed and then constructed. The sound of intact and damaged gearboxes was recorded with an audiometer and stored on a computer for data analysis. Sound measurement was carried out at three pinion speeds of 749, 1050 and 1496 rpm, for an intact gearbox and for gearboxes with a fractured tooth and a worn tooth. Gearbox design and construction: In order to conduct the research, a gearbox with simple gearwheels was designed according to current needs. The gearbox and its accessories were then modelled in CATIA V5-R20 software and the system was constructed. A gearbox is a machine that is used for mechanical power transmission

  18. Speed of sound in hadronic matter using non-extensive statistics

    International Nuclear Information System (INIS)

    Khuntia, Arvind; Sahoo, Pragati; Garg, Prakhar; Sahoo, Raghunath; Jean Cleymans

    2015-01-01

    The evolution of the dense matter formed in high energy hadronic and nuclear collisions is controlled by the initial energy density and temperature. The expansion of the system is due to the very high initial pressure, with lowering of temperature and energy density. The pressure (P) and energy density (ε) are related through the speed of sound (c_s^2) under the condition of local thermal equilibrium. The speed of sound plays a crucial role in the hydrodynamical expansion of the dense matter created and in the critical behaviour of the system evolving from the deconfined Quark Gluon Plasma (QGP) to the confined hadronic phase. There have been several experimental and theoretical studies in this direction. The non-extensive Tsallis statistics gives a better description of the transverse momentum spectra of the particles produced in high energy p+p (p̄) and e⁺+e⁻ collisions
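
    The pressure–energy-density relation referred to above is the standard definition of the squared speed of sound (a general thermodynamic identity rather than something specific to this record):

```latex
c_s^{2} \;=\; \left.\frac{\partial P}{\partial \epsilon}\right|_{s}
```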

  19. Moth hearing and sound communication

    DEFF Research Database (Denmark)

    Nakano, Ryo; Takanashi, Takuma; Surlykke, Annemarie

    2015-01-01

    Active echolocation enables bats to orient and hunt the night sky for insects. As a counter-measure against the severe predation pressure, many nocturnal insects have evolved ears sensitive to ultrasonic bat calls. In moths, bat-detection was the principal purpose of hearing, as evidenced by comparable hearing physiology with best sensitivity in the bat echolocation range, 20–60 kHz, across moths in spite of diverse ear morphology. Some eared moths subsequently developed sound-producing organs to warn/startle/jam attacking bats and/or to communicate intraspecifically with sound. Not only the sounds for interaction with bats, but also mating signals are within the frequency range where bats echolocate, indicating that sound communication developed after hearing by “sensory exploitation”. Recent findings on moth sound communication reveal that close-range (~ a few cm) communication with low

  20. Subband Approach to Bandlimited Crosstalk Cancellation System in Spatial Sound Reproduction

    Science.gov (United States)

    Bai, Mingsian R.; Lee, Chih-Chung

    2006-12-01

    Crosstalk cancellation system (CCS) plays a vital role in spatial sound reproduction using multichannel loudspeakers. However, this technique is still not in full-blown use in practical applications due to heavy computational loading. To reduce the computational loading, a bandlimited CCS is presented in this paper on the basis of a subband filtering approach. A pseudoquadrature mirror filter (QMF) bank is employed in the implementation of the CCS filters, which are bandlimited to 6 kHz, where human localization is most sensitive. In addition, a frequency-dependent regularization scheme is adopted in designing the CCS inverse filters. To justify the proposed system, subjective listening experiments were undertaken in an anechoic room. The experiments include two parts: the source localization test and the sound quality test. Analysis of variance (ANOVA) is applied to process the data and assess the statistical significance of the subjective experiments. The results indicate that the bandlimited CCS performed comparably to the fullband CCS, whereas the computational loading was reduced by approximately eighty percent.

  1. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," which is a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  2. Neural Correlates of Indicators of Sound Change in Cantonese: Evidence from Cortical and Subcortical Processes.

    Science.gov (United States)

    Maggu, Akshay R; Liu, Fang; Antoniou, Mark; Wong, Patrick C M

    2016-01-01

    Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language undergo changes over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators ) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, we examined whether such misperceptions are represented in neural processes of the auditory system. We investigated behavioral, subcortical (via FFR), and cortical (via P300) manifestations of sound change processing in Cantonese, a Chinese language in which several lexical tones are merging. Across the merging categories, we observed a similar gradation of speech perception abilities in both behavior and the brain (subcortical and cortical processes). Further, we also found that behavioral evidence of tone merging correlated with subjects' encoding at the subcortical and cortical levels. These findings indicate that tone-merger categories, that are indicators of sound change in Cantonese, are represented neurophysiologically with high fidelity. Using our results, we speculate that innovators encode speech in a slightly deviant neurophysiological manner, and thus produce speech divergently that eventually spreads across the community and contributes to sound change.

  3. Sounds of silence: How to animate virtual worlds with sound

    Science.gov (United States)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  4. The sound of arousal in music is context-dependent.

    Science.gov (United States)

    Blumstein, Daniel T; Bryant, Gregory A; Kaye, Peter

    2012-10-23

    Humans, and many non-human animals, produce and respond to harsh, unpredictable, nonlinear sounds when alarmed, possibly because these are produced when acoustic production systems (vocal cords and syrinxes) are overblown in stressful, dangerous situations. Humans can simulate nonlinearities in music and soundtracks through the use of technological manipulations. Recent work found that film soundtracks from different genres differentially contain such sounds. We designed two experiments to determine specifically how simulated nonlinearities in soundtracks influence perceptions of arousal and valence. Subjects were presented with emotionally neutral musical exemplars that had neither noise nor abrupt frequency transitions, or versions of these musical exemplars that had noise or abrupt frequency upshifts or downshifts experimentally added. In a second experiment, these acoustic exemplars were paired with benign videos. Judgements of both arousal and valence were altered by the addition of these simulated nonlinearities in the first, music-only, experiment. In the second, multi-modal, experiment, valence (but not arousal) decreased with the addition of noise or frequency downshifts. Thus, the presence of a video image suppressed the ability of simulated nonlinearities to modify arousal. This is the first study examining how nonlinear simulations in music affect emotional judgements. These results demonstrate that the perception of potentially fearful or arousing sounds is influenced by the perceptual context and that the addition of a visual modality can antagonistically suppress the response to an acoustic stimulus.

  5. Sound For Animation And Virtual Reality

    Science.gov (United States)

    Hahn, James K.; Docter, Pete; Foster, Scott H.; Mangini, Mark; Myers, Tom; Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)

    1995-01-01

    Sound is an integral part of the experience in computer animation and virtual reality. In this course, we will present some of the important technical issues in sound modeling, rendering, and synchronization as well as the "art" and business of sound that are being applied in animations, feature films, and virtual reality. The central theme is to bring leading researchers and practitioners from various disciplines to share their experiences in this interdisciplinary field. The course will give the participants an understanding of the problems and techniques involved in producing and synchronizing sounds, sound effects, dialogue, and music. The problem spans a number of domains including computer animation and virtual reality. Since sound has been an integral part of animations and films much longer than for computer-related domains, we have much to learn from traditional animation and film production. By bringing leading researchers and practitioners from a wide variety of disciplines, the course seeks to give the audience a rich mixture of experiences. It is expected that the audience will be able to apply what they have learned from this course in their research or production.

  6. Film sound in preservation and presentation

    NARCIS (Netherlands)

    Campanini, S.

    2014-01-01

    What is the nature of film sound? How does it change through time? How can film sound be conceptually defined? To address these issues, this work assumes the perspective of film preservation and presentation practices, describing the preservation of early sound systems, as well as the presentation

  7. Sounds scary? Lack of habituation following the presentation of novel sounds.

    Directory of Open Access Journals (Sweden)

    Tine A Biedenweg

    Full Text Available BACKGROUND: Animals typically show less habituation to biologically meaningful sounds than to novel signals. We might therefore expect that acoustic deterrents should be based on natural sounds. METHODOLOGY: We investigated responses by western grey kangaroos (Macropus fuliginosus) towards playback of natural sounds (alarm foot stomps and Australian raven (Corvus coronoides) calls) and artificial sounds (faux snake hiss and bull whip crack). We then increased the rate of presentation to examine whether animals would habituate. Finally, we varied the frequency of playback to investigate optimal rates of delivery. PRINCIPAL FINDINGS: Nine behaviors clustered into five Principal Components. PC factors 1 and 2 (animals alert or looking, or hopping and moving out of area) accounted for 36% of variance. PC factor 3 (eating cessation, taking flight, movement out of area) accounted for 13% of variance. Factors 4 and 5 (relaxing, grooming and walking; 12 and 11% of variation, respectively) discontinued upon playback. The whip crack was most evocative; eating was reduced from 75% of time spent prior to playback to 6% following playback (post alarm stomp: 32%, raven call: 49%, hiss: 75%). Additionally, 24% of individuals took flight and moved out of the area (50 m radius) in response to the whip crack (foot stomp: 0%, raven call: 8% and 4%, hiss: 6%). Increasing the rate of presentation (12x/min × 2 min) caused 71% of animals to move out of the area. CONCLUSIONS/SIGNIFICANCE: The bull whip crack, an artificial sound, was as effective as the alarm stomp at eliciting aversive behaviors. Kangaroos did not fully habituate despite hearing the signal up to 20x/min. The highest rates of playback did not elicit the greatest responses, suggesting that 'more is not always better'. Ultimately, by utilizing both artificial and biological sounds, predictability may be masked or offset, so that habituation is delayed and more effective deterrents may be produced.

  8. A Low Cost GPS System for Real-Time Tracking of Sounding Rockets

    Science.gov (United States)

    Markgraf, M.; Montenbruck, O.; Hassenpflug, F.; Turner, P.; Bull, B.; Bauer, Frank (Technical Monitor)

    2001-01-01

    This paper describes the development as well as the on-ground and the in-flight evaluation of a low cost Global Positioning System (GPS) system for real-time tracking of sounding rockets. The flight unit comprises a modified ORION GPS receiver and a newly designed switchable antenna system composed of a helical antenna in the rocket tip and a dual-blade antenna combination attached to the body of the service module. Aside from the flight hardware a PC based terminal program has been developed to monitor the GPS data and graphically displays the rocket's path during the flight. In addition an Instantaneous Impact Point (IIP) prediction is performed based on the received position and velocity information. In preparation for ESA's Maxus-4 mission, a sounding rocket test flight was carried out at Esrange, Kiruna, on 19 Feb. 2001 to validate existing ground facilities and range safety installations. Due to the absence of a dedicated scientific payload, the flight offered the opportunity to test multiple GPS receivers and assess their performance for the tracking of sounding rockets. In addition to the ORION receiver, an Ashtech G12 HDMA receiver and a BAE (Canadian Marconi) Allstar receiver, both connected to a wrap-around antenna, have been flown on the same rocket as part of an independent experiment provided by the Goddard Space Flight Center. This allows an in-depth verification and trade-off of different receiver and antenna concepts.
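
    As a purely illustrative sketch (not the actual Maxus-4 ground software), an instantaneous impact point can be estimated from the received position and velocity by ballistic extrapolation; the drag-free, flat-Earth assumptions and the example state vector below are simplifications chosen for brevity:

```python
import math

def instantaneous_impact_point(x, y, z, vx, vy, vz, g=9.81):
    """Estimate the ground impact point (x, y) assuming purely ballistic,
    drag-free flight over a flat Earth -- a deliberately simplified model."""
    # time until the altitude z returns to zero under constant gravity
    disc = vz**2 + 2.0 * g * z
    if disc < 0:
        raise ValueError("no ballistic impact solution")
    t_impact = (vz + math.sqrt(disc)) / g
    return x + vx * t_impact, y + vy * t_impact

# example with a placeholder state vector: 10 km altitude, 300 m/s horizontal, 200 m/s vertical
print(instantaneous_impact_point(0.0, 0.0, 10_000.0, 300.0, 0.0, 200.0))
```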

  9. On-Chip electric power generation system from sound of portable music plyers and smartphones towerd portable uTAS

    NARCIS (Netherlands)

    Naito, T.; Kaji, N.; le Gac, Severine; Tokeshi, M.; van den Berg, Albert; Baba, Y.; Fujii, T.; Hibara, A.; Takeuchi, S.; Fukuba, T.

    2012-01-01

    This paper demonstrates electric generation from sound to minimize and integrate microfluidic systems for point of care testing or in-situ analysis. In this work, 5.4 volts and 50 mW DC was generated from sound through an earphone cable, which is a versatile system and able to actuate small size and

  10. Evaluation of Routine Atmospheric Sounding Measurements using Unmanned Systems (ERASMUS)

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Geoffrey [NASA Goddard Space Flight Center (GSFC), Greenbelt, MD (United States)

    2016-06-30

    The use of small unmanned aircraft systems (sUAS) with miniature sensor systems for atmospheric research is an important capability to develop. The Evaluation of Routine Atmospheric Sounding Measurements using Unmanned Systems (ERASMUS) project, lead by Dr. Gijs de Boer of the Cooperative Institute for Research in Environmental Sciences (CIRES- a partnership of NOAA and CU-Boulder), is a significant milestone in realizing this new potential. This project has clearly demonstrated that the concept of sUAS utilization is valid, and miniature instrumentation can be used to further our understanding of the atmospheric boundary layer in the arctic.

  11. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which `acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ , with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
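
    The Lorentz factor mentioned in this record presumably takes its familiar form, with the speed of sound c playing the role of the invariant speed (standard expressions, reconstructed here rather than quoted from the paper):

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\qquad L = \frac{L_{0}}{\gamma},
\qquad \Delta t = \gamma\,\Delta t_{0}
```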

  12. Heart sounds analysis via esophageal stethoscope system in beagles.

    Science.gov (United States)

    Park, Sang Hi; Shin, Young Duck; Bae, Jin Ho; Kwon, Eun Jung; Lee, Tae-Soo; Shin, Ji-Yun; Kim, Yeong-Cheol; Min, Gyeong-Deuk; Kim, Myoung hwan

    2013-10-01

    The esophageal stethoscope is less invasive, easy to handle, and provides a great deal of information. The purpose of this study was to investigate the correlation between blood pressure and heart sounds as measured by an esophageal stethoscope. Four male beagles weighing 10 to 12 kg were selected as experimental subjects. After general anesthesia, the esophageal stethoscope was inserted. After connecting the microphone, the heart sounds were visualized and recorded through self-developed equipment and software. The amplitudes of S1 and S2 were monitored in real time to examine changes as the blood pressure increased and decreased. The relationship between the ratio of S1 to S2 (S1/S2) and changes in blood pressure due to ephedrine was evaluated. The same experiment was performed with different concentrations of isoflurane. In the inotropic-agent experiment, S1 showed a high correlation with changes in blood pressure. The relationship between S1/S2 and change in blood pressure showed a positive correlation in each experimental subject. In the volatile anesthetic experiment, the heart sounds decreased as MAC increased. Heart sounds were analyzed successfully with the esophageal stethoscope through the self-developed program and equipment. A proportional change in heart sounds was confirmed when blood pressure was changed using inotropic agents or volatile anesthetics. The esophageal stethoscope can achieve the closest proximity to the heart to hear its sounds in a non-invasive manner.
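
    As a hypothetical illustration of the kind of processing described (the authors' own program is not reproduced in this record), the S1/S2 amplitude ratio could be estimated from labelled segments of the stethoscope signal roughly as follows; the segment indices and synthetic beat are placeholders:

```python
import numpy as np

def s1_s2_ratio(signal, s1_slice, s2_slice):
    """Peak-to-peak amplitude ratio of S1 to S2 for one cardiac cycle.
    s1_slice / s2_slice are (start, stop) sample indices of the labelled sounds."""
    s1 = signal[slice(*s1_slice)]
    s2 = signal[slice(*s2_slice)]
    return np.ptp(s1) / np.ptp(s2)

# toy example with a synthetic beat in which S1 is larger than S2
fs = 4000
t = np.arange(fs) / fs
beat = 0.8 * np.sin(2 * np.pi * 60 * t) * np.exp(-((t - 0.1) / 0.02) ** 2) \
     + 0.4 * np.sin(2 * np.pi * 80 * t) * np.exp(-((t - 0.4) / 0.02) ** 2)
print(round(s1_s2_ratio(beat, (200, 600), (1400, 1800)), 2))
```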

  13. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: one of the coolest activities is whacking a spinning metal rod...

  14. A Fast Algorithm of Cartographic Sounding Selection

    Institute of Scientific and Technical Information of China (English)

    SUI Haigang; HUA Li; ZHAO Haitao; ZHANG Yongli

    2005-01-01

    An effective strategy and framework that adequately integrate the automated and manual processes for fast cartographic sounding selection is presented. Important submarine topographic features are extracted for the selection of important soundings, and an improved "influence circle" algorithm is introduced for sounding selection. For automatic configuration of the sounding distribution pattern, a special algorithm considering multiple factors is employed. A semi-automatic method for resolving ambiguous conflicts is described. On the basis of these algorithms and strategies, a system named HGIS for fast cartographic sounding selection has been developed and applied at the Chinese Marine Safety Administration Bureau (CMSAB). The application experiments show that the system is effective and reliable. Finally, some conclusions and future work are given.
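
    A minimal sketch of an "influence circle" style selection is given below, assuming (as is common in cartographic sounding generalization, though not specified in this record) that the shallowest soundings are kept first and any sounding falling inside the influence radius of an already selected one is suppressed; the point list and radius are hypothetical:

```python
def select_soundings(soundings, radius):
    """soundings: list of (x, y, depth); returns a generalized subset.
    Shallow soundings are prioritised because they matter most for navigation safety."""
    selected = []
    for x, y, d in sorted(soundings, key=lambda s: s[2]):  # shallowest first
        # keep the sounding only if it lies outside every influence circle kept so far
        if all((x - sx) ** 2 + (y - sy) ** 2 >= radius ** 2 for sx, sy, _ in selected):
            selected.append((x, y, d))
    return selected

# toy example with hypothetical coordinates (metres) and depths (metres)
pts = [(0, 0, 5.2), (3, 4, 4.8), (50, 10, 12.0), (52, 12, 11.5)]
print(select_soundings(pts, radius=10.0))
```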

  15. Sound Visualisation

    OpenAIRE

    Dolenc, Peter

    2013-01-01

    This thesis describes the construction of a subwoofer case that has the extra functionality of being able to produce special visual effects and display visualizations that match the currently playing sound. For this purpose, multiple lighting elements made of LEDs (light-emitting diodes) were installed on the subwoofer case. The lighting elements are controlled by dedicated software that was also developed. The software runs on an STM32F4-Discovery evaluation board inside a ...

  16. Sounds of Modified Flight Feathers Reliably Signal Danger in a Pigeon.

    Science.gov (United States)

    Murray, Trevor G; Zeil, Jochen; Magrath, Robert D

    2017-11-20

    In his book on sexual selection, Darwin [1] devoted equal space to non-vocal and vocal communication in birds. Since then, vocal communication has become a model for studies of neurobiology, learning, communication, evolution, and conservation [2, 3]. In contrast, non-vocal "instrumental music," as Darwin called it, has only recently become subject to sustained inquiry [4, 5]. In particular, outstanding work reveals how feathers, often highly modified, produce distinctive sounds [6-9], and suggests that these sounds have evolved at least 70 times, in many orders [10]. It remains to be shown, however, that such sounds are signals used in communication. Here we show that crested pigeons (Ocyphaps lophotes) signal alarm with specially modified wing feathers. We used video and feather-removal experiments to demonstrate that the highly modified 8th primary wing feather (P8) produces a distinct note during each downstroke. The sound changes with wingbeat frequency, so that birds fleeing danger produce wing sounds with a higher tempo. Critically, a playback experiment revealed that only if P8 is present does the sound of escape flight signal danger. Our results therefore indicate, nearly 150 years after Darwin's book, that modified feathers can be used for non-vocal communication, and they reveal an intrinsically reliable alarm signal. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Cross-Modal Associations between Sounds and Drink Tastes/Textures: A Study with Spontaneous Production of Sound-Symbolic Words.

    Science.gov (United States)

    Sakamoto, Maki; Watanabe, Junji

    2016-03-01

    Many languages have a word class whose speech sounds are linked to sensory experiences. Several recent studies have demonstrated cross-modal associations (or correspondences) between sounds and gustatory sensations by asking participants to match predefined sound-symbolic words (e.g., "maluma/takete") with the taste/texture of foods. Here, we further explore cross-modal associations using the spontaneous production of words and semantic ratings of sensations. In the experiment, after drinking liquids, participants were asked to express their taste/texture using Japanese sound-symbolic words, and at the same time, to evaluate it in terms of criteria expressed by adjectives. Because the Japanese language has a large vocabulary of sound-symbolic words, and Japanese people frequently use them to describe taste/texture, analyzing a variety of Japanese sound-symbolic words spontaneously produced to express taste/textures might enable us to explore the mechanism of taste/texture categorization. A hierarchical cluster analysis based on the relationship between linguistic sounds and taste/texture evaluations revealed the structure of sensation categories. The results indicate that an emotional evaluation like pleasant/unpleasant is the primary cluster in gustation. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Kotaro Hoshiba

    2017-11-01

    Full Text Available In search and rescue activities, unmanned aerial vehicles (UAVs) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of the SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  19. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments.

    Science.gov (United States)

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G

    2017-11-03

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  20. Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA

    Science.gov (United States)

    Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng

    2011-12-01

    The impact of a high pressure water-jet on targets of different materials produces different reflective mixed sounds. In order to accurately reconstruct the distribution of the reflective sound signals along the linear detection line and to separate the environmental noise effectively, the mixed sound signals acquired by a linear microphone array were processed by ICA. The basic principle of ICA and the FastICA algorithm are described in detail. A simulation experiment was designed. The environmental noise signal was simulated using band-limited white noise and the reflective sound signal was simulated using a pulse signal. The attenuation of the reflective sound signal produced by transmission over different distances was simulated by weighting the sound signal with different coefficients. The mixed sound signals acquired by the linear microphone array were synthesized from the above simulated signals and were whitened and separated by ICA. The final results verified that separation of the environmental noise and reconstruction of the sound distribution along the detection line can be realized effectively.
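
    A hedged sketch of the blind-source-separation step described above, using the FastICA implementation in scikit-learn rather than the authors' own code; the two "sources" are synthetic stand-ins for band-limited noise and a pulse-like reflective signal, and the mixing weights are arbitrary:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)

# synthetic stand-ins: broadband noise and a gated, pulse-like tone
noise = rng.normal(scale=0.5, size=t.size)
pulses = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 5 * t) > 0.95)
sources = np.c_[noise, pulses]

# mix the sources with different weights, mimicking microphones at different distances
mixing = np.array([[1.0, 0.6],
                   [0.4, 1.0]])
mixed = sources @ mixing.T

# recover statistically independent components from the mixtures
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)
print(recovered.shape)  # (4000, 2): one column per estimated source
```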

  1. ‘And she flies! Beautiful’: the dislocating geography of football sound

    Directory of Open Access Journals (Sweden)

    Margaret Trail

    2013-10-01

    Full Text Available The overarching interest of this paper is in articulating the affective conditions of football's play. It undertakes this through a consideration of the sonorous dimension of football, mapping its sounds across a framework borrowed from recent writings on sound-art and sonic geography. Specifically, it considers a continuum articulated by Will Scrimshaw (in relation to sound art exploring spatial notions) between sounds-of-place and sound-as-a-place. It then places sounds produced in football-play across this continuum, to see whether football's sonic practices can be more finely articulated through doing so, and might in turn shed light on its affective conditions.

  2. Urban sound energy reduction by means of sound barriers

    Science.gov (United States)

    Iordache, Vlad; Ionita, Mihai Vlad

    2018-02-01

    In the urban environment, various heating, ventilation and air conditioning appliances designed to maintain indoor comfort become vectors of urban acoustic pollution due to the sound energy produced by this equipment. Acoustic barriers are the recommended method for sound energy reduction in the urban environment. The current sizing method for these acoustic barriers is too cumbersome and is not practical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a new simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses for acoustic barrier sizing are built that can be used for different 3D locations of the source and reception points, for several frequencies and several acoustic barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.

  3. Urban sound energy reduction by means of sound barriers

    Directory of Open Access Journals (Sweden)

    Iordache Vlad

    2018-01-01

    Full Text Available In the urban environment, various heating, ventilation and air conditioning appliances designed to maintain indoor comfort become vectors of urban acoustic pollution due to the sound energy produced by this equipment. Acoustic barriers are the recommended method for sound energy reduction in the urban environment. The current sizing method for these acoustic barriers is too cumbersome and is not practical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a new simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses for acoustic barrier sizing are built that can be used for different 3D locations of the source and reception points, for several frequencies and several acoustic barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.

  4. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

    Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time series event generated by the heart's mechanical system. From the analysis of the heart sound S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled. This greatly helps the physician to diagnose the heart problem. The results show that most known heart sounds were successfully detected. There are some murmur cases where the detection failed. This can be improved by adding more heuristics, including setting some initial parameters such as the noise threshold accurately, and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.

  5. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    .... This thesis presents an integrated auditory system for a humanoid robot, currently under development, that will, among other things, learn to localize normal, everyday sounds in a realistic environment...

  6. Measuring the speed of sound in air using smartphone applications

    Science.gov (United States)

    Yavuz, A.

    2015-05-01

    This study presents a revised version of an old experiment available in many textbooks for measuring the speed of sound in air. A signal-generator application in a smartphone is used to produce the desired sound frequency. Nodes of sound waves in a glass pipe, of which one end is immersed in water, are more easily detected, so results can be obtained more quickly than from traditional acoustic experiments using tuning forks.
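
    The underlying relation is standard resonance-tube physics rather than anything specific to this record: the air column above the water resonates when its length is an odd multiple of a quarter wavelength, so the speed of sound follows from the generator frequency and the spacing between successive resonance lengths:

```latex
L_{n} = (2n-1)\,\frac{\lambda}{4},
\qquad \Delta L = L_{n+1} - L_{n} = \frac{\lambda}{2},
\qquad v = f\,\lambda = 2 f\,\Delta L
```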

  7. Thermo-active building systems and sound absorbers: Thermal comfort under real operation conditions

    DEFF Research Database (Denmark)

    Köhler, Benjamin; Rage, Nils; Chigot, Pierre

    2018-01-01

    Radiant systems are established today and have a high ecological potential in buildings while ensuring thermal comfort. Free-hanging sound absorbers are commonly used for room acoustic control, but can reduce the heat exchange when suspended under an active slab. The aim of this study is to evaluate the impact on thermal comfort of horizontal and vertical free-hanging porous sound absorbers placed in rooms of a building cooled by a Thermo-Active Building System (TABS), under real operation conditions. A design comparing five different ceiling coverage ratios and two room types has been implemented during three measurement periods. A clear correlation between increase of ceiling coverage ratio and reduction of thermal comfort could not be derived systematically for each measurement period and room type, contrary to what was expected from the literature. In the first two monitoring periods

  8. Evolutionary Sound Synthesis Controlled by Gestural Data

    Directory of Open Access Journals (Sweden)

    Jose Fornari

    2011-05-01

    Full Text Available This article focuses on the interdisciplinary research involving Computer Music and Generative Visual Art. We describe the implementation of two interactive artistic systems based on principles of Gestural Data (WILSON, 2002) retrieval and self-organization (MORONI, 2003), to control an Evolutionary Sound Synthesis method (ESSynth). The first implementation uses, as gestural data, image mapping of handmade drawings. The second one uses gestural data from dynamic body movements of dance. The resulting computer output is generated by an interactive system implemented in Pure Data (PD). This system uses principles of Evolutionary Computation (EC), which yields the generation of a synthetic adaptive population of sound objects. Considering that music could be seen as “organized sound”, the contribution of our study is to develop a system that aims to generate "self-organized sound" – a method that uses evolutionary computation to bridge between gesture, sound and music.

  9. Combined multibeam and bathymetry data from Rhode Island Sound and Block Island Sound: a regional perspective

    Science.gov (United States)

    Poppe, Lawrence J.; McMullen, Katherine Y.; Danforth, William W.; Blankenship, Mark R.; Clos, Andrew R.; Glomb, Kimberly A.; Lewit, Peter G.; Nadeau, Megan A.; Wood, Douglas A.; Parker, Castleton E.

    2014-01-01

    Detailed bathymetric maps of the sea floor in Rhode Island and Block Island Sounds are of great interest to the New York, Rhode Island, and Massachusetts research and management communities because of this area's ecological, recreational, and commercial importance. Geologically interpreted digital terrain models from individual surveys provide important benthic environmental information, yet many applications of this information require a geographically broader perspective. For example, individual surveys are of limited use for the planning and construction of cross-sound infrastructure, such as cables and pipelines, or for the testing of regional circulation models. To address this need, we integrated 14 contiguous multibeam bathymetric datasets that were produced by the National Oceanic and Atmospheric Administration during charting operations into one digital terrain model that covers much of Block Island Sound and extends eastward across Rhode Island Sound. The new dataset, which covers over 1244 square kilometers, is adjusted to mean lower low water, gridded to 4-meter resolution, and provided in Universal Transverse Mercator Zone 19, North American Datum of 1983 and geographic World Geodetic Survey of 1984 projections. This resolution is adequate for sea-floor feature and process interpretation but is small enough to be queried and manipulated with standard Geographic Information System programs and to allow for future growth. Natural features visible in the data include boulder lag deposits of winnowed Pleistocene strata, sand-wave fields, and scour depressions that reflect the strength of oscillating tidal currents and scour by storm-induced waves. Bedform asymmetry allows interpretations of net sediment transport. Anthropogenic features visible in the data include shipwrecks and dredged channels. Together the merged data reveal a larger, more continuous perspective of bathymetric topography than previously available, providing a fundamental framework for

  10. Dependence of sound characteristics on the bowing position in a violin

    Science.gov (United States)

    Roh, YuJi; Kim, Young H.

    2014-12-01

    A quantitative analysis of violin sounds produced for different bowing positions over the full length of a violin string has been carried out. An automated bowing machine was employed in order to keep the bowing parameters constant. A 3-dimensional profile of the frequency spectrum was introduced in order to characterize the violin's sound. We found that the fundamental frequency did not change for different bowing positions, whereas the frequencies of the higher harmonics were different. Bowing the string at 30 mm from the bridge produced musical sounds. The middle of the string was confirmed to be a dead zone, as reported in previous works. In addition, the quarter position was also found to be a dead zone. Bowing the string 90 mm from the bridge dominantly produces a fundamental frequency of 864 Hz and its harmonics.

  11. Data Analysis of the TK-1G Sounding Rocket Installed with a Satellite Navigation System

    Directory of Open Access Journals (Sweden)

    Lesong Zhou

    2017-10-01

    Full Text Available This article gives an in-depth analysis of the experimental data of the TK-1G sounding rocket installed with the satellite navigation system. It turns out that the data acquisition rate of the rocket sonde is high, making the collection of complete trajectory and meteorological data possible. By comparing the rocket sonde measurements with those obtained by virtue of other methods, we find that the rocket sonde can be relatively precise in measuring atmospheric parameters within the scope of 20–60 km above the ground. This establishes the fact that the TK-1G sounding rocket system is effective in detecting near-space atmospheric environment.

  12. Sound of proteins

    DEFF Research Database (Denmark)

    2007-01-01

    In my group we work with Molecular Dynamics to model several different proteins and protein systems. We submit our modelled molecules to changes in temperature, changes in solvent composition and even external pulling forces. To analyze our simulation results we have so far used visual inspection and statistical analysis of the resulting molecular trajectories (as everybody else!). However, recently I started assigning a particular sound frequency to each amino acid in the protein, and by setting the amplitude of each frequency according to the movement amplitude we can "hear" whenever two amino acids...... An example sound file was obtained from using Steered Molecular Dynamics for stretching the neck region of the scallop myosin molecule (in rigor, PDB-id: 1SR6), in such a way as to cause a rotation of the myosin head. Myosin is the molecule responsible for producing the force during muscle contraction...

  13. The Influence of Sanskrit on the Japanese Sound Systems.

    Science.gov (United States)

    Buck, James H.

    The Japanese syllabary of today would probably not exist in its present arrangement had it not been for Sanskrit studies in Japan. Scholars of ancient Japan extracted from the Devanagari those sounds which corresponded to sounds in Japanese and arranged the Japanese syllabary in the devanagari order. First appearing in a document dated 1204, this…

  14. The sound and the fury--bees hiss when expecting danger.

    Science.gov (United States)

    Wehmann, Henja-Niniane; Gustav, David; Kirkerud, Nicholas H; Galizia, C Giovanni

    2015-01-01

    Honey bees are important model systems for the investigation of learning and memory and for a better understanding of the neuronal basics of brain function. Honey bees also possess a rich repertoire of tones and sounds, from queen piping and quacking to worker hissing and buzzing. In this study, we tested whether the worker bees' sounds can be used as a measure of learning. We therefore conditioned honey bees aversively to odours in a walking arena and recorded both their sound production and their movement. Bees were presented with two odours, one of which was paired with an electric shock. Initially, the bees did not produce any sound upon odour presentation, but responded to the electric shock with a strong hissing response. After learning, many bees hissed at the presentation of the learned odour, while fewer bees hissed upon presentation of another odour. We also found that hissing and movement away from the conditioned odour are independent behaviours that can co-occur but do not necessarily do so. Our data suggest that hissing can be used as a readout for learning after olfactory conditioning, but that there are large individual differences between bees concerning their hissing reaction. The basis for this variability and the possible ecological relevance of the bees' hissing remain to be investigated.

  15. Focused sound: oncological therapy for transformed tissue

    International Nuclear Information System (INIS)

    Mares, C. E.; Cordova F, T.; Hernandez, A.

    2017-10-01

    The restlessness of the human being involves observing and being critical through the senses; in particular, a disturbance in the environment causes vibrations that can be registered by the sense of hearing through the eardrum, if what is produced lies in the frequency range of audible sound. What distinguishes sound from other forms of energy transfer is that its waves involve the progressive return of displacements or vibrations of the molecules in the medium through which it propagates. In this work a sweep of frequencies was made, from infrasound to ultrasound, in plants of different types with different thicknesses and in two people, in order to find the resonance of each of them and compare it with the resonances reported in the literature. This allowed evaluation of the secondary effect of sound focused on the tissue of the leaves and, in particular, of the people. We consider that there is potential for this focused-sound modality, if it is at the resonance frequency of the transformed tissue, as a means of oncological therapy without affecting the neighboring cells. (Author)

  16. Integrated Human Factors Design Guidelines for Sound Interface

    International Nuclear Information System (INIS)

    Lee, Jung Woon; Lee, Yong Hee; Oh, In Seok; Lee, Hyun Chul; Cha, Woo Chang

    2004-05-01

    Digital MMI, such as CRT, LCD etc., has been used increasingly in the design of main control room of the Korean standard nuclear power plants following the YGN units 3 and 4. The utilization of digital MMI may introduce various kind of sound interface into the control room design. In this project, for five top-level guideline items, including Sound Formats, Alarms, Sound Controls, Communications, and Environments, a total of 147 detail guidelines were developed and a database system for these guidelines was developed. The integrated human factors design guidelines for sound interface and the database system developed in this project will be useful for the design of sound interface of digital MMI in Korean NPPs

  17. Integrated Human Factors Design Guidelines for Sound Interface

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Woon; Lee, Yong Hee; Oh, In Seok; Lee, Hyun Chul [KAERI, Daejeon (Korea, Republic of); Cha, Woo Chang [Kumoh National Univ. of Technology, Gumi (Korea, Republic of)

    2004-05-15

    Digital MMI, such as CRT, LCD etc., has been used increasingly in the design of the main control rooms of the Korean standard nuclear power plants following the YGN units 3 and 4. The utilization of digital MMI may introduce various kinds of sound interfaces into the control room design. In this project, for five top-level guideline items, including Sound Formats, Alarms, Sound Controls, Communications, and Environments, a total of 147 detailed guidelines were developed, along with a database system for these guidelines. The integrated human factors design guidelines for sound interface and the database system developed in this project will be useful for the design of sound interfaces for digital MMI in Korean NPPs.

  18. Sound signatures and production mechanisms of three species of pipefishes (Family: Syngnathidae)

    Directory of Open Access Journals (Sweden)

    Adam Chee Ooi Lim

    2015-12-01

    Full Text Available Background. Syngnathid fishes produce three kinds of sounds, named click, growl and purr. These sounds are generated by different mechanisms to give a consistent signal pattern or signature which is believed to play a role in intraspecific and interspecific communication. Commonly known sounds are produced when the fish feeds (click, purr) or is under duress (growl). While there are more acoustic studies on seahorses, pipefishes have not received much attention. Here we document the differences in feeding click signals between three species of pipefishes and relate them to cranial morphology and kinesis, or the sound-producing mechanism. Methods. The feeding clicks of two species of freshwater pipefishes, Doryichthys martensii and Doryichthys deokhathoides, and one species of estuarine pipefish, Syngnathoides biaculeatus, were recorded by a hydrophone in acoustically dampened tanks. The acoustic signals were analysed using a time-scale distribution (or scalogram) based on the wavelet transform. A detailed time-varying analysis of the spectral contents of the localized acoustic signal was obtained by jointly interpreting the oscillogram, scalogram and power spectrum. The heads of both Doryichthys species were prepared for microtomographical scans which were analysed using 3D imaging software. Additionally, the cranial bones of all three species were examined using a clearing and double-staining method for histological studies. Results. The sound characteristics of the feeding click of the pipefish are species-specific, appearing to be dependent on three bones: the supraoccipital, 1st postcranial plate and 2nd postcranial plate. The sounds are generated when the head of the Doryichthys pipefishes flexes backward during the feeding strike, as the supraoccipital slides backwards, striking and pushing the 1st postcranial plate against (and striking) the 2nd postcranial plate. In the Syngnathoides pipefish, in the absence of the 1st postcranial plate, the

  19. Analysis of adventitious lung sounds originating from pulmonary tuberculosis.

    Science.gov (United States)

    Becker, K W; Scheffer, C; Blanckenberg, M M; Diacon, A H

    2013-01-01

    Tuberculosis is a common and potentially deadly infectious disease, usually affecting the respiratory system and causing the sound properties of symptomatic infected lungs to differ from non-infected lungs. Auscultation is often ruled out as a reliable diagnostic technique for TB due to the random distribution of the infection and the varying severity of damage to the lungs. However, advancements in signal processing techniques for respiratory sounds can improve the potential of auscultation far beyond the capabilities of the conventional mechanical stethoscope. Though computer-based signal analysis of respiratory sounds has produced a significant body of research, there have not been any recent investigations into the computer-aided analysis of lung sounds associated with pulmonary Tuberculosis (TB), despite the severity of the disease in many countries. In this paper, respiratory sounds were recorded from 14 locations around the posterior and anterior chest walls of healthy volunteers and patients infected with pulmonary TB. The most significant signal features in both the time and frequency domains associated with the presence of TB, were identified by using the statistical overlap factor (SOF). These features were then employed to train a neural network to automatically classify the auscultation recordings into their respective healthy or TB-origin categories. The neural network yielded a diagnostic accuracy of 73%, but it is believed that automated filtering of the noise in the clinics, more training samples and perhaps other signal processing methods can improve the results of future studies. This work demonstrates the potential of computer-aided auscultation as an aid for the diagnosis and treatment of TB.
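
    The processing chain described in this record, spectral features extracted from auscultation recordings, ranked by a class-separability measure, and fed to a small neural network, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the band-power features and the simple separability score stand in for the paper's feature set and statistical overlap factor, the sampling rate is assumed, and the synthetic recordings and labels exist only so the sketch runs end to end.

      # Minimal sketch: Welch band powers as features, a simple separability ranking,
      # and a small MLP classifier (synthetic stand-in data, assumed 4 kHz sampling rate).
      import numpy as np
      from scipy.signal import welch
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      FS = 4000  # assumed sampling rate, Hz

      def band_powers(x, fs=FS, bands=((50, 200), (200, 400), (400, 800), (800, 1600))):
          """Log power of one recording in a few frequency bands."""
          f, pxx = welch(x, fs=fs, nperseg=1024)
          return np.array([np.log(pxx[(f >= lo) & (f < hi)].sum() + 1e-12) for lo, hi in bands])

      def separability(features, labels):
          """Crude per-feature class-separability score (stand-in for the SOF)."""
          a, b = features[labels == 0], features[labels == 1]
          return (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-12)

      rng = np.random.default_rng(0)
      recordings = [rng.standard_normal(FS * 5) * (1.0 + 0.3 * (i % 2)) for i in range(40)]
      labels = np.array([i % 2 for i in range(40)])           # 0 = healthy, 1 = TB (synthetic)

      X = np.vstack([band_powers(r) for r in recordings])
      keep = np.argsort(separability(X, labels))[-3:]         # keep the most separable features
      Xtr, Xte, ytr, yte = train_test_split(X[:, keep], labels, test_size=0.3, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(Xtr, ytr)
      print("held-out accuracy:", clf.score(Xte, yte))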

  20. Study of the Acoustic Effects of Hydrokinetic Tidal Turbines in Admiralty Inlet, Puget Sound

    Energy Technology Data Exchange (ETDEWEB)

    Brian Polagye; Jim Thomson; Chris Bassett; Jason Wood; Dom Tollit; Robert Cavagnaro; Andrea Copping

    2012-03-30

    community as to whether strong currents produce propagating sound. (2) Analyzed data collected from a tidal turbine operating at the European Marine Energy Center to develop a profile of turbine sound and developed a framework to evaluate the acoustic effects of deploying similar devices in other locations. This framework has been applied to Public Utility District No. 1 of Snohomish County's demonstration project in Admiralty Inlet to inform post-installation acoustic and marine mammal monitoring plans. (3) Demonstrated passive acoustic techniques to characterize the ambient noise environment at tidal energy sites (fixed, long-term observations recommended) and characterize the sound from anthropogenic sources (drifting, short-term observations recommended). (4) Demonstrated the utility and limitations of instrumentation, including bottom mounted instrumentation packages, infrared cameras, and vessel monitoring systems. In doing so, also demonstrated how this type of comprehensive information is needed to interpret observations from each instrument (e.g., hydrophone data can be combined with vessel tracking data to evaluate the contribution of vessel sound to ambient noise). (5) Conducted a study that suggests harbor porpoise in Admiralty Inlet may be habituated to high levels of ambient noise due to omnipresent vessel traffic. The inability to detect behavioral changes associated with a high intensity source of opportunity (passenger ferry) has informed the approach for post-installation marine mammal monitoring. (6) Conducted laboratory exposure experiments of juvenile Chinook salmon and showed that exposure to a worse than worst case acoustic dose of turbine sound does not result in changes to hearing thresholds or biologically significant tissue damage. Collectively, this means that Chinook salmon may be at a relatively low risk of injury from sound produced by tidal turbines located in or near their migration path. In achieving these accomplishments, the project

  1. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2015-01-01

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearingimpaired listener, but can only partially reveal the mechanisms...... that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing impaired perception for sound textures – temporally...

  2. Analysis of sound pressure levels emitted by children's toys.

    Science.gov (United States)

    Sleifer, Pricila; Gonçalves, Maiara Santos; Tomasi, Marinês; Gomes, Erissandra

    2013-06-01

    To verify the levels of sound pressure emitted by non-certified children's toys. Cross-sectional study of sound toys available at popular retail stores of the so-called informal sector. Electronic, mechanical, and musical toys were analyzed. The measurement of each product was carried out by an acoustic engineer in an acoustically isolated booth, using a decibel meter. To obtain the sound parameters of intensity and frequency, the toys were set to produce sounds at distances of 10 and 50 cm from the researcher's ear. The intensity of sound pressure [dB(A)] and the frequency in hertz (Hz) were measured. 48 toys were evaluated. The mean sound pressure at 10 cm from the ear was 102±10 dB(A), and at 50 cm, 94±8 dB(A); the sound pressure of the toys was above 85 dB(A). The frequency ranged from 413 to 6,635 Hz, with 56.3% of toys emitting frequencies higher than 2,000 Hz. The majority of toys assessed in this research emitted a high level of sound pressure.
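
    To relate the two measurement distances, the free-field point-source relation L2 = L1 - 20·log10(d2/d1) gives a first-order expectation; real toys, ears and rooms deviate from it (the study measured a mean of 94 dB(A) at 50 cm rather than the roughly 88 dB(A) the formula predicts), so the sketch below is only an illustrative check, not part of the study's method.

      # Free-field estimate of how sound pressure level falls with distance
      # (ideal point source; real toys and rooms deviate from this).
      import math

      def spl_at_distance(spl_ref_db, d_ref_cm, d_cm):
          """Predicted SPL at d_cm given a reference SPL at d_ref_cm."""
          return spl_ref_db - 20.0 * math.log10(d_cm / d_ref_cm)

      # With the reported mean of 102 dB(A) at 10 cm, the ideal prediction at 50 cm:
      print(round(spl_at_distance(102, 10, 50), 1), "dB(A)")   # ~88 dB(A); the measured mean was 94 dB(A)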

  3. Validation of a Perceptual Distraction Model in a Complex Personal Sound Zone System

    DEFF Research Database (Denmark)

    Rämö, Jussi; Marsh, Steven; Bech, Søren

    2016-01-01

    This paper evaluates a previously proposed perceptual model predicting user’s perceived distraction caused by interfering audio programmes. The distraction model was originally trained using a simple sound reproduction system for music-on-music interference situations and it has not been formally...

  4. Selective attention to sound location or pitch studied with fMRI.

    Science.gov (United States)

    Degerman, Alexander; Rinne, Teemu; Salmi, Juha; Salonen, Oili; Alho, Kimmo

    2006-03-10

    We used 3-T functional magnetic resonance imaging to compare the brain mechanisms underlying selective attention to sound location and pitch. In different tasks, the subjects (N = 10) attended to a designated sound location or pitch or to pictures presented on the screen. In the Attend Location conditions, the sound location varied randomly (left or right), while the pitch was kept constant (high or low). In the Attend Pitch conditions, sounds of randomly varying pitch (high or low) were presented at a constant location (left or right). Both attention to location and attention to pitch produced enhanced activity (in comparison with activation caused by the same sounds when attention was focused on the pictures) in widespread areas of the superior temporal cortex. Attention to either sound feature also activated prefrontal and inferior parietal cortical regions. These activations were stronger during attention to location than during attention to pitch. Attention to location but not to pitch produced a significant increase of activation in the premotor/supplementary motor cortices of both hemispheres and in the right prefrontal cortex, while no area showed activity specifically related to attention to pitch. The present results suggest some differences in the attentional selection of sounds on the basis of their location and pitch consistent with the suggested auditory "what" and "where" processing streams.

  5. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signal. Sound algorithms appeared in the very infancy of computer. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  6. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments †

    Science.gov (United States)

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.

    2017-01-01

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790

  7. Second sound scattering in superfluid helium

    International Nuclear Information System (INIS)

    Rosgen, T.

    1985-01-01

    Focusing cavities are used to study the scattering of second sound in liquid helium II. The special geometries reduce wall interference effects and allow measurements in very small test volumes. In a first experiment, a double elliptical cavity is used to focus a second sound wave onto a small wire target. A thin film bolometer measures the side scattered wave component. The agreement with a theoretical estimate is reasonable, although some problems arise from the small measurement volume and associated alignment requirements. A second cavity is based on confocal parabolas, thus enabling the use of large planar sensors. A cylindrical heater again produces a focused second sound wave. Three sensors monitor the transmitted wave component as well as the side scatter in two different directions. The side looking sensors have very high sensitivities due to their large size and resistance. Specially developed cryogenic amplifiers are used to match them to the signal cables. In one case, a second auxiliary heater is used to set up a strong counterflow in the focal region. The second sound wave then scatters from the induced fluid disturbances.

  8. Sexual dimorphism of sonic apparatus and extreme intersexual variation of sounds in Ophidion rochei (Ophidiidae): first evidence of a tight relationship between morphology and sound characteristics in Ophidiidae

    Directory of Open Access Journals (Sweden)

    Kéver Loïc

    2012-12-01

    Full Text Available Abstract Background Many Ophidiidae are active in dark environments and display complex sonic apparatus morphologies. However, sound recordings are scarce and little is known about acoustic communication in this family. This paper focuses on Ophidion rochei which is known to display an important sexual dimorphism in swimbladder and anterior skeleton. The aims of this study were to compare the sound producing morphology, and the resulting sounds in juveniles, females and males of O. rochei. Results Males, females, and juveniles possessed different morphotypes. Females and juveniles contrasted with males because they possessed dramatic differences in morphology of their sonic muscles, swimbladder, supraoccipital crest, and first vertebrae and associated ribs. Further, they lacked the ‘rocker bone’ typically found in males. Sounds from each morphotype were highly divergent. Males generally produced non harmonic, multiple-pulsed sounds that lasted for several seconds (3.5 ± 1.3 s) with a pulse period of ca. 100 ms. Juvenile and female sounds were recorded for the first time in ophidiids. Female sounds were harmonic, had shorter pulse period (±3.7 ms), and never exceeded a few dozen milliseconds (18 ± 11 ms). Moreover, unlike male sounds, female sounds did not have alternating long and short pulse periods. Juvenile sounds were weaker but appear to be similar to female sounds. Conclusions Although it is not possible to distinguish externally male from female in O. rochei, they show a sonic apparatus and sounds that are dramatically different. This difference is likely due to their nocturnal habits that may have favored the evolution of internal secondary sexual characters that help to distinguish males from females and that could facilitate mate choice by females. Moreover, the comparison of different morphotypes in this study shows that these morphological differences result from a peramorphosis that takes place during the development of

  9. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  10. A noisy spring: the impact of globally rising underwater sound levels on fish.

    Science.gov (United States)

    Slabbekoorn, Hans; Bouton, Niels; van Opzeeland, Ilse; Coers, Aukje; ten Cate, Carel; Popper, Arthur N

    2010-07-01

    The underwater environment is filled with biotic and abiotic sounds, many of which can be important for the survival and reproduction of fish. Over the last century, human activities in and near the water have increasingly added artificial sounds to this environment. Very loud sounds of relatively short exposure, such as those produced during pile driving, can harm nearby fish. However, more moderate underwater noises of longer duration, such as those produced by vessels, could potentially impact much larger areas, and involve much larger numbers of fish. Here we call attention to the urgent need to study the role of sound in the lives of fish and to develop a better understanding of the ecological impact of anthropogenic noise. Copyright 2010 Elsevier Ltd. All rights reserved.

  11. Sound spectrum of a pulsating optical discharge

    Energy Technology Data Exchange (ETDEWEB)

    Grachev, G N; Smirnov, A L; Tishchenko, V N [Institute of Laser Physics, Siberian Branch, Russian Academy of Sciences, Novosibirsk (Russian Federation); Dmitriev, A K; Miroshnichenko, I B [Novosibirsk State Technical University (Russian Federation)

    2016-02-28

    A spectrum of sound of an optical discharge generated by repetitively pulsed (RP) laser radiation has been investigated. The parameters of laser radiation are determined at which the spectrum of sound may contain either many lines, or the main line at the pulse repetition rate and several weaker overtones, or a single line. The spectrum of sound produced by trains of RP radiation comprises the line (and overtones) at the repetition rate of train sequences and the line at the repetition rate of pulses in trains. A CO2 laser with a pulse repetition rate of f ≈ 3 – 180 kHz and an average power of up to 2 W was used in the experiments. (optical discharges)
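
    The reported line structure, a fundamental at the pulse repetition rate plus weaker overtones, and additional lines at the train repetition rate for burst operation, is what the spectrum of any periodic pulse train shows. The sketch below illustrates this on a synthetic pulse train; the sample rate, repetition rate and duty cycle are made-up values, not the experimental parameters.

      # Spectrum of a synthetic repetitively pulsed signal: spectral lines appear at
      # the pulse repetition rate and its overtones (all parameters are illustrative).
      import numpy as np

      fs = 200_000                                             # sample rate, Hz
      t = np.arange(0, 0.1, 1 / fs)
      f_rep = 10_000                                           # pulse repetition rate, Hz
      pulse_train = ((t * f_rep) % 1.0 < 0.25).astype(float)   # 25 % duty-cycle pulses

      spec = np.abs(np.fft.rfft(pulse_train))
      freqs = np.fft.rfftfreq(len(pulse_train), 1 / fs)
      strongest = freqs[np.argsort(spec)[-6:]]                 # six strongest spectral lines
      print(sorted(round(f) for f in strongest))               # DC plus multiples of the 10 kHz repetition rate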

  12. Sound production and pectoral spine locking in a Neotropical catfish (Iheringichthys labrosus, Pimelodidae)

    Directory of Open Access Journals (Sweden)

    Javier S. Tellechea

    Full Text Available Catfishes may have two sonic organs: pectoral spines for stridulation and swimbladder drumming muscles. The aim of this study was to characterize the sound production of the catfish Iheringichthys labrosus. Both male and female I. labrosus emit two different types of sounds: stridulatory sounds (655.8 ± 230 Hz), consisting of a train of pulses, and drumming sounds (220 ± 46 Hz), which are composed of single-pulse harmonic signals. Stridulatory sounds are emitted during abduction of the pectoral spine. At the base of the spine there is a dorsal process that bears a series of ridges on its latero-ventral surface, and by pressing the ridges against the groove (with an unspecialized rough surface) during a fin sweep, the animal produces a series of short pulses. Drumming sound is produced by an extrinsic sonic muscle, which originates on a flat tendon of the transverse process of the fourth vertebra and inserts on the rostral and ventral surface of the swimbladder. Sounds from both mechanisms are emitted in distress situations. Distress was induced by manipulating fish in a laboratory tank while sounds were recorded. Our results indicate that the catfish initially emits a stridulatory sound, which is followed by a drumming sound. Simultaneous production of stridulatory and drumming sounds was also observed. The catfish drumming sounds were lower in dominant frequency than the stridulatory sounds, and also exhibited a small degree of dominant frequency modulation. Another behaviour observed in this catfish was pectoral spine locking. This reaction was always observed before the distress sound production. As other authors have outlined, our results suggest that in I. labrosus stridulatory and drumming sounds may function primarily as a distress call.

  13. Cortical representations of communication sounds.

    Science.gov (United States)

    Heiser, Marc A; Cheung, Steven W

    2008-10-01

    This review summarizes recent research into cortical processing of vocalizations in animals and humans. There has been a resurgent interest in this topic accompanied by an increased number of studies using animal models with complex vocalizations and new methods in human brain imaging. Recent results from such studies are discussed. Experiments have begun to reveal the bilateral cortical fields involved in communication sound processing and the transformations of neural representations that occur among those fields. Advances have also been made in understanding the neuronal basis of interaction between developmental exposures and behavioral experiences with vocalization perception. Exposure to sounds during the developmental period produces large effects on brain responses, as do a variety of specific trained tasks in adults. Studies have also uncovered a neural link between the motor production of vocalizations and the representation of vocalizations in cortex. Parallel experiments in humans and animals are answering important questions about vocalization processing in the central nervous system. This dual approach promises to reveal microscopic, mesoscopic, and macroscopic principles of large-scale dynamic interactions between brain regions that underlie the complex phenomenon of vocalization perception. Such advances will yield a greater understanding of the causes, consequences, and treatment of disorders related to speech processing.

  14. Ultra-thin smart acoustic metasurface for low-frequency sound insulation

    Science.gov (United States)

    Zhang, Hao; Xiao, Yong; Wen, Jihong; Yu, Dianlong; Wen, Xisen

    2016-04-01

    Insulating low-frequency sound is a conventional challenge due to the high areal mass required by the mass law. In this letter, we propose a smart acoustic metasurface consisting of an ultra-thin aluminum foil bonded with piezoelectric resonators. Numerical and experimental results show that the metasurface can break the conventional mass law of sound insulation by 30 dB in the low-frequency regime. The sound insulation performance is attributed to the infinite effective dynamic mass density produced by the smart resonators. It is also demonstrated that the excellent sound insulation property can be conveniently tuned by simply adjusting the external circuits instead of modifying the structure of the metasurface.
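
    To see why low-frequency insulation is costly under the mass law, the textbook normal-incidence estimate TL ≈ 20·log10(π·f·m″/ρc) can be inverted for the areal mass m″ needed to reach a target transmission loss. The sketch below uses this standard approximation, not the paper's metasurface model.

      # Normal-incidence mass-law estimate and the areal mass it demands at low frequency.
      import math

      RHO_C = 415.0   # characteristic impedance of air, Pa*s/m

      def mass_law_tl(f_hz, m_kg_m2):
          """Approximate normal-incidence transmission loss of a limp panel, in dB."""
          return 20.0 * math.log10(math.pi * f_hz * m_kg_m2 / RHO_C)

      def mass_for_tl(f_hz, tl_db):
          """Areal mass (kg/m^2) needed to reach tl_db at f_hz under the same approximation."""
          return RHO_C * 10.0 ** (tl_db / 20.0) / (math.pi * f_hz)

      print(round(mass_law_tl(100, 10), 1), "dB for 10 kg/m^2 at 100 Hz")        # ~17.6 dB
      print(round(mass_for_tl(100, 40), 1), "kg/m^2 needed for 40 dB at 100 Hz")  # ~132 kg/m^2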

  15. A Real Time Differential GPS Tracking System for NASA Sounding Rockets

    Science.gov (United States)

    Bull, Barton; Bauer, Frank (Technical Monitor)

    2000-01-01

    Sounding rockets are suborbital launch vehicles capable of carrying scientific payloads to several hundred miles in altitude. These missions return a variety of scientific data including: chemical makeup and physical processes taking place in the atmosphere, natural radiation surrounding the Earth, data on the Sun, stars, galaxies and many other phenomena. In addition, sounding rockets provide a reasonably economical means of conducting engineering tests for instruments and devices to be used on satellites and other spacecraft prior to their use in these more expensive missions. Typically around thirty of these rockets are launched each year, from established ranges at Wallops Island, Virginia; Poker Flat Research Range, Alaska; White Sands Missile Range, New Mexico and from a number of ranges outside the United States. Many times launches are conducted from temporary launch ranges in remote parts of the world requiring considerable expense to transport and operate tracking radars. In order to support these missions, an inverse differential GPS system has been developed. The flight system consists of a small, inexpensive receiver, a preamplifier and a wrap-around antenna. A rugged, compact, portable ground station extracts GPS data from the raw payload telemetry stream, performs a real time differential solution and graphically displays the rocket's path relative to a predicted trajectory plot. In addition to generating a real time navigation solution, the system has been used for payload recovery, timing, data timetagging, precise tracking of multiple payloads and slaving of optical tracking systems for over the horizon acquisition. This paper discusses, in detail, the flight and ground hardware, as well as data processing and operational aspects of the system, and provides evidence of the system accuracy.
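
    The differential step at the heart of such a system, computing per-satellite pseudorange corrections at a surveyed ground station and applying them to the rocket receiver's measurements before the position solution, can be sketched schematically as below. The satellite IDs and range values are hypothetical and the position solver itself is omitted; this is not the flight or ground software described in this record.

      # Schematic differential GPS correction: pseudorange errors observed at a surveyed
      # base station are subtracted from the rover's pseudoranges (hypothetical values).

      def pseudorange_corrections(base_measured_m, base_geometric_m):
          """Correction per satellite = measured minus true geometric range at the base."""
          return {sv: base_measured_m[sv] - base_geometric_m[sv] for sv in base_measured_m}

      def apply_corrections(rover_measured_m, corrections_m):
          """Corrected rover pseudoranges for satellites seen by both receivers."""
          return {sv: rover_measured_m[sv] - corrections_m[sv]
                  for sv in rover_measured_m if sv in corrections_m}

      base_measured  = {"G05": 21_000_123.4, "G12": 23_500_456.7, "G25": 20_800_789.0}
      base_geometric = {"G05": 21_000_101.1, "G12": 23_500_432.2, "G25": 20_800_765.5}
      rover_measured = {"G05": 21_120_200.0, "G12": 23_610_300.0, "G25": 20_910_400.0}

      corrections = pseudorange_corrections(base_measured, base_geometric)
      print(apply_corrections(rover_measured, corrections))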

  16. Brief report: sound output of infant humidifiers.

    Science.gov (United States)

    Royer, Allison K; Wilson, Paul F; Royer, Mark C; Miyamoto, Richard T

    2015-06-01

    The sound pressure levels (SPLs) of common infant humidifiers were determined to identify the likely sound exposure to infants and young children. This primary investigative research study was completed at a tertiary-level academic medical center otolaryngology and audiology laboratory. Five commercially available humidifiers were obtained from brick-and-mortar infant supply stores. Sound levels were measured at 20-, 100-, and 150-cm distances at all available humidifier settings. Two of 5 (40%) humidifiers tested had SPL readings greater than the recommended hospital infant nursery levels (50 dB) at distances up to 100 cm. In this preliminary study, it was demonstrated that humidifiers marketed for infant nurseries may produce appreciably high decibel levels. Further characterization of the effect of humidifier design on SPLs and further elucidation of ambient sound levels associated with hearing risk are necessary before definitive conclusions and recommendations can be made. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.

  17. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
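
    The stimulus preparation step described here, convolving a recorded sound with a left/right head-related impulse response pair before headphone playback, can be sketched as follows. The noise burst and the two HRIRs are placeholders that the reader must replace with real material and measured responses; this is not the exact processing chain of the study.

      # Render a mono sound for headphones by convolving it with a left/right
      # head-related impulse response (HRIR) pair; the HRIRs here are placeholders.
      import numpy as np
      from scipy.signal import fftconvolve

      fs = 44_100
      mono = np.random.default_rng(1).standard_normal(fs // 2)   # stand-in for a wood/bongo/Delta sound

      hrir_left = np.zeros(256);  hrir_left[10] = 1.0             # placeholder impulse responses:
      hrir_right = np.zeros(256); hrir_right[24] = 0.7            # later and weaker at the right ear

      left = fftconvolve(mono, hrir_left)
      right = fftconvolve(mono, hrir_right)
      binaural = np.stack([left, right], axis=1)                  # two-channel signal for headphones
      print(binaural.shape)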

  18. Experimental and Numerical Study of the Effects of Acoustic Sound Absorbers on the Cooling Performance of Thermally Active Building Systems

    DEFF Research Database (Denmark)

    Domínguez, L. Marcos; Kazanci, Ongun Berk; Rage, Nils

    2017-01-01

    Free-hanging horizontal and vertical sound absorbers are commonly used in buildings for room acoustic control; however, when these sound absorbers are used in combination with Thermally Active Building Systems, they will decrease the cooling performance of Thermally Active Building Systems...... and this will affect the thermal indoor environment in that space. Therefore, it is crucial to be able to quantify and model these effects in the design phase. This study quantifies experimentally the effects of horizontal and vertical free-hanging sound absorbers on the cooling performance of Thermally Active......%, respectively. With vertical sound absorbers, the decrease in cooling performance was 8%, 12%, and 14% for the corresponding cases, respectively. The numerical model predicted closely the cooling performance reduction, air temperatures and ceiling surface temperatures in most cases, while there were differences...

  19. Diversity of fish sound types in the Pearl River Estuary, China

    Directory of Open Access Journals (Sweden)

    Zhi-Tao Wang

    2017-10-01

    Full Text Available Background Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Methods Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. Results We identified a diversity of 66 types of fish sounds. In addition to single pulse, the sounds tended to have a pulse train structure. The pulses were characterized by an approximate 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse-peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were exclusive from each other, suggesting that they might be produced by different species. According to the literature, the two section signal types of 1 + 1 and 1 + N10 might belong to big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger’s croaker (J. belangerii). Discussion Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. Additionally, prey and predator
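
    The pulse-train descriptors used in this record, pulse peaks and the inter-pulse-peak interval (IPPI), are straightforward to compute once pulses are detected. The sketch below does this on a synthetic call with a 10 ms IPPI; the detection threshold, minimum peak spacing and sampling rate are assumptions that would need tuning on real recordings.

      # Detect pulse peaks in a call and compute inter-pulse-peak intervals (IPPI).
      import numpy as np
      from scipy.signal import find_peaks

      fs = 44_100
      t = np.arange(0, 0.2, 1 / fs)
      call = np.zeros_like(t)
      for t0 in np.arange(0.02, 0.18, 0.010):            # synthetic 10 ms IPPI, ~8 ms pulses
          idx = (t >= t0) & (t < t0 + 0.008)
          call[idx] += np.sin(2 * np.pi * 1500 * t[idx]) * np.hanning(idx.sum())

      envelope = np.abs(call)
      peaks, _ = find_peaks(envelope, height=0.5, distance=int(0.005 * fs))
      ippi_ms = np.diff(peaks) / fs * 1e3
      print("median IPPI: %.1f ms" % np.median(ippi_ms))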

  20. Technology for the Sound of Music

    Science.gov (United States)

    1994-01-01

    In the early 1960s during an industry recession, Kaman Aircraft lost several defense contracts. Forced to diversify, the helicopter manufacturer began to manufacture acoustic guitars. Kaman's engineers used special vibration analysis equipment based on aerospace technology. While a helicopter's rotor system is highly susceptible to vibration, which must be reduced or "dampened," vibration enhances a guitar's sound. After two years of vibration analysis Kaman produced an instrument, which is very successful. The Ovation guitar is made of fiberglass. It is stronger than the traditional rosewood and manufactured with adapted aircraft techniques such as jigs and fixtures, reducing labor and assuring quality and cost control. Kaman Music Corporation now has annual sales of $100 million.

  1. Optimal design of sound absorbing systems with microperforated panels

    Science.gov (United States)

    Kim, Nicholas Nakjoo

    As the development of technology makes economic prosperity and life more convenient, people now desire a higher quality of life. This quality of life is based not only on the convenience in their life but also on clean and eco-friendly environments. To meet that requirement, much research is being performed in many areas of eco-friendly technology, such as renewable energy, biodegradable content, and batteries for electric vehicles. This tendency is also obvious in the acoustics area, where there are continuing attempts to replace fiber-glass sound absorbers with fiber-free materials. The combination of microperforated panels (MPP) (one of the fiber-free sound absorbing materials), usually in the form of a thin panel with small holes, and an air backing may be one of the preferred solutions. These panels can be designed in many ways, and usually feature many small (sub-millimeter) holes and typically have surface porosities on the order of 1 percent. The detailed acoustical properties of MPPs depend on their hole shape, the hole diameter, the thickness of the panel, the overall porosity of the perforated film, the film's mass per unit area, and the depth of the backing air cavity. Together, these parameters control the absorption peak location and the magnitude of the absorption coefficient (and the magnitude of the transmission loss in barrier applications). By an appropriate choice of these parameters, good absorption performance can be achieved in a frequency range one or two octaves wide. That kind of solution may be adequate when it is necessary to control sound only in a specified frequency range (in the speech interference range, for example). However, in order to provide appropriate noise control solutions over a broader range of frequencies, it is necessary to design systems featuring multiple layers of MPPs, thus creating what amounts to a multi-degree-of-freedom system and so expanding the range over which good absorption can be obtained. In this research
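
    A single-layer version of such an absorber can be explored numerically with a lumped-impedance model: the perforation impedance of the panel in series with the backing-cavity reactance gives the normal-incidence absorption coefficient. The sketch below follows one common form of Maa's microperforated-panel model; the constants and end corrections differ slightly between published formulations and the geometry values are arbitrary, so treat it as indicative only.

      # Normal-incidence absorption of a microperforated panel (MPP) backed by an air
      # cavity, using a lumped-impedance model in the spirit of Maa's formulation.
      import numpy as np

      RHO, C, ETA = 1.21, 343.0, 1.8e-5        # air density, speed of sound, dynamic viscosity

      def mpp_absorption(f, d=0.4e-3, t=0.5e-3, sigma=0.01, cavity=0.05):
          """Absorption coefficient for hole diameter d, panel thickness t,
          porosity sigma and cavity depth `cavity` (SI units)."""
          w = 2 * np.pi * f
          k = d * np.sqrt(w * RHO / (4 * ETA))                            # perforation constant
          r = (32 * ETA * t) / (sigma * RHO * C * d**2) * (
              np.sqrt(1 + k**2 / 32) + np.sqrt(2) / 32 * k * d / t)       # normalized resistance
          xm = (w * t) / (sigma * C) * (
              1 + 1 / np.sqrt(9 + k**2 / 2) + 0.85 * d / t)               # normalized mass reactance
          x_cav = -1.0 / np.tan(w * cavity / C)                           # cavity reactance, -cot(kD)
          z = r + 1j * (xm + x_cav)                                       # total normalized impedance
          return 4 * z.real / ((1 + z.real) ** 2 + z.imag ** 2)

      f = np.linspace(100, 4000, 400)
      alpha = mpp_absorption(f)
      print("peak absorption %.2f at %.0f Hz" % (alpha.max(), f[np.argmax(alpha)]))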

  2. Characterization of Underwater Sounds Produced by a Backhoe Dredge Excavating Rock and Gravel

    Science.gov (United States)

    2012-12-01

    bathymetry, hydrodynamic conditions, prevalence of non-dredging ambient sounds), this study fills important knowledge gaps that contribute to better... Beaver Mackenzie, peak spectral levels were 122 dB at 190 m with a peak frequency of 120 Hz. Received levels in the 20- to 1000-Hz band were 133 dB

  3. Contingent sounds change the mental representation of one's finger length.

    Science.gov (United States)

    Tajadura-Jiménez, Ana; Vakali, Maria; Fairhurst, Merle T; Mandrigin, Alisa; Bianchi-Berthouze, Nadia; Deroy, Ophelia

    2017-07-18

    Mental body-representations are highly plastic and can be modified after brief exposure to unexpected sensory feedback. While the role of vision, touch and proprioception in shaping body-representations has been highlighted by many studies, the auditory influences on mental body-representations remain poorly understood. Changes in body-representations by the manipulation of natural sounds produced when one's body impacts on surfaces have recently been evidenced. But will these changes also occur with non-naturalistic sounds, which provide no information about the impact produced by or on the body? Drawing on the well-documented capacity of dynamic changes in pitch to elicit impressions of motion along the vertical plane and of changes in object size, we asked participants to pull on their right index fingertip with their left hand while they were presented with brief sounds of rising, falling or constant pitches, and in the absence of visual information of their hands. Results show an "auditory Pinocchio" effect, with participants feeling and estimating their finger to be longer after the rising pitch condition. These results provide the first evidence that sounds that are not indicative of veridical movement, such as non-naturalistic sounds, can induce a Pinocchio-like change in body-representation when arbitrarily paired with a bodily action.
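
    Stimuli of the rising, falling and constant-pitch kind used in this paradigm are easy to generate digitally; the sketch below uses arbitrary start and end frequencies and durations and does not reproduce the study's actual stimulus parameters.

      # Generate brief rising, falling and constant-pitch tones (arbitrary parameters).
      import numpy as np
      from scipy.signal import chirp

      fs = 44_100
      dur = 0.5
      t = np.linspace(0, dur, int(fs * dur), endpoint=False)
      fade = np.minimum(1.0, np.minimum(t, dur - t) / 0.02)    # 20 ms on/off ramps to avoid clicks

      rising   = chirp(t, f0=400, f1=800, t1=dur, method="logarithmic") * fade
      falling  = chirp(t, f0=800, f1=400, t1=dur, method="logarithmic") * fade
      constant = np.sin(2 * np.pi * 600 * t) * fade
      print(rising.shape, falling.shape, constant.shape)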

  4. An Analog I/O Interface Board for Audio Arduino Open Sound Card System

    DEFF Research Database (Denmark)

    Dimitrov, Smilen; Serafin, Stefania

    2011-01-01

    AudioArduino [1] is a system consisting of an ALSA (Advanced Linux Sound Architecture) audio driver and corresponding microcontroller code; that can demonstrate full-duplex, mono, 8-bit, 44.1 kHz soundcard behavior on an FTDI based Arduino. While the basic operation as a soundcard can...

  5. Vibrometry Assessment of the External Thermal Composite Insulation Systems Influence on the Façade Airborne Sound Insulation

    Directory of Open Access Journals (Sweden)

    Daniel Urbán

    2018-05-01

    Full Text Available This paper verifies the impact of the use of an external thermal composite system (ETICS) on airborne sound insulation. For optimum accuracy over a wide frequency range, classical microphone-based transmission measurements are combined with accelerometer-based vibrometry measurements. Consistency is found between structural resonance frequencies and bending wave velocity dispersion curves determined by vibrometry on the one hand, and spectral features of the sound reduction index, the ETICS mass-spring-mass resonance induced dip in the acoustic insulation spectrum, and the coincidence induced dip on the other hand. Scanning vibrometry proves to be an effective tool for structural assessment in the design phase of ETICS systems. The measured spectra are obtained with high resolution over a wide frequency range and yield sound insulation values that are not affected by the room acoustic features of the laboratory transmission rooms. The complementarity between the microphone-based and accelerometer-based results allows assessing the effect of ETICS on the sound insulation spectrum in an extended frequency range from 20 Hz to 10 kHz. A modified engineering ΔR prediction model is recommended for the frequency range up to the coincidence frequency of the external plaster layer. Values for the sound reduction index obtained by the modified prediction method are consistent with the measured data.
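
    The mass-spring-mass dip discussed above appears near the double-leaf resonance f0 = (1/2π)·sqrt(s′·(1/m1 + 1/m2)), where s′ is the dynamic stiffness per unit area of the insulation layer and m1, m2 are the areal masses of the wall and the render. The values in the sketch below are illustrative only and are not taken from the paper.

      # Mass-spring-mass resonance of a wall + ETICS render system (illustrative values).
      import math

      def f_mass_spring_mass(s_prime, m1, m2):
          """Resonance frequency in Hz; s_prime in N/m^3, areal masses in kg/m^2."""
          return math.sqrt(s_prime * (1.0 / m1 + 1.0 / m2)) / (2.0 * math.pi)

      m_wall, m_render = 300.0, 15.0      # e.g. a masonry wall and an external plaster layer
      s_prime = 10e6                      # dynamic stiffness of the insulation layer, N/m^3
      print(round(f_mass_spring_mass(s_prime, m_wall, m_render), 1), "Hz")   # ~133 Hz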

  6. Hear where we are sound, ecology, and sense of place

    CERN Document Server

    Stocker, Michael

    2013-01-01

    Throughout history, hearing and sound perception have been typically framed in the context of how sound conveys information and how that information influences the listener. Hear Where We Are inverts this premise and examines how humans and other hearing animals use sound to establish acoustical relationships with their surroundings. This simple inversion reveals a panoply of possibilities by which we can re-evaluate how hearing animals use, produce, and perceive sound. Nuance in vocalizations become signals of enticement or boundary setting; silence becomes a field ripe in auditory possibilities; predator/prey relationships are infused with acoustic deception, and sounds that have been considered territorial cues become the fabric of cooperative acoustical communities. This inversion also expands the context of sound perception into a larger perspective that centers on biological adaptation within acoustic habitats. Here, the rapid synchronized flight patterns of flocking birds and the tight maneuvering of s...

  7. Safety of the HyperSound® Audio System in subjects with normal hearing

    Directory of Open Access Journals (Sweden)

    Ritvik P. Mehta

    2015-11-01

    Full Text Available The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal hearing subjects under normal use conditions; we considered a pre-exposure and post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs and otologic symptoms questionnaire) followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otological symptoms were identified. HSS demonstrates excellent safety in normal hearing subjects under normal use conditions.

  8. Safety of the HyperSound® Audio System in Subjects with Normal Hearing.

    Science.gov (United States)

    Mehta, Ritvik P; Mattson, Sara L; Kappus, Brian A; Seitzman, Robin L

    2015-06-11

    The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal hearing subjects under normal use conditions; we considered pre-exposure and post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs and otologic symptoms questionnaire) followed by exposure to a 2-h movie with sound delivered through the HSS emitter followed by a post-exposure assessment. No TTS or new-onset otological symptoms were identified. HSS demonstrates excellent safety in normal hearing subjects under normal use conditions.

  9. Enhancing engagement in multimodality environments by sound movement in a virtual space

    DEFF Research Database (Denmark)

    Götzen, Amalia De

    2004-01-01

    of instrumental sounds - has allowed space as a musical instrumental practice to flourish. Electro-acoustic technologies let composers explore new listening dimensions and consider the sounds coming from loudspeakers as possessing different logical meanings from the sounds produced by traditional instruments....... Medea, Adriano Guarnieri's "video opera", is an innovative work stemming from research in multimedia that demonstrates the importance and amount of research dedicated to sound movement in space. Medea is part of the Multi-sensory Expressive Gesture Application project (http://www.megaproject.org). Among...

  10. Anti-bat tiger moth sounds: Form and function

    Directory of Open Access Journals (Sweden)

    Aaron J. CORCORAN, William E. CONNER, Jesse R. BARBER

    2010-06-01

    Full Text Available The night sky is the venue of an ancient acoustic battle between echolocating bats and their insect prey. Many tiger moths (Lepidoptera: Arctiidae) answer the attack calls of bats with a barrage of high frequency clicks. Some moth species use these clicks for acoustic aposematism and mimicry, and others for sonar jamming; however, most of the work on these defensive functions has been done on individual moth species. We here analyze the diversity of structure in tiger moth sounds from 26 species collected at three locations in North and South America. A principal components analysis of the anti-bat tiger moth sounds reveals that they vary markedly along three axes: (1) frequency, (2) duty cycle (sound production per unit time) and frequency modulation, and (3) modulation cycle (clicks produced during flexion and relaxation of the sound-producing tymbal structure). Tiger moth species appear to cluster into two distinct groups: one with low duty cycle and few clicks per modulation cycle that supports an acoustic aposematism function, and a second with high duty cycle and many clicks per modulation cycle that is consistent with a sonar jamming function. This is the first evidence from a community-level analysis to support multiple functions for tiger moth sounds. We also provide evidence supporting an evolutionary history for the development of these strategies. Furthermore, cross-correlation and spectrogram correlation measurements failed to support a “phantom echo” mechanism underlying sonar jamming, and instead point towards echo interference [Current Zoology 56 (3): 358–369, 2010].
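
    Duty cycle, the proportion of time a moth actually spends producing clicks, is one of the axes separating the two clusters described above. A minimal way to estimate it from a recording is to threshold the signal envelope, as sketched below on a synthetic click train; the sampling rate, click parameters and threshold are arbitrary choices, not values from the study.

      # Estimate the duty cycle of a click train as the fraction of samples whose
      # envelope exceeds a threshold (synthetic signal, arbitrary threshold).
      import numpy as np

      fs = 250_000                                      # ultrasonic clicks need a high sample rate
      t = np.arange(0, 0.1, 1 / fs)
      signal = 0.01 * np.random.default_rng(2).standard_normal(t.size)   # background noise
      for t0 in np.arange(0.005, 0.095, 0.002):         # one short click every 2 ms
          idx = (t >= t0) & (t < t0 + 0.0003)
          signal[idx] += np.sin(2 * np.pi * 60_000 * t[idx])

      envelope = np.abs(signal)
      active = envelope > 0.3                           # threshold well above the noise floor
      print("duty cycle: %.1f %%" % (100 * active.mean()))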

  11. Left-Right Asymmetry in Spectral Characteristics of Lung Sounds Detected Using a Dual-Channel Auscultation System in Healthy Young Adults.

    Science.gov (United States)

    Tsai, Jang-Zern; Chang, Ming-Lang; Yang, Jiun-Yue; Kuo, Dar; Lin, Ching-Hsiung; Kuo, Cheng-Deng

    2017-06-07

    Though lung sounds auscultation is important for the diagnosis and monitoring of lung diseases, the spectral characteristics of lung sounds have not been fully understood. This study compared the spectral characteristics of lung sounds between the right and left lungs and between healthy male and female subjects using a dual-channel auscultation system. Forty-two subjects aged 18-22 years without smoking habits and any known pulmonary diseases participated in this study. The lung sounds were recorded from seven pairs of auscultation sites on the chest wall simultaneously. We found that in four out of seven auscultation pairs, the lung sounds from the left lung had a higher total power (PT) than those from the right lung. The PT of male subjects was higher than that of female ones in most auscultation pairs. The ratio of inspiration power to expiration power (RI/E) of lung sounds from the right lung was greater than that from the left lung at auscultation pairs on the anterior chest wall, while this phenomenon was reversed at auscultation pairs on the posterior chest wall in combined subjects, and similarly in both male and female subjects. Though the frequency corresponding to maximum power density of lung sounds (FMPD) from the left and right lungs was not significantly different, the frequency that equally divided the power spectrum of lung sounds (F50) from the left lung was significantly smaller than that from the right lung at auscultation sites on the anterior and lateral chest walls, while it was significantly larger than that from the right lung at auscultation sites on the posterior chest walls. In conclusion, significant differences in the PT, FMPD, F50, and RI/E between the left and right lungs at some auscultation pairs were observed by using a dual-channel auscultation system in this study. Structural differences between the left and the right lungs, between the female and male subjects, and between anterior and posterior lungs might
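
    The spectral descriptors used in this study, the total power PT, the peak-density frequency FMPD, the median-power frequency F50 and the inspiration-to-expiration power ratio RI/E, can all be computed from a Welch power spectrum. The sketch below does so on synthetic inspiration and expiration segments; the sampling rate and segment lengths are assumptions, not the recording settings of the study.

      # Compute PT, FMPD, F50 and RI/E from lung-sound segments (synthetic data here).
      import numpy as np
      from scipy.signal import welch

      FS = 8000   # assumed sampling rate, Hz

      def spectral_params(x, fs=FS):
          f, pxx = welch(x, fs=fs, nperseg=1024)
          pt = np.sum(pxx) * (f[1] - f[0])                  # total power PT
          fmpd = f[np.argmax(pxx)]                          # frequency of maximum power density
          cum = np.cumsum(pxx) / np.sum(pxx)
          f50 = f[np.searchsorted(cum, 0.5)]                # frequency splitting the power in half
          return pt, fmpd, f50

      rng = np.random.default_rng(3)
      inspiration = rng.standard_normal(FS * 2) * 1.5       # louder phase (synthetic)
      expiration  = rng.standard_normal(FS * 2)

      pt_in, fmpd_in, f50_in = spectral_params(inspiration)
      pt_ex, _, _ = spectral_params(expiration)
      print("PT=%.3g  FMPD=%.0f Hz  F50=%.0f Hz  RI/E=%.2f" % (pt_in, fmpd_in, f50_in, pt_in / pt_ex))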

  12. Left–Right Asymmetry in Spectral Characteristics of Lung Sounds Detected Using a Dual-Channel Auscultation System in Healthy Young Adults

    Science.gov (United States)

    Tsai, Jang-Zern; Chang, Ming-Lang; Yang, Jiun-Yue; Kuo, Dar; Lin, Ching-Hsiung; Kuo, Cheng-Deng

    2017-01-01

    Though lung sounds auscultation is important for the diagnosis and monitoring of lung diseases, the spectral characteristics of lung sounds have not been fully understood. This study compared the spectral characteristics of lung sounds between the right and left lungs and between healthy male and female subjects using a dual-channel auscultation system. Forty-two subjects aged 18–22 years without smoking habits and any known pulmonary diseases participated in this study. The lung sounds were recorded from seven pairs of auscultation sites on the chest wall simultaneously. We found that in four out of seven auscultation pairs, the lung sounds from the left lung had a higher total power (PT) than those from the right lung. The PT of male subjects was higher than that of female ones in most auscultation pairs. The ratio of inspiration power to expiration power (RI/E) of lung sounds from the right lung was greater than that from the left lung at auscultation pairs on the anterior chest wall, while this phenomenon was reversed at auscultation pairs on the posterior chest wall in combined subjects, and similarly in both male and female subjects. Though the frequency corresponding to maximum power density of lung sounds (FMPD) from the left and right lungs was not significantly different, the frequency that equally divided the power spectrum of lung sounds (F50) from the left lung was significantly smaller than that from the right lung at auscultation site on the anterior and lateral chest walls, while it was significantly larger than that of from the right lung at auscultation site on the posterior chest walls. In conclusion, significant differences in the PT, FMPD, F50, and RI/E between the left and right lungs at some auscultation pairs were observed by using a dual-channel auscultation system in this study. Structural differences between the left and the right lungs, between the female and male subjects, and between anterior and posterior lungs might account for the

  13. Left–Right Asymmetry in Spectral Characteristics of Lung Sounds Detected Using a Dual-Channel Auscultation System in Healthy Young Adults

    Directory of Open Access Journals (Sweden)

    Jang-Zern Tsai

    2017-06-01

    Full Text Available Though lung sounds auscultation is important for the diagnosis and monitoring of lung diseases, the spectral characteristics of lung sounds have not been fully understood. This study compared the spectral characteristics of lung sounds between the right and left lungs and between healthy male and female subjects using a dual-channel auscultation system. Forty-two subjects aged 18–22 years without smoking habits and any known pulmonary diseases participated in this study. The lung sounds were recorded from seven pairs of auscultation sites on the chest wall simultaneously. We found that in four out of seven auscultation pairs, the lung sounds from the left lung had a higher total power (PT) than those from the right lung. The PT of male subjects was higher than that of female ones in most auscultation pairs. The ratio of inspiration power to expiration power (RI/E) of lung sounds from the right lung was greater than that from the left lung at auscultation pairs on the anterior chest wall, while this phenomenon was reversed at auscultation pairs on the posterior chest wall in combined subjects, and similarly in both male and female subjects. Though the frequency corresponding to maximum power density of lung sounds (FMPD) from the left and right lungs was not significantly different, the frequency that equally divided the power spectrum of lung sounds (F50) from the left lung was significantly smaller than that from the right lung at auscultation site on the anterior and lateral chest walls, while it was significantly larger than that of from the right lung at auscultation site on the posterior chest walls. In conclusion, significant differences in the PT, FMPD, F50, and RI/E between the left and right lungs at some auscultation pairs were observed by using a dual-channel auscultation system in this study. Structural differences between the left and the right lungs, between the female and male subjects, and between anterior and posterior lungs might

  14. The Sounds of the Little and Big Bangs

    Science.gov (United States)

    Shuryak, Edward

    2017-11-01

    Studies of heavy ion collisions have discovered that tiny fireballs of a new phase of matter, quark-gluon plasma (QGP), undergo an explosion called the Little Bang. In spite of its small size, the fireball is not only well described by hydrodynamics, but even small perturbations on top of the explosion turned out to be well described by hydrodynamical sound modes. The cosmological Big Bang also went through phase transitions, the QCD and electroweak ones, which are expected to produce sounds as well. We discuss their subsequent evolution and a hypothetical inverse acoustic cascade that amplifies the amplitude. Ultimately, the collision of two sound waves leads to the formation of gravity waves of the smallest wavelength. We briefly discuss how those can be detected.

  15. The sound and the fury--bees hiss when expecting danger.

    Directory of Open Access Journals (Sweden)

    Henja-Niniane Wehmann

    Full Text Available Honey bees are important model systems for the investigation of learning and memory and for a better understanding of the neuronal basics of brain function. Honey bees also possess a rich repertoire of tones and sounds, from queen piping and quacking to worker hissing and buzzing. In this study, we tested whether the worker bees' sounds can be used as a measure of learning. We therefore conditioned honey bees aversively to odours in a walking arena and recorded both their sound production and their movement. Bees were presented with two odours, one of which was paired with an electric shock. Initially, the bees did not produce any sound upon odour presentation, but responded to the electric shock with a strong hissing response. After learning, many bees hissed at the presentation of the learned odour, while fewer bees hissed upon presentation of another odour. We also found that hissing and movement away from the conditioned odour are independent behaviours that can co-occur but do not necessarily do so. Our data suggest that hissing can be used as a readout for learning after olfactory conditioning, but that there are large individual differences between bees concerning their hissing reaction. The basis for this variability and the possible ecological relevance of the bees' hissing remain to be investigated.

  16. The Sound and the Fury—Bees Hiss when Expecting Danger

    Science.gov (United States)

    Galizia, C. Giovanni

    2015-01-01

    Honey bees are important model systems for the investigation of learning and memory and for a better understanding of the neuronal basics of brain function. Honey bees also possess a rich repertoire of tones and sounds, from queen piping and quacking to worker hissing and buzzing. In this study, we tested whether the worker bees’ sounds can be used as a measure of learning. We therefore conditioned honey bees aversively to odours in a walking arena and recorded both their sound production and their movement. Bees were presented with two odours, one of which was paired with an electric shock. Initially, the bees did not produce any sound upon odour presentation, but responded to the electric shock with a strong hissing response. After learning, many bees hissed at the presentation of the learned odour, while fewer bees hissed upon presentation of another odour. We also found that hissing and movement away from the conditioned odour are independent behaviours that can co-occur but do not necessarily do so. Our data suggest that hissing can be used as a readout for learning after olfactory conditioning, but that there are large individual differences between bees concerning their hissing reaction. The basis for this variability and the possible ecological relevance of the bees’ hissing remain to be investigated. PMID:25747702

  17. Improving Robustness against Environmental Sounds for Directing Attention of Social Robots

    DEFF Research Database (Denmark)

    Thomsen, Nicolai Bæk; Tan, Zheng-Hua; Lindberg, Børge

    2015-01-01

    This paper presents a multi-modal system for finding out where to direct the attention of a social robot in a dialog scenario, which is robust against environmental sounds (door slamming, phone ringing, etc.) and short speech segments. The method is based on combining voice activity detection (VAD......) and sound source localization (SSL) and furthermore applying post-processing to SSL to filter out short sounds. The system is tested against a baseline system in four different real-world experiments, where different sounds are used as interfering sounds. The results are promising and show a clear improvement....
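
    The combination described above, only acting on a localization estimate once voice activity has persisted beyond a minimum duration, can be sketched frame by frame as below. The frame length, minimum event duration and data structures are assumptions for illustration, not the paper's implementation.

      # Combine per-frame VAD decisions with sound source localization (SSL) angles and
      # ignore sound events shorter than a minimum duration (frame-based sketch).
      FRAME_S = 0.032        # assumed frame length, s
      MIN_EVENT_S = 0.30     # events shorter than this (door slam, phone ring) are ignored

      def attention_angles(vad_flags, ssl_angles_deg, frame_s=FRAME_S, min_event_s=MIN_EVENT_S):
          """Return one representative angle per voice event that lasts long enough."""
          min_frames = int(round(min_event_s / frame_s))
          events, run = [], []
          for active, angle in zip(vad_flags, ssl_angles_deg):
              if active:
                  run.append(angle)
              else:
                  if len(run) >= min_frames:
                      events.append(sorted(run)[len(run) // 2])   # median angle of the event
                  run = []
          if len(run) >= min_frames:
              events.append(sorted(run)[len(run) // 2])
          return events

      vad    = [0]*5 + [1]*3        + [0]*4 + [1]*15  + [0]*3     # short burst, then real speech
      angles = [0]*5 + [80, 82, 81] + [0]*4 + [30]*15 + [0]*3
      print(attention_angles(vad, angles))                        # [30]; the 3-frame burst is dropped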

  18. Long-Lasting Sound-Evoked Afterdischarge in the Auditory Midbrain.

    Science.gov (United States)

    Ono, Munenori; Bishop, Deborah C; Oliver, Douglas L

    2016-02-12

    Different forms of plasticity are known to play a critical role in the processing of information about sound. Here, we report a novel neural plastic response in the inferior colliculus, an auditory center in the midbrain of the auditory pathway. A vigorous, long-lasting sound-evoked afterdischarge (LSA) is seen in a subpopulation of both glutamatergic and GABAergic neurons in the central nucleus of the inferior colliculus of normal hearing mice. These neurons were identified with single unit recordings and optogenetics in vivo. The LSA can continue for up to several minutes after the offset of the sound. LSA is induced by long-lasting, or repetitive short-duration, innocuous sounds. Neurons with LSA showed less adaptation than the neurons without LSA. The mechanisms that cause this neural behavior are unknown but may be a function of intrinsic mechanisms or the microcircuitry of the inferior colliculus. Since LSA produces long-lasting firing in the absence of sound, it may be relevant to temporary or chronic tinnitus or to some other aftereffect of long-duration sound.

  19. Spatial resolution limits for the localization of noise sources using direct sound mapping

    DEFF Research Database (Denmark)

    Comesana, D. Fernandez; Holland, K. R.; Fernandez Grande, Efren

    2016-01-01

    One of the main challenges arising from noise and vibration problems is how to identify the areas of a device, machine or structure that produce significant acoustic excitation, i.e. the localization of main noise sources. The direct visualization of sound, in particular sound intensity, has extensively been used for many years to locate sound sources. However, it is not yet well defined when two sources should be regarded as resolved by means of direct sound mapping. This paper derives the limits of the direct representation of sound pressure, particle velocity and sound intensity by exploring the relationship between spatial resolution, noise level and geometry. The proposed expressions are validated via simulations and experiments. It is shown that particle velocity mapping yields better results for identifying closely spaced sound sources than sound pressure or sound intensity.

  20. Numerical Model on Sound-Solid Coupling in Human Ear and Study on Sound Pressure of Tympanic Membrane

    Directory of Open Access Journals (Sweden)

    Yao Wen-juan

    2011-01-01

    Full Text Available A three-dimensional finite-element model of the whole auditory system, including the external ear, middle ear, and inner ear, was established. A sound-solid-liquid coupling frequency response analysis of the model was carried out. The correctness of the FE model was verified by comparing the vibration modes of the tympanic membrane and stapes footplate with experimental data. Based on the calculation results of the model, we use the least-squares method to fit the distribution of sound pressure in the external auditory canal and obtain a sound pressure function on the tympanic membrane which varies with frequency. Using the sound pressure function, the pressure distribution on the tympanic membrane can be derived directly from the sound pressure at the external auditory canal opening. The sound pressure function makes the boundary conditions of the middle ear structure more accurate in mechanical research and improves on the previous boundary treatment, which only applied a uniform pressure to the tympanic membrane.
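    A hedged sketch of the kind of least-squares fitting step described above: the sound pressure sampled at several points along the ear canal (for one frequency) is fitted with a low-order polynomial, so that the pressure at the tympanic membrane can be estimated from the pressure at the canal opening. The polynomial order, positions and pressure values are assumptions for illustration only.

```python
import numpy as np

def fit_canal_pressure(positions_mm, pressures_pa, order=3):
    """Least-squares polynomial fit of sound pressure along the ear canal (one frequency)."""
    coeffs = np.polyfit(positions_mm, pressures_pa, order)
    return np.poly1d(coeffs)

# Hypothetical example: pressures computed at 5 canal positions for one frequency.
positions = np.array([0.0, 7.0, 14.0, 21.0, 28.0])    # mm from the canal opening
pressures = np.array([1.00, 1.04, 1.12, 1.25, 1.41])  # Pa (illustrative values)
p_fit = fit_canal_pressure(positions, pressures)
print("Estimated pressure near the tympanic membrane (28 mm):", p_fit(28.0), "Pa")
```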

  1. Impacts of distinct observations during the 2009 Prince William Sound field experiment: A data assimilation study

    Science.gov (United States)

    Li, Z.; Chao, Y.; Farrara, J.; McWilliams, J. C.

    2012-12-01

    A set of data assimilation experiments, known as Observing System Experiments (OSEs), are performed to assess the relative impacts of different types of observations acquired during the 2009 Prince William Sound Field Experiment. The observations assimilated consist primarily of three types: High Frequency (HF) radar surface velocities, vertical profiles of temperature/salinity (T/S) measured by ships, moorings, Autonomous Underwater Vehicles and gliders, and satellite sea surface temperatures (SSTs). The impact of all the observations, HF radar surface velocities, and T/S profiles is assessed. Without data assimilation, a frequently occurring cyclonic eddy in the central Sound is overly persistent and intense. The assimilation of the HF radar velocities effectively reduces these biases and improves the representation of the velocities as well as the T/S fields in the Sound. The assimilation of the T/S profiles improves the large scale representation of the temperature/salinity and also the velocity field in the central Sound. The combination of the HF radar surface velocities and sparse T/S profiles results in an observing system capable of representing the circulation in the Sound reliably and thus producing analyses and forecasts with useful skill. It is suggested that a potentially promising observing network could be based on satellite SSHs and SSTs along with sparse T/S profiles, and future satellite SSHs with wide swath coverage and higher resolution may offer excellent data that will be of great use for predicting the circulation in the Sound.

  2. PREFACE: Aerodynamic sound Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated in writing the papers by the jet-noise emitted by the newly commercialized jet-engined airplanes at that time. The technology of aerodynamic sound is destined for environmental problems. Therefore the theory should always be applied to newly emerged public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium since 29 years ago when the late Professor S Kotake and Professor S Kaji of Teikyo University organized the symposium. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present the publication in this issue of ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays, and is expected to reform the theoretical model of generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research. They apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the

  3. Sparse representation of Gravitational Sound

    Science.gov (United States)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

    Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local measure of sparsity is shown to render meaningful information about the variation of a signal along time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to divulge Gravitational Sound in the form of a ring tone.

  4. Characterization of Underwater Sounds Produced by Trailing Suction Hopper Dredges During Sand Mining and Pump-out Operations

    Science.gov (United States)

    2014-03-01

    machinery itself, such as winches, generators, thrusters and particularly propeller-induced cavitation; and 5) sounds associated with the off-loading of ... dredges were working concurrently. This is not surprising, given that cavitation (propeller noise) contributed the most to the overall sound field. If ... in Cook Inlet, Alaska (an area known for high hydrodynamic flow conditions). Their RLs ranged from 95-120 dB at eight locations. Highest RLs were

  5. Influence of visual stimuli on the sound quality evaluation of loudspeaker systems

    DEFF Research Database (Denmark)

    Karandreas, Theodoros-Alexandros; Christensen, Flemming

    Product sound quality evaluation aims to identify relevant attributes and assess their influence on the overall auditory impression. Extending this sound-specific rationale, the present study evaluates overall impression in relation to hearing and vision, specifically for loudspeakers. In order to quantify the bias that the image of a loudspeaker has on the sound quality evaluation of a naive listening panel, loudspeaker sounds of varied degradation are coupled with positively or negatively biasing visual input of actual loudspeakers, and in a separate experiment by pictures of the same loudspeakers.

  6. Directional sound beam emission from a configurable compact multi-source system

    KAUST Repository

    Zhao, Jiajun; Jadhali, Rasha Al; Zhang, Likun; Wu, Ying

    2018-01-01

    We propose to achieve efficient emission of highly directional sound beams from multiple monopole sources embedded in a subwavelength enclosure. Without the enclosure, the emitted sound fields have an indistinguishable or omnidirectional radiation

  7. Sounds energetic: the radio producer's energy minibook

    Energy Technology Data Exchange (ETDEWEB)

    1980-12-01

    The Minibook will be expanded into the final Radio Producer's Energy Sourcebook. Radio producers and broadcasters are asked to contribute ideas for presenting energy knowledge to the public and to be included in the Sourcebook. Chapter One presents a case study suggesting programming and promotion ideas and sample scripts for a radio campaign that revolves around no-cost or low-cost steps listeners can take to increase their home energy efficiency and save money. A variety of other energy topics and suggestions on ways to approach them are addressed in Chapter Two. Chapter Three contains energy directories for Baltimore, Philadelphia, Pittsburgh, and Washington, DC. The directories will be expanded in the Sourcebook and will consist of a selection of local public and private sector energy-related organizations and list local experts and organizations and the best Federal, state, and local government programs that can provide consumers and citizens groups with information, technical assistance, and financial support. (MCW)

  8. Two Shared Rapid Turn Taking Sound Interfaces for Novices

    DEFF Research Database (Denmark)

    Hansen, Anne-Marie; Andersen, Hans Jørgen; Raudaskoski, Pirkko Liisa

    2012-01-01

    This paper presents the results of user interaction with two explorative music environments (sound system A and B) that were inspired by the Banda Linda music tradition in two different ways. The sound systems adapted to how a team of two players improvised and made a melody together in an interleaved fashion: Systems A and B used a fuzzy logic algorithm and pattern recognition to respond with modifications of a background rhythm. In an experiment with a pen tablet interface as the music instrument, users aged 10-13 were to tap tones and continue each other's melody. The sound systems rewarded users sonically if they managed to add tones to their mutual melody in a rapid turn-taking manner with rhythmical patterns. Videos of experiment sessions show that user teams contributed to a melody in ways that resemble conversation. Interaction data show that each sound system made player teams play ...

  9. Sound in Ergonomics

    Directory of Open Access Journals (Sweden)

    Jebreil Seraji

    1999-03-01

    Full Text Available The word "Ergonomics" is composed of two separate parts, "ergo" and "nomos", and means human factors engineering. Indeed, ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance. It draws on different sciences such as anatomy and physiology, anthropometry, engineering, psychology, biophysics and biochemistry for different ergonomic purposes. Sound, when it takes the form of noise pollution, can upset this balance in human life. Industrial noise caused by factories, traffic, media, and modern human activity can affect the health of society. Here we aim to discuss sound from an ergonomic point of view.

  10. The Sounds of the Little and Big Bangs

    Directory of Open Access Journals (Sweden)

    Edward Shuryak

    2017-11-01

    Full Text Available Studies on heavy ion collisions have discovered that tiny fireballs of a new phase of matter—quark gluon plasma (QGP)—undergo an explosion, called the Little Bang. In spite of its small size, not only is it well described by hydrodynamics, but even small perturbations on top of the explosion turn out to be well described by hydrodynamical sound modes. The cosmological Big Bang also went through phase transitions, related to Quantum Chromodynamics (QCD) and electroweak/Higgs symmetry breaking, which are also expected to produce sounds. We discuss their subsequent evolution and a hypothetical inverse acoustic cascade amplifying their amplitude. Ultimately, the collision of two sound waves leads to the formation of gravity waves. We briefly discuss how these gravity waves can be detected.

  11. Sound absorption study on acoustic panel from kapok fiber and egg tray

    Science.gov (United States)

    Kaamin, Masiri; Mahir, Nurul Syazwani Mohd; Kadir, Aslila Abd; Hamid, Nor Baizura; Mokhtar, Mardiha; Ngadiman, Norhayati

    2017-12-01

    Noise is unwanted sound, especially sound that is loud, unpleasant, or disruptive. The level of noise can be reduced by using a sound absorption panel. Currently, the market produces sound absorption panels that use synthetic fibers which can harm the health of consumers. Awareness of natural fibers from natural materials has drawn the attention of some parties to their use as sound-absorbing materials. Therefore, this study was conducted to investigate the potential of a sound absorption panel made from egg trays and kapok fibers. The test used in this study was the impedance tube test, which yields the sound absorption coefficient (SAC). The results showed good sound absorption at low frequencies from 0 Hz up to 900 Hz, where the maximum absorption coefficient was 0.950, while the maximum absorption at high frequencies was 0.799. The material produced a noise reduction coefficient (NRC) of 0.57, indicating that it is very absorbing. In addition, a reverberation room test was carried out to obtain the reverberation time (RT) in seconds. Overall, this panel showed good results at low frequencies between 0 Hz and 1500 Hz. In that frequency range, the maximum reverberation time for the panel was 3.784 seconds, compared with a maximum reverberation time of 5.798 seconds for an empty room. This study indicates that kapok fiber and egg tray, as the materials of the absorption panel, have potential as environmentally friendly and cheap products for absorbing sound at low frequencies.
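    For context, the noise reduction coefficient reported above is conventionally computed (e.g. per ASTM C423) as the average of the sound absorption coefficients at 250, 500, 1000 and 2000 Hz, rounded to the nearest 0.05. The sketch below illustrates that calculation with made-up coefficients, not the measured values from this study.

```python
def noise_reduction_coefficient(sac_by_freq):
    """Average the sound absorption coefficients at 250/500/1000/2000 Hz, rounded to nearest 0.05."""
    nrc = sum(sac_by_freq[f] for f in (250, 500, 1000, 2000)) / 4.0
    return round(nrc / 0.05) * 0.05

# Illustrative (not measured) absorption coefficients for a fibrous panel:
sac = {250: 0.42, 500: 0.55, 1000: 0.63, 2000: 0.68}
print(round(noise_reduction_coefficient(sac), 2))   # average 0.57 rounds to the nearest 0.05
```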

  12. Combined multibeam and LIDAR bathymetry data from eastern Long Island Sound and westernmost Block Island Sound-A regional perspective

    Science.gov (United States)

    Poppe, L.J.; Danforth, W.W.; McMullen, K.Y.; Parker, Castle E.; Doran, E.F.

    2011-01-01

    Detailed bathymetric maps of the sea floor in Long Island Sound are of great interest to the Connecticut and New York research and management communities because of this estuary's ecological, recreational, and commercial importance. The completed, geologically interpreted digital terrain models (DTMs), ranging in area from 12 to 293 square kilometers, provide important benthic environmental information, yet many applications require a geographically broader perspective. For example, individual surveys are of limited use for the planning and construction of cross-sound infrastructure, such as cables and pipelines, or for the testing of regional circulation models. To address this need, we integrated 12 multibeam and 2 LIDAR (Light Detection and Ranging) contiguous bathymetric DTMs, produced by the National Oceanic and Atmospheric Administration during charting operations, into one dataset that covers much of eastern Long Island Sound and extends into westernmost Block Island Sound. The new dataset is adjusted to mean lower low water, is gridded to 4-meter resolution, and is provided in UTM Zone 18 NAD83 and geographic WGS84 projections. This resolution is adequate for sea floor-feature and process interpretation but is small enough to be queried and manipulated with standard Geographic Information System programs and to allow for future growth. Natural features visible in the grid include exposed bedrock outcrops, boulder lag deposits of submerged moraines, sand-wave fields, and scour depressions that reflect the strength of the oscillating and asymmetric tidal currents. Bedform asymmetry allows interpretations of net sediment transport. Anthropogenic artifacts visible in the bathymetric data include a dredged channel, shipwrecks, dredge spoils, mooring anchors, prop-scour depressions, buried cables, and bridge footings. Together the merged data reveal a larger, more continuous perspective of bathymetric topography than previously available, providing a fundamental

  13. Is the Speech Transmission Index (STI) a robust measure of sound system speech intelligibility performance?

    Science.gov (United States)

    Mapp, Peter

    2002-11-01

    Although RaSTI is a good indicator of the speech intelligibility capability of auditoria and similar spaces, during the past 2-3 years it has been shown that RaSTI is not a robust predictor of sound system intelligibility performance. Instead, it is now recommended, within both national and international codes and standards, that full STI measurement and analysis be employed. However, new research is reported that indicates that STI is not as flawless or robust as many believe. The paper highlights a number of potential error mechanisms. It is shown that the measurement technique and signal excitation stimulus can have a significant effect on the overall result and accuracy, particularly where DSP-based equipment is employed. It is also shown that, in its current state of development, STI is not capable of appropriately accounting for a number of fundamental speech and system attributes, including typical sound system frequency response variations and anomalies. This is particularly shown to be the case when a system is operating under reverberant conditions. Comparisons between actual system measurements and corresponding word score data are reported, where errors of up to 50% are found. The implications for VA and PA system performance verification will be discussed.
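    As background to the discussion above, here is a simplified sketch of the classic STI calculation: per-band modulation transfer values are converted to an apparent signal-to-noise ratio, clipped to ±15 dB, mapped to transmission indices and band-weighted. It omits the band-redundancy correction of newer IEC 60268-16 revisions, and the weights and the 7×14 MTF matrix below are illustrative assumptions rather than measured data.

```python
import numpy as np

# Approximate octave-band weights, 125 Hz ... 8 kHz (assumption; revisions of the standard differ).
OCTAVE_WEIGHTS = np.array([0.13, 0.14, 0.11, 0.12, 0.19, 0.17, 0.14])

def sti_from_mtf(mtf):
    """mtf: array of shape (7 octave bands, 14 modulation frequencies) with values in (0, 1)."""
    snr = 10.0 * np.log10(mtf / (1.0 - mtf))   # apparent SNR per band and modulation frequency
    snr = np.clip(snr, -15.0, 15.0)            # limit to the range that affects intelligibility
    ti = (snr + 15.0) / 30.0                   # transmission index in [0, 1]
    mti = ti.mean(axis=1)                      # modulation transfer index per octave band
    return float(np.dot(OCTAVE_WEIGHTS, mti))  # weighted sum -> STI

# Illustrative MTF matrix (in practice derived from measurement or impulse-response analysis):
mtf = np.full((7, 14), 0.6)
print(round(sti_from_mtf(mtf), 2))
```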

  14. An adaptive, data driven sound field control strategy for outdoor concerts

    DEFF Research Database (Denmark)

    Heuchel, Franz Maria; Caviedes Nozal, Diego; Brunskog, Jonas

    2017-01-01

    One challenge of outdoor concerts is to ensure adequate levels for the audience while avoiding disturbance of the surroundings. We outline the initial concept of a sound field control (SFC) system for tackling this issue using sound-zoning. The system uses Bayesian inference to update a sound...

  15. Achievement report for fiscal 1998. Multimedia system for the disabled; 1998 nendo seika hokokusho. Shogaisha taio multimedia system

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-05-01

    Development has been made on the element technologies for a multimedia system for the disabled. In developing the non-visual graphical user interface (GUI) access system, a non-visual access system was developed to enable the visually disabled to access a GUI system by utilizing sound and tactile devices. In developing the information-providing system utilizing three-dimensional sound, a system was developed in which information on the screen arrangement of the GUI in a personal computer is presented as spatial sound positions by using three-dimensional sound control technology. The system can be operated non-visually with a mouse by controlling the movements of a cursor. In the current fiscal year, the three-dimensional sound producing devices, the three-dimensional sound interface, and the handling of their applications were improved. User evaluations were also performed. Functions were expanded and improved in a system and an optical media reader that enable a visually disabled person to read printed papers by himself. Development was also made on a function to link the system with a total system. (NEDO)

  16. Metrics for Polyphonic Sound Event Detection

    Directory of Open Access Journals (Sweden)

    Annamaria Mesaros

    2016-05-01

    Full Text Available This paper presents and discusses various metrics proposed for evaluation of polyphonic sound event detection systems used in realistic situations where there are typically multiple sound sources active simultaneously. The system output in this case contains overlapping events, marked as multiple sounds detected as being active at the same time. The polyphonic system output requires a suitable procedure for evaluation against a reference. Metrics from neighboring fields such as speech recognition and speaker diarization can be used, but they need to be partially redefined to deal with the overlapping events. We present a review of the most common metrics in the field and the way they are adapted and interpreted in the polyphonic case. We discuss segment-based and event-based definitions of each metric and explain the consequences of instance-based and class-based averaging using a case study. In parallel, we provide a toolbox containing implementations of presented metrics.
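    A minimal sketch of one of the segment-based metrics discussed above: reference and system outputs are rolled into fixed-length segments as binary class-activity matrices, and precision, recall and F-score are computed from the total counts of true positives, false positives and false negatives over all segments and classes (instance-based averaging). This is an illustration, not the toolbox mentioned in the abstract.

```python
import numpy as np

def segment_based_f1(reference, estimate):
    """reference, estimate: boolean arrays of shape (n_segments, n_classes); True = event active."""
    tp = np.logical_and(reference, estimate).sum()
    fp = np.logical_and(~reference, estimate).sum()
    fn = np.logical_and(reference, ~estimate).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Two overlapping classes over four one-second segments (illustrative annotations):
ref = np.array([[1, 0], [1, 1], [0, 1], [0, 0]], dtype=bool)
est = np.array([[1, 0], [1, 0], [1, 1], [0, 0]], dtype=bool)
print(round(segment_based_f1(ref, est), 3))
```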

  17. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology.

  18. What Types of Policies Are Required for a Constitutionally Sound, Efficient Educational System of Common Schools?

    Science.gov (United States)

    La Brecque, Richard

    This paper clarifies core concepts in a Kentucky judge's decision that the State General Assembly has failed to provide an efficient system of common schools. Connecting "efficiency" of educational systems to "equality of educational opportunity," the paper argues that the realization of a constitutionally sound, efficient…

  19. Sound quality assessment of wood for xylophone bars.

    Science.gov (United States)

    Aramaki, Mitsuko; Baillères, Henri; Brancheriau, Loïc; Kronland-Martinet, Richard; Ystad, Sølvi

    2007-04-01

    Xylophone sounds produced by striking wooden bars with a mallet are strongly influenced by the mechanical properties of the wood species chosen by the xylophone maker. In this paper, we address the relationship between the sound quality based on the timbre attribute of impacted wooden bars and the physical parameters characterizing wood species. For this, a methodology is proposed that associates an analysis-synthesis process and a perceptual classification test. Sounds generated by impacting 59 wooden bars of different species but with the same geometry were recorded and classified by a renowned instrument maker. The sounds were further digitally processed and adjusted to the same pitch before being once again classified. The processing is based on a physical model ensuring the main characteristics of the wood are preserved during the sound transformation. Statistical analysis of both classifications showed the influence of the pitch in the xylophone maker's judgement and pointed out the importance of two timbre descriptors: the frequency-dependent damping and the spectral bandwidth. These descriptors are linked with physical and anatomical characteristics of wood species, providing new clues in the choice of attractive wood species from a musical point of view.
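    The two timbre descriptors highlighted above can be approximated from a recorded impact sound roughly as sketched below: spectral bandwidth as the energy-weighted spread of the spectrum around its centroid, and a global damping value as the decay rate fitted to the log energy envelope. This is a hedged illustration under those assumptions, not the authors' analysis-synthesis process.

```python
import numpy as np

def spectral_bandwidth(signal, fs):
    """Energy-weighted spread of the magnitude spectrum around its centroid (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    return np.sqrt(np.sum(((freqs - centroid) ** 2) * spectrum) / np.sum(spectrum))

def global_damping(signal, fs, frame=1024):
    """Amplitude decay rate (1/s) from a straight-line fit to the log energy envelope."""
    n_frames = len(signal) // frame
    env = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2) for i in range(n_frames)])
    t = (np.arange(n_frames) + 0.5) * frame / fs
    slope, _ = np.polyfit(t, np.log(env + 1e-12), 1)
    return -slope / 2.0   # assuming energy ~ exp(-2 * alpha * t), return alpha
```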

  20. The relationship between target quality and interference in sound zones

    DEFF Research Database (Denmark)

    Baykaner, Khan; Coleman, Phillip; Mason, Russell

    2015-01-01

    Sound zone systems aim to control sound fields in such a way that multiple listeners can enjoy different audio programs within the same room with minimal acoustic interference. Often, there is a trade-off between the acoustic contrast achieved between the zones and the fidelity of the reproduced audio program in the target zone. A listening test was conducted to obtain subjective measures of distraction, target quality, and overall quality of listening experience for ecologically valid programs within a sound zoning system. Sound zones were reproduced using acoustic contrast control, planarity ...

  1. Puget Sound area electric reliability plan. Draft environmental impact statement

    Energy Technology Data Exchange (ETDEWEB)

    1991-09-01

    The Puget Sound Area Electric Reliability Plan Draft Environmental Impact Statement (DEIS) identifies the alternatives for solving a power system problem in the Puget Sound area. This Plan is undertaken by Bonneville Power Administration (BPA), Puget Sound Power & Light, Seattle City Light, Snohomish Public Utility District No. 1 (PUD), and Tacoma Public Utilities. The Plan consists of potential actions in Puget Sound and other areas in the State of Washington. A specific need exists in the Puget Sound area for balance between east-west transmission capacity and the increasing demand to import power generated east of the Cascades. At certain times of the year, there is more demand for power than the electric system can supply in the Puget Sound area. This high demand, called peak demand, occurs during the winter months when unusually cold weather increases electricity use for heating. The existing power system can supply enough power if no emergencies occur. However, during emergencies, the system will not operate properly. As demand grows, the system becomes more strained. To meet demand, the rate of growth of demand must be reduced or the ability to serve the demand must be increased, or both. The plan to balance Puget Sound's power demand and supply has these purposes: The plan should define a set of actions that would accommodate ten years of load growth (1994-2003). Federal and State environmental quality requirements should be met. The plan should be consistent with the plans of the Northwest Power Planning Council. The plan should serve as a consensus guideline for coordinated utility action. The plan should be flexible to accommodate uncertainties and differing utility needs. The plan should balance environmental impacts and economic costs. The plan should provide electric system reliability consistent with customer expectations. 29 figs., 24 tabs.

  2. Virtual Reality System with Integrated Sound Field Simulation and Reproduction

    Directory of Open Access Journals (Sweden)

    Ingo Assenmacher

    2007-01-01

    Full Text Available A real-time audio rendering system is introduced which combines a full room-specific simulation, dynamic crosstalk cancellation, and multitrack binaural synthesis for virtual acoustical imaging. The system is applicable for any room shape (normal, long, flat, coupled), independent of the a priori assumption of a diffuse sound field. This provides the possibility of simulating indoor or outdoor spatially distributed, freely movable sources and a moving listener in virtual environments. In addition to that, near-to-head sources can be simulated by using measured near-field HRTFs. The reproduction component consists of a headphone-free reproduction by dynamic crosstalk cancellation. The focus of the project is mainly on the integration and interaction of all involved subsystems. It is demonstrated that the system is capable of real-time room simulation and reproduction and, thus, can be used as a reliable platform for further research on VR applications.

  3. Sound Transduction in the Auditory System of Bushcrickets

    Science.gov (United States)

    Nowotny, Manuela; Udayashankar, Arun Palghat; Weber, Melanie; Hummel, Jennifer; Kössl, Manfred

    2011-11-01

    Place-based frequency representation, called tonotopy, is a typical property of hearing organs for the discrimination of different frequencies. Due to its coiled structure and secure housing, it is difficult to access the mammalian cochlea. Hence, our knowledge about in vivo inner-ear mechanics is restricted to small regions. In this study, we present in vivo measurements that focus on the easily accessible, uncoiled auditory organs in bushcrickets, which are located in their foreleg tibiae. Sound enters the body via an opening at the lateral side of the thorax and passes through a horn-shaped acoustic trachea before reaching the high-frequency hearing organ called the crista acustica. In addition to the acoustic trachea as a structure that transmits incoming sound towards the hearing organ, bushcrickets also possess two tympana, specialized plate-like structures, on the anterior and posterior sides of each tibia. They provide a secondary path of excitation for the sensory receptors at low frequencies. We investigated the mechanics of the crista acustica in the tropical bushcricket Mecopoda elongata. The frequency-dependent motion of the crista acustica was captured using a laser Doppler vibrometer system. Using pure tone stimulation of the crista acustica, we could elicit traveling waves along the length of the hearing organ that move from the distal high-frequency to the proximal low-frequency region. In addition, distinct maxima in the velocity response of the crista acustica could be measured at ~7 and ~17 kHz. The travelling-wave-based tonotopy provides the basis for mechanical frequency discrimination along the crista acustica and opens up new possibilities for investigating traveling wave mechanics in vivo.

  4. Investigating the amplitude of interactive footstep sounds and soundscape reproduction

    DEFF Research Database (Denmark)

    Turchet, Luca; Serafin, Stefania

    2013-01-01

    In this paper, we study the perception of amplitude of soundscapes and interactively generated footstep sounds provided both through headphones and a surround sound system. In particular, we investigate whether there exists a value for the amplitude of soundscapes and footstep sounds which ... The amplitude of soundscapes does not significantly affect the selected amplitude of footstep sounds. Similarly, the perception of the soundscapes' amplitude is not significantly affected by the selected amplitude of footstep sounds.

  5. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

    The method presented in this thesis combines ultrasound techniques with magnetic resonance tomography (MRT). An ultrasonic wave generates a static force in the sound propagation direction in absorbing media. At sound intensities of a few W/cm² and sound frequencies in the lower MHz range, this force leads to a tissue shift in the micrometer range. This tissue shift depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence of Siemens Healthcare AG was modified so that it measures the tissue shift (indirectly), codes it as grey values, and presents it as a 2D picture. By means of the grey values, the slope of the sound beam in the tissue can be visualized, and additionally sound obstacles (changes of the sound impedance) can be detected. From the acquired MRT images, spatial changes of the tissue parameters sound absorption and elasticity can be detected. In this thesis, measurements are presented which show the feasibility and future potential of this method, especially for breast cancer diagnostics. [de]

  6. Learning about the Dynamic Sun through Sounds

    Science.gov (United States)

    Quinn, M.; Peticolas, L. M.; Luhmann, J.; MacCallum, J.

    2008-06-01

    Can we hear the Sun or its solar wind? Not in the sense that they make sound. But we can take the particle, magnetic field, electric field, and image data and turn it into sound to demonstrate what the data tells us. We present work on turning data from the two-satellite NASA mission called STEREO (Solar TErrestrial RElations Observatory) into sounds and music (sonification). STEREO has two satellites orbiting the Sun near Earth's orbit to study the coronal mass ejections (CMEs) from the Corona. One sonification project aims to inspire musicians, museum patrons, and the public to learn more about CMEs by downloading STEREO data and using it to make music. We demonstrate the software and discuss the way in which it was developed. A second project aims to produce a museum exhibit using STEREO imagery and sounds from STEREO data. We demonstrate a "walk across the Sun" created for this exhibit so people can hear the features on solar images. We show how pixel intensity translates into pitches from selectable scales with selectable musical scale size and octave locations. We also share our successes and lessons learned.
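    The "walk across the Sun" mapping described above can be sketched roughly as follows: pixel intensities along a scan line are quantized onto a selectable musical scale spanning a selectable number of octaves and converted to MIDI note numbers. The scale definitions, base note and parameters are illustrative assumptions, not the exhibit's actual software.

```python
import numpy as np

SCALES = {"major": [0, 2, 4, 5, 7, 9, 11], "pentatonic": [0, 2, 4, 7, 9]}

def pixels_to_midi(pixels, scale="pentatonic", base_note=48, octaves=3):
    """Map pixel intensities (0-255) onto MIDI notes drawn from a musical scale."""
    degrees = SCALES[scale]
    # All playable notes: scale degrees repeated over the requested octave range.
    notes = [base_note + 12 * octave + degree for octave in range(octaves) for degree in degrees]
    idx = (np.asarray(pixels, dtype=float) / 255.0 * (len(notes) - 1)).round().astype(int)
    return [notes[i] for i in idx]

print(pixels_to_midi([0, 64, 128, 192, 255]))   # darker pixels map to lower notes
```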

  7. The Calibration and error analysis of Shallow water (less than 100m) Multibeam Echo-Sounding System

    Science.gov (United States)

    Lin, M.

    2016-12-01

    Multibeam echo-sounders (MBES) have been developed to gather bathymetric and acoustic data for more efficient and more exact mapping of the oceans. This gain in efficiency does not come without drawbacks. Indeed, the finer the resolution of remote sensing instruments, the harder they are to calibrate. This is the case for multibeam echo-sounding systems (MBES). We are no longer dealing with sounding lines where the bathymetry must be interpolated between them to engender consistent representations of the seafloor. We now need to match together strips (swaths) of totally ensonified seabed. As a consequence, misalignment and time lag problems emerge as artifacts in the bathymetry from adjacent or overlapping swaths, particularly when operating in shallow water. More importantly, one must still verify that bathymetric data meet the accuracy requirements. This paper aims to summarize the system integration involved with MBES and identify the various sources of error pertaining to shallow water surveys (100 m and less). A systematic method for the calibration of shallow water MBES is proposed and presented as a set of field procedures. The procedures aim at detecting, quantifying and correcting systematic instrumental and installation errors. Hence, calibrating for variations of the speed of sound in the water column, which is natural in origin, is not addressed in this document. The data used in calibration are compared against International Hydrographic Organization (IHO) and other related standards. This paper also aims to establish a model for the specific survey area that can calibrate the errors due to the instruments. We construct a patch-test procedure, identify the possible sources of error in the sounding data, and calculate the error values needed to compensate for them. In general, the problems that have to be solved are the four patch-test corrections in the Hypack system: 1. roll, 2. GPS latency, 3. pitch, 4. yaw. Because these four corrections affect each other, we run each survey line
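    To make the attitude-correction idea concrete, here is a hedged sketch of how a residual roll bias estimated in a patch test might be removed from flat-seafloor beam solutions (depth and across-track distance from slant range and beam angle). Real MBES processing also handles refraction, pitch, yaw, latency and lever arms; the function below only illustrates the basic geometry and is not part of any particular software package.

```python
import math

def beam_footprint(slant_range_m, beam_angle_deg, roll_bias_deg=0.0):
    """Depth and across-track offset of one beam after removing an estimated roll bias."""
    theta = math.radians(beam_angle_deg - roll_bias_deg)   # corrected beam angle from nadir
    depth = slant_range_m * math.cos(theta)
    across_track = slant_range_m * math.sin(theta)
    return depth, across_track

# A 60-degree outer beam at 50 m slant range, with a 1-degree roll bias found in the patch test:
print(beam_footprint(50.0, 60.0, roll_bias_deg=1.0))
```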

  8. Application of Carbon Nanotube Assemblies for Sound Generation and Heat Dissipation

    Science.gov (United States)

    Kozlov, Mikhail; Haines, Carter; Oh, Jiyoung; Lima, Marcio; Fang, Shaoli

    2011-03-01

    Nanotech approaches were explored for the efficient transformation of an electrical signal into sound, heat, cooling action, and mechanical strain. The studies are based on the aligned arrays of multi-walled carbon nanotubes (MWNT forests) that can be grown on various substrates using a conventional CVD technique. They form a three-dimensional conductive network that possesses uncommon electrical, thermal, acoustic and mechanical properties. When heated with an alternating current or a near-IR laser modulated in the 0.01-20 kHz range, the nanotube forests produce loud, audible sound. High generated sound pressure and broad frequency response (beyond 20 kHz) show that the forests act as efficient thermo-acoustic (TA) transducers. They can generate intense third and fourth TA harmonics that reveal peculiar interference-like patterns from ac-dc voltage scans. A strong dependence of the patterns on forest height can be used for characterization of carbon nanotube assemblies and for evaluation of properties of thermal interfaces. Because of good coupling with surrounding air, the forests provide excellent dissipation of heat produced by IC chips. Thermoacoustic converters based on forests can be used for thermo- and photo-acoustic sound generation, amplification and noise cancellation.

  9. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly estimate the sound absorption and insulation properties of laminated structures and is handy ...

  10. Machine learning analysis of binaural rowing sounds

    DEFF Research Database (Denmark)

    Johard, Leonard; Ruffaldi, Emanuele; Hoffmann, Pablo F.

    2011-01-01

    Techniques for machine hearing are increasing their potential due to new application domains. In this work we address the analysis of rowing sounds in a natural context for the purpose of supporting a training system based on virtual environments. This paper presents the acquisition methodology and the evaluation of different machine learning techniques for classifying rowing-sound data. We see that a combination of principal component analysis and shallow networks performs equally well as deep architectures, while being much faster to train.

  11. Environmental Sound Recognition Using Time-Frequency Intersection Patterns

    Directory of Open Access Journals (Sweden)

    Xuan Guo

    2012-01-01

    Full Text Available Environmental sound recognition is an important function of robots and intelligent computer systems. In this research, we use a multistage perceptron neural network system for environmental sound recognition. The input data is a combination of time-variance pattern of instantaneous powers and frequency-variance pattern with instantaneous spectrum at the power peak, referred to as a time-frequency intersection pattern. Spectra of many environmental sounds change more slowly than those of speech or voice, so the intersectional time-frequency pattern will preserve the major features of environmental sounds but with drastically reduced data requirements. Two experiments were conducted using an original database and an open database created by the RWCP project. The recognition rate for 20 kinds of environmental sounds was 92%. The recognition rate of the new method was about 12% higher than methods using only an instantaneous spectrum. The results are also comparable with HMM-based methods, although those methods need to treat the time variance of an input vector series with more complicated computations.
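    A hedged sketch of the input representation described above: the time-variance pattern of instantaneous power is concatenated with the instantaneous spectrum taken at the power peak, giving a compact "time-frequency intersection" feature vector that could feed a multilayer perceptron. Frame size and the two vector lengths are assumptions for illustration.

```python
import numpy as np

def tf_intersection_pattern(signal, fs, frame=512, n_power=64, n_spec=64):
    """Concatenate a downsampled power envelope with the spectrum at the power peak."""
    n_frames = len(signal) // frame
    frames = signal[:n_frames * frame].reshape(n_frames, frame)
    power = (frames ** 2).mean(axis=1)
    # Time-variance pattern: power envelope resampled to a fixed length and normalized.
    envelope = np.interp(np.linspace(0, n_frames - 1, n_power), np.arange(n_frames), power)
    envelope /= envelope.max() + 1e-12
    # Frequency-variance pattern: magnitude spectrum of the frame with peak power.
    peak_frame = frames[int(np.argmax(power))]
    spectrum = np.abs(np.fft.rfft(peak_frame))[:n_spec]
    spectrum /= spectrum.max() + 1e-12
    return np.concatenate([envelope, spectrum])   # compact 128-dimensional feature vector
```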

  12. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2013-01-01

    Handbook for Sound Engineers is the most comprehensive reference available for audio engineers. All audio topics are explored: if you work on anything related to audio you should not be without this book! The 4th edition of this trusted reference has been updated to reflect changes in the industry since the publication of the 3rd edition in 2002 -- including new technologies like software-based recording systems such as Pro Tools and Sound Forge; digital recording using MP3, wave files and others; mobile audio devices such as iPods and MP3 players. Over 40 topic

  13. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  14. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2009-09-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  15. [Synchronous playing and acquiring of heart sounds and electrocardiogram based on labVIEW].

    Science.gov (United States)

    Dan, Chunmei; He, Wei; Zhou, Jing; Que, Xiaosheng

    2008-12-01

    This paper describes a comprehensive system that can acquire heart sounds and the electrocardiogram (ECG) in parallel, synchronize their display, and play the heart sounds, so that auscultation and phonocardiogram inspection can be tied together. The hardware system, with a C8051F340 as its core, acquires the heart sounds and ECG synchronously and then sends them to their respective indicators. Heart sounds are displayed and played simultaneously by controlling the moment of writing to the indicator and the sound output device. In clinical testing, heart sounds could be successfully located against the ECG and played in real time.

  16. Decoding the neural signatures of emotions expressed through sound.

    Science.gov (United States)

    Sachs, Matthew E; Habibi, Assal; Damasio, Antonio; Kaplan, Jonas T

    2018-03-01

    Effective social functioning relies in part on the ability to identify emotions from auditory stimuli and respond appropriately. Previous studies have uncovered brain regions engaged by the affective information conveyed by sound. But some of the acoustical properties of sounds that express certain emotions vary remarkably with the instrument used to produce them, for example the human voice or a violin. Do these brain regions respond in the same way to different emotions regardless of the sound source? To address this question, we had participants (N = 38, 20 females) listen to brief audio excerpts produced by the violin, clarinet, and human voice, each conveying one of three target emotions-happiness, sadness, and fear-while brain activity was measured with fMRI. We used multivoxel pattern analysis to test whether emotion-specific neural responses to the voice could predict emotion-specific neural responses to musical instruments and vice-versa. A whole-brain searchlight analysis revealed that patterns of activity within the primary and secondary auditory cortex, posterior insula, and parietal operculum were predictive of the affective content of sound both within and across instruments. Furthermore, classification accuracy within the anterior insula was correlated with behavioral measures of empathy. The findings suggest that these brain regions carry emotion-specific patterns that generalize across sounds with different acoustical properties. Also, individuals with greater empathic ability have more distinct neural patterns related to perceiving emotions. These results extend previous knowledge regarding how the human brain extracts emotional meaning from auditory stimuli and enables us to understand and connect with others effectively. Copyright © 2018 Elsevier Inc. All rights reserved.
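    The cross-decoding logic described above can be sketched with a linear classifier: train on voxel patterns evoked by one sound source (e.g. voice) and test on patterns evoked by another (e.g. violin), so that above-chance accuracy implies emotion-specific patterns that generalize across sources. The sketch below uses scikit-learn and random placeholder data; it is not the authors' searchlight pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical voxel patterns: (trials, voxels), with emotion labels 0=happy, 1=sad, 2=fear.
X_voice, y_voice = rng.normal(size=(90, 200)), np.repeat([0, 1, 2], 30)
X_violin, y_violin = rng.normal(size=(90, 200)), np.repeat([0, 1, 2], 30)

# Train on voice trials, test on violin trials (cross-instrument decoding).
clf = SVC(kernel="linear").fit(X_voice, y_voice)
print("cross-decoding accuracy:", accuracy_score(y_violin, clf.predict(X_violin)))
```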

  17. Beyond the Staff: “Alternative” Systems in the Graphical Representation of Organized Sound

    Directory of Open Access Journals (Sweden)

    Enrique Cámara de Landa

    2016-02-01

    Full Text Available In this article, a reflection on the limits of the staff in the representation of organized sound is briefly presented, followed by the consideration of the proposals that some ethnomusicologists have developed to highlight particular aspects of music. Some antecedents are provided, such as the synoptic transcription (Constantin Brăiloiu and the paradigmatic transcription (Nicolas Ruwet. Other proposals will be discussed, like the graphical representation of musical structure (Bernard Lortat-Jacob, Hugo Zemp or the use of spectrograms (Charles Seeger, Mireille Hellfer, Lortat-Jacob, Grazia Tuzi, graphic devices (Charles Adams, musemes (Philip Tagg, sonograms (Enrique Cámara, frame by frame musical transcription (Gerhard Kubik, and local systems of notation. According to these proposals, the graphical representation of music beyond the staff maintains its efficiency in current ethnomusicology (with different objectives and even different targets. Moreover, I will argue that it is necessary to take into consideration the place occupied by the use of these tools in the tensions and interactions between etic and emic perspectives, and the need to reconcile the internal consistency required for any system of visual representation of sound, with the need to make permanently flexible proposals based on intercultural dialogue.

  18. Applying cybernetic technology to diagnose human pulmonary sounds.

    Science.gov (United States)

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) are greater than 120 Hz and the human ear is not sensitive to low frequencies, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, and the PS signals were decomposed into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vectors of a neural network. We proposed a 2-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy compared with a single neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To expand traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waveforms, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
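    A hedged sketch of the feature-extraction stage described above, using PyWavelets: the lung sound is decomposed into wavelet subbands and simple statistics of each subband are collected into a feature vector (17 features in the paper; the particular statistics chosen below are assumptions, not the authors' exact set). The two-stage BP + LVQ classifier is not reproduced here; any standard neural network could consume these features.

```python
import numpy as np
import pywt

def wavelet_features(signal, wavelet="db4", level=5):
    """Statistics of each wavelet subband, to be fed to a neural-network classifier."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [approx, detail_level, ..., detail_1]
    features = []
    for band in coeffs:
        features.append(np.mean(np.abs(band)))            # average magnitude of the subband
        features.append(np.std(band))                     # spread of the subband
    # Ratios of mean magnitudes between adjacent subbands capture the spectral slope.
    for upper, lower in zip(coeffs[:-1], coeffs[1:]):
        features.append(np.mean(np.abs(upper)) / (np.mean(np.abs(lower)) + 1e-12))
    return np.array(features)   # 2*(level+1) + level = 17 features for level=5
```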

  19. Interactive Sonification of Spontaneous Movement of Children - Cross-modal Mapping and the Perception of Body Movement Qualities through Sound

    Directory of Open Access Journals (Sweden)

    Emma Frid

    2016-11-01

    Full Text Available In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude, and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data, and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed the children's spontaneous movement in terms of energy, smoothness and directness indices. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g. expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g. energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a cross-modal mapping.

  20. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus) and Humans with Transformations in Amplitude, Duration and Frequency

    Science.gov (United States)

    Branstetter, Brian K.; DeLong, Caroline M.; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly

    2016-01-01

    Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin’s (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin’s ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin’s acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition. PMID:26863519
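    The frequency transposition used as a stimulus transformation above amounts to multiplying every point of the whistle's frequency contour by a constant factor (2^(+1/2) or 2^(-1/2) for plus or minus half an octave), which preserves the contour shape on a log-frequency axis while shifting its absolute pitch. A small sketch under those assumptions, with a made-up contour:

```python
import numpy as np

def transpose_contour(contour_hz, octaves):
    """Shift a frequency contour by a number of octaves (0.5 = half an octave up)."""
    return np.asarray(contour_hz) * (2.0 ** octaves)

contour = np.array([6000, 7500, 9000, 8000, 6500])   # Hz, illustrative whistle contour
print(transpose_contour(contour, +0.5))              # half an octave up
print(transpose_contour(contour, -0.5))              # half an octave down
```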

  1. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus and Humans with Transformations in Amplitude, Duration and Frequency.

    Directory of Open Access Journals (Sweden)

    Brian K Branstetter

    Full Text Available Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin's (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin's ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin's acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition.

  2. Why Do People Like Loud Sound? A Qualitative Study.

    Science.gov (United States)

    Welch, David; Fremaux, Guy

    2017-08-11

    Many people choose to expose themselves to potentially dangerous sounds such as loud music, either via speakers, personal audio systems, or at clubs. The Conditioning, Adaptation and Acculturation to Loud Music (CAALM) Model has proposed a theoretical basis for this behaviour. To compare the model to data, we interviewed a group of people who were either regular nightclub-goers or who controlled the sound levels in nightclubs (bar managers, musicians, DJs, and sound engineers) about loud sound. Results showed four main themes relating to the enjoyment of loud sound: arousal/excitement, facilitation of socialisation, masking of both external sound and unwanted thoughts, and an emphasis and enhancement of personal identity. Furthermore, an interesting incidental finding was that sound levels appeared to increase gradually over the course of the evening until they plateaued at approximately 97 dBA Leq around midnight. Consideration of the data generated by the analysis revealed a complex of influential factors that support people in wanting exposure to loud sound. Findings were considered in terms of the CAALM Model and could be explained in terms of its principles. From a health promotion perspective, the Social Ecological Model was applied to consider how the themes identified might influence behaviour. They were shown to influence people on multiple levels, providing a powerful system which health promotion approaches struggle to address.

  3. Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter

    Science.gov (United States)

    Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.

    2017-04-01

    The development of science and technology in the field of healthcare increasingly provides convenience in diagnosing respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and stored in an audio file. These breath sounds are then analyzed by health practitioners to diagnose symptoms of disease or illness. However, breath sounds are not free from interference signals. Therefore, a noise filter or signal interference reduction system is required so that the breath sound components carrying the information signal can be clarified. In this study, we designed a wavelet transform based filter. The filter designed in this study uses a Daubechies wavelet with four wavelet transform coefficients. Based on testing with ten types of breath sound data, the largest SNR obtained was 74.3685 dB, for bronchial sounds.
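
    As a rough illustration of the kind of wavelet filtering described above (a Daubechies wavelet with four coefficients, i.e. 'db4'), the sketch below denoises a breath-sound recording and reports an SNR estimate. The file name, decomposition depth, thresholding rule, and the SNR proxy are illustrative assumptions, not details taken from the study.

```python
# A minimal sketch, assuming the general approach described above: denoise a
# breath-sound recording with a Daubechies wavelet ('db4', four coefficients)
# and soft thresholding, then report a rough SNR estimate. The file name,
# decomposition depth, threshold rule and SNR proxy are illustrative choices,
# not details taken from the study.
import numpy as np
import pywt
from scipy.io import wavfile

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail band
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

def snr_db(denoised, original):
    # Rough proxy: ratio of retained signal energy to the energy that was removed
    residual = original[: len(denoised)] - denoised
    return 10 * np.log10(np.sum(denoised ** 2) / (np.sum(residual ** 2) + 1e-12))

fs, breath = wavfile.read("bronchial.wav")   # hypothetical input recording
breath = breath.astype(float)
if breath.ndim > 1:                          # mix a stereo file down to mono
    breath = breath.mean(axis=1)
filtered = wavelet_denoise(breath)
print(f"SNR after filtering: {snr_db(filtered, breath):.1f} dB")
```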

  4. Earth Observing System (EOS)/ Advanced Microwave Sounding Unit-A (AMSU-A): Special Test Equipment. Software Requirements

    Science.gov (United States)

    Schwantje, Robert

    1995-01-01

    This document defines the functional, performance, and interface requirements for the Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) Special Test Equipment (STE) software used in the test and integration of the instruments.

  5. Sound transmission through pipe systems and into plate structures in buildings. A simplified sea model

    NARCIS (Netherlands)

    Bron-van der Jagt, G.S.

    2007-01-01

    In the study presented, it has been investigated whether Statistical Energy Analysis (SEA) could be applied in a simplified way as a framework for a prediction model regarding noise due to service equipment, specifically sound transmission within (plastic wastewater) pipe systems and between these

  6. Hybrid waste filler filled bio-polymer foam composites for sound absorbent materials

    Science.gov (United States)

    Rus, Anika Zafiah M.; Azahari, M. Shafiq M.; Kormin, Shaharuddin; Soon, Leong Bong; Zaliran, M. Taufiq; Ahraz Sadrina M. F., L.

    2017-09-01

    Sound absorption materials are a major requirement in many industries, and the sound insulation developed should reduce sound efficiently. It is also important to contribute economical ways of producing sound absorbing materials that are cheaper and user friendly. Thus, in this research, the sound absorbent properties of a bio-polymer foam filled with hybrid fillers of wood dust and waste tire rubber were investigated. Waste cooking oil from the crisp industry was converted into a bio-monomer, filled with different proportions of fillers, and fabricated into bio-polymer foam composites. Two fabrication methods were applied: the Close Mold Method (CMM) and the Open Mold Method (OMM). A total of four bio-polymer foam composite samples were produced for each method, with hybrid filler contents (a mixture of wood dust and waste tire rubber) of 2.5%, 5.0%, 7.5% and 10% weight-to-weight ratio with the bio-monomer. The sound absorption of the bio-polymer foam composite samples was tested using the impedance tube test according to ASTM E-1050, and Scanning Electron Microscopy was used to determine the morphology and porosity of the samples. The sound absorption coefficient (α) over different frequency ranges revealed that the polymer foam with 10.0% hybrid fillers shows the highest α of 0.963. The highest hybrid filler loading produced the smallest pore sizes but the most interconnected pores. This also revealed that when a highly porous material is exposed to incident sound waves, the air molecules at the surface of the material and within its pores are forced to vibrate and lose some of their original energy. It is concluded that bio-polymer foam filled with hybrid fillers is suitable for acoustic applications in automotive components such as dashboards, door panels, cushions, etc.
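
    The ASTM E-1050 test cited above is the two-microphone impedance-tube (transfer-function) method. The sketch below shows, under stated assumptions, how the normal-incidence absorption coefficient α can be computed from a measured transfer function between the two tube microphones; the spacing, sample distance, and demo transfer function are placeholders, not the authors' data.

```python
# A minimal sketch of the two-microphone transfer-function calculation behind
# the ASTM E-1050 impedance-tube test cited above. The microphone spacing s,
# the distance x1 from the sample face to the farther microphone, and the
# demo transfer function H12 are placeholder values, not the authors' data.
import numpy as np

def absorption_coefficient(freq, H12, s=0.05, x1=0.10, c=343.0):
    k = 2 * np.pi * freq / c                  # wavenumber
    H_incident = np.exp(-1j * k * s)          # incident-wave transfer function
    H_reflected = np.exp(1j * k * s)          # reflected-wave transfer function
    R = (H12 - H_incident) / (H_reflected - H12) * np.exp(2j * k * x1)
    return 1.0 - np.abs(R) ** 2               # alpha(f)

# Synthetic check: a sample that reflects 20% of the pressure amplitude (alpha = 0.96)
freq = np.array([500.0, 1000.0, 2000.0])
k = 2 * np.pi * freq / 343.0
R_true = 0.2
H12_demo = (np.exp(-1j * k * 0.05) + R_true * np.exp(-2j * k * 0.10) * np.exp(1j * k * 0.05)) \
           / (1 + R_true * np.exp(-2j * k * 0.10))
print(absorption_coefficient(freq, H12_demo))  # ~[0.96 0.96 0.96]
```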

  7. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption indicates the transformation of sound energy into heat. It is, for instance, employed to design the acoustics in rooms. The noise emitted by machinery and plants shall be reduced before arriving at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components at the walls with well-defined absorption characteristics, which are adjusted for corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise intensive environments into the neighbourhood.
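
    Where the text above mentions designing for a required reverberation time by installing absorbing components, a simple worked example is Sabine's classical relation RT60 = 0.161 V / A. The sketch below evaluates it for a hypothetical room; all dimensions and absorption coefficients are made-up values.

```python
# Sabine's formula: RT60 = 0.161 * V / A, with V the room volume in m^3 and
# A the equivalent absorption area in m^2 (sum of surface area x absorption
# coefficient). The room below is hypothetical.
def sabine_rt60(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    A = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / A

room = [(80.0, 0.05),    # floor, mostly reflective
        (80.0, 0.60),    # absorbing ceiling
        (108.0, 0.15)]   # walls
print(f"RT60 = {sabine_rt60(240.0, room):.2f} s")   # about 0.57 s for this example
```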

  8. 33 CFR 161.60 - Vessel Traffic Service Prince William Sound.

    Science.gov (United States)

    2010-07-01

    ... William Sound. 161.60 Section 161.60 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Movement Reporting System Areas and Reporting Points § 161.60 Vessel Traffic Service Prince William Sound... Cape Hinchinbrook Light to Schooner Rock Light, comprising that portion of Prince William Sound between...

  9. Sounds in context

    DEFF Research Database (Denmark)

    Weed, Ethan

    A sound is never just a sound. It is becoming increasingly clear that auditory processing is best thought of not as a one-way afferent stream, but rather as an ongoing interaction between interior processes and the environment. Even the earliest stages of auditory processing in the nervous system...... time-course of contextual influence on auditory processing in three different paradigms: a simple mismatch negativity paradigm with tones of differing pitch, a multi-feature mismatch negativity paradigm in which tones were embedded in a complex musical context, and a cross-modal paradigm, in which...... auditory processing of emotional speech was modulated by an accompanying visual context. I then discuss these results in terms of their implication for how we conceive of the auditory processing stream....

  10. Low-frequency sound affects active micromechanics in the human inner ear

    Science.gov (United States)

    Kugler, Kathrin; Wiegrebe, Lutz; Grothe, Benedikt; Kössl, Manfred; Gürkov, Robert; Krause, Eike; Drexl, Markus

    2014-01-01

    Noise-induced hearing loss is one of the most common auditory pathologies, resulting from overstimulation of the human cochlea, an exquisitely sensitive micromechanical device. At very low frequencies (less than 250 Hz), however, the sensitivity of human hearing, and therefore the perceived loudness, is poor. The perceived loudness is mediated by the inner hair cells of the cochlea which are driven very inadequately at low frequencies. To assess the impact of low-frequency (LF) sound, we exploited a by-product of the active amplification of sound that outer hair cells (OHCs) perform, so-called spontaneous otoacoustic emissions. These are faint sounds produced by the inner ear that can be used to detect changes of cochlear physiology. We show that a short exposure to perceptually unobtrusive, LF sounds significantly affects OHCs: a 90 s, 80 dB(A) LF sound induced slow, concordant and positively correlated frequency and level oscillations of spontaneous otoacoustic emissions that lasted for about 2 min after LF sound offset. LF sounds, contrary to their unobtrusive perception, strongly stimulate the human cochlea and affect amplification processes in the most sensitive and important frequency range of human hearing. PMID:26064536

  11. Puget Sound Area Electric Reliability Plan: Draft Environmental Impact Statement.

    Energy Technology Data Exchange (ETDEWEB)

    United States. Bonneville Power Administration.

    1991-09-01

    The Puget Sound Area Electric Reliability Plan Draft Environmental Impact Statement (DEIS) identifies the alternatives for solving a power system problem in the Puget Sound area. This Plan is undertaken by Bonneville Power Administration (BPA), Puget Sound Power Light, Seattle City Light, Snohomish Public Utility District No. 1 (PUD), and Tacoma Public Utilities. The Plan consists of potential actions in Puget Sound and other areas in the State of Washington. A specific need exists in the Puget Sound area for balance between east-west transmission capacity and the increasing demand to import power generated east of the Cascades. At certain times of the year, there is more demand for power than the electric system can supply in the Puget Sound area. This high demand, called peak demand, occurs during the winter months when unusually cold weather increases electricity use for heating. The existing power system can supply enough power if no emergencies occur. However, during emergencies, the system will not operate properly. As demand grows, the system becomes more strained. To meet demand, the rate of growth of demand must be reduced or the ability to serve the demand must be increased, or both. The plan to balance Puget Sound's power demand and supply has these purposes: The plan should define a set of actions that would accommodate ten years of load growth (1994--2003). Federal and State environmental quality requirements should be met. The plan should be consistent with the plans of the Northwest Power Planning Council. The plan should serve as a consensus guideline for coordinated utility action. The plan should be flexible to accommodate uncertainties and differing utility needs. The plan should balance environmental impacts and economic costs. The plan should provide electric system reliability consistent with customer expectations. 29 figs., 24 tabs.

  12. Sound production to electric discharge: sonic muscle evolution in progress in Synodontis spp. catfishes (Mochokidae).

    Science.gov (United States)

    Boyle, Kelly S; Colleye, Orphal; Parmentier, Eric

    2014-09-22

    Elucidating the origins of complex biological structures has been one of the major challenges of evolutionary studies. Within vertebrates, the capacity to produce regular coordinated electric organ discharges (EODs) has evolved independently in different fish lineages. Intermediate stages, however, are not known. We show that, within a single catfish genus, some species are able to produce sounds, electric discharges or both signals (though not simultaneously). We highlight that both acoustic and electric communication result from actions of the same muscle. In parallel to their abilities, the studied species show different degrees of myofibril development in the sonic and electric muscle. The lowest myofibril density was observed in Synodontis nigriventris, which produced EODs but no swim bladder sounds, whereas the greatest myofibril density was observed in Synodontis grandiops, the species that produced the longest sound trains but did not emit EODs. Additionally, S. grandiops exhibited the lowest auditory thresholds. Swim bladder sounds were similar among species, while EODs were distinctive at the species level. We hypothesize that communication with conspecifics favoured the development of species-specific EOD signals and suggest an evolutionary explanation for the transition from a fast sonic muscle to electrocytes. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  13. Sound classification of dwellings in the Nordic countries

    DEFF Research Database (Denmark)

    Rindel, Jens Holger; Turunen-Rise, Iiris

    1997-01-01

    A draft standard INSTA 122:1997 on sound classification of dwellings is for voting as a common national standard in the Nordic countries (Denmark, Norway, Sweden, Finland, Iceland) and in Estonia. The draft standard specifies a sound classification system with four classes A, B, C and D, where class C is proposed as the future minimum requirements for new dwellings. The classes B and A define criteria for dwellings with improved or very good acoustic conditions, whereas class D may be used for older, renovated dwellings in which the acoustic quality level of a new dwelling cannot reasonably be met. The classification system is based on limit values for airborne sound insulation, impact sound pressure level, reverberation time and indoor and outdoor noise levels. The purpose of the standard is to offer a tool for specification of a standardised acoustic climate and to promote constructors...

  14. Development of the Database for Environmental Sound Research and Application (DESRA): Design, Functionality, and Retrieval Considerations

    Directory of Open Access Journals (Sweden)

    Brian Gygi

    2010-01-01

    Full Text Available Theoretical and applied environmental sounds research is gaining prominence but progress has been hampered by the lack of a comprehensive, high quality, accessible database of environmental sounds. An ongoing project to develop such a resource is described, which is based upon experimental evidence as to the way we listen to sounds in the world. The database will include a large number of sounds produced by different sound sources, with a thorough background for each sound file, including experimentally obtained perceptual data. In this way DESRA can contain a wide variety of acoustic, contextual, semantic, and behavioral information related to an individual sound. It will be accessible on the Internet and will be useful to researchers, engineers, sound designers, and musicians.

  15. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  16. Ultrahromatizm as a Sound Meditation

    Directory of Open Access Journals (Sweden)

    Zaytseva Marina

    2016-08-01

    Full Text Available The article scientifically substantiates insights into the theory and practice of using microchromatics in modern musical art and defines the compositional and expressive possibilities of the microtonal system in the works of composers of the XXI century. It justifies the author's interpretation of the concept of “ultrahromatizm” as a principle of musical thinking, which is connected with the conception of sound space as a space-time continuum. The paper identifies the correlation between the notions “microchromatism” and “ultrahromatizm”. While microchromatism is understood, first and foremost, as the technique of dividing sound into microparticles, ultrahromatizm is interpreted as a principle of musical and artistic consciousness: the focus of musical consciousness on the formation of a specific model of sound meditation and understanding of the world.

  17. Effect of a sound wave on the stability of an argon discharge

    International Nuclear Information System (INIS)

    Galechyan, G.A.; Karapetyan, D.M.; Tavakalyan, L.B.

    1992-01-01

    The effect of a sound wave on the stability of the positive column of an argon discharge has been studied experimentally in the range of pressures from 40 to 180 torr and discharge currents from 40 to 110 mA in a tube with an interior diameter of 9.8 cm. It is shown that, depending on the intensity of the sound wave and the discharge parameters, sound can cause the positive column either to contract or to leave the contracted state. The electric field strength has been measured as a function of the sound intensity. An analogy between the effect of sound and that of longitudinal pumping of the gas on the argon discharge parameters has been established. The radial temperature of the gas has been studied in an argon discharge as a function of the sound intensity for different gas pressures. A direct relationship has been established between the sign of the detector effect produced by a sound wave in a discharge and the processes of contraction and filamentation of a discharge. 11 refs., 4 figs., 1 tab

  18. Replacing the Orchestra? - The Discernibility of Sample Library and Live Orchestra Sounds.

    Directory of Open Access Journals (Sweden)

    Reinhard Kopiez

    Full Text Available Recently, musical sounds from pre-recorded orchestra sample libraries (OSLs) have become indispensable in music production for the stage or popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. It could be shown that the entire sample of listeners (N = 602) on average identified the correct sound source at 72.5%. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing, simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resembled the majority of music consumers, only achieved 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons.
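
    As a quick, hedged sanity check of the comparison between the reported 72.5% overall rate (N = 602) and the 70% reference threshold, the snippet below runs a one-sided binomial test. Treating each listener's averaged outcome as a single Bernoulli trial is a simplification made only for this illustration, not how the authors analysed their data.

```python
# A quick, simplified check (not the authors' analysis): one-sided binomial test
# of whether 72.5% correct over N = 602 exceeds the 70% reference threshold.
# Treating each listener's averaged outcome as a single Bernoulli trial is an
# assumption made only for this illustration.
from scipy.stats import binomtest

n_listeners = 602
n_correct = round(0.725 * n_listeners)   # ~436 listener-equivalents correct
result = binomtest(n_correct, n_listeners, p=0.70, alternative="greater")
print(f"p-value against the 70% threshold: {result.pvalue:.3f}")
```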

  19. Different Types of Sounds and Their Relationship With the Electrocardiographic Signals and the Cardiovascular System – Review

    Directory of Open Access Journals (Sweden)

    Ennio H. Idrobo-Ávila

    2018-05-01

    Full Text Available Background: For some time now, the effects of sound, noise, and music on the human body have been studied. However, despite research done over time, it is still not completely clear what influence, interaction, and effects sounds have on the human body. That is why it is necessary to conduct new research on this topic. Thus, in this paper, a systematic review is undertaken in order to integrate research related to several types of sound, both pleasant and unpleasant, specifically noise and music. In addition, it includes as much research as possible to give stakeholders a more general vision of relevant elements regarding methodologies, study subjects, stimuli, analysis, and experimental designs in general. This study has been conducted in order to make a genuine contribution to this area and perhaps to raise the quality of future research about sound and its effects on ECG signals. Methods: This review was carried out by independent researchers, through three search equations, in four different databases covering engineering, medicine, and psychology. Inclusion and exclusion criteria were applied, and studies published between 1999 and 2017 were considered. The selected documents were read and analyzed independently by each group of researchers, and conclusions were subsequently established among all of them. Results: Despite the differences between the outcomes of the selected studies, some common factors were found among them. Thus, in noise studies where both BP and HR increased or tended to increase, it was noted that HRV (HF and LF/HF) changes with both sound and noise stimuli, whereas GSR changes with sound and musical stimuli. Furthermore, LF also showed changes with exposure to noise. Conclusion: In many cases, samples displayed a limitation in experimental design, and in diverse studies there was a lack of a control group. There was a lot of variability in the presented stimuli providing a wide overview of the effects they could

  20. Influence of visual stimuli in the sound quality evaluation of loudspeaker systems

    DEFF Research Database (Denmark)

    Karandreas, Theodoros-Alexandros; Christensen, Flemming

    2006-01-01

    There is currently an increasing demand to evaluate the sound quality attributes of products and understand to what extent they influence a user's overall impression, since there is usually more than one modality stimulating this evaluation. The present study uses loudspeakers as an example...... and evaluates the overall impression in relation to hearing and vision. In order to quantify the bias that the image of a loudspeaker has on the sound quality evaluation done by a naive listening panel, loudspeaker sounds of varied degradation are coupled with positively or negatively biasing visual input...

  1. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic sounds, to judge the presence or absence of an abnormality based on the magnitude of the synchronized components. A synchronized-component sampling means can remove resonance sounds and other acoustic sounds not synchronized with the rotation, based on the knowledge that the acoustic components generated in the normal state are a sort of resonance sound and are not precisely synchronized with the rotation speed. On the other hand, abnormal sounds of a rotating body are often caused by forced excitation accompanying the rotation as a generation source, and such abnormal sounds can be detected by extracting only the rotation-synchronized components. Since the components of the normal acoustic sounds currently generated are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing can be avoided and, as a result, the abnormal sound detection sensitivity can be improved. Further, since the device discriminates the occurrence of abnormal sounds from the actually detected sounds, other frequency components which are forecast but not actually generated are not removed, which further improves detection sensitivity. (N.H.)

  2. Microscopic theory of longitudinal sound velocity in CDW and SDW ordered cuprate systems

    International Nuclear Information System (INIS)

    Rout, G.C.; Panda, S.K.

    2011-01-01

    Research highlights: → Reported the study of the interplay of the CDW and SDW interactions in the high-Tc cuprates. → The longitudinal velocity of sound is studied in the under-doped region. → The velocity of sound exhibits suppression in both the CDW and SDW phases. → Strong electron-phonon interaction is observed in normal phases. - Abstract: We address here the self-consistent calculation of the spin density wave and the charge density wave gap parameters for high-Tc cuprates on the basis of the Hubbard model. In order to describe the experimental observations for the velocity of sound, we consider the phonon coupling to the conduction band in the harmonic approximation and then the expression for the temperature dependent velocity of sound is calculated from the real part of the phonon Green's function. The effects of the electron-phonon coupling, the frequency of the sound wave, the hole doping concentration, the CDW coupling and the SDW coupling parameters on the sound velocity are investigated in the pure CDW phase as well as in the co-existence phase of the CDW and SDW states. The results are discussed to explain the experimental observations.

  3. Microscopic theory of longitudinal sound velocity in CDW and SDW ordered cuprate systems

    Energy Technology Data Exchange (ETDEWEB)

    Rout, G.C., E-mail: gcr@iopb.res.i [Condensed Matter Physics Group, PG Dept. of Applied Physics and Ballistics, FM University, Balasore 756 019 (India); Panda, S K [KD Science College, Pochilima, Hinjilicut 761 101, Ganjam, Orissa (India)

    2011-02-15

    Research highlights: → Reported the study of the interplay of the CDW and SDW interactions in the high-Tc cuprates. → The longitudinal velocity of sound is studied in the under-doped region. → The velocity of sound exhibits suppression in both the CDW and SDW phases. → Strong electron-phonon interaction is observed in normal phases. - Abstract: We address here the self-consistent calculation of the spin density wave and the charge density wave gap parameters for high-Tc cuprates on the basis of the Hubbard model. In order to describe the experimental observations for the velocity of sound, we consider the phonon coupling to the conduction band in the harmonic approximation and then the expression for the temperature dependent velocity of sound is calculated from the real part of the phonon Green's function. The effects of the electron-phonon coupling, the frequency of the sound wave, the hole doping concentration, the CDW coupling and the SDW coupling parameters on the sound velocity are investigated in the pure CDW phase as well as in the co-existence phase of the CDW and SDW states. The results are discussed to explain the experimental observations.

  5. A method for estimating the orientation of a directional sound source from source directivity and multi-microphone recordings: principles and application

    DEFF Research Database (Denmark)

    Guarato, Francesco; Jakobsen, Lasse; Vanderelst, Dieter

    2011-01-01

    Taking into account directivity of real sound sources makes it possible to try solving an interesting and biologically relevant problem: estimating the orientation in three-dimensional space of a directional sound source. The source, of known directivity, produces a broadband signal (in the ultra...

  6. The effect of frequency-specific sound signals on the germination of maize seeds.

    Science.gov (United States)

    Vicient, Carlos M

    2017-07-25

    The effects of sound treatments on the germination of maize seeds were determined. White noise and bass sounds (300 Hz) had a positive effect on the germination rate. A treatment of only 3 h produced an increase of about 8%, and 5 h increased germination by about 10%. Fast-green staining shows that at least part of the effect of sound is due to a physical alteration of the integrity of the pericarp, increasing its porosity and facilitating oxygen availability and the uptake of water and oxygen. Accordingly, when the pericarp was removed from the seeds, the positive effect of the sound on germination disappeared.

  7. Analysis of the drilling sound in maxillo-facial surgery

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Gosselin, Florian; Taha, Farid

    2009-01-01

    Auditory feedback can have great potential in surgical simulators that aim at training skills associated with the correct interpretation of acoustic information. Here, we present a preliminary analysis of the sound that is produced by the drilling procedure in maxillo-facial surgery when...... performed by expert surgeons. The motivation of this work is to find relevant acoustic parameters that allow for an efficient synthesis method for the drilling sound and to set the basis of the audio component in the simulator so that expert surgical drilling can effectively be conveyed to users...

  8. Behavioral response of manatees to variations in environmental sound levels

    Science.gov (United States)

    Miksis-Olds, Jennifer L.; Wagner, Tyler

    2011-01-01

    Florida manatees (Trichechus manatus latirostris) inhabit coastal regions because they feed on the aquatic vegetation that grows in shallow waters, which are the same areas where human activities are greatest. Noise produced from anthropogenic and natural sources has the potential to affect these animals by eliciting responses ranging from mild behavioral changes to extreme aversion. Sound levels were calculated from recordings made throughout behavioral observation periods. An information theoretic approach was used to investigate the relationship between behavior patterns and sound level. Results indicated that elevated sound levels affect manatee activity and are a function of behavioral state. The proportion of time manatees spent feeding and milling changed in response to sound level. When ambient sound levels were highest, more time was spent in the directed, goal-oriented behavior of feeding, whereas less time was spent engaged in undirected behavior such as milling. This work illustrates how shifts in activity of individual manatees may be useful parameters for identifying impacts of noise on manatees and might inform population level effects.

  9. Physically based sound synthesis and control of jumping sounds on an elastic trampoline

    DEFF Research Database (Denmark)

    Turchet, Luca; Pugliese, Roberto; Takala, Tapio

    2013-01-01

    This paper describes a system to interactively sonify the foot-floor contacts resulting from jumping on an elastic trampoline. The sonification was achieved by means of a synthesis engine based on physical models reproducing the sounds of jumping on several surface materials. The engine was controlled in real-time by processing the signal captured by a contact microphone which was attached to the membrane of the trampoline in order to detect each jump. A user study was conducted to evaluate the quality of the interactive sonification. Results proved the success of the proposed algorithms...
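
    A minimal sketch, under stated assumptions, of how jump impacts might be detected from a contact-microphone signal: a simple energy-envelope threshold with a refractory period. The authors' actual detection algorithm and parameters are not given here; the frame size, threshold, and refractory time below are illustrative.

```python
# Illustrative sketch (not the authors' implementation) of detecting individual
# jump impacts from a contact-microphone signal: an RMS-envelope threshold with
# a refractory period. Frame size, threshold and refractory time are assumed values.
import numpy as np

def detect_jumps(x, fs, frame=256, threshold=0.1, refractory_s=0.25):
    """Return sample indices where a jump impact is detected."""
    n_frames = len(x) // frame
    env = np.array([np.sqrt(np.mean(x[i * frame:(i + 1) * frame] ** 2)) for i in range(n_frames)])
    env = env / (env.max() + 1e-12)          # normalise the envelope
    onsets, last_t = [], -np.inf
    for i in range(1, n_frames):
        t = i * frame / fs
        if env[i] >= threshold and env[i - 1] < threshold and t - last_t >= refractory_s:
            onsets.append(i * frame)
            last_t = t
    return onsets

# Synthetic test: two decaying impacts, 1 s apart, in faint background noise
fs = 8000
t = np.arange(3 * fs) / fs
x = 0.005 * np.random.randn(len(t))
for t0 in (0.5, 1.5):
    idx = t >= t0
    x[idx] += np.exp(-30 * (t[idx] - t0)) * np.sin(2 * np.pi * 120 * (t[idx] - t0))
print(detect_jumps(x, fs))                   # two onsets, near samples 4000 and 12000
```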

  10. Neuromimetic Sound Representation for Percept Detection and Manipulation

    Directory of Open Access Journals (Sweden)

    Chi Taishih

    2005-01-01

    Full Text Available The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal sampled at . Work on bringing the algorithms into the real-time processing domain is ongoing.

  11. Sensitivity of the regional ocean acidification and carbonate system in Puget Sound to ocean and freshwater inputs

    Directory of Open Access Journals (Sweden)

    Laura Bianucci

    2018-03-01

    Full Text Available While ocean acidification was first investigated as a global phenomenon, coastal acidification has received significant attention in recent years, as its impacts have been felt by different socio-economic sectors (e.g., high mortality of shellfish larvae in aquaculture farms). As a region that connects land and ocean, the Salish Sea (consisting of Puget Sound and the Straits of Juan de Fuca and Georgia) receives inputs from many different sources (rivers, wastewater treatment plants, industrial waste treatment facilities, etc.), making these coastal waters vulnerable to acidification. Moreover, the lowering of pH in the Northeast Pacific Ocean also affects the Salish Sea, as more acidic waters get transported into the bottom waters of the straits and estuaries. Here, we use a numerical ocean model of the Salish Sea to improve our understanding of the carbonate system in Puget Sound; in particular, we studied the sensitivity of carbonate variables (e.g., dissolved inorganic carbon, total alkalinity, pH, saturation state of aragonite) to ocean and freshwater inputs. The model is an updated version of our FVCOM-ICM framework, with new carbonate-system and sediment modules. Sensitivity experiments altering concentrations at the open boundaries and freshwater sources indicate that not only ocean conditions entering the Strait of Juan de Fuca, but also the dilution of carbonate variables by freshwater sources, are key drivers of the carbonate system in Puget Sound.

  12. Modular and Adaptive Control of Sound Processing

    Science.gov (United States)

    van Nort, Douglas

    This dissertation presents research into the creation of systems for the control of sound synthesis and processing. The focus differs from much of the work related to digital musical instrument design, which has rightly concentrated on the physicality of the instrument and interface: sensor design, choice of controller, feedback to performer and so on. Often times a particular choice of sound processing is made, and the resultant parameters from the physical interface are conditioned and mapped to the available sound parameters in an exploratory fashion. The main goal of the work presented here is to demonstrate the importance of the space that lies between physical interface design and the choice of sound manipulation algorithm, and to present a new framework for instrument design that strongly considers this essential part of the design process. In particular, this research takes the viewpoint that instrument designs should be considered in a musical control context, and that both control and sound dynamics must be considered in tandem. In order to achieve this holistic approach, the work presented in this dissertation assumes complementary points of view. Instrument design is first seen as a function of musical context, focusing on electroacoustic music and leading to a view on gesture that relates perceived musical intent to the dynamics of an instrumental system. The important design concept of mapping is then discussed from a theoretical and conceptual point of view, relating perceptual, systems and mathematically-oriented ways of examining the subject. This theoretical framework gives rise to a mapping design space, functional analysis of pertinent existing literature, implementations of mapping tools, instrumental control designs and several perceptual studies that explore the influence of mapping structure. Each of these reflect a high-level approach in which control structures are imposed on top of a high-dimensional space of control and sound synthesis

  13. Oyster larvae settle in response to habitat-associated underwater sounds.

    Science.gov (United States)

    Lillis, Ashlee; Eggleston, David B; Bohnenstiehl, DelWayne R

    2013-01-01

    Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica). Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a role in driving
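
    As an illustration of how such band-limited soundscape differences can be quantified, the sketch below computes an uncalibrated band level in the 1.5-20 kHz range mentioned above. File names and bit depth are assumptions, and without hydrophone calibration the levels are relative rather than absolute; this is not the study's analysis code.

```python
# Illustrative sketch (not the study's analysis code) for comparing recordings
# by their level in the 1.5-20 kHz band highlighted above. File names are
# hypothetical and, without hydrophone calibration, the levels are relative
# (dB re digital full scale of an assumed 16-bit recording), not absolute.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def band_level_db(path, f_lo=1500.0, f_hi=20000.0):
    fs, x = wavfile.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                    # mix to mono
    x = x.astype(float) / 32768.0             # assumes 16-bit PCM
    f_hi = min(f_hi, 0.45 * fs)               # keep the band below Nyquist
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    return 20 * np.log10(np.sqrt(np.mean(y ** 2)) + 1e-12)

# Hypothetical usage:
# print(band_level_db("oyster_reef.wav") - band_level_db("off_reef.wav"), "dB difference")
```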

  14. Oyster larvae settle in response to habitat-associated underwater sounds.

    Directory of Open Access Journals (Sweden)

    Ashlee Lillis

    Full Text Available Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica). Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a

  15. THE SOUNDNESS OF THE BANKING SYSTEM DURING THE GLOBAL FINANCIAL CRISIS

    Directory of Open Access Journals (Sweden)

    Ioana-Iuliana TOMULEASA

    2014-04-01

    Full Text Available The economic world is currently under the sign of profound changes, determined to a significant extent by mutations in financial markets and by regulatory and institutional changes, illustrating their powerful impact on financial system actors. The paper's main purpose is to provide a comparative analysis of the performance and efficiency of commercial banks in seven European Union countries and an empirical analysis of the soundness of the Romanian banking system. The analysis undertaken in the paper highlights the need for banks to make essential adjustments to their activity, such as the orientation to a new banking model or adaptation to the latest regulations and tighter conditions of supervision in the financial sector. A series of issues are pointed out which capture the overwhelming implications of the global financial crisis for the “health” of the financial system in the EU, noting the need for further measures whose main goal is to avoid a collapse of the financial system.

  16. Physiological phenotyping of dementias using emotional sounds.

    Science.gov (United States)

    Fletcher, Phillip D; Nicholas, Jennifer M; Shakespeare, Timothy J; Downey, Laura E; Golden, Hannah L; Agustus, Jennifer L; Clark, Camilla N; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-06-01

    Emotional behavioral disturbances are hallmarks of many dementias but their pathophysiology is poorly understood. Here we addressed this issue using the paradigm of emotionally salient sounds. Pupil responses and affective valence ratings for nonverbal sounds of varying emotional salience were assessed in patients with behavioral variant frontotemporal dementia (bvFTD) (n = 14), semantic dementia (SD) (n = 10), progressive nonfluent aphasia (PNFA) (n = 12), and AD (n = 10) versus healthy age-matched individuals (n = 26). Referenced to healthy individuals, overall autonomic reactivity to sound was normal in Alzheimer's disease (AD) but reduced in other syndromes. Patients with bvFTD, SD, and AD showed altered coupling between pupillary and affective behavioral responses to emotionally salient sounds. Emotional sounds are a useful model system for analyzing how dementias affect the processing of salient environmental signals, with implications for defining pathophysiological mechanisms and novel biomarker development.

  17. Sound reduction by metamaterial-based acoustic enclosure

    Directory of Open Access Journals (Sweden)

    Shanshan Yao

    2014-12-01

    Full Text Available In many practical systems, acoustic radiation control on noise sources contained within a finite volume by an acoustic enclosure is of great importance, but difficult to be accomplished at low frequencies due to the enhanced acoustic-structure interaction. In this work, we propose to use acoustic metamaterials as the enclosure to efficiently reduce sound radiation at their negative-mass frequencies. Based on a circularly-shaped metamaterial model, sound radiation properties by either central or eccentric sources are analyzed by numerical simulations for structured metamaterials. The parametric analyses demonstrate that the barrier thickness, the cavity size, the source type, and the eccentricity of the source have a profound effect on the sound reduction. It is found that increasing the thickness of the metamaterial barrier is an efficient approach to achieve large sound reduction over the negative-mass frequencies. These results are helpful in designing highly efficient acoustic enclosures for blockage of sound in low frequencies.

  18. Sound reduction by metamaterial-based acoustic enclosure

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Shanshan; Li, Pei; Zhou, Xiaoming; Hu, Gengkai, E-mail: hugeng@bit.edu.cn [Key Laboratory of Dynamics and Control of Flight Vehicle, Ministry of Education and School of Aerospace Engineering, Beijing Institute of Technology, Beijing 100081 (China)

    2014-12-15

    In many practical systems, acoustic radiation control on noise sources contained within a finite volume by an acoustic enclosure is of great importance, but difficult to be accomplished at low frequencies due to the enhanced acoustic-structure interaction. In this work, we propose to use acoustic metamaterials as the enclosure to efficiently reduce sound radiation at their negative-mass frequencies. Based on a circularly-shaped metamaterial model, sound radiation properties by either central or eccentric sources are analyzed by numerical simulations for structured metamaterials. The parametric analyses demonstrate that the barrier thickness, the cavity size, the source type, and the eccentricity of the source have a profound effect on the sound reduction. It is found that increasing the thickness of the metamaterial barrier is an efficient approach to achieve large sound reduction over the negative-mass frequencies. These results are helpful in designing highly efficient acoustic enclosures for blockage of sound in low frequencies.

  19. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...

  20. Sound source localization and segregation with internally coupled ears

    DEFF Research Database (Denmark)

    Bee, Mark A; Christensen-Dalsgaard, Jakob

    2016-01-01

    to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla......, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating...

  1. Tinnitus (Phantom Sound): Risk coming for future

    Directory of Open Access Journals (Sweden)

    Suresh Rewar

    2015-01-01

    Full Text Available The word 'tinnitus' comes from the Latin word tinnire, meaning “to ring” or “a ringing.” Tinnitus is the perception of sound in the absence of any corresponding external sound. It can take the form of continuous buzzing, hissing, or ringing, or a combination of these or other characteristics, and affects 10% to 25% of the adult population. Tinnitus is classified into objective and subjective categories. Subjective tinnitus consists of meaningless sounds that are not associated with a physical sound and that only the person who has the tinnitus can hear. Objective tinnitus is the result of a sound that can be heard by the physician. Tinnitus is not a disease in itself but a common symptom, and because it involves the perception of sound or sounds, it is commonly associated with the hearing system. In fact, various parts of the hearing system, including the inner ear, are often responsible for this symptom. Tinnitus can burden patients heavily, leading to sleep disturbances, concentration problems, fatigue, depression, anxiety disorders, and sometimes even suicide. The evaluation of tinnitus always begins with a thorough history and physical examination, with further testing performed when indicated. Diagnostic testing should include audiography and speech discrimination testing; computed tomography angiography or magnetic resonance angiography should be performed when indicated. All patients with tinnitus can benefit from patient education and preventive measures, and oftentimes the physician's reassurance and assistance with the psychologic aftereffects of tinnitus can be the therapy most valuable to the patient. There are no specific medications for the treatment of tinnitus; sedatives and some other medications may prove helpful in the early stages. The ultimate goal of neuro-imaging is to identify subtypes of tinnitus in order to better inform treatment strategies.

  2. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

    Science.gov (United States)

    Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert

    2005-12-01

    A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
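
    The sketch below is a highly simplified stand-in for this approach: a few coarse descriptors loosely inspired by the feature families named above (amplitude modulation, spectral profile) feed a minimum-distance classifier. The specific features, classes, and toy training data are illustrative assumptions, not the system described in the paper.

```python
# Highly simplified stand-in (not the paper's feature set or classifiers): a few
# coarse descriptors loosely inspired by the feature families named above feed a
# minimum-distance (nearest class-mean) classifier. Classes, features and the
# toy training data are illustrative assumptions.
import numpy as np

def features(x, fs):
    """Coarse descriptors: amplitude-modulation depth, spectral centroid, crest factor."""
    env = np.abs(x)
    am_depth = env.std() / (env.mean() + 1e-12)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
    crest = np.max(np.abs(x)) / (np.sqrt(np.mean(x ** 2)) + 1e-12)
    return np.array([am_depth, centroid / (fs / 2), crest])

def train_minimum_distance(clips, labels, fs):
    feats = np.array([features(c, fs) for c in clips])
    return {cl: feats[[l == cl for l in labels]].mean(axis=0) for cl in sorted(set(labels))}

def classify(clip, class_means, fs):
    f = features(clip, fs)
    return min(class_means, key=lambda cl: np.linalg.norm(f - class_means[cl]))

# Toy usage with a synthetic "speech-like" (slowly modulated tone) clip and a noise clip
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
noise = 0.3 * np.random.randn(fs)
means = train_minimum_distance([speech_like, noise], ["speech", "noise"], fs)
print(classify(0.8 * speech_like, means, fs))   # expected: "speech"
```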

  3. Detection System of Sound Noise Level (SNL) Based on Condenser Microphone Sensor

    Science.gov (United States)

    Rajagukguk, Juniastel; Eka Sari, Nurdieni

    2018-03-01

    The research aims to measure the noise level, using an Arduino Uno to process the data from the sensor input; the device is called the Sound Noise Level (SNL) detector. The instrument works as a noise detector, showing the noise level on an LCD indicator and giving audiovisual notifications. Noise is detected with a condenser microphone sensor and an LM567 IC op-amp, assembled so that the sounds captured by the sensor are converted from sinusoidal acoustic waves into an alternating sinusoidal electric current that can be read by the Arduino Uno. The device is equipped with a set of LED and sound indicators as well as text notifications on a 16x2 LCD. The indicators work as follows: if the measured noise is > 75 dB, the buzzer beeps and the red LED lights up, indicating danger; if the measured value is higher than 56 dB, the buzzer beeps and the yellow LED turns on, indicating a noisy environment; and if the measured value is < 55 dB, the indicators stay quiet, indicating a calm environment. The results show that the SNL device is capable of detecting and displaying noise levels over a measuring range of 50-100 dB and of delivering audiovisual noise notifications.
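
    A minimal sketch of the reported threshold logic, written in Python rather than as Arduino firmware: an RMS-based level estimate is mapped to the three notification states described above. The calibration constant that converts raw samples into dB is a hypothetical value and would have to be measured against a reference sound level meter.

```python
# Minimal sketch of the reported threshold logic, in Python rather than Arduino
# firmware: an RMS-based level estimate is mapped to the three notification
# states (> 75 dB danger, > 56 dB noisy, < 55 dB calm). The calibration constant
# converting raw samples to dB is a hypothetical value.
import numpy as np

CAL_OFFSET_DB = 94.0   # assumed calibration: dB level corresponding to full-scale RMS = 1.0

def estimated_level_db(samples):
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(rms + 1e-12) + CAL_OFFSET_DB

def notification(level_db):
    if level_db > 75:
        return "red LED + beep (danger)"
    elif level_db > 56:
        return "yellow LED + beep (noisy)"
    else:
        return "quiet (calm)"

for target_level in (50, 60, 80):
    # synthesize a 1 kHz tone whose RMS matches the target level under the assumed calibration
    amp = np.sqrt(2) * 10 ** ((target_level - CAL_OFFSET_DB) / 20)
    tone = amp * np.sin(2 * np.pi * 1000 * np.arange(4800) / 48000)
    print(target_level, "dB ->", notification(estimated_level_db(tone)))
```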

  4. Thump, ring: the sound of a bouncing ball

    International Nuclear Information System (INIS)

    Katz, J I

    2010-01-01

    A basketball bounced on a stiff surface produces a characteristic loud thump, followed by a high-pitched ringing. Describing the ball as an inextensible but flexible membrane containing compressed air, I formulate an approximate theory of the generation of these sounds and predict their amplitudes and waveforms.

  5. Thump, ring: the sound of a bouncing ball

    Energy Technology Data Exchange (ETDEWEB)

    Katz, J I, E-mail: katz@wuphys.wustl.ed [Department of Physics and McDonnell Center for the Space Sciences, Washington University, St Louis, MO 63130 (United States)

    2010-07-15

    A basketball bounced on a stiff surface produces a characteristic loud thump, followed by a high-pitched ringing. Describing the ball as an inextensible but flexible membrane containing compressed air, I formulate an approximate theory of the generation of these sounds and predict their amplitudes and waveforms.

  6. Method for measuring violin sound radiation based on bowed glissandi and its application to sound synthesis.

    Science.gov (United States)

    Perez Carrillo, Alfonso; Bonada, Jordi; Patynen, Jukka; Valimaki, Vesa

    2011-08-01

    This work presents a method for measuring and computing violin-body directional frequency responses, which are used for violin sound synthesis. The approach is based on a frame-weighted deconvolution of excitation and response signals. The excitation, consisting of bowed glissandi, is measured with piezoelectric transducers built into the bridge. Radiation responses are recorded in an anechoic chamber with multiple microphones placed at different angles around the violin. The proposed deconvolution algorithm computes impulse responses that, when convolved with any source signal (captured with the same transducer), produce a highly realistic violin sound very similar to that of a microphone recording. The use of motion sensors allows for tracking violin movements. Combining this information with the directional responses and using a dynamic convolution algorithm, helps to improve the listening experience by incorporating the violinist motion effect in stereo.
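
    One plausible reading of the frame-weighted deconvolution mentioned above is an averaged, energy-weighted spectral division of the microphone signal by the bridge-transducer excitation; the sketch below implements that idea with a synthetic check. The weighting scheme, FFT sizes, and regularisation are assumptions rather than the authors' exact procedure.

```python
# Hedged sketch of estimating an impulse response by deconvolving a microphone
# recording against the bridge-transducer excitation. Frames are weighted by
# excitation energy before averaging, which is one plausible reading of
# "frame-weighted deconvolution"; FFT size, hop and regularisation are assumptions.
import numpy as np

def estimate_impulse_response(excitation, response, n_fft=4096, hop=1024, eps=1e-8):
    Sxy = np.zeros(n_fft // 2 + 1, dtype=complex)   # weighted cross-spectrum
    Sxx = np.zeros(n_fft // 2 + 1)                  # weighted excitation auto-spectrum
    win = np.hanning(n_fft)
    for start in range(0, len(excitation) - n_fft, hop):
        x = excitation[start:start + n_fft] * win
        y = response[start:start + n_fft] * win
        w = np.sum(x ** 2)                          # frame weight: excitation energy
        X, Y = np.fft.rfft(x), np.fft.rfft(y)
        Sxy += w * np.conj(X) * Y
        Sxx += w * np.abs(X) ** 2
    H = Sxy / (Sxx + eps)                           # regularised spectral division
    return np.fft.irfft(H)

# Toy check: recover a known 4-tap response from a wide-band, glissando-like sweep
fs = 44100
t = np.arange(2 * fs) / fs
sweep = np.sin(2 * np.pi * (100 * t + 4975 * t ** 2))    # ~100 Hz to ~20 kHz over 2 s
h_true = np.array([0.0, 0.6, -0.3, 0.1])
mic = np.convolve(sweep, h_true)[: len(sweep)] + 0.01 * np.random.randn(len(sweep))
print(np.round(estimate_impulse_response(sweep, mic)[:4], 2))   # roughly [0, 0.6, -0.3, 0.1]
```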

  7. Decoding sound level in the marmoset primary auditory cortex.

    Science.gov (United States)

    Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L

    2017-10-01

    Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons. NEW & NOTEWORTHY Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts. Copyright © 2017 the American Physiological Society.
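
    In the spirit of the simulations described above (though far simpler), the sketch below compares level-decoding accuracy for toy populations of monotonic, nonmonotonic, and mixed model neurons using a nearest-template decoder. The tuning-curve shapes, Poisson noise model, and decoder are illustrative assumptions, not the authors' methods or data.

```python
# Toy simulation in the spirit of the study above (but far simpler): compare
# level-decoding accuracy for populations of monotonic, nonmonotonic and mixed
# model neurons with a nearest-template decoder. Tuning-curve shapes, Poisson
# noise and the decoder are illustrative assumptions, not the authors' methods.
import numpy as np

rng = np.random.default_rng(0)
levels = np.arange(0, 81, 10)                  # sound levels in dB SPL

def monotonic():
    thresh = rng.uniform(0, 40)
    return lambda L: 40 / (1 + np.exp(-(L - thresh) / 8))              # sigmoid rate-level

def nonmonotonic():
    best = rng.uniform(20, 70)
    return lambda L: 40 * np.exp(-((L - best) ** 2) / (2 * 12 ** 2))   # level-tuned

def decode_accuracy(neurons, n_trials=500):
    templates = np.array([[f(L) for f in neurons] for L in levels])    # mean responses
    correct = 0
    for _ in range(n_trials):
        true_idx = rng.integers(len(levels))
        resp = rng.poisson(templates[true_idx])                        # noisy population response
        est_idx = np.argmin(np.sum((templates - resp) ** 2, axis=1))
        correct += est_idx == true_idx
    return correct / n_trials

n = 40
pure_mono = [monotonic() for _ in range(n)]
pure_nonmono = [nonmonotonic() for _ in range(n)]
mixed = [monotonic() for _ in range(n // 2)] + [nonmonotonic() for _ in range(n // 2)]
for name, pop in [("monotonic", pure_mono), ("nonmonotonic", pure_nonmono), ("mixed", mixed)]:
    print(f"{name}: {decode_accuracy(pop):.2f}")
```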

  8. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Full Text Available Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  9. Sound pressure gain produced by the human middle ear.

    Science.gov (United States)

    Kurokawa, H; Goode, R L

    1995-10-01

    The acoustic function of the middle ear is to match sound passing from the low impedance of air to the high impedance of cochlear fluid. Little information is available on the actual middle ear pressure gain in human beings. This article describes experiments on middle ear pressure gain in six fresh human temporal bones. Stapes footplate displacement and phase were measured with a laser Doppler vibrometer before and after removal of the tympanic membrane, malleus, and incus. Acoustic insulation of the round window with clay was performed. Umbo displacement was also measured before tympanic membrane removal to assess baseline tympanic membrane function. The middle ear has its major gain in the lower frequencies, with a peak near 0.9 kHz. The mean gain was 23.0 dB below 1.0 kHz, the resonant frequency of the middle ear; the mean peak gain was 26.6 dB. Above 1.0 kHz, the sound pressure gain decreased at a rate of -8.6 dB/octave, with a mean gain of 6.5 dB at 4.0 kHz. Only a small amount of gain was present above 7.0 kHz. Significant individual differences in pressure gain were found between ears that appeared related to variations in tympanic membrane function and not to variations in cochlear impedance.

  10. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

    This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situated-ness of sound. Existing discourses on “spatial sound” privile...

  11. Visual feedback of tongue movement for novel speech sound learning

    Directory of Open Access Journals (Sweden)

    William F Katz

    2015-11-01

    Full Text Available Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one’s own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker’s learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ̠/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers’ productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.

  12. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  13. Laser vibrometry measurements of vibration and sound fields of a bowed violin

    Science.gov (United States)

    Gren, Per; Tatar, Kourosh; Granström, Jan; Molin, N.-E.; Jansson, Erik V.

    2006-04-01

    Laser vibrometry measurements on a bowed violin are performed. A rotating disc apparatus, acting as a violin bow, is developed. It produces a continuous, long, repeatable, multi-frequency sound from the instrument that imitates the real bow-string interaction for a 'very long bow'. The main difference is that the back-and-forth motion of the real bow is replaced by the constant-velocity rotation of the disc at a constant bowing force (bowing pressure). The procedure is repeatable and long lasting, which allows laser vibrometry techniques to be used to measure the forced vibrations produced by bowing at all excited frequencies simultaneously. A chain of interacting parts of the played violin is studied: the string, the bridge and the plates, as well as the emitted sound field. A description of the mechanics and the sound production of the bowed violin is given, i.e. the production chain from the bowed string to the produced tone.

  14. Global existence and decay property of the Timoshenko system in thermoelasticity with second sound

    OpenAIRE

    Racke, Reinhard; Said-Hourari, Belkacem

    2012-01-01

    Our main focus in the present paper is to study the asymptotic behavior of a nonlinear version of the Timoshenko system in thermoelasticity with second sound. As has already been proved in [SaidKasi_2011], the linear version of this system is of regularity-loss type. It is well known ([HKa06], [IK08], [KK09]) that the regularity-loss property of the linear problem creates difficulties when dealing with the nonlinear problem. In fact, the dissipative property of the pr...

  15. Do top predators cue on sound production by mesopelagic prey?

    Science.gov (United States)

    Baumann-Pickering, S.; Checkley, D. M., Jr.; Demer, D. A.

    2016-02-01

    Deep-scattering layer (DSL) organisms, comprising a variety of mesopelagic fishes, and squids, siphonophores, crustaceans, and other invertebrates, are preferred prey for numerous large marine predators, e.g. cetaceans, seabirds, and fishes. Some of the DSL species migrate from depth during daylight to feed near the surface at night, transitioning during dusk and dawn. We investigated if any DSL organisms create sound, particularly during the crepuscular periods. Over several nights in summer 2015, underwater sound was recorded in the San Diego Trough using a high-frequency acoustic recording package (HARP, 10 Hz to 100 kHz), suspended from a drifting surface float. Acoustic backscatter from the DSL was monitored nearby using a calibrated multiple-frequency (38, 70, 120, and 200 kHz) split-beam echosounder (Simrad EK60) on a small boat. DSL organisms produced sound, between 300 and 1000 Hz, and the received levels were highest when the animals migrated past the recorder during ascent and descent. The DSL are globally present, so the observed acoustic phenomenon, if also ubiquitous, has wide-reaching implications. Sound travels farther than light or chemicals and thus can be sensed at greater distances by predators, prey, and mates. If sound is a characteristic feature of pelagic ecosystems, it likely plays a role in predator-prey relationships and overall ecosystem dynamics. Our new finding inspires numerous questions such as: Which, how, and why have DSL organisms evolved to create sound, for what do they use it and under what circumstances? Is sound production by DSL organisms truly ubiquitous, or does it depend on the local environment and species composition? How may sound production and perception be adapted to a changing environment? Do predators react to changes in sound? Can sound be used to quantify the composition of mixed-species assemblages, component densities and abundances, and hence be used in stock assessment or predictive modeling?

  16. Soundness confirmation through cold test of the system equipment of HTTR

    International Nuclear Information System (INIS)

    Ono, Masato; Shinohara, Masanori; Iigaki, Kazuhiko; Tochio, Daisuke; Nakagawa, Shigeaki; Shimazaki, Yosuke

    2014-01-01

    The HTTR was established at the Oarai Research and Development Center of the Japan Atomic Energy Agency for the purpose of establishing and upgrading the technology infrastructure for high-temperature gas-cooled reactors. It currently performs safety demonstration tests in order to demonstrate the safety inherent in high-temperature gas-cooled reactors. After the Great East Japan Earthquake, a confirmation test was conducted to survey the soundness of the facilities and equipment, and it confirmed that the soundness of the equipment was maintained. In the two years since that confirmation test, however, it had not been confirmed whether the function of the dynamic equipment and soundness properties such as the airtightness of pipes and vessels were still maintained, given possible damage or deterioration caused by aftershocks during those two years or by aging. To confirm the soundness of these facilities, operation under cold conditions was conducted, and the plant data obtained were compared with the confirmation test data to evaluate the presence of any abnormality. In addition, in order to confirm through the cold test any damage due to aftershocks or degradation due to aging, the confirmation test data were taken as the reference, and the plant data at machine start-up and during normal operation were evaluated for abnormalities. (A.O.)

  17. Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea

    Science.gov (United States)

    Oshinsky, Michael Lee

    A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system and specifically sound localization were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ. The two ears are contained in one air sac and are connected by a cuticular bridge that has a flexible spring-like structure at its center. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and I review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents. I present this physiology in the context of sound localization. In chapter 3, I describe the direction-dependent physiology of the thoracic local and ascending acoustic interneurons. In chapter 4, I quantify the threshold and I detail the kinematics of the phonotactic

  18. Evaluation of 3D Positioned Sound in Multimodal Scenarios

    DEFF Research Database (Denmark)

    Møller, Anders Kalsgaard

    present but interacts with the other meeting members using different virtual reality technologies. The thesis also dealt with a 3D sound system in trucks. it was investigated if 3D-sound could be used to give the truck driver an audible and lifelike experience of the cyclists’ position, in relation......This Ph.D. study has dealt with different binaural methods for implementing 3D sound in selected multimodal applications, with the purpose of evaluating the feasibility of using 3D sound in these applications. The thesis dealt with a teleconference application in which one person is not physically...

  19. The generation of sound by vorticity waves in swirling duct flows

    Science.gov (United States)

    Howe, M. S.; Liu, J. T. C.

    1977-01-01

    Swirling flow in an axisymmetric duct can support vorticity waves propagating parallel to the axis of the duct. When the cross-sectional area of the duct changes a portion of the wave energy is scattered into secondary vorticity and sound waves. Thus the swirling flow in the jet pipe of an aeroengine provides a mechanism whereby disturbances produced by unsteady combustion or turbine blading can be propagated along the pipe and subsequently scattered into aerodynamic sound. In this paper a linearized model of this process is examined for low Mach number swirling flow in a duct of infinite extent. It is shown that the amplitude of the scattered acoustic pressure waves is proportional to the product of the characteristic swirl velocity and the perturbation velocity of the vorticity wave. The sound produced in this way may therefore be of more significance than that generated by vorticity fluctuations in the absence of swirl, for which the acoustic pressure is proportional to the square of the perturbation velocity. The results of the analysis are discussed in relation to the problem of excess jet noise.
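
    A schematic way to read the scaling argument above (not the paper's full analysis; here ρ₀ denotes the mean density and U_swirl and u′ the characteristic swirl and vorticity-wave perturbation velocities, symbols introduced only for illustration):

    $$ p'_{\text{swirl}} \propto \rho_0\, U_{\text{swirl}}\, u', \qquad p'_{\text{no swirl}} \propto \rho_0\, u'^2, \qquad \frac{p'_{\text{swirl}}}{p'_{\text{no swirl}}} \sim \frac{U_{\text{swirl}}}{u'} \gg 1 \ \text{for small perturbations,} $$

    which is why the swirl-scattered sound may dominate the sound generated by vorticity fluctuations alone.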

  20. [Realization of Heart Sound Envelope Extraction Implemented on LabVIEW Based on Hilbert-Huang Transform].

    Science.gov (United States)

    Tan, Zhixiang; Zhang, Yi; Zeng, Deping; Wang, Hua

    2015-04-01

    This paper presents a heart sound envelope extraction system implemented in LabVIEW and based on the Hilbert-Huang transform (HHT). A sound card was first used to acquire the heart sound, and the complete program for signal acquisition, preprocessing, and envelope extraction was then implemented in LabVIEW according to HHT theory. Finally, a case study demonstrated that the system can easily acquire heart sounds, preprocess them, and extract their envelope. The system preserves and displays the characteristics of the heart sound envelope well, and its program and methods are relevant to other research, such as studies of vibration and voice.
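
    For orientation, the sketch below extracts a smoothed envelope from a synthetic heart-sound-like signal. It uses a plain Hilbert envelope plus low-pass smoothing as a simplified stand-in for the full Hilbert-Huang transform (EMD followed by Hilbert spectra) used in the paper; the cutoff and test signal are made up.

    ```python
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def heart_sound_envelope(x, fs, smooth_hz=20.0):
        """Simplified envelope: magnitude of the analytic signal, low-pass smoothed.
        A rough approximation of the HHT-based extraction described above."""
        env = np.abs(hilbert(x))
        b, a = butter(2, smooth_hz / (fs / 2.0), btype="low")
        return filtfilt(b, a, env)

    # Usage with a synthetic two-burst "heart sound" (illustrative only).
    fs = 2000
    t = np.arange(0, 1.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 50 * t) * (np.exp(-((t - 0.2) ** 2) / 0.001) +
                                      np.exp(-((t - 0.6) ** 2) / 0.001))
    env = heart_sound_envelope(x, fs)
    ```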

  1. Interactive Sonification of Spontaneous Movement of Children-Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound.

    Science.gov (United States)

    Frid, Emma; Bresin, Roberto; Alborno, Paolo; Elblaus, Ludvig

    2016-01-01

    In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system constituting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children's spontaneous movement in terms of energy-, smoothness- and directness-index. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a

  3. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the every-day understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  4. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basics of sound; the behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of the audio signal. Subsequent chapters explore the processing of the audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  5. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    ... properties can be modified by sound absorption, refraction, and interference from multiple paths caused by reflections. The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms to influence ... communication sounds for airborne acoustics and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals.
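
    For the geometrical attenuation and absorption mentioned above, a commonly used first-order estimate of the received level (spherical spreading plus medium absorption only; ground effect, refraction, and multipath are ignored in this simplification) is

    $$ L_p(r) \approx L_p(r_0) - 20\log_{10}\frac{r}{r_0} - \alpha\,(r - r_0), $$

    where $r_0$ is a reference distance and $\alpha$ is the frequency-dependent absorption coefficient in dB per metre (symbols introduced here for illustration only).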

  6. The NASA Sounding Rocket Program and space sciences

    Science.gov (United States)

    Gurkin, L. W.

    1992-01-01

    High altitude suborbital rockets (sounding rockets) have been extensively used for space science research in the post-World War II period; the NASA Sounding Rocket Program has been on-going since the inception of the Agency and supports all space science disciplines. In recent years, sounding rockets have been utilized to provide a low gravity environment for materials processing research, particularly in the commercial sector. Sounding rockets offer unique features as a low gravity flight platform. Quick response and low cost combine to provide more frequent spaceflight opportunities. Suborbital spacecraft design practice has achieved a high level of sophistication which optimizes the limited available flight times. High data-rate telemetry, real-time ground up-link command and down-link video data are routinely used in sounding rocket payloads. Standard, off-the-shelf, active control systems are available which limit payload body rates such that the gravitational environment remains less than 10^-4 g during the control period. Operational launch vehicles are available which can provide up to 7 minutes of experiment time for experiment weights up to 270 kg. Standard payload recovery systems allow soft impact retrieval of payloads. When launched from White Sands Missile Range, New Mexico, payloads can be retrieved and returned to the launch site within hours.

  7. Cognitive Bias for Learning Speech Sounds From a Continuous Signal Space Seems Nonlinguistic

    Directory of Open Access Journals (Sweden)

    Sabine van der Ham

    2015-10-01

    Full Text Available When learning language, humans have a tendency to produce more extreme distributions of speech sounds than those observed most frequently: In rapid, casual speech, vowel sounds are centralized, yet cross-linguistically, peripheral vowels occur almost universally. We investigate whether adults’ generalization behavior reveals selective pressure for communication when they learn skewed distributions of speech-like sounds from a continuous signal space. The domain-specific hypothesis predicts that the emergence of sound categories is driven by a cognitive bias to make these categories maximally distinct, resulting in more skewed distributions in participants’ reproductions. However, our participants showed more centered distributions, which goes against this hypothesis, indicating that there are no strong innate linguistic biases that affect learning these speech-like sounds. The centralization behavior can be explained by a lack of communicative pressure to maintain categories.

  8. Sound pressure level tools design used in occupational health by means of Labview software

    Directory of Open Access Journals (Sweden)

    Farhad Forouharmajd

    2015-01-01

    Conclusion: LabVIEW's programming capabilities in the field of sound cover sound measurement, frequency analysis, and sound control; in effect, the software acts as a sound level meter and sound analyzer. Given these features, the software can be used to analyze and process sound and vibration as a monitoring system.

  9. A new system for rating impact sound insulation

    NARCIS (Netherlands)

    Gerretsen, E.

    1976-01-01

    The rating of impact sound insulation on the basis of tapping machine measurements with the ISO reference values has proved to be unsatisfactory in practice. This is mainly due to the differences in spectrum shape of tapping machine noise and real life impact noises, such as walking. The problem can

  10. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  11. Discrimination of musical instrument sounds resynthesized with simplified spectrotemporal parameters.

    Science.gov (United States)

    McAdams, S; Beauchamp, J W; Meneguzzi, S

    1999-02-01

    The perceptual salience of several outstanding features of quasiharmonic, time-variant spectra was investigated in musical instrument sounds. Spectral analyses of sounds from seven musical instruments (clarinet, flute, oboe, trumpet, violin, harpsichord, and marimba) produced time-varying harmonic amplitude and frequency data. Six basic data simplifications and five combinations of them were applied to the reference tones: amplitude-variation smoothing, coherent variation of amplitudes over time, spectral-envelope smoothing, forced harmonic-frequency variation, frequency-variation smoothing, and harmonic-frequency flattening. Listeners were asked to discriminate sounds resynthesized with simplified data from reference sounds resynthesized with the full data. Averaged over the seven instruments, the discrimination was very good for spectral envelope smoothing and amplitude envelope coherence, but was moderate to poor in decreasing order for forced harmonic frequency variation, frequency variation smoothing, frequency flattening, and amplitude variation smoothing. Discrimination of combinations of simplifications was equivalent to that of the most potent constituent simplification. Objective measurements were made on the spectral data for harmonic amplitude, harmonic frequency, and spectral centroid changes resulting from simplifications. These measures were found to correlate well with discrimination results, indicating that listeners have access to a relatively fine-grained sensory representation of musical instrument sounds.

  12. Time and frequency weightings and the assessment of sound exposure

    DEFF Research Database (Denmark)

    Ordoñez, Rodrigo Pizarro; de Toro, Miguel Angel Aranda; Hammershøi, Dorte

    2010-01-01

    Since the development of averaging/integrating sound level meters and frequency weighting networks in the 1950s, measurement of the physical characteristics of sound has not changed a great deal. Advances have occurred in how the measured values are used (day-night averages, limit and action ... of the exposure. This information is being used to investigate metrics that can differentiate temporal characteristics (impulsive, fluctuating) as well as frequency characteristics (narrow-band or tonal dominance) of sound exposures. This presentation gives an overview of the existing sound measurement and analysis methods that can provide a better representation of the effects of sound exposures on the hearing system...

  13. Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System.

    Science.gov (United States)

    Cazettes, Fanny; Fischer, Brian J; Peña, Jose L

    2016-02-17

    Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics. In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior. Copyright © 2016 the authors 0270-6474/16/362101-10$15.00/0.

  14. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  15. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

    Environmental sound archives - casual recordings of people's daily life - are easily collected by MP3 players or camcorders with low cost and high reliability, and shared on websites. There are two kinds of user generated recordings we would like to be able to handle in this thesis: continuous long-duration personal audio and soundtracks of short consumer video clips. These environmental recordings contain a lot of useful information (semantic concepts) related to activity, location, occasion and content. As a consequence, these environmental archives present many new opportunities for the automatic extraction of information that can be used in intelligent browsing systems. This thesis proposes systems for detecting these interesting concepts in a collection of such real-world recordings. The first system is to segment and label personal audio archives - continuous recordings of an individual's everyday experiences - into 'episodes' (relatively consistent acoustic situations lasting a few minutes or more) using the Bayesian Information Criterion and spectral clustering. The second system is for identifying regions of speech or music in the kinds of energetic and highly-variable noise present in this real-world sound. Motivated by psychoacoustic evidence that pitch is crucial in the perception and organization of sound, we develop a noise-robust pitch detection algorithm to locate speech or music-like regions. To avoid false alarms resulting from background noise with strong periodic components (such as air-conditioning), a new scheme is added in order to suppress these noises in the autocorrelogram domain. In addition, the third system is to automatically detect a large set of interesting semantic concepts, which we chose for being both informative and useful to users, as well as being technically feasible. These 25 concepts are associated with people's activities, locations, occasions, objects, scenes and sounds, and are based on a large collection of
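
    To make the Bayesian Information Criterion (BIC) segmentation step concrete, the sketch below scores a candidate change point by comparing a one-Gaussian model of a feature sequence against two Gaussians split at that point. This is a generic BIC change-detection formulation, not the thesis's implementation, and it omits the spectral-clustering stage; the penalty weight and test data are made up.

    ```python
    import numpy as np

    def delta_bic(features, split, lam=1.0):
        """BIC change-point score for splitting a (frames x dims) feature sequence
        at `split`; positive values favour placing an episode boundary there."""
        X, X1, X2 = features, features[:split], features[split:]
        n, d = X.shape

        def logdet_cov(Z):
            cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(d)   # small ridge for stability
            return np.linalg.slogdet(cov)[1]

        penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
        return (0.5 * n * logdet_cov(X)
                - 0.5 * len(X1) * logdet_cov(X1)
                - 0.5 * len(X2) * logdet_cov(X2)
                - penalty)

    # Usage sketch: a feature sequence whose statistics change halfway through.
    rng = np.random.default_rng(0)
    feats = np.vstack([rng.normal(0, 1, (200, 13)), rng.normal(3, 2, (200, 13))])
    print(delta_bic(feats, 200) > 0)   # expected: True (boundary detected)
    ```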

  16. Spectral integration in speech and non-speech sounds

    Science.gov (United States)

    Jacewicz, Ewa

    2005-04-01

    Spectral integration (or formant averaging) was proposed in vowel perception research to account for the observation that a reduction of the intensity of one of two closely spaced formants (as in /u/) produced a predictable shift in vowel quality [Delattre et al., Word 8, 195-210 (1952)]. A related observation was reported in psychoacoustics, indicating that when the components of a two-tone periodic complex differ in amplitude and frequency, its perceived pitch is shifted toward that of the more intense tone [Helmholtz, App. XIV (1875/1948)]. Subsequent research in both fields focused on the frequency interval that separates these two spectral components, in an attempt to determine the size of the bandwidth for spectral integration to occur. This talk will review the accumulated evidence for and against spectral integration within the hypothesized limit of 3.5 Bark for static and dynamic signals in speech perception and psychoacoustics. Based on similarities in the processing of speech and non-speech sounds, it is suggested that spectral integration may reflect a general property of the auditory system. A larger frequency bandwidth, possibly close to 3.5 Bark, may be utilized in integrating acoustic information, including speech, complex signals, or sound quality of a violin.
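
    The hypothesized 3.5 Bark limit can be checked with a standard Hz-to-Bark conversion. The sketch below uses the Zwicker-style approximation of the Bark scale (one common variant, not necessarily the formula used in the studies cited above); the example frequencies are illustrative.

    ```python
    import numpy as np

    def hz_to_bark(f):
        """Zwicker-style approximation of the Bark (critical-band) scale."""
        return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

    def within_integration_band(f1_hz, f2_hz, limit_bark=3.5):
        """Check the hypothesized 3.5-Bark criterion for spectral integration
        of two components (a sketch of the criterion, not the studies' code)."""
        return abs(hz_to_bark(f1_hz) - hz_to_bark(f2_hz)) <= limit_bark

    print(within_integration_band(300.0, 800.0))   # e.g., two closely spaced formants
    ```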

  17. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    Title 33 (Navigation and Navigable Waters), Code of Federal Regulations, § 167.1702 (revised as of 2010-07-01), Coast Guard: In Prince William Sound: Prince William Sound Traffic Separation Scheme.

  18. Proximity sounding analysis for derechos and supercells: an assessment of similarities and differences

    Science.gov (United States)

    Doswell, Charles A.; Evans, Jeffry S.

    Proximity soundings (within 2 h and 167 km) of derechos (long-lived, widespread damaging convective windstorms) and supercells have been obtained. More than 65 derechos, accompanied by 115 proximity soundings, are identified during the years 1983 to 1993. The derechos have been divided into categories according to the synoptic situation: strong forcing (SF), weak forcing (WF), and "hybrid" cases (which are neither weakly nor strongly forced). Nearly 100 supercell proximity soundings have been found for the period 1998 to 2001, subdivided into nontornadic and tornadic supercells; tornadic supercells were further subdivided into those producing significant (>F1 rating) tornadoes and weak tornadoes (F0-F1 rating). WF derecho situations typically are characterized by warm, moist soundings with large convective available potential instability (CAPE) and relatively weak vertical wind shear. SF derechos usually have stronger wind shears, and cooler and less moist soundings with lower CAPE than the weakly forced cases. Most derechos exhibit strong storm-relative inflow at low levels. In WF derechos, this is usually the result of rapid convective system movement, whereas in SF derechos, storm-relative inflow at low levels is heavily influenced by relatively strong low-level windspeeds. "Hybrid" cases collectively are similar to an average of the SF and WF cases. Supercells occur in environments that are not all that dissimilar from those that produce SF derechos. It appears that some parameter combining instability and deep layer shear, such as the Energy-Helicity Index (EHI), can help discriminate between tornadic and nontornadic supercell situations. Soundings with significant tornadoes (F2 and greater) typically show high 0-1 km relative humidities, and strong 0-1 km shear. Results suggest it may not be easy to forecast the mode of severe thunderstorm activity (i.e., derecho versus supercell) on any particular day, given conditions that favor severe thunderstorm activity

  19. How Should Children with Speech Sound Disorders be Classified? A Review and Critical Evaluation of Current Classification Systems

    Science.gov (United States)

    Waring, R.; Knight, R.

    2013-01-01

    Background: Children with speech sound disorders (SSD) form a heterogeneous group who differ in terms of the severity of their condition, underlying cause, speech errors, involvement of other aspects of the linguistic system and treatment response. To date there is no universal and agreed-upon classification system. Instead, a number of…

  20. Steerable sound transport in a 3D acoustic network

    Science.gov (United States)

    Xia, Bai-Zhan; Jiao, Jun-Rui; Dai, Hong-Qing; Yin, Sheng-Wen; Zheng, Sheng-Jie; Liu, Ting-Ting; Chen, Ning; Yu, De-Jie

    2017-10-01

    Quasi-lossless and asymmetric sound transports, which are exceedingly desirable in various modern physical systems, are almost always based on nonlinear or angular momentum biasing effects with extremely high power levels and complex modulation schemes. A practical route for the steerable sound transport along any arbitrary acoustic pathway, especially in a three-dimensional (3D) acoustic network, can revolutionize the sound power propagation and the sound communication. Here, we design an acoustic device containing a regular-tetrahedral cavity with four cylindrical waveguides. A smaller regular-tetrahedral solid in this cavity is eccentrically emplaced to break spatial symmetry of the acoustic device. The numerical and experimental results show that the sound power flow can unimpededly transport between two waveguides away from the eccentric solid within a wide frequency range. Based on the quasi-lossless and asymmetric transport characteristic of the single acoustic device, we construct a 3D acoustic network, in which the sound power flow can flexibly propagate along arbitrary sound pathways defined by our acoustic devices with eccentrically emplaced regular-tetrahedral solids.

  1. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: first, the filtered responses should generate an acoustic separation between the control regions; secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...
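
    For context, the sketch below shows generic frequency-domain pressure matching for one frequency bin with a plain ridge (Tikhonov) penalty. It is not the paper's weighted L2-norm formulation, and the array dimensions, target pressures, and regularization value are invented.

    ```python
    import numpy as np

    def pressure_matching_filters(G, p_target, ridge=1e-2):
        """Solve for loudspeaker weights q minimizing ||G q - p_target||^2 + ridge*||q||^2,
        where G maps loudspeaker weights to pressures at the control points."""
        GhG = G.conj().T @ G
        reg = ridge * np.trace(GhG).real / G.shape[1] * np.eye(G.shape[1])
        return np.linalg.solve(GhG + reg, G.conj().T @ p_target)

    # Usage sketch: 8 loudspeakers, 16 control points (8 bright-zone, 8 dark-zone).
    rng = np.random.default_rng(1)
    G = rng.standard_normal((16, 8)) + 1j * rng.standard_normal((16, 8))
    p_target = np.zeros(16, dtype=complex)
    p_target[:8] = 1.0          # desired pressure in the bright zone, zero in the dark zone
    q = pressure_matching_filters(G, p_target)
    ```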

  2. Sound production in Onuxodon fowleri (Carapidae) and its amplification by the host shell.

    Science.gov (United States)

    Kéver, Loïc; Colleye, Orphal; Lugli, Marco; Lecchini, David; Lerouvreur, Franck; Herrel, Anthony; Parmentier, Eric

    2014-12-15

    Onuxodon species are well known for living inside pearl oysters. As in other carapids, their anatomy highlights their ability to make sounds but sound production has never been documented in Onuxodon. This paper describes sound production in Onuxodon fowleri as well as the anatomy of the sound production apparatus. Single-pulsed sounds and multiple-pulsed sounds that sometimes last more than 3 s were recorded in the field and in captivity (Makemo Island, French Polynesia). These pulses are characterized by a broadband frequency spectrum from 100 to 1000 Hz. Onuxodon fowleri is mainly characterized by its ability to modulate the pulse period, meaning that this species can produce pulsed sounds and tonal-like sounds using the same mechanism. In addition, the sound can be remarkably amplified by the shell cavity (peak gain can exceed 10 dB for some frequencies). The sonic apparatus of O. fowleri is characterized by a rocker bone in front of the swimbladder, modified vertebrae and epineurals, and two pairs of sonic muscles, one of which (primary sonic muscle) inserts on the rocker bone. The latter structure, which is absent in other carapid genera, appears to be sexually dimorphic suggesting differences in sound production in males and females. Sound production in O. fowleri could be an example of adaptation where an animal exploits features of its environment to enhance communication. © 2014. Published by The Company of Biologists Ltd.

  3. Methods and systems for producing syngas

    Science.gov (United States)

    Hawkes, Grant L; O'Brien, James E; Stoots, Carl M; Herring, J. Stephen; McKellar, Michael G; Wood, Richard A; Carrington, Robert A; Boardman, Richard D

    2013-02-05

    Methods and systems are provided for producing syngas utilizing heat from thermochemical conversion of a carbonaceous fuel to support decomposition of at least one of water and carbon dioxide using one or more solid-oxide electrolysis cells. Simultaneous decomposition of carbon dioxide and water or steam by one or more solid-oxide electrolysis cells may be employed to produce hydrogen and carbon monoxide. A portion of oxygen produced from at least one of water and carbon dioxide using one or more solid-oxide electrolysis cells is fed at a controlled flow rate in a gasifier or combustor to oxidize the carbonaceous fuel to control the carbon dioxide to carbon monoxide ratio produced.

  4. An investigation of noise produced by unsteady gas flow through silencer elements

    Science.gov (United States)

    Mawhinney, Graeme Hugh

    This thesis presents an investigation of the noise produced by unsteady gas flow through silencer elements. The central aim of the research project was to produce a tool for assistance in the design of the exhaust systems of diesel powered electrical generator sets, with the modelling techniques developed having a much wider application in reciprocating internal combustion engine exhaust systems. An automotive cylinder head was incorporated in a purpose built test rig to supply exhaust pulses, typical of those found in the exhaust system of four stroke diesel engines, to various experimental exhaust systems. Exhaust silencer elements evaluated included expansion, re-entrant, concentric tube resonator and absorptive elements. Measurements taken on the test rig included unsteady superposition pressure in the exhaust ducting, cyclically averaged mass flow rate through the system and exhaust noise levels radiated into a semi-anechoic measurement chamber. The entire test rig was modelled using the 1D finite volume method previously developed at Queen's University Belfast. Various boundary conditions, developed over the years, were used to model the various silencer elements being evaluated. The 1D gas dynamic simulation thus estimated the mass flux history at the open end of the exhaust system. The mass flux history was then broken into its harmonic components and an acoustic radiation model was developed to model the sound pressure level produced by an acoustic monopole over a reflecting plane. The accuracy of the simulation technique was evaluated by correlation of measured and simulated superposition pressure and noise data. In general, correlation of superposition pressure was excellent for all of the silencer elements tested. Predicted sound pressure level radiated from the open end of the exhaust tailpipe was seen to be accurate in the 100 Hz to 1 kHz frequency range for all of the silencer elements tested.
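
    As a rough illustration of the final radiation step, the sketch below computes the SPL from harmonic mass-flux components using the textbook model of a point monopole mounted on a rigid reflecting plane. This stands in for, and may differ from, the thesis's radiation model; all numerical values are invented.

    ```python
    import numpy as np

    P_REF = 20e-6  # Pa, SPL reference pressure in air

    def spl_from_mass_flux_harmonics(freqs_hz, mass_flux_amps, r):
        """SPL at distance r (m) from a point monopole on a rigid reflecting plane,
        driven by harmonic mass-flux amplitudes (kg/s) at the tailpipe exit."""
        omega = 2.0 * np.pi * np.asarray(freqs_hz)
        # Free-field monopole |p| = omega*m_dot/(4*pi*r), doubled by the plane.
        p_amp = omega * np.asarray(mass_flux_amps) / (2.0 * np.pi * r)
        p_rms = np.sqrt(np.sum((p_amp / np.sqrt(2.0)) ** 2))   # energy sum of harmonics
        return 20.0 * np.log10(p_rms / P_REF)

    # Usage: first three firing-order harmonics of a generator-set exhaust (made up).
    print(spl_from_mass_flux_harmonics([25.0, 50.0, 75.0],
                                       [2e-4, 1e-4, 5e-5], r=10.0))
    ```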

  5. Can road traffic mask sound from wind turbines? Response to wind turbine sound at different levels of road traffic sound

    International Nuclear Information System (INIS)

    Pedersen, Eja; Berg, Frits van den; Bakker, Roel; Bouma, Jelte

    2010-01-01

    Wind turbines are favoured in the switch-over to renewable energy. Suitable sites for further developments could be difficult to find as the sound emitted from the rotor blades calls for a sufficient distance to residents to avoid negative effects. The aim of this study was to explore if road traffic sound could mask wind turbine sound or, in contrast, increase annoyance due to wind turbine noise. Annoyance of road traffic and wind turbine noise was measured in the WINDFARMperception survey in the Netherlands in 2007 (n=725) and related to calculated levels of sound. The presence of road traffic sound did not in general decrease annoyance with wind turbine noise, except when levels of wind turbine sound were moderate (35-40 dB(A) Lden) and road traffic sound level exceeded that level by at least 20 dB(A). Annoyance with both noises was intercorrelated but this correlation was probably due to the influence of individual factors. Furthermore, visibility and attitude towards wind turbines were significantly related to noise annoyance of modern wind turbines. The results can be used for the selection of suitable sites, possibly favouring already noise exposed areas if wind turbine sound levels are sufficiently low.

  6. Learning to Localize Sound with a Lizard Ear Model

    DEFF Research Database (Denmark)

    Shaikh, Danish; Hallam, John; Christensen-Dalsgaard, Jakob

    The peripheral auditory system of a lizard is strongly directional in the azimuth plane due to the acoustical coupling of the animal's two eardrums. This feature by itself is insufficient to accurately localize sound as the extracted directional information cannot be directly mapped to the sound...

  7. Structure-borne sound structural vibrations and sound radiation at audio frequencies

    CERN Document Server

    Cremer, L; Petersson, Björn AT

    2005-01-01

    "Structure-Borne Sound" is a thorough introduction to structural vibrations with emphasis on audio frequencies and the associated radiation of sound. The book presents in-depth discussions of fundamental principles and basic problems, in order to enable the reader to understand and solve his own problems. It includes chapters dealing with measurement and generation of vibrations and sound, various types of structural wave motion, structural damping and its effects, impedances and vibration responses of the important types of structures, as well as with attenuation of vibrations, and sound radi

  8. Locating and classification of structure-borne sound occurrence using wavelet transformation

    International Nuclear Information System (INIS)

    Winterstein, Martin; Thurnreiter, Martina

    2011-01-01

    For the surveillance of nuclear facilities with respect to detached or loose parts within the pressure boundary, structure-borne sound detector systems are used. The impact of a loose part on the wall transfers energy to the wall, which is measured as a so-called singular sound event. The run-time differences of the sound signals allow a rough localization of the loose part. The authors performed a finite element based simulation of structure-borne sound measurements using real geometries. New knowledge on sound wave propagation, signal analysis and processing, neuronal networks or hidden Markov models was considered. Using the wavelet transformation it is possible to improve the localization of structure-borne sound events.
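
    The run-time-difference idea can be illustrated with a plain cross-correlation estimate of the arrival-time difference between two sensors; this is a generic stand-in, not the wavelet-based processing described above. It assumes SciPy >= 1.6 for correlation_lags, and the synthetic burst and sampling rate are made up.

    ```python
    import numpy as np
    from scipy.signal import correlate, correlation_lags

    def arrival_delay(sig_a, sig_b, fs):
        """Estimate t_A - t_B (seconds) from the peak of the cross-correlation of
        two structure-borne sound sensor signals sampled at fs."""
        corr = correlate(sig_a, sig_b, mode="full")
        lags = correlation_lags(len(sig_a), len(sig_b), mode="full")
        return lags[np.argmax(corr)] / fs

    # Usage: the same burst arriving 2 ms later at sensor B (synthetic).
    fs = 100_000
    burst = np.hanning(200) * np.sin(2 * np.pi * 5000 * np.arange(200) / fs)
    a = np.concatenate([np.zeros(500), burst, np.zeros(1500)])
    b = np.concatenate([np.zeros(700), burst, np.zeros(1300)])
    print(arrival_delay(a, b, fs))   # approx. -0.002 s: A receives the burst first
    ```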

  9. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  10. Using K-Nearest Neighbor Classification to Diagnose Abnormal Lung Sounds

    Directory of Open Access Journals (Sweden)

    Chin-Hsing Chen

    2015-06-01

    Full Text Available A reported 30% of people worldwide have abnormal lung sounds, including crackles, rhonchi, and wheezes. To date, the traditional stethoscope remains the most popular tool used by physicians to diagnose such abnormal lung sounds, however, many problems arise with the use of a stethoscope, including the effects of environmental noise, the inability to record and store lung sounds for follow-up or tracking, and the physician’s subjective diagnostic experience. This study has developed a digital stethoscope to help physicians overcome these problems when diagnosing abnormal lung sounds. In this digital system, mel-frequency cepstral coefficients (MFCCs were used to extract the features of lung sounds, and then the K-means algorithm was used for feature clustering, to reduce the amount of data for computation. Finally, the K-nearest neighbor method was used to classify the lung sounds. The proposed system can also be used for home care: if the percentage of abnormal lung sound frames is > 30% of the whole test signal, the system can automatically warn the user to visit a physician for diagnosis. We also used bend sensors together with an amplification circuit, Bluetooth, and a microcontroller to implement a respiration detector. The respiratory signal extracted by the bend sensors can be transmitted to the computer via Bluetooth to calculate the respiratory cycle, for real-time assessment. If an abnormal status is detected, the device will warn the user automatically. Experimental results indicated that the error in respiratory cycles between measured and actual values was only 6.8%, illustrating the potential of our detector for home care applications.
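
    The processing chain described above (MFCC features, K-means compression, K-nearest-neighbor classification, and a 30% abnormal-frame warning threshold) can be sketched with standard Python audio and machine-learning libraries. This is not the authors' implementation; the file names, labels, and parameter values below are placeholders.

    ```python
    import numpy as np
    import librosa
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    def frame_features(path, sr=8000, n_mfcc=13):
        """Per-frame MFCC features for one recording (librosa defaults assumed)."""
        y, sr = librosa.load(path, sr=sr)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # frames x coeffs

    def codebook(features, k=16):
        """K-means clustering to compress the frame features, as described above."""
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit(features).cluster_centers_

    # Train a K-nearest-neighbor classifier on labelled recordings (hypothetical files).
    train_paths = ["normal_01.wav", "wheeze_01.wav"]
    train_labels = ["normal", "wheeze"]
    X = np.vstack([codebook(frame_features(p)) for p in train_paths])
    y = np.repeat(train_labels, 16)
    clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

    def abnormal_fraction(path):
        """Fraction of frames classified as abnormal; > 0.30 would trigger the warning."""
        pred = clf.predict(frame_features(path))
        return float(np.mean(pred != "normal"))
    ```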

  11. WAVE: Interactive Wave-based Sound Propagation for Virtual Environments.

    Science.gov (United States)

    Mehra, Ravish; Rungta, Atul; Golas, Abhinav; Ming Lin; Manocha, Dinesh

    2015-04-01

    We present an interactive wave-based sound propagation system that generates accurate, realistic sound in virtual environments for dynamic (moving) sources and listeners. We propose a novel algorithm to accurately solve the wave equation for dynamic sources and listeners using a combination of precomputation techniques and GPU-based runtime evaluation. Our system can handle large environments typically used in VR applications, compute spatial sound corresponding to listener's motion (including head tracking) and handle both omnidirectional and directional sources, all at interactive rates. As compared to prior wave-based techniques applied to large scenes with moving sources, we observe significant improvement in runtime memory. The overall sound-propagation and rendering system has been integrated with the Half-Life 2 game engine, Oculus-Rift head-mounted display, and the Xbox game controller to enable users to experience high-quality acoustic effects (e.g., amplification, diffraction low-passing, high-order scattering) and spatial audio, based on their interactions in the VR application. We provide the results of preliminary user evaluations, conducted to study the impact of wave-based acoustic effects and spatial audio on users' navigation performance in virtual environments.
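
    To give a flavour of what "wave-based" propagation means numerically, the toy sketch below time-steps the 2D scalar wave equation with a simple finite-difference scheme. The actual system above uses far more sophisticated precomputed solvers and GPU-based runtime evaluation; the grid size, boundaries (periodic, for brevity), and source are arbitrary choices for illustration.

    ```python
    import numpy as np

    def fdtd_2d(n=128, steps=300, c=343.0, dx=0.05):
        """Tiny leapfrog finite-difference solution of the 2D scalar wave equation."""
        dt = dx / (c * np.sqrt(2.0)) * 0.9          # CFL-stable time step
        p_prev = np.zeros((n, n))
        p = np.zeros((n, n))
        p[n // 2, n // 2] = 1.0                     # impulsive point source
        lam = (c * dt / dx) ** 2
        for _ in range(steps):
            lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                   np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p)
            p_next = 2.0 * p - p_prev + lam * lap   # second-order update in time
            p_prev, p = p, p_next
        return p

    field = fdtd_2d()   # pressure field after `steps` time steps
    ```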

  12. Development of sound absorption measuring system with acoustic chamber; Kogata kyuon koka sokutei sochi no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    Takahira, M.; Noba, M. [Toyota Motor Corp., Aichi (Japan); Matsuoka, H. [Nippon Soken, Inc., Tokyo (Japan)

    1998-05-01

    In order to measure the sound absorption performance needed to develop sound absorption materials, a device consisting of a small sound box capable of inexpensive and easy measurement was developed as an alternative to the reverberation chamber method. To obtain a stable diffuse sound field inside, the sound box is shaped as an asymmetric seven-sided body in which no two sides face each other squarely. The box is sized so that a large number of resonant modes exist simultaneously at the target frequencies. A commercially available cone speaker with good acoustic output characteristics in the frequency range above 500 Hz is installed on an inner side of the box. The sound absorption rate is derived from the difference in sound pressure levels. To eliminate the need for averaging over multi-point measurements inside the box, an opening is provided in part of the box and the sound receiving point is placed outside this opening. A square test piece is placed on the floor 0.5 m or more away from the speaker in the box. The experiment verified that the sound absorption rate obtained by this device corresponds well with that obtained by the reverberation chamber method, and that the size of the test piece is adequate. 2 refs., 11 figs., 1 tab.
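
    The abstract does not give the exact level-difference formula, so the sketch below only illustrates one common way such an estimate can be made: assuming a diffuse reverberant field, the drop in sound pressure level when the sample is introduced is converted to added absorption area via the room equation. The device's calibrated procedure may differ, and all numbers are hypothetical.

    ```python
    def absorption_coefficient(delta_spl_db, room_absorption_m2, sample_area_m2):
        """Diffuse-field estimate: SPL drop -> added absorption area -> coefficient."""
        total_after = room_absorption_m2 * 10.0 ** (delta_spl_db / 10.0)
        added = total_after - room_absorption_m2
        return added / sample_area_m2

    # Example: a 3 dB drop in a small box with 0.05 m^2 of base absorption,
    # sample area 0.09 m^2 (all values hypothetical).
    print(absorption_coefficient(3.0, 0.05, 0.09))
    ```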

  13. Inverse problem of radiofrequency sounding of ionosphere

    Science.gov (United States)

    Velichko, E. N.; Grishentsev, A. Yu.; Korobeynikov, A. G.

    2016-01-01

    An algorithm for the solution of the inverse problem of vertical ionosphere sounding and a mathematical model of noise filtering are presented. An automated system for processing and analysis of spectrograms of vertical ionosphere sounding based on our algorithm is described. It is shown that the algorithm we suggest has a rather high efficiency. This is supported by the data obtained at the ionospheric stations of the so-called “AIS-M” type.

  14. Listen to the band! How sound can realize group identity and enact intergroup domination.

    Science.gov (United States)

    Shayegh, John; Drury, John; Stevenson, Clifford

    2017-03-01

    Recent research suggests that sound appraisal can be moderated by social identity. We validate this finding, and also extend it, by examining the extent to which sound can also be understood as instrumental in intergroup relations. We interviewed nine members of a Catholic enclave in predominantly Protestant East Belfast about their experiences of an outgroup (Orange Order) parade, where intrusive sound was a feature. Participants reported experiencing the sounds as a manifestation of the Orange Order identity and said that it made them feel threatened and anxious because they felt it was targeted at them by the outgroup (e.g., through aggressive volume increases). There was also evidence that the sounds produced community disempowerment, which interviewees explicitly linked to the invasiveness of the music. Some interviewees described organizing to collectively 'drown out' the bands' sounds, an activity which appeared to be uplifting. These findings develop the elaborated social identity model of empowerment, by showing that intergroup struggle and collective self-objectification can operate through sound as well as through physical actions. © 2016 The British Psychological Society.

  15. Earth Observing System (EOS) Advanced Microwave Sounding Unit-A (AMSU-A): Instrumentation interface control document

    Science.gov (United States)

    1994-01-01

    This Interface Control Document (ICD) defines the specific details of the complete accommodation information between the Earth Observing System (EOS) PM Spacecraft and the Advanced Microwave Sounding Unit (AMSU-A) Instrument. This is the first submittal of the ICD; it will be updated periodically throughout the life of the program. The next update is planned prior to the Critical Design Review (CDR).

  16. Evolution of non-speech sound memory in postlingual deafness: implications for cochlear implant rehabilitation.

    Science.gov (United States)

    Lazard, D S; Giraud, A L; Truy, E; Lee, H J

    2011-07-01

    Neurofunctional patterns assessed before or after cochlear implantation (CI) are informative markers of implantation outcome. Because phonological memory reorganization in post-lingual deafness is predictive of the outcome, we investigated, using a cross-sectional approach, whether memory of non-speech sounds (NSS) produced by animals or objects (i.e. non-human sounds) is also reorganized, and how this relates to speech perception after CI. We used an fMRI auditory imagery task in which sounds were evoked by pictures of noisy items for post-lingual deaf candidates for CI and for normal-hearing subjects. When deaf subjects imagined sounds, the left inferior frontal gyrus, the right posterior temporal gyrus and the right amygdala were less activated compared to controls. Activity levels in these regions decreased with duration of auditory deprivation, indicating declining NSS representations. Whole brain correlations with duration of auditory deprivation and with speech scores after CI showed an activity decline in dorsal, fronto-parietal, cortical regions, and an activity increase in ventral cortical regions, the right anterior temporal pole and the hippocampal gyrus. Both dorsal and ventral reorganizations predicted poor speech perception outcome after CI. These results suggest that post-CI speech perception relies, at least partially, on the integrity of a neural system used for processing NSS that is based on audio-visual and articulatory mapping processes. When this neural system is reorganized, post-lingual deaf subjects resort to inefficient semantic- and memory-based strategies. These results complement those of other studies on speech processing, suggesting that both speech and NSS representations need to be maintained during deafness to ensure the success of CI. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Visualization of Broadband Sound Sources

    Directory of Open Access Journals (Sweden)

    Sukhanov Dmitry

    2016-01-01

    In this paper, a method for imaging wideband audio sources is proposed, based on 2D microphone array measurements of the sound field taken simultaneously at all microphones. The designed microphone array consists of 160 microphones and digitizes signals at a frequency of 7200 Hz. The measured signals are processed using a special algorithm that makes it possible to obtain a flat image of the wideband sound sources. It is shown experimentally that the visualization does not depend on the waveform but is determined by the bandwidth. The developed system can visualize sources with a resolution of up to 10 cm.
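
    The record above does not reproduce the imaging algorithm itself, so the sketch below is illustrative only: it forms a planar source-power map from simultaneous multi-microphone recordings using conventional delay-and-sum beamforming. The function name, array geometry, grid, and all parameter values are assumptions for illustration, not the authors' method.

```python
# Illustrative delay-and-sum source imaging (not the algorithm of the paper).
# Assumed inputs: signals (n_mics, n_samples) sampled at fs, microphone
# coordinates mic_xy (n_mics, 2) in metres, and an image plane at distance z.
import numpy as np

def delay_and_sum_image(signals, mic_xy, grid_xy, z_plane, fs, c=343.0):
    """Return the beam power at each candidate source point on the image plane."""
    n_mics, n_samples = signals.shape
    t = np.arange(n_samples) / fs
    image = np.zeros(len(grid_xy))
    for i, (gx, gy) in enumerate(grid_xy):
        # distance from this candidate point to every microphone
        dist = np.sqrt((mic_xy[:, 0] - gx) ** 2 + (mic_xy[:, 1] - gy) ** 2 + z_plane ** 2)
        delays = dist / c
        aligned_sum = np.zeros(n_samples)
        for m in range(n_mics):
            # advance channel m by its propagation delay (linear interpolation)
            aligned_sum += np.interp(t + delays[m], t, signals[m], left=0.0, right=0.0)
        image[i] = np.mean(aligned_sum ** 2)  # beam power for this grid point
    return image

# Usage sketch: power = delay_and_sum_image(signals, mic_xy, grid_xy, 2.0, fs=7200.0)
```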

  18. Device for precision measurement of speed of sound in a gas

    Science.gov (United States)

    Kelner, Eric; Minachi, Ali; Owen, Thomas E.; Burzynski, Jr., Marion; Petullo, Steven P.

    2004-11-30

    A sensor for measuring the speed of sound in a gas. The sensor has a helical coil, through which the gas flows before entering an inner chamber. Flow through the coil brings the gas into thermal equilibrium with the test chamber body. After the gas enters the chamber, a transducer produces an ultrasonic pulse, which is reflected from each of two faces of a target. The time difference between the two reflected signals is used to determine the speed of sound in the gas.
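
    As a worked example of the time-of-flight relation implied above: the pulse travels to and back from each target face, so the two echoes are separated by twice the face spacing divided by the speed of sound. The sketch below inverts that relation; the numerical values are hypothetical, not taken from the patent.

```python
# Minimal sketch of the two-echo calculation; values are hypothetical.
def speed_of_sound(face_spacing_m: float, delta_t_s: float) -> float:
    """Speed of sound from the arrival-time difference of echoes reflected by two
    target faces a distance face_spacing_m apart (the round trip doubles the path)."""
    return 2.0 * face_spacing_m / delta_t_s

# Faces 25 mm apart with echoes 145.8 microseconds apart give roughly 343 m/s.
print(speed_of_sound(0.025, 145.8e-6))
```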

  19. Reviews Equipment: Chameleon Nano Flakes Book: Requiem for a Species Equipment: Laser Sound System Equipment: EasySense VISION Equipment: UV Flash Kit Book: The Demon-Haunted World Book: Nonsense on Stilts Book: How to Think about Weird Things Web Watch

    Science.gov (United States)

    2011-03-01

    WE RECOMMEND Requiem for a Species This book delivers a sober message about climate change Laser Sound System Sound kit is useful for laser demonstrations EasySense VISION Data Harvest produces another easy-to-use data logger UV Flash Kit Useful equipment captures shadows on film The Demon-Haunted World World-famous astronomer attacks pseudoscience in this book Nonsense on Stilts A thought-provoking analysis of hard and soft sciences How to Think about Weird Things This book explores the credibility of astrologers and their ilk WORTH A LOOK Chameleon Nano Flakes Product lacks good instructions and guidelines WEB WATCH Amateur scientists help out researchers with a variety of online projects

  20. Prevalence of high frequency hearing loss consistent with noise exposure among people working with sound systems and general population in Brazil: A cross-sectional study

    Directory of Open Access Journals (Sweden)

    Trevisani Virgínia FM

    2008-05-01

    Background: Music is ever present in our daily lives, establishing a link between humans and the arts through the senses and pleasure. Sound technicians are the link between musicians and audiences or consumers. Recently, general concern has arisen regarding occurrences of hearing loss induced by noise from excessively amplified sound-producing activities within leisure and professional environments. Sound technicians' activities expose them to the risk of hearing loss, and consequently put at risk their quality of life, the quality of the musical product and consumers' hearing. The aim of this study was to measure the prevalence of high frequency hearing loss consistent with noise exposure among sound technicians in Brazil and compare this with a control group without occupational noise exposure. Methods: This was a cross-sectional study comparing 177 participants in two groups: 82 sound technicians and 95 controls (non-sound technicians). A questionnaire on music listening habits and associated complaints was applied, and data were gathered regarding the professionals' numbers of working hours per day and both groups' hearing complaints and presence of tinnitus. The participants' ear canals were visually inspected using an otoscope. Hearing assessments were performed (tonal and speech audiometry) using a portable digital AD 229 E audiometer funded by FAPESP. Results: There was no statistically significant difference between the sound technicians and controls regarding age and gender. Thus, the study sample was homogeneous and would be unlikely to lead to bias in the results. A statistically significant difference in hearing loss was observed between the groups: 50% among the sound technicians and 10.5% among the controls. The difference could be attributed to high sound levels. Conclusion: The sound technicians presented a higher prevalence of high frequency hearing loss consistent with noise exposure than did the general population, although

  1. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear, but our uses for light and sound go far beyond simply seeing a photo or hearing a song. Lasers, concentrated beams of light, are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  2. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    When lung sound (LS) signals are recorded from the chest wall of a subject, heart sound (HS) signals always interfere with them. This obscures the features of the lung sound signals and can create confusion about any pathological states of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference in the desired lung sound signals. The mixed signal is split into several components; those components that contain larger proportions of interfering signals, such as heart sounds and environmental noise, are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart and lung sounds. The proposed method is found to be superior in terms of time domain, frequency domain, and time-frequency domain representations, and also in a listening test performed by a pulmonologist.
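
    The sketch below illustrates the general shape of such an EMD-based interference-reduction pipeline. It assumes the PyEMD package for the decomposition and flags heart-sound-dominated intrinsic mode functions with a simple spectral criterion (fraction of energy below 150 Hz); the criterion, thresholds, and function names are illustrative assumptions, not the authors' published procedure.

```python
# Hedged sketch of EMD-based heart-sound reduction (assumes the PyEMD package;
# any EMD routine returning an array of IMFs would serve the same purpose).
import numpy as np
from PyEMD import EMD

def reduce_heart_sound(mixed, fs, cutoff_hz=150.0, low_energy_ratio=0.5):
    """Decompose the mixed recording into IMFs and rebuild the lung sound from
    those IMFs whose energy is not concentrated below cutoff_hz (assumed rule)."""
    imfs = EMD().emd(mixed)                      # shape: (n_imfs, n_samples)
    freqs = np.fft.rfftfreq(mixed.size, d=1.0 / fs)
    kept = []
    for imf in imfs:
        spectrum = np.abs(np.fft.rfft(imf)) ** 2
        low_fraction = spectrum[freqs < cutoff_hz].sum() / spectrum.sum()
        if low_fraction < low_energy_ratio:      # not dominated by heart sound
            kept.append(imf)
    return np.sum(kept, axis=0) if kept else np.zeros_like(mixed)
```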

  3. The Modulated Sounds Made by the Tsetse Fly Glossina Brevipalpis ...

    African Journals Online (AJOL)

    The modulated sounds made by Glossina brevipalpis are physiologically and reflexly induced phenomena, produced by muscular vibrations in the pterothorax. The patterns and physical nature of the calls and songs were investigated acoustically, spectrographically and oscilloscopically to explore the possibility of a ...

  4. DISCO: An object-oriented system for music composition and sound design

    Energy Technology Data Exchange (ETDEWEB)

    Kaper, H. G.; Tipei, S.; Wright, J. M.

    2000-09-05

    This paper describes an object-oriented approach to music composition and sound design. The approach unifies the processes of music making and instrument building by using similar logic, objects, and procedures. The composition modules use an abstract representation of musical data, which can be easily mapped onto different synthesis languages or a traditionally notated score. An abstract base class is used to derive classes on different time scales. Objects can be related to act across time scales, as well as across an entire piece, and relationships between similar objects can replicate traditional music operations or introduce new ones. The DISCO (Digital Instrument for Sonification and Composition) system is an open-ended work in progress.
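
    To make the described architecture more concrete, here is a hypothetical sketch of an abstract base class specialized for objects on different time scales, in the spirit of the approach summarized above; the class and method names are invented for illustration and are not DISCO's actual API.

```python
# Hypothetical illustration of an abstract time-scale hierarchy (not DISCO code).
from abc import ABC, abstractmethod

class TimeScaleObject(ABC):
    """Abstract musical object spanning some interval of time."""
    def __init__(self, start: float, duration: float):
        self.start = start          # seconds from the beginning of the piece
        self.duration = duration

    @abstractmethod
    def render(self) -> list:
        """Return an abstract event list that a synthesis back end or a score
        writer can map onto its own representation."""

class Note(TimeScaleObject):
    """Smallest time scale: a single sound event."""
    def __init__(self, start, duration, pitch_hz, amplitude):
        super().__init__(start, duration)
        self.pitch_hz, self.amplitude = pitch_hz, amplitude

    def render(self):
        return [("note", self.start, self.duration, self.pitch_hz, self.amplitude)]

class Phrase(TimeScaleObject):
    """Larger time scale built from smaller objects; operations applied here
    propagate across all contained events."""
    def __init__(self, start, children):
        super().__init__(start, sum(c.duration for c in children))
        self.children = children

    def render(self):
        return [event for child in self.children for event in child.render()]
```

    Because every object renders to the same abstract event list, the same composition data could be mapped onto a synthesis language or a notated score simply by swapping the consumer of that list.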

  5. Sound knowledge

    DEFF Research Database (Denmark)

    Kauffmann, Lene Teglhus

    as knowledge based on reflexive practices. I chose ‘health promotion’ as the field for my research as it utilises knowledge produced in several research disciplines, among these both quantitative and qualitative. I mapped out the institutions, actors, events, and documents that constituted the field of health...... of the research is to investigate what is considered to ‘work as evidence’ in health promotion and how the ‘evidence discourse’ influences social practices in policymaking and in research. From investigating knowledge practices in the field of health promotion, I develop the concept of sound knowledge...... result of a rigorous and standardized research method. However, this anthropological analysis shows that evidence and evidence-based is a hegemonic ‘way of knowing’ that sometimes transposes everyday reasoning into an epistemological form. However, the empirical material shows a variety of understandings...

  6. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  7. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  8. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  9. Speed of sound in hadronic matter using non-extensive Tsallis statistics

    International Nuclear Information System (INIS)

    Khuntia, Arvind; Sahoo, Pragati; Garg, Prakhar; Sahoo, Raghunath; Cleymans, Jean

    2016-01-01

    The speed of sound (c_s) is studied to understand the hydrodynamical evolution of the matter created in heavy-ion collisions. The quark-gluon plasma (QGP) formed in heavy-ion collisions evolves from an initial QGP to the hadronic phase via a possible mixed phase. Due to the system expansion in a first-order phase transition scenario, the speed of sound reduces to zero as the specific heat diverges. We study the speed of sound for systems which deviate from a thermalized Boltzmann distribution using non-extensive Tsallis statistics. In the present work, we calculate the speed of sound as a function of temperature for different q-values for a hadron resonance gas. We observe a similar mass cut-off behaviour in the non-extensive case for c_s^2 by including heavier particles, as is observed in the case of a hadron resonance gas following equilibrium statistics. Also, we explicitly show that the temperature where the mass cut-off starts varies with the q-parameter which hints at a relation between the degree of non-equilibrium and the limiting temperature of the system. It is shown that for values of q above approximately 1.13 all criticality disappears in the speed of sound, i.e. the decrease in the value of the speed of sound, observed at lower values of q, disappears completely. (orig.)
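
    For reference, the quantity studied in this record is the standard thermodynamic speed of sound; at zero chemical potential it can be written as follows (a textbook relation, not reproduced from the paper itself):

```latex
c_s^2 \;=\; \left(\frac{\partial P}{\partial \varepsilon}\right)_{s}
      \;=\; \frac{\partial P/\partial T}{\partial \varepsilon/\partial T},
\qquad
\varepsilon \;=\; T\,\frac{\partial P}{\partial T} - P \quad (\mu = 0),
```

    where P is the pressure and ε the energy density of the (Tsallis-modified) hadron resonance gas.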

  10. Speed of sound in hadronic matter using non-extensive Tsallis statistics

    Energy Technology Data Exchange (ETDEWEB)

    Khuntia, Arvind; Sahoo, Pragati; Garg, Prakhar; Sahoo, Raghunath [Indian Institute of Technology Indore, Discipline of Physics, School of Basic Science, Simrol, M.P. (India); Cleymans, Jean [University of Cape Town, UCT-CERN Research Centre and Department of Physics, Rondebosch (South Africa)

    2016-09-15

    The speed of sound (c_s) is studied to understand the hydrodynamical evolution of the matter created in heavy-ion collisions. The quark-gluon plasma (QGP) formed in heavy-ion collisions evolves from an initial QGP to the hadronic phase via a possible mixed phase. Due to the system expansion in a first-order phase transition scenario, the speed of sound reduces to zero as the specific heat diverges. We study the speed of sound for systems which deviate from a thermalized Boltzmann distribution using non-extensive Tsallis statistics. In the present work, we calculate the speed of sound as a function of temperature for different q-values for a hadron resonance gas. We observe a similar mass cut-off behaviour in the non-extensive case for c_s^2 by including heavier particles, as is observed in the case of a hadron resonance gas following equilibrium statistics. Also, we explicitly show that the temperature where the mass cut-off starts varies with the q-parameter which hints at a relation between the degree of non-equilibrium and the limiting temperature of the system. It is shown that for values of q above approximately 1.13 all criticality disappears in the speed of sound, i.e. the decrease in the value of the speed of sound, observed at lower values of q, disappears completely. (orig.)

  11. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  12. Microscopic theory of longitudinal sound velocity in charge ordered manganites

    International Nuclear Information System (INIS)

    Rout, G C; Panda, S

    2009-01-01

    A microscopic theory of longitudinal sound velocity in a manganite system is reported here. The manganite system is described by a model Hamiltonian consisting of charge density wave (CDW) interaction in the e_g band, an exchange interaction between spins of the itinerant e_g band electrons and the core t_2g electrons, and the Heisenberg interaction of the core level spins. The magnetization and the CDW order parameters are considered within mean-field approximations. The phonon Green's function was calculated by Zubarev's technique and hence the longitudinal velocity of sound was finally calculated for the manganite system. The results show that the elastic spring involved in the velocity of sound exhibits strong stiffening in the CDW phase with a decrease in temperature as observed in experiments.

  13. Microscopic theory of longitudinal sound velocity in charge ordered manganites

    Energy Technology Data Exchange (ETDEWEB)

    Rout, G C [Condensed Matter Physics Group, PG Department of Applied Physics and Ballistics, FM University, Balasore 756 019 (India); Panda, S, E-mail: gcr@iopb.res.i [Trident Academy of Technology, F2/A, Chandaka Industrial Estate, Bhubaneswar 751 024 (India)

    2009-10-14

    A microscopic theory of longitudinal sound velocity in a manganite system is reported here. The manganite system is described by a model Hamiltonian consisting of charge density wave (CDW) interaction in the e_g band, an exchange interaction between spins of the itinerant e_g band electrons and the core t_2g electrons, and the Heisenberg interaction of the core level spins. The magnetization and the CDW order parameters are considered within mean-field approximations. The phonon Green's function was calculated by Zubarev's technique and hence the longitudinal velocity of sound was finally calculated for the manganite system. The results show that the elastic spring involved in the velocity of sound exhibits strong stiffening in the CDW phase with a decrease in temperature as observed in experiments.

  14. Physical processes in a coupled bay-estuary coastal system: Whitsand Bay and Plymouth Sound

    Science.gov (United States)

    Uncles, R. J.; Stephens, J. A.; Harris, C.

    2015-09-01

    Whitsand Bay and Plymouth Sound are located in the southwest of England. The Bay and Sound are separated by the ∼2-3 km-wide Rame Peninsula and connected by ∼10-20 m-deep English Channel waters. Results are presented from measurements of waves and currents, drogue tracking, surveys of salinity, temperature and turbidity during stratified and unstratified conditions, and bed sediment surveys. 2D and 3D hydrodynamic models are used to explore the generation of tidally- and wind-driven residual currents, flow separation and the formation of the Rame eddy, and the coupling between the Bay and the Sound. Tidal currents flow around the Rame Peninsula from the Sound to the Bay between approximately 3 h before to 2 h after low water and form a transport path between them that conveys lower salinity, higher turbidity waters from the Sound to the Bay. These waters are then transported into the Bay as part of the Bay-mouth limb of the Rame eddy and subsequently conveyed to the near-shore, east-going limb and re-circulated back towards Rame Head. The Simpson-Hunter stratification parameter indicates that much of the Sound and Bay are likely to stratify thermally during summer months. Temperature stratification in both is pronounced during summer and is largely determined by coastal, deeper-water stratification offshore. Small tidal stresses in the Bay are unable to move bed sediment of the observed sizes. However, the Bay and Sound are subjected to large waves that are capable of driving a substantial bed-load sediment transport. Measurements show relatively low levels of turbidity, but these respond rapidly to, and have a strong correlation with, wave height.
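
    For context, the Simpson-Hunter stratification parameter mentioned above is conventionally written (up to constant factors involving water density and a bottom drag coefficient) in terms of the water depth h and the amplitude of the depth-mean tidal current u:

```latex
S \;=\; \log_{10}\!\left(\frac{h}{u^{3}}\right),
```

    with large values of h/u^3 indicating water columns that tend to stratify seasonally and small values indicating tidally mixed conditions; the critical contour used for Whitsand Bay and Plymouth Sound is not restated here.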

  15. Analysis of sound absorption performance of an electroacoustic absorber using a vented enclosure

    Science.gov (United States)

    Cho, Youngeun; Wang, Semyung; Hyun, Jaeyub; Oh, Seungjae; Goo, Seongyeol

    2018-03-01

    The sound absorption performance of an electroacoustic absorber (EA) is primarily influenced by the dynamic characteristics of the loudspeaker that acts as the actuator of the EA system. Therefore, the sound absorption performance of the EA is maximum at the resonance frequency of the loudspeaker and tends to degrade in the low-frequency and high-frequency bands based on this resonance frequency. In this study, to adjust the sound absorption performance of the EA system in the low-frequency band of approximately 20-80 Hz, an EA system using a vented enclosure that has previously been used to enhance the radiating sound pressure of a loudspeaker in the low-frequency band, is proposed. To verify the usefulness of the proposed system, two acoustic environments are considered. In the first acoustic environment, the vent of the vented enclosure is connected to an external sound field that is distinct from the sound field coupled to the EA. In this case, the acoustic effect of the vented enclosure on the performance of the EA is analyzed through an analytical approach using dynamic equations and an impedance-based equivalent circuit. Then, it is verified through numerical and experimental approaches. Next, in the second acoustic environment, the vent is connected to the same external sound field as the EA. In this case, the effect of the vented enclosure on the EA is investigated through an analytical approach and finally verified through a numerical approach. As a result, it is confirmed that the characteristics of the sound absorption performances of the proposed EA system using the vented enclosure in the two acoustic environments considered in this study are different from each other in the low-frequency band of approximately 20-80 Hz. Furthermore, several case studies on the change tendency of the performance of the EA using the vented enclosure according to the critical design factors or vent number for the vented enclosure are also investigated. In the future

  16. Characterization of Underwater Sounds Produced by a Hydraulic Cutterhead Dredge during Maintenance Dredging in the Stockton Deepwater Shipping Channel, California

    Science.gov (United States)

    2014-03-01

    underwater sound had not been linked to dredging projects. However, concerns for negative impacts of underwater noise on aquatic species (e.g. salmon ... METHODS Study site. The Port of Stockton is a major inland deepwater port in Stockton, California, located on the San Joaquin River before it joins... of Cook Inlet, Alaska. The authors reported that ambient sound levels ranged from 95 dB in the Knik Arm to 124 dB near Point Possession on an incoming

  17. Cardiovascular Sound and the Stethoscope, 1816 to 2016

    Science.gov (United States)

    Segall, Harold N.

    1963-01-01

    Cardiovascular sound escaped attention until Laennec invented and demonstrated the usefulness of the stethoscope. Accuracy of diagnosis using cardiovascular sounds as clues increased with improvement in knowledge of the physiology of circulation. Nearly all currently acceptable clinicopathological correlations were established by physicians who used the simplest of stethoscopes or listened with the bare ear. Certain refinements followed the use of modern methods which afford greater precision in timing cardiovascular sounds. These methods contribute to educating the human ear, so that those advantages may be applied which accrue from auscultation, plus the method of writing quantitative symbols to describe what is heard, by focusing the sense of hearing on each segment of the cardiac cycle in turn. By the year 2016, electronic systems of collecting and analyzing data about the cardiovascular system may render the stethoscope obsolete. PMID:13987676

  18. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity...... and initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. A European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs...... 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24...

  19. 2D interpretation of vertical electrical soundings: application to the Sarantaporon basin (Thessaly, Greece)

    International Nuclear Information System (INIS)

    Atzemoglou, A; Tsourlos, P

    2012-01-01

    A large-scale vertical electrical sounding (VES) survey was applied at the basin of Sarantaporon, Elassona, in order to study the tectonic and hydrogeological setting of the area. A large number of VES measurements was obtained on a near-regular grid and the data were initially processed with a 1D inversion algorithm. Since some of the densely measured soundings were collinear, it was possible to combine 1D sounding data and produce 2D data sets, which were interpreted using a fully 2D inversion algorithm. The 2D geoelectrical models were in very good agreement with the existing drilling information of the area. The 2D interpretation results were combined to produce pseudo-3D geoelectrical images of the subsurface. The resulting geoelectrical interpretations are in very good agreement with the existing geological information and reveal a relatively detailed picture of the basin's lithology. Further, the results allowed us to obtain new, and verify existing, structural information regarding the studied area. Overall, it is concluded that 2D interpretation of 1D VES measurements can produce improved subsurface geophysical images and presents a potentially useful tool for larger-scale geological investigations, especially in the case of reprocessing existing VES data sets

  20. Sound waves in hadronic matter

    Science.gov (United States)

    Wilk, Grzegorz; Włodarczyk, Zbigniew

    2018-01-01

    We argue that recent high-energy CERN LHC experiments on the transverse momentum distributions of produced particles provide us with new, so far unnoticed and not fully appreciated, information on the underlying production processes. To this end we concentrate on the small (but persistent) log-periodic oscillations decorating the observed p_T spectra and visible in the measured ratios R = σ_data(p_T) / σ_fit(p_T). Because such spectra are described by quasi-power-like formulas characterised by two parameters, the power index n and the scale parameter T (usually identified with the temperature), the observed log-periodic behaviour of the ratios R can originate either from suitable modifications of n or of T (or of both, but such a possibility is not discussed). In the first case n becomes a complex number, which can be related to scale invariance in the system; in the second, the scale parameter T itself exhibits log-periodic oscillations, which can be interpreted as the presence of some kind of sound waves forming in the collision system during the collision process, the wave number of which has a so-called self-similar solution of the second kind. Because the first case has already been widely discussed, we concentrate on the second one and on its possible experimental consequences.

  1. Remembering that big things sound big: Sound symbolism and associative memory.

    Science.gov (United States)

    Preziosi, Melissa A; Coane, Jennifer H

    2017-01-01

    According to sound symbolism theory, individual sounds or clusters of sounds can convey meaning. To examine the role of sound symbolic effects on processing and memory for nonwords, we developed a novel set of 100 nonwords to convey largeness (nonwords containing plosive consonants and back vowels) and smallness (nonwords containing fricative consonants and front vowels). In Experiments 1A and 1B, participants rated the size of the 100 nonwords and provided definitions to them as if they were products. Nonwords composed of fricative/front vowels were rated as smaller than those composed of plosive/back vowels. In Experiment 2, participants studied sound symbolic congruent and incongruent nonword and participant-generated definition pairings. Definitions paired with nonwords that matched the size and participant-generated meanings were recalled better than those that did not match. When the participant-generated definitions were re-paired with other nonwords, this mnemonic advantage was reduced, although still reliable. In a final free association study, the possibility that plosive/back vowel and fricative/front vowel nonwords elicit sound symbolic size effects due to mediation from word neighbors was ruled out. Together, these results suggest that definitions that are sound symbolically congruent with a nonword are more memorable than incongruent definition-nonword pairings. This work has implications for the creation of brand names and how to create brand names that not only convey desired product characteristics, but also are memorable for consumers.

  2. An Anthropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, for a talk about anthropology of sound, sound studies, musical canons and ideology.

  3. Self-mixing laser Doppler vibrometry with high optical sensitivity application to real-time sound reproduction

    CERN Document Server

    Abe, K; Ko, J Y

    2003-01-01

    Nanometre vibration measurement of an audio speaker and a highly sensitive sound reproduction experiment have been successfully demonstrated by a self-aligned optical feedback vibrometry technique using the self-mixing modulation effect in a laser-diode-pumped microchip solid-state laser. By applying nanometre vibrations to the speaker, which produced nearly inaudible music below 20 dB (200 μPa) sound pressure level, we could reproduce clear sound in real time by the use of a simple frequency modulated wave demodulation circuit with a -120 dB light-intensity feedback ratio.

  4. Self-mixing laser Doppler vibrometry with high optical sensitivity: application to real-time sound reproduction

    International Nuclear Information System (INIS)

    Abe, Kazutaka; Otsuka, Kenju; Ko, Jing-Yuan

    2003-01-01

    Nanometre vibration measurement of an audio speaker and a highly sensitive sound reproduction experiment have been successfully demonstrated by a self-aligned optical feedback vibrometry technique using the self-mixing modulation effect in a laser-diode-pumped microchip solid-state laser. By applying nanometre vibrations to the speaker, which produced nearly inaudible music below 20 dB (200 μPa) sound pressure level, we could reproduce clear sound in real time by the use of a simple frequency modulated wave demodulation circuit with a -120 dB light-intensity feedback ratio

  5. Self-mixing laser Doppler vibrometry with high optical sensitivity: application to real-time sound reproduction

    Energy Technology Data Exchange (ETDEWEB)

    Abe, Kazutaka [Department of Human and Information Science, Tokai University, 1117 Kitakaname, Hiratsuka, Kanagawa (Japan); Otsuka, Kenju [Department of Human and Information Science, Tokai University, 1117 Kitakaname, Hiratsuka, Kanagawa (Japan); Ko, Jing-Yuan [Department of Physics, Tunghai University, 181 Taichung-kang Road, Section 3, Taichung 407, Taiwan (China)

    2003-01-01

    Nanometre vibration measurement of an audio speaker and a highly sensitive sound reproduction experiment have been successfully demonstrated by a self-aligned optical feedback vibrometry technique using the self-mixing modulation effect in a laser-diode-pumped microchip solid-state laser. By applying nanometre vibrations to the speaker, which produced nearly inaudible music below 20 dB (200 μPa) sound pressure level, we could reproduce clear sound in real time by the use of a simple frequency modulated wave demodulation circuit with a -120 dB light-intensity feedback ratio.

  6. A Real-Time Sound Field Rendering Processor

    Directory of Open Access Journals (Sweden)

    Tan Yiyu

    2017-12-01

    Real-time sound field rendering is computationally intensive and memory-intensive. Traditional rendering systems based on computer simulation are limited by memory bandwidth and arithmetic units; the computation is time-consuming, and the sample rate of the output sound is low because of the long computation time at each time step. In this work, a processor with a hybrid architecture is proposed to speed up computation and improve the sample rate of the output sound, and an interface is developed for system scalability, so that many chips can simply be cascaded to enlarge the simulated area. To render a three-minute Beethoven wave sound in a small shoe-box room with dimensions of 1.28 m × 1.28 m × 0.64 m, the field-programmable gate array (FPGA)-based prototype machine with the proposed architecture carries out the sound rendering at run time, while the software simulation with OpenMP parallelization takes about 12.70 min on a personal computer (PC) with 32 GB of random access memory (RAM) and an Intel i7-6800K six-core processor running at 3.4 GHz. The throughput of the software simulation is about 194 M grids/s, while that of the prototype machine is 51.2 G grids/s, even though the clock frequency of the prototype machine is much lower than that of the PC. The rendering processor with a processing element (PE) and interfaces consumes about 238,515 gates when fabricated in the 0.18 µm process technology from ROHM Semiconductor Co., Ltd. (Kyoto, Japan), and its power consumption is about 143.8 mW.
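
    The throughput figures above count grid-point updates of a time-stepped sound-field solver. As a rough illustration of that style of computation (not the hardware scheme implemented in the paper), the sketch below performs one generic second-order finite-difference time-domain (FDTD) update of the scalar wave equation on a 3D pressure grid; all parameter values are assumptions.

```python
# Generic 3D FDTD update of the scalar wave equation (illustrative only).
import numpy as np

def fdtd_step(p, p_prev, c=343.0, dx=0.02, dt=None):
    """Advance the pressure field by one time step (leapfrog scheme).
    Boundary cells are held at zero (a pressure-release box) for brevity."""
    if dt is None:
        dt = dx / (c * np.sqrt(3.0))  # CFL-stable time step for a 3D grid
    lap = (-6.0 * p[1:-1, 1:-1, 1:-1]
           + p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1]
           + p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1]
           + p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2]) / dx ** 2
    p_next = np.zeros_like(p)
    p_next[1:-1, 1:-1, 1:-1] = (2.0 * p[1:-1, 1:-1, 1:-1]
                                - p_prev[1:-1, 1:-1, 1:-1]
                                + (c * dt) ** 2 * lap)
    return p_next, p  # new field, and the field to reuse as "previous"

# Even a modest 64 x 64 x 32 grid requires 64 * 64 * 32 = 131,072 updates per
# time step, which is why real-time rendering motivates hardware acceleration.
```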

  7. Operator performance and annunciation sounds

    International Nuclear Information System (INIS)

    Patterson, B.K.; Bradley, M.T.; Artiss, W.G.

    1997-01-01

    This paper discusses the audible component of annunciation found in typical operating power stations. The purpose of the audible alarm is stated and the psychological elements involved in the human processing of alarm sounds is explored. Psychological problems with audible annunciation are noted. Simple and more complex improvements to existing systems are described. A modern alarm system is suggested for retrofits or new plant designs. (author)

  8. Sonar sound groups and increased terminal buzz duration reflect task complexity in hunting bats

    DEFF Research Database (Denmark)

    Hulgard, K.; Ratcliffe, J. M.

    2016-01-01

    to prey under presumably more difficult conditions. Specifically, we found Daubenton's bats, Myotis daubentonii, produced longer buzzes when aerial-hawking versus water-trawling prey, but that bats taking revolving air- and water-borne prey produced more sonar sound groups than did the bats when taking...

  9. A homology sound-based algorithm for speech signal interference

    Science.gov (United States)

    Jiang, Yi-jiao; Chen, Hou-jin; Li, Ju-peng; Zhang, Zhan-song

    2015-12-01

    Aiming at secure analog speech communication, a homology sound-based algorithm for speech signal interference is proposed in this paper. We first split the speech signal into phonetic fragments by a short-term energy method and establish an interference noise cache library from the phonetic fragments. We then implement the homology sound interference by mixing randomly selected interferential fragments with the original speech in real time. Computer simulation results indicate that the interference produced by this algorithm has the advantages of real-time operation, randomness, and high correlation with the original signal, compared with traditional noise interference methods such as white noise interference. After further study, the proposed algorithm may be readily used in secure speech communication.
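
    The sketch below shows one way the described pipeline could look, under stated assumptions: frame-energy thresholding approximates the "phonetic fragment" segmentation, a cache of fragments is kept, and randomly drawn fragments are mixed back onto the original speech. Parameter values, thresholds, and function names are illustrative, not the authors' implementation.

```python
# Illustrative homology-sound interference pipeline (assumed details throughout).
import numpy as np

def energy_fragments(speech, fs, frame_ms=20, threshold_ratio=0.1):
    """Split a float waveform into high-energy frames using short-term energy."""
    frame = int(fs * frame_ms / 1000)
    n_frames = len(speech) // frame
    frames = speech[: n_frames * frame].reshape(n_frames, frame)
    energy = (frames ** 2).sum(axis=1)
    active = energy > threshold_ratio * energy.max()
    return [frames[i] for i in range(n_frames) if active[i]]

def homology_interference(speech, fragments, rng, gain=0.8):
    """Overlay randomly selected cached fragments onto the original speech
    (assumes a non-empty fragment cache and a float waveform)."""
    noise = np.zeros_like(speech)
    pos = 0
    while pos < len(speech):
        frag = fragments[rng.integers(len(fragments))]
        end = min(pos + len(frag), len(speech))
        noise[pos:end] = frag[: end - pos]
        pos = end
    return speech + gain * noise

# Usage sketch:
#   rng = np.random.default_rng(0)
#   cache = energy_fragments(speech, fs=8000)
#   jammed = homology_interference(speech, cache, rng)
```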

  10. Decreased sound tolerance: hyperacusis, misophonia, diplacousis, and polyacousis.

    Science.gov (United States)

    Jastreboff, Pawel J; Jastreboff, Margaret M

    2015-01-01

    Definitions, potential mechanisms, and treatments for decreased sound tolerance, hyperacusis, misophonia, and diplacousis are presented with an emphasis on the associated physiologic and neurophysiological processes and principles. A distinction is made between subjects who experience these conditions versus patients who suffer from them. The role of the limbic and autonomic nervous systems and other brain systems involved in cases of bothersome decreased sound tolerance is stressed. The neurophysiological model of tinnitus is outlined with respect to how it may contribute to our understanding of these phenomena and their treatment. © 2015 Elsevier B.V. All rights reserved.

  11. Sound and recording applications and theory

    CERN Document Server

    Rumsey, Francis

    2014-01-01

    Providing vital reading for audio students and trainee engineers, this guide is ideal for anyone who wants a solid grounding in both theory and industry practices in audio, sound and recording. There are many books on the market covering "how to work it" when it comes to audio equipment - but Sound and Recording isn't one of them. Instead, you'll gain an understanding of "how it works" with this approachable guide to audio systems. New to this edition: Digital audio section revised substantially to include the latest developments in audio networking (e.g. RAVENNA, AES X-192, AVB), high-resolut

  12. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound......-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound...... from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  13. The Flooding of Long Island Sound

    Science.gov (United States)

    Thomas, E.; Varekamp, J. C.; Lewis, R. S.

    2007-12-01

    Between the Last Glacial Maximum (22-19 ka) and the Holocene (10 ka) regions marginal to the Laurentide Ice Sheets saw complex environmental changes from moraines to lake basins to dry land to estuaries and marginal ocean basins, as a result of the interplay between the topography of moraines formed at the maximum extent and during stages of the retreat of the ice sheet, regional glacial rebound, and global eustatic sea level rise. In New England, the history of deglaciation and relative sea level rise has been studied extensively, and the sequence of events has been documented in detail. The Laurentide Ice Sheet reached its maximum extent (Long Island) at 21.3-20.4 ka according to radiocarbon dating (calibrated ages), 19.0-18.4 ka according to radionuclide dating. Periglacial Lake Connecticut formed behind the moraines in what is now the Long Island Sound Basin. The lake drained through the moraine at its eastern end. Seismic records show that a fluvial system was cut into the exposed lake beds, and a wave-cut unconformity was produced during the marine flooding, which has been inferred to have occurred at about 15.5 ka (Melt Water Pulse 1A) through correlation with dated events on land. Vibracores from eastern Long Island Sound penetrate the unconformity and contain red, varved lake beds overlain by marine grey sands and silts with a dense concentration of oysters in life position above the erosional contact. The marine sediments consist of intertidal to shallow subtidal deposits with oysters, shallow-water foraminifera and littoral diatoms, overlain by somewhat laminated sandy silts, in turn overlain by coarser-grained, sandy to silty sediments with reworked foraminifera and bivalve fragments. The latter may have been deposited in a sand-wave environment as present today at the core locations. We provide direct age control of the transgression with 30 radiocarbon dates on oysters, and compared the ages with those obtained on macrophytes and bulk organic carbon in

  14. Design and evaluation of a higher-order spherical microphone/ambisonic sound reproduction system for the acoustical assessment of concert halls

    Science.gov (United States)

    Clapp, Samuel W.

    Previous studies of the perception of concert hall acoustics have generally employed two methods for soliciting listeners' judgments. One method is to have listeners rate the sound in a hall while physically present in that hall. The other method is to make recordings of different halls and seat positions, and then recreate the environment for listeners in a laboratory setting via loudspeakers or headphones. In situ evaluations offer a completely faithful rendering of all aspects of the concert hall experience. However, many variables cannot be controlled and the short duration of auditory memory precludes an objective comparison of different spaces. Simulation studies allow for more control over various aspects of the evaluations, as well as A/B comparisons of different halls and seat positions. The drawback is that all simulation methods suffer from limitations in the accuracy of reproduction. If the accuracy of the simulation system is improved, then the advantages of the simulation method can be retained, while mitigating its disadvantages. Spherical microphone array technology has received growing interest in the acoustics community in recent years for many applications including beamforming, source localization, and other forms of three-dimensional sound field analysis. These arrays can decompose a measured sound field into its spherical harmonic components, the spherical harmonics being a set of spatial basis functions on the sphere that are derived from solving the wave equation in spherical coordinates. Ambisonics is a system for two- and three-dimensional spatialized sound that is based on recreating a sound field from its spherical harmonic components. Because of these shared mathematical underpinnings, ambisonics provides a natural way to present fully spatialized renderings of recordings made with a spherical microphone array. Many of the previously studied applications of spherical microphone arrays have used a narrow frequency range where the array
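
    The decomposition referred to above is the standard spherical harmonic expansion of the sound pressure measured on a sphere, stated here for reference as background (not the thesis's own derivation):

```latex
p(kr,\theta,\phi) \;=\; \sum_{n=0}^{\infty} \sum_{m=-n}^{n}
    p_{nm}(kr)\, Y_n^m(\theta,\phi),
\qquad
p_{nm}(kr) \;=\; \int_{0}^{2\pi}\!\!\int_{0}^{\pi}
    p(kr,\theta,\phi)\,\big[Y_n^m(\theta,\phi)\big]^{*}\,
    \sin\theta\, d\theta\, d\phi .
```

    Ambisonic reproduction then works with the coefficients p_{nm} up to some maximum order, which in practice is limited by the number of microphones in the array.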

  15. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    The relationship between meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages we find commonalities among sound shapes for words referring to the same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  16. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  17. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

    Few studies have dealt with students’ preconceptions of sound. The current research employs Reiner et al.’s (2000) substance schema to reveal new insights about students’ difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students’ thinking, but also to examine how students use the schema’s properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more global coherent consistencies among the properties that the students use to explain the sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students’ conceptions of sound do not seem to fit Reiner et al.’s schema in all respects. Our results also indicate that students’ conceptualization of sound lacks internal consistency. Analyzing our results with respect to local and global coherence, we found that students’ conception of sound is close to diSessa’s “loosely connected, fragmented collection of ideas.” The notion that sound is perceived only as a “sort of a material,” we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion concerning the implications of the results for instruction.

  18. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

    The question whether there is a natural connection between sound and meaning or if they are related only by convention has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming more and more clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441 For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.

  19. Coupled simulation of meteorological parameters and sound intensity in a narrow valley

    Energy Technology Data Exchange (ETDEWEB)

    Heimann, D. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Wessling (Germany). Inst. fuer Physik der Atmosphaere; Gross, G. [Hannover Univ. (Germany). Inst. fuer Meteorologie und Klimatologie

    1997-07-01

    A meteorological mesoscale model is used to simulate the inhomogeneous distribution of temperature and the appertaining development of thermal wind systems in a narrow two-dimensional valley during the course of a cloud-free day. A simple sound particle model takes up the simulated meteorological fields and calculates the propagation of noise which originates from a line source at one of the slopes of this valley. The coupled modeling system ensures consistency of topography, meteorological parameters and the sound field. The temporal behaviour of the sound intensity level across the valley is examined. It is only governed by the time-dependent meteorology. The results show remarkable variations of the sound intensity during the course of a day depending on the location in the valley. (orig.) 23 refs.

  20. Evaluation of Routine Atmospheric Sounding Measurements Using Unmanned Systems (ERASMUS) Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    de Boer, Gijs [Univ. of Colorado, Boulder, CO (United States). Cooperative Inst. for Research in Environmental Sciences (CIRES); Lawrence, Dale [Univ. of Colorado, Boulder, CO (United States); Palo, Scott [Univ. of Colorado, Boulder, CO (United States); Argrow, Brian [Univ. of Colorado, Boulder, CO (United States); LoDolce, Gabriel [Univ. of Colorado, Boulder, CO (United States); Curry, Nathan [Univ. of Colorado, Boulder, CO (United States); Weibel, Douglas [Univ. of Colorado, Boulder, CO (United States); Finnamore, W [Univ. of Colorado, Boulder, CO (United States); D' Amore, P [Univ. of Colorado, Boulder, CO (United States); Borenstein, Steven [Univ. of Colorado, Boulder, CO (United States); Nichols, Tevis [Univ. of Colorado, Boulder, CO (United States); Elston, Jack [Blackswift Technologies, Boulder, CO (United States); Ivey, Mark [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bendure, Al [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schmid, Beat [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Long, Chuck [National Oceanic and Atmospheric Administration (NOAA), Boulder, CO (United States). Earth System Research Lab.; Telg, Hagen [Univ. of Colorado, Boulder, CO (United States). Cooperative Inst. for Research in Environmental Sciences (CIRES); Gao, Rushan [National Oceanic and Atmospheric Administration (NOAA), Boulder, CO (United States). Earth System Research Lab.; Hock, T [National Center for Atmospheric Research, Boulder, CO (United States); Bland, Geoff [National Aeronautics and Space Administration (NASA), Washington, DC (United States)

    2017-03-01

    The Evaluation of Routine Atmospheric Sounding Measurements using Unmanned Systems (ERASMUS) campaign was proposed with two central goals. The first was to obtain scientifically relevant measurements of quantities related to clouds, aerosols, and radiation, including profiles of temperature, humidity, and aerosol particles; the structure of the arctic atmosphere during transitions between clear and cloudy states; measurements that would allow us to evaluate the performance of retrievals from U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility remote sensors in the Arctic atmosphere; and information on the spatial variability of heat and moisture fluxes from the arctic surface. The second was to demonstrate unmanned aerial system (UAS) capabilities in obtaining measurements relevant to the ARM and ASR programs, particularly for improving our understanding of Arctic clouds and aerosols.

  1. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  2. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1 , we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  3. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

    Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking, with several examples shown. Various aspects of musical sound spectra such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis are covered: "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment". Methods for timbre morphing are given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.

  4. Cell type-specific suppression of mechanosensitive genes by audible sound stimulation.

    Science.gov (United States)

    Kumeta, Masahiro; Takahashi, Daiji; Takeyasu, Kunio; Yoshimura, Shige H

    2018-01-01

    Audible sound is a ubiquitous environmental factor in nature that transmits oscillatory compressional pressure through substances. To investigate the properties of sound as a mechanical stimulus for cells, an experimental system was set up using 94.0 dB sound, which transmits approximately 10 mPa pressure to the cultured cells. Based on research on mechanotransduction and ultrasound effects on cells, gene responses to audible sound stimulation were analyzed by varying several sound parameters: frequency, waveform, composition, and exposure time. Real-time quantitative PCR analyses revealed a distinct sound-triggered suppression of several mechanosensitive and ultrasound-sensitive genes. The effect was clearly observed in a waveform- and pressure-level-specific manner, rather than being frequency-dependent, and persisted for several hours. At least two mechanisms are likely to be involved in this sound response: transcriptional control and RNA degradation. ST2 stromal cells and C2C12 myoblasts exhibited a robust response, whereas NIH3T3 cells were only partially responsive and NB2a neuroblastoma cells were completely insensitive, suggesting a cell type-specific response to sound. These findings reveal a cell-level systematic response to audible sound and uncover novel relationships between life and sound.

  5. Analysis of Respiratory Sounds: State of the Art

    Directory of Open Access Journals (Sweden)

    Sandra Reichert

    2008-01-01

    Full Text Available Objective: This paper describes the state of the art, scientific publications and ongoing research related to methods for the analysis of respiratory sounds. Methods and material: Review of the current medical and technological literature using PubMed and personal experience. Results: The study includes a description of the various techniques that are being used to collect auscultation sounds and a physical description of known pathological sounds for which automatic detection tools were developed. Modern tools are based on artificial intelligence and on techniques such as artificial neural networks, fuzzy systems, and genetic algorithms… Conclusion: The next step will consist in finding new markers so as to increase the efficiency of decision aid algorithms and tools.

  6. Active control of sound transmission through partitions composed of discretely controlled modules

    Science.gov (United States)

    Leishman, Timothy W.

    This thesis provides a detailed theoretical and experimental investigation of active segmented partitions (ASPs) for the control of sound transmission. ASPs are physically segmented arrays of interconnected acoustically and structurally small modules that are discretely controlled using electronic controllers. Theoretical analyses of the thesis first address physical principles fundamental to ASP modeling and experimental measurement techniques. Next, they explore specific module configurations, primarily using equivalent circuits. Measured normal-incidence transmission losses and related properties of experimental ASPs are determined using plane wave tubes and the two-microphone transfer function technique. A scanning laser vibrometer is also used to evaluate distributed transmitting surface vibrations. ASPs have the inherent potential to provide excellent active sound transmission control (ASTC) through lightweight structures, using very practical control strategies. The thesis analyzes several unique ASP configurations and evaluates their abilities to produce high transmission losses via global minimization of normal transmitting surface vibrations. A novel dual diaphragm configuration is shown to employ this strategy particularly well. It uses an important combination of acoustical actuation and mechano-acoustical segmentation to produce exceptionally high transmission loss (e.g., 50 to 80 dB) over a broad frequency range, including lower audible frequencies. Such performance is shown to be comparable to that produced by much more massive partitions composed of thick layers of steel or concrete and sand. The configuration uses only simple localized error sensors and actuators, permitting effective use of independent single-channel controllers in a decentralized format. This work counteracts the commonly accepted notion that active vibration control of partitions is an ineffective means of controlling sound transmission. With appropriate construction, actuation

  7. 10 Hz Amplitude Modulated Sounds Induce Short-Term Tinnitus Suppression

    Directory of Open Access Journals (Sweden)

    Patrick Neff

    2017-05-01

    Full Text Available Objectives: Acoustic stimulation or sound therapy is proposed as a main treatment option for chronic subjective tinnitus. To further probe the field of acoustic stimulation for tinnitus therapy, this exploratory study compared 10 Hz amplitude modulated (AM) sounds (two pure tones, noise, music), frequency modulated (FM) sounds, and unmodulated sounds (pure tone, noise) regarding their temporary suppression of tinnitus loudness. First, it was hypothesized that modulated sounds elicit larger temporary loudness suppression (residual inhibition) than unmodulated sounds. Second, with manipulation of the loudness and duration of the modulated sounds, weaker or stronger effects of loudness suppression were expected, respectively. Methods: We recruited 29 participants with chronic tonal tinnitus from the multidisciplinary Tinnitus Clinic of the University of Regensburg. Participants underwent audiometric, psychometric and tinnitus pitch matching assessments, followed by an acoustic stimulation experiment with a tinnitus loudness growth paradigm. In a first block, participants were stimulated with all of the sounds for 3 min each and rated their subjective tinnitus loudness relative to the pre-stimulus loudness every 30 s after stimulus offset. The same procedure was deployed in the second block with the pure tone AM stimuli matched to the tinnitus frequency and manipulated in length (6 min) and loudness (reduced by 30 dB and linear fade out). Repeated measures mixed model analyses of variance (ANOVA) were calculated to assess differences in loudness growth between the stimuli for each block separately. Results: First, we found that all sounds elicit a short-term suppression of tinnitus loudness (seconds to minutes), with the strongest suppression right after stimulus offset [F(6, 1331) = 3.74, p < 0.01]. Second, similar to previous findings, we found that AM sounds near the tinnitus frequency produce significantly stronger tinnitus loudness suppression than noise [vs. Pink

  8. Field study of sound exposure by personal stereo

    DEFF Research Database (Denmark)

    Ordoñez, Rodrigo Pizarro; Reuter, Karen; Hammershøi, Dorte

    2006-01-01

    A number of large scale studies suggest that the exposure level used with personal stereo systems should raise concern. High levels can be produced by most commercially available mp3 players, and they are generally used in high background noise levels (i.e., while in a bus or train). A field study...... on young people's habitual sound exposure to personal stereos has been carried out using a measurement method according to principles of ISO 11904-2:2004. Additionally the state of their hearing has also been assessed. This presentation deals with the methodological aspects relating to the quantification...... of habitual use, estimation of listening levels and exposure levels, and assessment of their state of hearing, by either threshold determination or OAE measurement, with a special view to the general validity of the results (uncertainty factors and their magnitude)....

  9. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. The article attempts to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds, and offers a range of research methods, considering video game scoring as a contemporary creative practice.

  10. Operator performance and annunciation sounds

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, B K; Bradley, M T; Artiss, W G [Human Factors Practical, Dipper Harbour, NB (Canada)

    1998-12-31

    This paper discusses the audible component of annunciation found in typical operating power stations. The purpose of the audible alarm is stated and the psychological elements involved in the human processing of alarm sounds are explored. Psychological problems with audible annunciation are noted. Simple and more complex improvements to existing systems are described. A modern alarm system is suggested for retrofits or new plant designs. (author) 3 refs.

  11. Effect of temperature on density, sound velocity, and their derived properties for the binary systems glycerol with water or alcohols

    International Nuclear Information System (INIS)

    Negadi, Latifa; Feddal-Benabed, Badra; Bahadur, Indra; Saab, Joseph; Zaoui-Djelloul-Daouadji, Manel; Ramjugernath, Deresh; Negadi, Amina

    2017-01-01

    Highlights: • Densities (ρ) and sound velocities (u) for glycerol + (water, methanol, or ethanol) systems were measured. • The derived properties (excess molar volume, isentropic compressibility and deviation in isentropic compressibility) were calculated. • The Redlich–Kister polynomial was used to fit the experimental results. - Abstract: Densities and sound velocities of three binary systems containing glycerol + (water, methanol, or ethanol) have been measured over the entire composition range at temperatures ranging from (283.15 to 313.15) K in 10 K intervals, at atmospheric pressure. A vibrating U-tube densimeter and sound velocity analyzer (Anton Paar DSA 5000M) was used for the measurements. Thermodynamic properties were derived from the measured data, viz. excess molar volume, isentropic compressibility, and deviation in isentropic compressibility. The property data were correlated with the Redlich-Kister polynomial. In all cases, the excess molar volumes and deviations in isentropic compressibility are negative over the entire composition range for all binary mixtures studied and become increasingly negative with an increase in temperature. These properties provide important information about the different interactions that take place between like-like, like-unlike and unlike-unlike molecules in the mixtures.
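    The Redlich-Kister correlation mentioned above can be illustrated with a short fit. The sketch below, in Python, fits a three-term expansion V^E = x1·x2·Σ A_i (x1 − x2)^i to a hypothetical binary mixture; the composition and excess-molar-volume values are invented placeholders, not the paper's measured data.

```python
# Hedged illustration of a Redlich-Kister fit to excess molar volume data.
import numpy as np
from scipy.optimize import curve_fit

def redlich_kister(x1, A0, A1, A2):
    """Three-parameter Redlich-Kister expansion: VE = x1*x2 * sum_i Ai*(x1 - x2)**i."""
    x2 = 1.0 - x1
    d = x1 - x2
    return x1 * x2 * (A0 + A1 * d + A2 * d**2)

# Hypothetical mole fractions of glycerol and excess molar volumes (cm^3/mol)
x1 = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
VE = np.array([-0.11, -0.20, -0.27, -0.31, -0.32, -0.30, -0.25, -0.18, -0.09])

params, _ = curve_fit(redlich_kister, x1, VE)
print("Fitted Redlich-Kister coefficients A0..A2:", params)
```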

  12. The role of sound in the sensation of ownership of a pair of virtual wings in immersive VR

    DEFF Research Database (Denmark)

    Sikström, Erik; Götzen, Amalia De; Serafin, Stefania

    2014-01-01

    This paper describes an evaluation of the role of self-produced sounds in participants' sensation of ownership and control of virtual wings in an immersive virtual reality scenario where the participants were asked to complete an obstacle course flight while exposed to four different sound conditions...

  13. A noise reduction technique based on nonlinear kernel function for heart sound analysis.

    Science.gov (United States)

    Mondal, Ashok; Saxena, Ishan; Tang, Hong; Banerjee, Poulami

    2017-02-13

    The main difficulty encountered in the interpretation of cardiac sounds is interference from noise. The contaminating noise obscures relevant information that is useful for the recognition of heart diseases. The unwanted signals are produced mainly by the lungs and the surrounding environment. In this paper, a novel heart sound de-noising technique is introduced based on a combined framework of wavelet packet transform (WPT) and singular value decomposition (SVD). The most informative node of the wavelet tree is selected on the criterion of a mutual information measurement. Next, the coefficients corresponding to the selected node are processed by the SVD technique to suppress the noisy component of the heart sound signal. To justify the efficacy of the proposed technique, several experiments were conducted with a heart sound dataset, including normal and pathological cases at different signal-to-noise ratios. The significance of the method is validated by statistical analysis of the results. The biological information preserved in the de-noised heart sound (HS) signal is evaluated by a k-means clustering algorithm and Fit Factor calculation. The overall results show that the proposed method is superior to the baseline methods.
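    As a rough illustration of the de-noising pipeline summarized above, the Python sketch below decomposes a signal with a wavelet packet transform, selects one node, and suppresses noise in that node with a truncated SVD of a Hankel embedding. It is a simplified stand-in for the authors' method: node selection here uses signal energy rather than mutual information, and all parameter choices are illustrative.

```python
import numpy as np
import pywt
from scipy.linalg import hankel

def denoise_wpt_svd(signal, wavelet="db4", level=3, rank=5):
    """Simplified WPT + SVD de-noising: clean one wavelet-packet node via a low-rank SVD."""
    wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    # Simplified node choice: highest energy (the paper selects by mutual information).
    target = max(nodes, key=lambda n: np.sum(n.data ** 2))
    coeffs = target.data
    # Embed the node coefficients in a Hankel (trajectory) matrix and truncate its SVD.
    m = len(coeffs) // 2
    H = hankel(coeffs[:m], coeffs[m - 1:])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    # Average anti-diagonals of the low-rank matrix to recover a 1-D coefficient sequence.
    n = H_low.shape[1]
    denoised = np.array([np.mean(np.diag(H_low[:, ::-1], k)) for k in range(n - 1, -m, -1)])
    target.data = denoised
    return wp.reconstruct(update=True)[:len(signal)]

# Toy usage: a noisy synthetic "heart sound"
fs = 2000
t = np.arange(fs) / fs
noisy = np.sin(2 * np.pi * 40 * t) * np.exp(-10 * t) + 0.3 * np.random.randn(fs)
clean = denoise_wpt_svd(noisy)
```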

  14. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade an increased attention...... has been paid to, for instance, a category such as ‘sound art’ together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term as musical sound – a recurring example being ‘noise’....

  15. Brain regions for sound processing and song release in a small grasshopper.

    Science.gov (United States)

    Balvantray Bhavsar, Mit; Stumpner, Andreas; Heinrich, Ralf

    2017-05-01

    We investigated brain regions - mostly neuropils - that process auditory information relevant for the initiation of response songs of female grasshoppers Chorthippus biguttulus during bidirectional intraspecific acoustic communication. Male-female acoustic duets in the species Ch. biguttulus require the perception of sounds, their recognition as a species- and gender-specific signal and the initiation of commands that activate thoracic pattern generating circuits to drive the sound-producing stridulatory movements of the hind legs. To study sensory-to-motor processing during acoustic communication we used multielectrodes that allowed simultaneous recordings of acoustically stimulated electrical activity from several ascending auditory interneurons or local brain neurons and subsequent electrical stimulation of the recording site. Auditory activity was detected in the lateral protocerebrum (where most of the described ascending auditory interneurons terminate), in the superior medial protocerebrum and in the central complex, that has previously been implicated in the control of sound production. Neural responses to behaviorally attractive sound stimuli showed no or only poor correlation with behavioral responses. Current injections into the lateral protocerebrum, the central complex and the deuto-/tritocerebrum (close to the cerebro-cervical fascicles), but not into the superior medial protocerebrum, elicited species-typical stridulation with high success rate. Latencies and numbers of phrases produced by electrical stimulation were different between these brain regions. Our results indicate three brain regions (likely neuropils) where auditory activity can be detected with two of these regions being potentially involved in song initiation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. SOUND LABOR RELATIONS AT ENTERPRISE LEVEL IN THAILAND

    Directory of Open Access Journals (Sweden)

    Vichai Thosuwonchinda

    2016-07-01

    Full Text Available The objective of this research was to study the pattern of sound labor relations in Thailand in order to reduce conflicts between employers and workers and create cooperation. The research was based on a qualitative approach, using in-depth interviews with 10 stakeholder groups of the Thai industrial relations system: employees of non-unionized companies at the shop floor level, employees of non-unionized companies at the supervisor level, trade union leaders at the company level, trade union leaders at the national level, employers of non-unionized companies, employers’ organization leaders, human resource managers, members of tripartite bodies, government officials and labor academics. The findings were presented in a model identifying 5 characteristics that enhance sound relations in Thailand, i.e. recognition between employer and workers, good communication, trust, disclosure of information and workers’ participation. It was suggested that all parties, employers, workers and the government, should take part in the promotion of sound labor relations. Employers have to acknowledge labor unions with a positive attitude, communicate well with workers, build trust, disclose information, create a culture of mutual benefit, and sincerely accept a system that includes workers’ participation. Workers need a strong labor union and good, sincere representatives to ensure clear communication, trust and mutual benefit, and should seek conflict solutions with the employer through a win-win strategy. The government has a supporting role in adjusting the existing laws appropriately, creating policy for sound labor relations, and putting the idea of sound labor relations into practice.

  17. Beneath sci-fi sound: primer, science fiction sound design, and American independent cinema

    OpenAIRE

    Johnston, Nessa

    2012-01-01

    Primer is a very low budget science-fiction film that deals with the subject of time travel; however, it looks and sounds distinctly different from other films associated with the genre. While Hollywood blockbuster sci-fi relies on “sound spectacle” as a key attraction, Primer sounds “lo-fi” and screen-centred, mixed to two-channel stereo rather than the now industry-standard 5.1 surround sound. Although this is partly a consequence of the economics of its production, the...

  18. Quantifying sound quality in loudspeaker reproduction

    NARCIS (Netherlands)

    Beerends, John G.; van Nieuwenhuizen, Kevin; van den Broek, E.L.

    2016-01-01

    We present PREQUEL: Perceptual Reproduction Quality Evaluation for Loudspeakers. Instead of quantifying the loudspeaker system itself, PREQUEL quantifies the overall perceived sound quality of loudspeakers by assessing their acoustic output using a set of music signals. This approach introduces a

  19. Sound transmission through triple-panel structures lined with poroelastic materials

    OpenAIRE

    Liu, Y

    2015-01-01

    In this paper, previous theories on the prediction of sound transmission loss for a double-panel structure lined with poroelastic materials are extended to address the problem of a triple-panel structure. Six typical configurations are considered for a triple-panel structure based on the method of coupling the porous layers to the facing panels, which critically determines the sound insulation performance of the system. The transfer matrix method is employed to solve the system by applying app...
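    The transfer matrix method referred to above can be sketched in a heavily simplified form: normal incidence, ideal fluid layers, and air on both sides. Real panels and poroelastic linings require larger matrices and additional coupling conditions, and the layer thicknesses and material values below are hypothetical.

```python
import numpy as np

def fluid_layer_matrix(freq, thickness, density, speed):
    """2x2 transfer matrix of a homogeneous fluid layer at normal incidence."""
    k = 2 * np.pi * freq / speed          # wavenumber in the layer
    Z = density * speed                   # characteristic impedance
    kd = k * thickness
    return np.array([[np.cos(kd), 1j * Z * np.sin(kd)],
                     [1j * np.sin(kd) / Z, np.cos(kd)]])

def transmission_loss(freq, layers, rho0=1.21, c0=343.0):
    """Chain the layer matrices and convert to normal-incidence transmission loss (dB)."""
    T = np.eye(2, dtype=complex)
    for thickness, density, speed in layers:
        T = T @ fluid_layer_matrix(freq, thickness, density, speed)
    Z0 = rho0 * c0
    t = 2.0 / (T[0, 0] + T[0, 1] / Z0 + T[1, 0] * Z0 + T[1, 1])
    return -20.0 * np.log10(abs(t))

# Hypothetical triple "panel" stack approximated by dense fluid layers with air gaps
stack = [(0.002, 2700.0, 5000.0), (0.05, 1.21, 343.0),
         (0.002, 2700.0, 5000.0), (0.05, 1.21, 343.0),
         (0.002, 2700.0, 5000.0)]
print(round(transmission_loss(1000.0, stack), 1), "dB at 1 kHz")
```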

  20. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between...... dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity...... is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  1. What the Toadfish Ear Tells the Toadfish Brain About Sound.

    Science.gov (United States)

    Edds-Walton, Peggy L

    2016-01-01

    Of the three, paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.

  2. Towards an open sound card

    DEFF Research Database (Denmark)

    Dimitrov, Smilen; Serafin, Stefania

    2011-01-01

    The architecture of a sound card can, in simple terms, be described as an electronic board containing a digital bus interface hardware, and analog-to-digital (A/D) and digital-to-analog (D/A) converters; then, a soundcard driver software on a personal computer's (PC) operating system (OS) can con...

  3. Sound-based assistive technology support to hearing, speaking and seeing

    CERN Document Server

    Ifukube, Tohru

    2017-01-01

    This book "Sound-based Assistive Technology" explains a technology to help speech-, hearing- and sight-impaired people. They might benefit in some way from an enhancement in their ability to recognize and produce speech or to detect sounds in their surroundings. Additionally, it is considered how sound-based assistive technology might be applied to the areas of speech recognition, speech synthesis, environmental recognition, virtual reality and robots. It is the primary focus of this book to provide an understanding of both the methodology and basic concepts of assistive technology rather than listing the variety of assistive devices developed in Japan or other countries. Although this book presents a number of different topics, they are sufficiently independent from one another that the reader may begin at any chapter without experiencing confusion. It should be acknowledged that much of the research quoted in this book was conducted in the author's laboratories both at Hokkaido University and the University...

  4. Performance analysis of an IMU-augmented GNSS tracking system on board the MAIUS-1 sounding rocket

    Science.gov (United States)

    Braun, Benjamin; Grillenberger, Andreas; Markgraf, Markus

    2018-05-01

    Satellite navigation receivers are adequate tracking sensors for range safety of both orbital launch vehicles and suborbital sounding rockets. Due to their high accuracy and low system complexity, satellite navigation is seen as a well-suited supplement to, or replacement of, conventional tracking systems like radar. Having the well-known shortcomings of satellite navigation, such as deliberate or unintentional interference, in mind, it is proposed to augment the satellite navigation receiver with an inertial measurement unit (IMU) to enhance the continuity and availability of localization. The augmented receiver is thus enabled to output at least an inertial position solution in case of signal outages. In a previous study, it was shown by means of simulation, using the example of Ariane 5, that the performance of a low-grade microelectromechanical IMU is sufficient to bridge expected outages of some ten seconds while still meeting the range safety requirements in effect. In this publication, these theoretical findings are substantiated by real flight data recorded on MAIUS-1, a sounding rocket launched from Esrange, Sweden, in early 2017. The analysis reveals that the chosen representative of a microelectromechanical IMU is suitable to bridge outages of up to thirty seconds.
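    A toy illustration of the bridging idea discussed above: during a GNSS outage the position is propagated by double-integrating accelerometer output, and the error grows with outage duration. The 1-D model, bias, noise level and thrust profile below are invented for illustration and are far simpler than a real strapdown navigation filter.

```python
import numpy as np

dt = 0.01                                      # IMU sample interval [s]
t = np.arange(0.0, 60.0, dt)                   # one minute of flight
true_acc = 2.0 * np.ones_like(t)               # constant 2 m/s^2 thrust (hypothetical)
meas_acc = true_acc + 0.01 + np.random.normal(0.0, 0.05, t.size)  # bias + noise (hypothetical IMU grade)

pos, vel = 0.0, 0.0
positions = []
for a in meas_acc:
    vel += a * dt                              # integrate acceleration to velocity
    pos += vel * dt                            # integrate velocity to position
    positions.append(pos)

true_pos = 0.5 * true_acc[0] * t**2
drift = np.abs(np.array(positions) - true_pos)
print(f"inertial-only position error after a 30 s outage: {drift[int(30/dt)]:.1f} m")
```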

  5. Sound creation and artistic language hybridization through the use of the collaborative creation system: Soundcool

    OpenAIRE

    Berbel-Gómez, Noemy; Murillo-Ribes, Adolf; Sastre-Martínez, Jorge; Riaño Galán, María Elena

    2017-01-01

    Abstract: We submit the development of a collaborative sound creation proposal made reality using the Soundcool system from its initial design phase to the scenic performance at the International Festival of Contemporary Music ENSEMS, Valencia (Spain). The "interstellar machine", a transdisciplinary piece whose linking thread is a story, is characterized by hybridization of languages and artistic fusion.It's a piece made possible by the joint work between students of Primary and Secondary Edu...

  6. Active Noise Control Experiments using Sound Energy Flux

    Science.gov (United States)

    Krause, Uli

    2015-03-01

    This paper reports on the latest results concerning the active noise control approach using net flow of acoustic energy. The test set-up consists of two loudspeakers simulating the engine noise and two smaller loudspeakers which belong to the active noise system. The system is completed by two acceleration sensors and one microphone per loudspeaker. The microphones are located in the near sound field of the loudspeakers. The control algorithm including the update equation of the feed-forward controller is introduced. Numerical simulations are performed with a comparison to a state of the art method minimising the radiated sound power. The proposed approach is experimentally validated.
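    The controlled quantity described above, the net flow of acoustic energy, can be illustrated by estimating the time-averaged sound intensity from a pressure signal and a surface velocity obtained by integrating an accelerometer; a controller would then be tuned to drive this quantity toward zero. The signals and values in the sketch are invented, and this is not the paper's actual controller update equation.

```python
import numpy as np

fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
pressure = 0.2 * np.sin(2 * np.pi * 100 * t)          # Pa, hypothetical near-field microphone signal
accel = 50.0 * np.sin(2 * np.pi * 100 * t + 0.3)      # m/s^2, hypothetical accelerometer signal

velocity = np.cumsum(accel) / fs                      # crude time integration of acceleration
velocity -= velocity.mean()                           # remove integration offset

intensity = np.mean(pressure * velocity)              # W/m^2, net acoustic energy flux
print(f"time-averaged intensity: {intensity:.4f} W/m^2")
```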

  7. Characterization of sound emitted by wind machines used for frost control

    Energy Technology Data Exchange (ETDEWEB)

    Gambino, V.; Gambino, T. [Aercoustics Engineering Ltd., Toronto, ON (Canada); Fraser, H.W. [Ontario Ministry of Agriculture, Food and Rural Affairs, Vineland, ON (Canada)

    2007-07-01

    Wind machines are used in Niagara-on-the-Lake to protect cold-sensitive crops against cold injury during winter's extreme cold temperatures, spring's late frosts and autumn's early frosts. The number of wind machines in Ontario has roughly doubled annually, from only a few in the late 1990's to more than 425 in 2006. They are not used for generating power. Noise complaints have multiplied as the number of wind machines has increased. The objective of this study was to characterize the sound produced by wind machines, learn why residents are annoyed by wind machine noise, and suggest ways to possibly reduce sound emissions. One part of the study explored acoustic emission characteristics, the sonic differences of units made by different manufacturers, sound propagation properties under typical-use atmospheric conditions, and low frequency noise impact potential. Tests were conducted with a calibrated Larson Davis 2900B portable spectrum analyzer. Sound was measured with a microphone whose frequency response covered the range 4 Hz to 20 kHz. The study examined and found several unique acoustic properties that are characteristic of wind machines. It was determined that noise from wind machines is due to both aerodynamic and mechanical effects, but aerodynamic sounds were found to be the most significant. It was concluded that full range or broadband sounds manifest themselves as noise components that extend throughout the audible frequency range, from the blade-pass frequency to upwards of 1000 Hz. The sound spectrum of a wind machine is full of natural tones and impulses that give it a readily identifiable acoustic character. Atmospheric conditions including temperature, lapse rate, relative humidity, mild winds, gradients and atmospheric turbulence all play a significant role in the long range outdoor propagation of sound from wind machines. 6 refs., 6 figs.

  8. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

    This article is an analysis of two sound art performances that took place in June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors of this article suggest exploring the events through approaching them as ‘situations’ (Doherty 2009). With this approach it becomes possible to engage...... and combine theories from several fields. Aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces: visual, auditory, performative, social, spatial and durational dimensions become......

  9. Stabilized platform for tethered balloon soundings of broadband long- and short-wave radiation

    International Nuclear Information System (INIS)

    Alzheimer, J.M.; Anderson, G.A.; Whiteman, C.D.

    1993-01-01

    Changes in the composition of trace gases in the earth's atmosphere have been reported by many observers, and a general concern has been expressed regarding possible changes to the earth's climate that may be caused by radiatively active gases introduced into the earth's atmosphere by man's activities. Radiatively active trace gases produce temperature changes in the earth's atmosphere through changes in radiative flux divergence. Our knowledge of, and means of measuring, radiative flux divergence is very limited. A few observations of vertical radiative flux divergences have been reported from aircraft, from radiometersondes, from towers and from large tethered balloons. These measurement techniques suffer from one or more drawbacks, including shallow sounding depths (towers), high cost (aircraft), complicated logistics (large tethered balloons), and limitation to nighttime hours (radiometersondes). Changes in radiative flux divergence caused by anthropogenic trace gases are expected to be quite small, and will be difficult to measure with existing broadband radiative flux instruments. The emphasis of present research in global climate change is thus being focused on improving radiative transfer algorithms in global climate models. The radiative parameterizations in these models are at an early stage of development and information is needed regarding their performance, especially in cloudy conditions. The impetus for the research reported in this paper is the need for a device that can supplement existing means of measuring vertical profiles of long- and short-wave irradiance and radiative flux divergence. We have designed a small tethered-balloon-based system that can make radiometric soundings through the atmospheric boundary layer. This paper discusses the concept, the design considerations, and the design and construction of this sounding system. The performance of the system will be tested in a series of balloon flights scheduled for the fall and winter of 1992

  10. Differences in directional sound source behavior and perception between assorted computer room models

    DEFF Research Database (Denmark)

    Vigeant, Michelle C.; Wang, Lily M.; Rindel, Jens Holger

    2004-01-01

    Source directivity is an important input variable when using room acoustic computer modeling programs to generate auralizations. Previous research has shown that using a multichannel anechoic recording can produce a more natural sounding auralization, particularly as the number of channels...... considering reverberation time. However, for the three other parameters evaluated (sound pressure level, clarity index and lateral fraction), the changing diffusivity of the room does not diminish the importance of the directivity. The study therefore shows the importance of considering source directivity......

  11. Differences in directional sound source behavior and perception between assorted computer room models

    DEFF Research Database (Denmark)

    Vigeant, M. C.; Wang, L. M.; Rindel, Jens Holger

    2004-01-01

    Source directivity is an important input variable when using room acoustic computer modeling programs to generate auralizations. Previous research has shown that using a multichannel anechoic recording can produce a more natural sounding auralization, particularly as the number of channels...... time. However, for the three other parameters evaluated (sound-pressure level, clarity index, and lateral fraction), the changing diffusivity of the room does not diminish the importance of the directivity. The study therefore shows the importance of considering source directivity when using computer......

  12. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  13. Research on fiber Bragg grating heart sound sensing and wavelength demodulation method

    Science.gov (United States)

    Zhang, Cheng; Miao, Chang-Yun; Gao, Hua; Gan, Jing-Meng; Li, Hong-Qiang

    2010-11-01

    Heart sound contains a great deal of physiological and pathological information about the heart and blood vessels. Heart sound detection is an important method for assessing the status of the heart and has important significance for the early diagnosis of cardiopathy. In order to improve sensitivity and reduce noise, a heart sound measurement method based on a fiber Bragg grating was investigated. Based on the vibration principle of a plane circular diaphragm, a fiber Bragg grating heart sound sensor structure was designed and a heart sound sensing mathematical model was established. A formula for the heart sound sensitivity was deduced, and the theoretical sensitivity of the designed sensor is 957.11 pm/kPa. Based on the matched grating method, an experimental system was built, by which the shift of the reflected wavelength of the sensing grating was detected and the heart sound information was obtained. Experiments show that the designed sensor can detect heart sounds and that the reflected wavelength varies over a range of about 70 pm. When the sampling frequency is 1 kHz, the heart sound waveform extracted using the db4 wavelet has the same characteristics as that from a standard heart sound sensor.
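    Assuming the sensing relation implied by the abstract, a measured Bragg wavelength shift divided by the sensitivity gives the acoustic pressure. The short sketch below uses the quoted theoretical sensitivity of 957.11 pm/kPa together with a synthetic wavelength trace; it is illustrative only, not the authors' demodulation code.

```python
import numpy as np

SENSITIVITY_PM_PER_KPA = 957.11      # theoretical sensitivity quoted in the abstract

def pressure_from_shift(delta_lambda_pm):
    """Convert a Bragg wavelength shift in picometres to pressure in kPa."""
    return delta_lambda_pm / SENSITIVITY_PM_PER_KPA

# Synthetic heart-sound-like wavelength excursion of roughly 70 pm peak to peak
t = np.linspace(0.0, 1.0, 1000)
delta_lambda = 35.0 * np.sin(2 * np.pi * 25 * t) * np.exp(-5 * t)   # pm
pressure = pressure_from_shift(delta_lambda)
print(f"peak acoustic pressure: {pressure.max() * 1000:.1f} Pa")
```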

  14. Recognition and characterization of unstructured environmental sounds

    Science.gov (United States)

    Chu, Selina

    2011-12-01

    Environmental sounds are what we hear every day or, more generally, the ambient or background audio that surrounds us. Humans utilize both vision and hearing to respond to their surroundings, a capability still quite limited in machine processing. The first step toward achieving multimodal input applications is the ability to process unstructured audio and recognize audio scenes (or environments). Such an ability would have applications in content analysis and mining of multimedia data or in improving robustness in context-aware applications through multi-modality, such as in assistive robotics, surveillance, or mobile device-based services. The goal of this thesis is the characterization of unstructured environmental sounds for understanding and predicting the context surrounding an agent or device. Most research on audio recognition has focused primarily on speech and music; less attention has been paid to the challenges and opportunities of using audio to characterize unstructured environments. My research focuses on investigating the challenging issues in characterizing unstructured environmental audio and on developing novel algorithms for modeling the variations of the environment. The first step in building a recognition system for unstructured auditory environments was to investigate techniques and audio features for working with such audio data. We begin by performing a study that explores suitable features and the feasibility of designing an automatic environment recognition system using audio information. In this initial investigation, I found that traditional recognition and feature extraction approaches for audio were not suitable for environmental sound, as environmental sounds lack the kinds of structure, such as the formant and harmonic structures of speech and music, that those approaches rely on, thus dispelling the notion that traditional speech and music recognition techniques can simply

  15. Determining the speed of sound in the air by sound wave interference

    Science.gov (United States)

    Silva, Abel A.

    2017-07-01

    Mechanical waves propagate through material media. Sound is an example of a mechanical wave. In fluids like air, sound waves propagate through successive longitudinal perturbations of compression and decompression. Audible sound frequencies for human ears range from 20 to 20 000 Hz. In this study, the speed of sound v in the air is determined using the identification of maxima of interference from two synchronous waves at frequency f. The values of v were corrected to 0 °C. An experimental average value of $\bar{v}_{\exp} = 336 \pm 4$ m s$^{-1}$ was found, which is 1.5% larger than the reference value. The standard deviation of 4 m s$^{-1}$ (1.2% of $\bar{v}_{\exp}$) is a value improved by use of the central limit theorem. The proposed procedure to determine the speed of sound in the air aims to be an academic activity for physics classes of scientific and technological courses in college.
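    A minimal sketch of the measurement idea, assuming that the spacing between successive interference maxima directly gives the wavelength (a simplification of the real geometry): multiply that spacing by the driving frequency to obtain v. The frequency and the microphone positions below are invented example readings, not the author's data.

```python
import numpy as np

f = 3400.0                                    # driving frequency of both sources [Hz]
maxima_positions = np.array([0.102, 0.201, 0.299, 0.400, 0.498])   # metres (hypothetical readings)

wavelength = np.mean(np.diff(maxima_positions))   # mean spacing between successive maxima
v = f * wavelength
print(f"estimated speed of sound: {v:.0f} m/s")   # about 337 m/s for these example numbers
```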

  16. Fluid Sounds

    DEFF Research Database (Denmark)

    Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970's, developed into various established research - and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation...

  17. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.
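    For readers unfamiliar with the degradation used above, the sketch below implements a generic four-channel noise vocoder: the signal is split into frequency bands, each band's temporal envelope is extracted and used to modulate band-limited noise. Band edges, filter orders and the envelope method are illustrative choices, not necessarily those used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, band_edges=(100, 500, 1500, 3500, 7000)):
    """Generic N-channel noise vocoder (here 4 channels for the given band edges)."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)                                # analysis band
        envelope = np.abs(hilbert(band))                               # temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(signal.size))   # band-limited noise
        out += envelope * carrier                                      # envelope-modulated channel
    return out / (np.max(np.abs(out)) + 1e-12)

# Example: vocode one second of a synthetic vowel-like tone complex at 16 kHz
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 600 * t)
degraded = noise_vocode(speech_like, fs)
```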

  18. Silent oceans: ocean acidification impoverishes natural soundscapes by altering sound production of the world's noisiest marine invertebrate.

    Science.gov (United States)

    Rossi, Tullio; Connell, Sean D; Nagelkerken, Ivan

    2016-03-16

    Soundscapes are multidimensional spaces that carry meaningful information for many species about the location and quality of nearby and distant resources. Because soundscapes are the sum of the acoustic signals produced by individual organisms and their interactions, they can be used as a proxy for the condition of whole ecosystems and their occupants. Ocean acidification resulting from anthropogenic CO2 emissions is known to have profound effects on marine life. However, despite the increasingly recognized ecological importance of soundscapes, there is no empirical test of whether ocean acidification can affect biological sound production. Using field recordings obtained from three geographically separated natural CO2 vents, we show that forecasted end-of-century ocean acidification conditions can profoundly reduce the biological sound level and frequency of snapping shrimp snaps. Snapping shrimp were among the noisiest marine organisms and the suppression of their sound production at vents was responsible for the vast majority of the soundscape alteration observed. To assess mechanisms that could account for these observations, we tested whether long-term exposure (two to three months) to elevated CO2 induced a similar reduction in the snapping behaviour (loudness and frequency) of snapping shrimp. The results indicated that the soniferous behaviour of these animals was substantially reduced in both frequency (snaps per minute) and sound level of snaps produced. As coastal marine soundscapes are dominated by biological sounds produced by snapping shrimp, the observed suppression of this component of soundscapes could have important and possibly pervasive ecological consequences for organisms that use soundscapes as a source of information. This trend towards silence could be of particular importance for those species whose larval stages use sound for orientation towards settlement habitats. © 2016 The Author(s).

  19. Sound Exposure of Symphony Orchestra Musicians

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Pedersen, Ellen Raben; Juhl, Peter Møller

    2011-01-01

    Background: Assessment of sound exposure by noise dosimetry can be challenging, especially when measuring the exposure of classical orchestra musicians, where sound originates from many different instruments. A new measurement method of bilateral sound exposure of classical musicians was developed and used to characterize sound exposure of the left and right ear simultaneously in two different symphony orchestras. Objectives: To measure binaural sound exposure of professional classical musicians and to identify possible exposure risk factors of specific musicians. Methods: Sound exposure was measured...... dBA and their left ear was exposed 4.6 dB more than the right ear. Percussionists were exposed to high sound peaks >115 dBC, but less continuous sound exposure was observed in this group. Musicians were exposed up to an LAeq8h of 92 dB and a majority of musicians were exposed to sound levels exceeding......

  20. How male sound pressure level influences phonotaxis in virgin female Jamaican field crickets (Gryllus assimilis)

    Directory of Open Access Journals (Sweden)

    Karen Pacheco

    2014-06-01

    Full Text Available Understanding female mate preference is important for determining the strength and direction of sexual trait evolution. The sound pressure level (SPL) acoustic signalers use is often an important predictor of mating success because higher sound pressure levels are detectable at greater distances. If females are more attracted to signals produced at higher sound pressure levels, then the potential fitness impacts of signalling at higher sound pressure levels should be elevated beyond what would be expected from detection distance alone. Here we manipulated the sound pressure level of cricket mate attraction signals to determine how female phonotaxis was influenced. We examined female phonotaxis using two common experimental methods: spherical treadmills and open arenas. Both methods showed similar results, with females exhibiting greatest phonotaxis towards loud sound pressure levels relative to the standard signal (69 vs. 60 dB SPL) but showing reduced phonotaxis towards very loud sound pressure level signals relative to the standard (77 vs. 60 dB SPL). Reduced female phonotaxis towards supernormal stimuli may signify an acoustic startle response, an absence of other required sensory cues, or perceived increases in predation risk.
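    The link between sound pressure level and detection distance invoked above can be made concrete with a spherical-spreading estimate (6 dB loss per doubling of distance), ignoring excess attenuation and background noise. Only the three playback levels come from the study; the reference distance and detection threshold in the sketch are hypothetical.

```python
import numpy as np

def spl_at_distance(spl_ref, r, r_ref=0.2):
    """SPL at distance r for a source measured as spl_ref at r_ref (metres), spherical spreading."""
    return spl_ref - 20.0 * np.log10(r / r_ref)

def detection_distance(spl_ref, threshold=45.0, r_ref=0.2):
    """Distance at which the signal falls to the assumed detection threshold."""
    return r_ref * 10.0 ** ((spl_ref - threshold) / 20.0)

for spl in (60.0, 69.0, 77.0):    # the three playback levels used in the study
    print(f"{spl:.0f} dB SPL source -> detectable out to ~{detection_distance(spl):.1f} m")
```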

  1. New developments in the surveillance and diagnostics technology for vibration, structure-borne sound and leakage monitoring systems

    International Nuclear Information System (INIS)

    Gloth, Gerrit

    2009-01-01

    Monitoring and diagnostic systems are of major importance for the safe and efficient operation of nuclear power plants. The author describes new developments with respect to vibration monitoring, with a functional extension in the time domain for the secondary circuit; the development of a local system for the surveillance of rotating machines; structure-borne sound monitoring with improved event analysis, especially loose parts location; leakage monitoring with a complete system for humidity measurement; and the development of a common platform for all monitoring and diagnostic systems that allows efficient access for comparisons and cross references.

  2. Analysis of chewing sounds for dietary monitoring

    NARCIS (Netherlands)

    Amft, O.D.; Stäger, M.; Lukowicz, P.; Tröster, G.

    2005-01-01

    The paper reports the results of the first stage of our work on an automatic dietary monitoring system. The work is part of a large European project on using ubiquitous systems to support healthy lifestyle and cardiovascular disease prevention. We demonstrate that sound from the user's mouth can be

  3. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    Science.gov (United States)

    Wolf, Gail Marie

    2016-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…

  4. Techniques and applications for binaural sound manipulation in human-machine interfaces

    Science.gov (United States)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1992-01-01

    The application of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and a technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
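    A conceptual sketch of the HRTF-based filtering mentioned above: a monaural cue is convolved with left- and right-ear head-related impulse responses (HRIRs) to yield a binaural signal. The two "HRIRs" here are crude placeholders (a pure delay and attenuation), standing in for measured responses such as a KEMAR set.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
t = np.arange(0, 0.25, 1.0 / fs)
cue = np.sin(2 * np.pi * 1000 * t) * np.hanning(t.size)    # simple auditory icon

# Placeholder HRIRs: right ear delayed and attenuated -> source appears to the left
hrir_left = np.zeros(64);  hrir_left[0] = 1.0
hrir_right = np.zeros(64); hrir_right[20] = 0.6             # ~0.45 ms interaural delay

binaural = np.stack([fftconvolve(cue, hrir_left),
                     fftconvolve(cue, hrir_right)], axis=1)  # (samples, 2) stereo buffer
```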

  5. Speed of Sound in Hadronic matter using Non-extensive Statistics

    CERN Document Server

    Khuntia, Arvind; Garg, Prakhar; Sahoo, Raghunath; Cleymans, Jean

    2016-01-01

    The speed of sound ($c_s$) is studied to understand the hydrodynamical evolution of the matter created in heavy-ion collisions. The quark gluon plasma (QGP) formed in heavy-ion collisions evolves from an initial QGP to the hadronic phase via a possible mixed phase. Due to the system expansion in a first order phase transition scenario, the speed of sound reduces to zero as the specific heat diverges. We study the speed of sound for systems that deviate from a thermalized Boltzmann distribution, using non-extensive Tsallis statistics. In the present work, we calculate the speed of sound as a function of temperature for different $q$-values for a hadron resonance gas. We observe a similar mass cut-off behaviour for $c^{2}_s$ in the non-extensive case when heavier particles are included, as is observed for a hadron resonance gas following equilibrium statistics. Also, we explicitly show that the temperature at which the mass cut-off starts varies with the $q$-parameter, which hints at a relation between the d...
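    For reference, the standard expressions behind the quantities discussed above are written out below: the Tsallis single-particle distribution with non-extensivity parameter q, and the squared speed of sound of a hadron resonance gas at vanishing chemical potential. The notation is generic and not copied from the paper.

```latex
% Tsallis (non-extensive) single-particle distribution; q -> 1 recovers Boltzmann
\[
  f_q(E) \;=\; \Bigl[\,1 + (q-1)\,\frac{E-\mu}{T}\Bigr]^{-\frac{1}{q-1}},
  \qquad \lim_{q\to 1} f_q(E) \;=\; e^{-(E-\mu)/T} .
\]
% Squared speed of sound at vanishing chemical potential
\[
  c_s^{2} \;=\; \left.\frac{\partial P}{\partial \varepsilon}\right|_{s/n}
          \;=\; \frac{\partial P/\partial T}{\partial \varepsilon/\partial T},
\]
% where P and \varepsilon are the pressure and energy density obtained by summing
% over the hadron resonance gas spectrum with weight f_q.
```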

  6. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2015-01-01

    Handbook for Sound Engineers is the most comprehensive reference available for audio engineers, and is a must read for all who work in audio.With contributions from many of the top professionals in the field, including Glen Ballou on interpretation systems, intercoms, assistive listening, and fundamentals and units of measurement, David Miles Huber on MIDI, Bill Whitlock on audio transformers and preamplifiers, Steve Dove on consoles, DAWs, and computers, Pat Brown on fundamentals, gain structures, and test and measurement, Ray Rayburn on virtual systems, digital interfacing, and preamplifiers

  7. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

    The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using...... both physical and digital models at three distinct scales. The results suggest hyperboloid geometry, while difficult to fabricate, facilitates sound scattering....

  8. The influence of ski helmets on sound perception and sound localisation on the ski slope

    Directory of Open Access Journals (Sweden)

    Lana Ružić

    2015-04-01

    Full Text Available Objectives: The aim of the study was to investigate whether a ski helmet interferes with sound localization and the time of sound perception in the frontal plane. Material and Methods: Twenty-three participants (age 30.7±10.2) were tested on the slope under 2 conditions, with and without a ski helmet, using 6 different spatially distributed sound stimuli per condition. Each subject had to react as soon as possible upon hearing a sound and to signal the correct side of its arrival. Results: The results showed a significant difference in the ability to localize the specific ski sounds: 72.5±15.6% of correct answers without a helmet vs. 61.3±16.2% with a helmet (p < 0.01). However, performance on this test did not depend on whether the participants were used to wearing a helmet (p = 0.89). In identifying the time at which the sound was first perceived, the results were also in favor of the subjects not wearing a helmet. The subjects reported hearing the ski sound cues at 73.4±5.56 m without a helmet vs. 60.29±6.34 m with a helmet (p < 0.001). In that case the results did depend on previous helmet use (p < 0.05), meaning that regular use of helmets might help to diminish the attenuation of sound identification that occurs because of the helmet. Conclusions: Ski helmets might limit the ability of a skier to localize the direction of sounds of danger and might interfere with the moment at which a sound is first heard.

  9. A Relational Database Model and Tools for Environmental Sound Recognition

    Directory of Open Access Journals (Sweden)

    Yuksel Arslan

    2017-12-01

    Full Text Available Environmental sound recognition (ESR) has become a hot topic in recent years. ESR is mainly based on machine learning (ML), and ML algorithms first require a training database. This database must comprise the sounds to be recognized and other related sounds. An ESR system needs the database during training, testing and in the production stage. In this paper, we present the design and pilot establishment of a database which will assist all researchers who want to establish an ESR system. This database employs the relational database model, which has not been used for this task before. We explain the design and implementation details of the database and the data collection and loading process. We also describe the tools and the graphical user interfaces developed for a desktop application and for the Web.
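    A hypothetical minimal version of such a relational schema, expressed with Python's built-in sqlite3 module, is sketched below. Table and column names are invented for illustration; the published database defines its own schema.

```python
import sqlite3

conn = sqlite3.connect("esr_sounds.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS sound_class (
    class_id    INTEGER PRIMARY KEY,
    name        TEXT NOT NULL UNIQUE          -- e.g. 'siren', 'dog bark', 'rain'
);
CREATE TABLE IF NOT EXISTS recording (
    recording_id  INTEGER PRIMARY KEY,
    class_id      INTEGER NOT NULL REFERENCES sound_class(class_id),
    file_path     TEXT NOT NULL,
    sample_rate   INTEGER NOT NULL,
    duration_s    REAL NOT NULL,
    split         TEXT CHECK (split IN ('train', 'test'))   -- training vs. testing partition
);
""")
conn.execute("INSERT OR IGNORE INTO sound_class (name) VALUES ('siren')")
conn.commit()
```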

  10. Otolith research for Puget Sound

    Science.gov (United States)

    Larsen, K.; Reisenbichler, R.

    2007-01-01

    Otoliths are hard structures located in the brain cavity of fish. These structures are formed by a buildup of calcium carbonate within a gelatinous matrix that produces light and dark bands similar to the growth rings in trees. The width of the bands corresponds to environmental factors such as temperature and food availability. As juvenile salmon encounter different environments in their migration to sea, they produce growth increments of varying widths and visible 'checks' corresponding to times of stress or change. The resulting pattern of band variations and check marks leave a record of fish growth and residence time in each habitat type. This information helps Puget Sound restoration by determining the importance of different habitats for the optimal health and management of different salmon populations. The USGS Western Fisheries Research Center (WFRC) provides otolith research findings directly to resource managers who put this information to work.

  11. Neural Correlates of Indicators of Sound Change in Cantonese: Evidence from Cortical and Subcortical Processes

    OpenAIRE

    Maggu, Akshay R.; Liu, Fang; Antoniou, Mark; Wong, Patrick C. M.

    2016-01-01

    Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language undergo changes over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, ...

  12. 77 FR 37318 - Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort...

    Science.gov (United States)

    2012-06-21

    ...-AA00 Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort... Coast Guard will enforce a Safety Zone for the Sound of Independence event in the Santa Rosa Sound, Fort... during the Sound of Independence. During the enforcement period, entry into, transiting or anchoring in...

  13. Considerations for sound policy on investment in the forestry sector ...

    African Journals Online (AJOL)

    This paper examines the amount of real capital produced in terms of standing trees during selected periods in the forestry sector of Osun and Oyo states, with a view to informing sound policy on investment. Information was gathered through the use of primary and secondary data. The information obtained was analyzed ...

  14. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  15. Intensive treatment with ultrasound visual feedback for speech sound errors in childhood apraxia

    Directory of Open Access Journals (Sweden)

    Jonathan L Preston

    2016-08-01

    Full Text Available Ultrasound imaging is an adjunct to traditional speech therapy that has been shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients additional knowledge about their tongue shapes when attempting to produce sounds that are in error. The additional feedback may assist children with childhood apraxia of speech in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants aged 10-14 diagnosed with childhood apraxia of speech attended 16 hours of speech therapy over a two-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor

  16. Sound transmission through triple-panel structures lined with poroelastic materials

    Science.gov (United States)

    Liu, Yu

    2015-03-01

    In this paper, previous theories on the prediction of sound transmission loss for a double-panel structure lined with poroelastic materials are extended to address the problem of a triple-panel structure. Six typical configurations are considered for the triple-panel structure, based on the method of coupling the porous layers to the facing panels, which critically determines the sound insulation performance of the system. The transfer matrix method is employed to solve the system by applying appropriate boundary conditions for these configurations. The transmission loss of the triple-panel structures in a diffuse sound field is calculated as a function of frequency and compared with that of corresponding double-panel structures. Generally, the triple-panel structure with poroelastic linings has superior acoustic performance to its double-panel counterpart, particularly in the mid-to-high frequency range and possibly at low frequencies, provided appropriate configurations are selected; those with two air gaps in the structure exhibit the best overall performance over the entire frequency range. The poroelastic lining significantly lowers the cut-on frequency above which the triple-panel structure exhibits noticeably higher transmission loss. Compared with a double-panel structure, the wider range of system parameters for a triple-panel structure, due to the additional partition, provides more design space for tuning the sound insulation performance. Despite the increased structural complexity, the triple-panel structure lined with poroelastic materials has obvious advantages in sound transmission loss without penalties in weight and volume, and is hence a promising replacement for the widely used double-panel sandwich structure.
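    To make the transfer-matrix idea concrete, the sketch below computes normal-incidence transmission loss for a simplified triple-panel stack consisting only of limp panels and air gaps; the poroelastic linings, diffuse-field averaging and the paper's full boundary conditions are not modelled, and all panel masses and gap widths are hypothetical. Each layer contributes a 2x2 matrix relating pressure and normal particle velocity on its two faces, and the product of these matrices gives the transmission coefficient of the whole stack.

```python
import numpy as np

def panel_matrix(m, omega):
    """Transfer matrix of a thin limp panel with surface density m (kg/m^2)."""
    return np.array([[1.0, 1j * omega * m],
                     [0.0, 1.0]])

def fluid_layer_matrix(d, omega, rho=1.21, c=343.0):
    """Transfer matrix of an air layer of thickness d (m), normal incidence."""
    k = omega / c
    Z = rho * c
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

def transmission_loss(freqs, layers):
    """Normal-incidence TL (dB) of a layer stack between two air half-spaces."""
    rho, c = 1.21, 343.0
    Z0 = rho * c
    tl = []
    for f in freqs:
        omega = 2 * np.pi * f
        T = np.eye(2, dtype=complex)
        for kind, value in layers:
            T = T @ (panel_matrix(value, omega) if kind == "panel"
                     else fluid_layer_matrix(value, omega))
        tau_inv = 0.5 * (T[0, 0] + T[0, 1] / Z0 + Z0 * T[1, 0] + T[1, 1])
        tl.append(20 * np.log10(abs(tau_inv)))
    return np.array(tl)

# Hypothetical triple-panel configuration: three 5 kg/m^2 panels separated by
# two 30 mm air gaps (the poroelastic linings of the paper are not modelled).
layers = [("panel", 5.0), ("gap", 0.03), ("panel", 5.0), ("gap", 0.03), ("panel", 5.0)]
freqs = np.logspace(2, 4, 200)
print(transmission_loss(freqs, layers)[:5])
```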

  17. Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model

    Science.gov (United States)

    Marsh, John E.; Campbell, Tom A.

    2016-01-01

    The rostral brainstem receives both “bottom-up” input from the ascending auditory system and “top-down” descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks. Wave V decreases in amplitude with increases in the visually presented memory load. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: The sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. Attentional selectivity is crucial when the signal heard is degraded or masked: e

  18. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  19. DESIGN AND ENGINEERING BACKGROUND FOR STATION NETWORKS OF VERTICAL IONOSPHERE SOUNDING

    Directory of Open Access Journals (Sweden)

    A. Y. Grishentsev

    2013-05-01

    Full Text Available The paper analyzes the structure of station networks for vertical ionosphere sounding. Design features and principles for building software complexes for the automated processing, analysis and storage of ionosphere sounding data are considered. A conceptual model of the complex's database management system is developed. The results of this work are used in the research practice of leading national organizations studying the ionosphere. Application results of the suggested algorithms and programs for the automated processing and analysis of vertical ionosphere sounding data are presented.

  20. How Pleasant Sounds Promote and Annoying Sounds Impede Health: A Cognitive Approach

    Directory of Open Access Journals (Sweden)

    Tjeerd C. Andringa

    2013-04-01

    Full Text Available This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of the perceiver can be understood in terms of core affect and motivation. This conceptual basis allows the formulation of a detailed cognitive model describing how sonic content, related to indicators of safety and danger, either allows full freedom over mind-states or forces the activation of a vigilance function with associated arousal. The model leads to a number of detailed predictions that can be used to provide existing soundscape approaches with a solid cognitive science foundation that may lead to novel approaches to soundscape design. These will take into account that louder sounds typically contribute to distal situational awareness while subtle environmental sounds provide proximal situational awareness. The role of safety indicators, mediated by proximal situational awareness and subtle sounds, should become more important in future soundscape research.

  1. Planning and Producing Audiovisual Materials. Third Edition.

    Science.gov (United States)

    Kemp, Jerrold E.

    A revised edition of this handbook provides illustrated, step-by-step explanations of how to plan and produce audiovisual materials. Included are sections on the fundamental skills--photography, graphics and recording sound--followed by individual sections on photographic print series, slide series, filmstrips, tape recordings, overhead…

  2. Effect of ultrasonic, sonic and rotating-oscillating powered toothbrushing systems on surface roughness and wear of white spot lesions and sound enamel: An in vitro study.

    Science.gov (United States)

    Hernandé-Gatón, Patrícia; Palma-Dibb, Regina Guenka; Silva, Léa Assed Bezerra da; Faraoni, Juliana Jendiroba; de Queiroz, Alexandra Mussolino; Lucisano, Marília Pacífico; Silva, Raquel Assed Bezerra da; Nelson Filho, Paulo

    2018-04-01

    To evaluate the effect of ultrasonic, sonic and rotating-oscillating powered toothbrushing systems on the surface roughness and wear of white spot lesions and sound enamel. 40 tooth segments obtained from third molar crowns had the enamel surface divided into thirds, one of which was not subjected to toothbrushing. In the other two thirds, sound enamel and enamel with artificially induced white spot lesions were randomly assigned to four groups (n=10): UT: ultrasonic toothbrush (Emmi-dental); ST1: sonic toothbrush (Colgate ProClinical Omron); ST2: sonic toothbrush (Sonicare Philips); and ROT: rotating-oscillating toothbrush (control) (Oral-B Professional Care Triumph 5000 with SmartGuide). The specimens were analyzed by confocal laser microscopy for surface roughness and wear. Data were analyzed statistically by paired t-tests, Kruskal-Wallis, two-way ANOVA and Tukey's post-test (α = 0.05). The different powered toothbrushing systems did not cause a significant increase in the surface roughness of sound enamel (P > 0.05). In the ROT group, the roughness of the white spot lesion surface increased significantly after toothbrushing and differed from the UT group (P < 0.05); wear was also greater on the white spot lesion compared with sound enamel, and this group differed significantly from the ST1 group (P < 0.05), so that rotating-oscillating toothbrushing of enamel with a white spot lesion increased surface roughness and wear. None of the powered toothbrushing systems (ultrasonic, sonic and rotating-oscillating) tested caused significant alterations to sound dental enamel. However, conventional rotating-oscillating toothbrushing on enamel with a white spot lesion increased surface roughness and wear. Copyright © American Journal of Dentistry.

  3. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust, but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...

  4. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.
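    The spectrum analysis the book describes can be reproduced with a few lines of NumPy. The sketch below is not taken from the book; it uses a synthetic three-harmonic tone as a stand-in for a real instrument recording.

```python
import numpy as np

# Minimal sketch: estimate the frequency spectrum of a recorded note.
# A synthetic tone stands in for a real instrument recording.
fs = 44100                      # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
signal = (np.sin(2 * np.pi * 220 * t)          # fundamental (A3)
          + 0.5 * np.sin(2 * np.pi * 440 * t)  # 2nd harmonic
          + 0.2 * np.sin(2 * np.pi * 660 * t)) # 3rd harmonic

spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"Strongest component near {peak:.1f} Hz")
```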

  5. Systems and methods for producing electrical discharges in compositions

    KAUST Repository

    Cha, Min Suk

    2015-09-03

    Systems and methods configured to produce electrical discharges in compositions, for example compositions that comprise mixtures of materials such as a material having a high dielectric constant and a material having a low dielectric constant (e.g., two liquids of high and low dielectric constant, or a solid of high dielectric constant in a liquid of low dielectric constant, and similar compositions), and further systems and methods configured to produce materials, through material modification and/or material synthesis, resulting in part from producing electrical discharges in such compositions.

  6. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is studied experimentally using the time-difference method. It is found that the sound velocity increases with increasing bubble diameter and asymptotically approaches the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to scattering at the bubble walls is equivalently described as the effect of an additional length. This simple model reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that increasing the frequency markedly reduces the sound velocity, whereas the velocity does not depend strongly on the solution concentration

  7. Research and Design on Trigger System Based on Acoustic Delay Correlation Filtering

    Directory of Open Access Journals (Sweden)

    Zhiyong Lei

    2014-01-01

    Full Text Available In exterior trajectory testing, a muzzle trigger system is usually needed to provide a start signal for other measuring devices; customary trigger systems include off-target, infrared and acoustic detection systems. However, the inherent echo reflections of an acoustic detection system cause the original sound-trigger signal to be submerged in echo interference during bursts and shooting in a closed room, so that an accurate trigger cannot be produced. To overcome this defect, this paper analyzes in detail a mathematical model based on acoustic delay correlation filtering and then puts forward a minimum-path constraint condition for the delay correlation filter. Under this constraint, the delay correlation filter can perform de-noising accurately. To verify the accuracy and practical performance of the model, a MEMS sound sensor was used to implement the mathematical model in practice; experimental results show that the system can filter out the multipath sound echoes of the muzzle shock wave signal and produce the desired trigger signal accurately.
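    The paper's exact filter is not reproduced in the abstract. As a hedged illustration of the general idea, the sketch below triggers on the earliest (minimum-path, direct) arrival of a known transient rather than on a later, possibly stronger echo; the template, thresholds and synthetic signal are all invented for the example.

```python
import numpy as np

def first_arrival_trigger(signal, template, fs, threshold_ratio=0.5):
    """Generic sketch: trigger on the earliest strong match of a transient
    template in a noisy, echo-laden recording.

    Cross-correlates the recording with the template and returns the time of
    the FIRST sample exceeding threshold_ratio * max(correlation), i.e. the
    minimum-path (direct) arrival rather than a later, stronger echo.
    """
    corr = np.abs(np.correlate(signal, template, mode="valid"))
    above = np.flatnonzero(corr >= threshold_ratio * corr.max())
    if above.size == 0:
        return None
    return above[0] / fs  # trigger time in seconds

# Synthetic demonstration: a short transient followed by a delayed, stronger echo.
fs = 48000
t = np.arange(0, 0.002, 1 / fs)
template = np.exp(-t * 3000) * np.sin(2 * np.pi * 2000 * t)
signal = np.zeros(fs // 10)
signal[1000:1000 + template.size] += template          # direct (minimum-path) arrival
signal[4000:4000 + template.size] += 1.5 * template    # louder room echo
signal += 0.02 * np.random.randn(signal.size)          # background noise

print(f"Trigger at {first_arrival_trigger(signal, template, fs) * 1000:.2f} ms")
```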

  8. Research on the application of active sound barriers for the transformer noise abatement

    Directory of Open Access Journals (Sweden)

    Hu Sheng

    2016-01-01

    Full Text Available Sound barriers are the measure most commonly used for the noise abatement of transformers. In substation noise abatement projects, the design of sound barriers is constrained by the portal frames that support the outgoing lines from the main transformers, which limits the noise reduction effect. If active sound barriers are used in these locations, the diffraction of noise over the barriers can be effectively reduced. At a 110 kV substation, an experiment with a 15-channel active sound barrier was carried out. The results show that the mean noise reduction value (MNRV) at the noise measuring points on the substation boundary is 1.5 dB(A). The performance of the active noise control system depends on its layout, the on-site acoustic environment and the spectral characteristics of the target area.

  9. Effect of perforation on the sound transmission through a double-walled cylindrical shell

    Science.gov (United States)

    Zhang, Qunlin; Mao, Yijun; Qi, Datong

    2017-12-01

    An analytical model is developed to study the sound transmission loss through a general double-walled cylindrical shell system with one or both walls perforated, which is excited by a plane wave in the presence of external mean flow. The shell motion is governed by the classical Donnell thin shell theory, and the mean particle velocity model is employed to describe the boundary conditions at the interfaces between the shells and the fluid media. In contrast to the conventional solid double-walled shell system, numerical results show that perforating the inner shell on the transmission side improves the sound insulation performance over a wide frequency band and removes the fluctuation of sound transmission loss with frequency at mid-frequencies in the absence of external flow. Both the incidence and azimuthal angles have a nearly negligible effect on the sound transmission loss over the low and middle frequency range when the inner shell is perforated. The width of the frequency band with continuous sound transmission loss can be tuned by the perforation ratio.

  10. Sound-proof Sandwich Panel Design via Metamaterial Concept

    Science.gov (United States)

    Sui, Ni

    Sandwich panels consisting of hollow core cells and two face-sheets bonded on both sides have been widely used as lightweight, strong structures in practical engineering applications, but they have poor acoustic performance, especially in the low-frequency regime. Basic sound-proofing methods for sandwich panel design fall naturally into sound insulation and sound absorption. Motivated by the metamaterial concept, this dissertation presents two sandwich panel designs that avoid weight or size penalties: a lightweight yet sound-proof honeycomb acoustic metamaterial can be used as the core material for honeycomb sandwich panels to block sound and break the mass law, realizing minimum sound transmission; the other sandwich panel design is based on coupled Helmholtz resonators and can achieve perfect sound absorption without sound reflection. Based on the honeycomb sandwich panel, the mechanical properties of the honeycomb core structure were studied first. By incorporating a thin membrane on top of each honeycomb core, the traditional honeycomb core turns into a honeycomb acoustic metamaterial. The basic theory for this kind of membrane-type acoustic metamaterial is demonstrated by a lumped model with an infinite periodic oscillator system, and the negative dynamic effective mass density of the clamped membrane is analyzed under the membrane resonance condition. The evanescent wave mode caused by the negative dynamic effective mass density, together with impedance methods, is used to interpret the physical behavior of honeycomb acoustic metamaterials at resonance. The honeycomb metamaterials can greatly improve low-frequency sound transmission loss below the first resonant frequency of the membrane. The membrane properties, the membrane tension and the number of attached membranes all affect the sound transmission loss, as observed in numerical simulations and validated by experiments. The sandwich panel which incorporates the honeycomb metamaterial as
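    For reference, the mass law mentioned above is the standard textbook result for a single limp panel of surface density m at normal incidence (the formula below is generic, not quoted from the dissertation):

```latex
\mathrm{TL}_0(f) \;=\; 10\log_{10}\!\left[1 + \left(\frac{\pi f m}{\rho_0 c_0}\right)^{2}\right]
\;\approx\; 20\log_{10}(f\,m) \;-\; 42\ \text{dB}
```

    Each doubling of frequency or of surface mass therefore adds roughly 6 dB, which is why membrane-type metamaterials are attractive: they can exceed this prediction near the membrane resonance without adding mass.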

  11. Sound Localization Strategies in Three Predators

    DEFF Research Database (Denmark)

    Carr, Catherine E; Christensen-Dalsgaard, Jakob

    2015-01-01

    In this paper, we compare some of the neural strategies for sound localization and encoding interaural time differences (ITDs) in three predatory species of Reptilia: alligators, barn owls and geckos. Birds and crocodilians are sister groups among the extant archosaurs, while geckos are lepidosaurs. Despite the similar organization of their auditory systems, archosaurs and lizards use different strategies for encoding the ITDs that underlie localization of sound in azimuth. Barn owls encode ITD information using a place map, which is composed of neurons serving as labeled lines tuned for preferred spatial locations, while geckos may use a meter strategy or population code composed of broadly sensitive neurons that represent ITD via changes in the firing rate.

  12. Noise detection in heart sound recordings.

    Science.gov (United States)

    Zia, Mohammad K; Griffel, Benjamin; Fridman, Vladimir; Saponieri, Cesare; Semmlow, John L

    2011-01-01

    Coronary artery disease (CAD) is the leading cause of death in the United States. Although the progression of CAD can be controlled with drugs and diet, it is usually detected in advanced stages when invasive treatment is required. Current methods to detect CAD are invasive and/or costly, and hence not suitable as a regular screening tool to detect CAD in its early stages. We are currently developing a noninvasive and cost-effective system to detect CAD using an acoustic approach. This method identifies sounds generated by turbulent flow through partially narrowed coronary arteries to detect CAD. The limiting factor of this method is its sensitivity to the noise commonly encountered in the clinical setting. Because the CAD sounds are faint, this noise can easily obscure them and make detection impossible. In this paper, we propose a method to detect and eliminate noise encountered in the clinical setting using a reference channel. We show that our method is effective in detecting noise, which is essential to the success of the acoustic approach.
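    The authors' reference-channel algorithm is not detailed in the abstract. As a hedged illustration only, the sketch below flags analysis frames in which a hypothetical ambient reference microphone is unusually energetic, so that those frames can be excluded before searching for faint CAD sounds; all signals and thresholds are invented for the example.

```python
import numpy as np

def flag_noisy_frames(primary, reference, fs, frame_s=0.1, energy_ratio=3.0):
    """Illustrative sketch (not the authors' algorithm): flag analysis frames in
    which the reference (ambient) channel is unusually energetic, indicating
    external noise that may also contaminate the primary (chest) channel.
    Returns a boolean mask, one entry per frame (True = noisy, discard).
    """
    n = int(frame_s * fs)
    n_frames = min(len(primary), len(reference)) // n
    ref_energy = np.array([np.mean(reference[i*n:(i+1)*n] ** 2)
                           for i in range(n_frames)])
    baseline = np.median(ref_energy)          # robust estimate of quiet-room level
    return ref_energy > energy_ratio * baseline

# Synthetic example: 10 s of recordings with a burst of room noise at 4-5 s.
fs = 2000
t = np.arange(0, 10, 1 / fs)
primary = 0.1 * np.random.randn(t.size)        # stand-in for a chest recording
reference = 0.05 * np.random.randn(t.size)     # ambient reference microphone
reference[4*fs:5*fs] += 0.5 * np.random.randn(fs)   # external noise burst
primary[4*fs:5*fs] += 0.3 * np.random.randn(fs)     # ...which leaks into the chest channel

mask = flag_noisy_frames(primary, reference, fs)
print(f"{mask.sum()} of {mask.size} frames flagged as noisy")
```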

  13. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods to determine the velocity measurements are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  14. Sensory illusions: Common mistakes in physics regarding sound, light and radio waves

    Science.gov (United States)

    Briles, T. M.; Tabor-Morris, A. E.

    2013-03-01

    Optical illusions are well known as effects in which what we see is not representative of reality. Sensory illusions are similar but can involve senses other than sight, such as hearing or touch. One mistake commonly noted by instructors is that students often misidentify radio signals as sound waves rather than as part of the electromagnetic spectrum. A survey of physics students from multiple high schools highlights the frequency of this common misconception, as well as other nuances of this misunderstanding. Many students appear to conclude that, because they experience radio broadcasts as sound, radio signals are actually transmitted as sound waves, rather than as electromagnetic waves that the receiver, the radio, translates into sound. Steps to help students identify and correct sensory-illusion misconceptions are discussed.
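    A simple back-of-the-envelope comparison (added here as an illustration; it is not part of the survey) makes the distinction concrete: over a 10 km path, an electromagnetic radio signal and a sound wave arrive on very different timescales.

```latex
t_{\text{radio}} = \frac{d}{c} = \frac{10^{4}\ \text{m}}{3\times 10^{8}\ \text{m/s}} \approx 33\ \mu\text{s},
\qquad
t_{\text{sound}} = \frac{d}{v_{\text{air}}} = \frac{10^{4}\ \text{m}}{343\ \text{m/s}} \approx 29\ \text{s}
```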

  15. Propagation of Finite Amplitude Sound in Multiple Waveguide Modes.

    Science.gov (United States)

    van Doren, Thomas Walter

    1993-01-01

    This dissertation describes a theoretical and experimental investigation of the propagation of finite amplitude sound in multiple waveguide modes. Quasilinear analytical solutions of the full second order nonlinear wave equation, the Westervelt equation, and the KZK parabolic wave equation are obtained for the fundamental and second harmonic sound fields in a rectangular rigid-wall waveguide. It is shown that the Westervelt equation is an acceptable approximation of the full nonlinear wave equation for describing guided sound waves of finite amplitude. A system of first order equations based on both a modal and harmonic expansion of the Westervelt equation is developed for waveguides with locally reactive wall impedances. Fully nonlinear numerical solutions of the system of coupled equations are presented for waveguides formed by two parallel planes which are either both rigid, or one rigid and one pressure release. These numerical solutions are compared to finite -difference solutions of the KZK equation, and it is shown that solutions of the KZK equation are valid only at frequencies which are high compared to the cutoff frequencies of the most important modes of propagation (i.e., for which sound propagates at small grazing angles). Numerical solutions of both the Westervelt and KZK equations are compared to experiments performed in an air-filled, rigid-wall, rectangular waveguide. Solutions of the Westervelt equation are in good agreement with experiment for low source frequencies, at which sound propagates at large grazing angles, whereas solutions of the KZK equation are not valid for these cases. At higher frequencies, at which sound propagates at small grazing angles, agreement between numerical solutions of the Westervelt and KZK equations and experiment is only fair, because of problems in specifying the experimental source condition with sufficient accuracy.
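    For reference, the Westervelt equation referred to above is commonly written in the following standard form (generic notation, not quoted from the dissertation), where p is the acoustic pressure, c_0 the small-signal sound speed, rho_0 the ambient density, delta the sound diffusivity and beta the coefficient of nonlinearity:

```latex
\nabla^{2} p \;-\; \frac{1}{c_{0}^{2}}\frac{\partial^{2} p}{\partial t^{2}}
\;+\; \frac{\delta}{c_{0}^{4}}\frac{\partial^{3} p}{\partial t^{3}}
\;=\; -\,\frac{\beta}{\rho_{0} c_{0}^{4}}\frac{\partial^{2} p^{2}}{\partial t^{2}}
```

    The KZK equation mentioned in the abstract is a parabolic (one-way) approximation of this equation, which is why its validity is restricted to propagation at small grazing angles.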

  16. Analytical Lie-algebraic solution of a 3D sound propagation problem in the ocean

    Energy Technology Data Exchange (ETDEWEB)

    Petrov, P.S., E-mail: petrov@poi.dvo.ru [Il'ichev Pacific Oceanological Institute, 43 Baltiyskaya str., Vladivostok, 690041 (Russian Federation); Prants, S.V., E-mail: prants@poi.dvo.ru [Il'ichev Pacific Oceanological Institute, 43 Baltiyskaya str., Vladivostok, 690041 (Russian Federation); Petrova, T.N., E-mail: petrova.tn@dvfu.ru [Far Eastern Federal University, 8 Sukhanova str., 690950, Vladivostok (Russian Federation)

    2017-06-21

    The problem of sound propagation in a shallow sea with variable bottom slope is considered. The sound pressure field produced by a time-harmonic point source in such an inhomogeneous 3D waveguide is expressed in the form of a modal expansion. The expansion coefficients are computed using the adiabatic mode parabolic equation theory. The mode parabolic equations are solved explicitly, and the analytical expressions for the modal coefficients are obtained using a Lie-algebraic technique. - Highlights: • A group-theoretical approach is applied to a problem of sound propagation in a shallow sea with variable bottom slope. • An analytical solution of this problem is obtained in the form of a modal expansion with analytical expressions for the coefficients. • Our result is the only analytical solution of the 3D sound propagation problem with no translational invariance. • This solution can be used for the validation of numerical propagation models.
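    In adiabatic mode theory, the modal expansion referred to above is typically written in the following generic form (notation assumed here, not quoted from the paper), where the functions phi_j are the local vertical modes and the coefficients A_j satisfy mode parabolic equations in the horizontal coordinates:

```latex
p(x, y, z) \;=\; \sum_{j} A_{j}(x, y)\, \varphi_{j}(z;\, x, y)
```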

  17. Robust Sounds of Activities of Daily Living Classification in Two-Channel Audio-Based Telemonitoring

    Directory of Open Access Journals (Sweden)

    David Maunder

    2013-01-01

    Full Text Available Despite recent advances in the area of home telemonitoring, the challenge of automatically detecting the sound signatures of activities of daily living of an elderly patient using nonintrusive and reliable methods remains. This paper investigates the classification of eight typical sounds of daily life from arbitrarily positioned two-microphone sensors under realistic noisy conditions. In particular, the role of several source separation and sound activity detection methods is considered. Evaluations on a new four-microphone database collected under four realistic noise conditions reveal that effective sound activity detection can produce significant gains in classification accuracy and that further gains can be made using source separation methods based on independent component analysis. Encouragingly, the results show that recognition accuracies in the range 70%–100% can be consistently obtained using different microphone-pair positions, under all but the most severe noise conditions.
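    As a hedged illustration of how the two ingredients named in the abstract fit together (this is not the paper's pipeline; the mixing matrix, signals and detector are invented for the example), the sketch below separates a two-microphone mixture with FastICA from scikit-learn and then applies a simple frame-energy sound activity detector to each separated source.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Illustrative sketch: separate two arbitrarily mixed sources from a
# two-microphone recording with FastICA, then apply a simple energy-based
# sound activity detector to each separated source.
rng = np.random.default_rng(0)
fs = 8000
t = np.arange(0, 2, 1 / fs)

s1 = (np.sin(2 * np.pi * 3 * t) > 0) * np.sin(2 * np.pi * 440 * t)  # gated "alarm"-like tone
s2 = rng.standard_normal(t.size)                                    # broadband noise source
sources = np.c_[s1, s2]

mixing = np.array([[0.8, 0.4],      # unknown room/microphone mixing (hypothetical)
                   [0.3, 0.9]])
observed = sources @ mixing.T       # two-microphone observation

ica = FastICA(n_components=2, random_state=0)
separated = ica.fit_transform(observed)   # estimated independent sources

def activity_mask(x, fs, frame_s=0.05):
    """Frame-energy sound activity detection: True where the frame energy
    exceeds a quarter of the maximum frame energy."""
    n = int(frame_s * fs)
    energy = np.array([np.mean(x[i:i + n] ** 2) for i in range(0, len(x) - n, n)])
    return energy > 0.25 * energy.max()

for k in range(2):
    mask = activity_mask(separated[:, k], fs)
    print(f"Separated source {k}: {mask.sum()} of {mask.size} frames active")
```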

  18. Product sounds : Fundamentals and application

    NARCIS (Netherlands)

    Ozcan-Vieira, E.

    2008-01-01

    Products are ubiquitous, so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about

  19. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system, and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and a spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on the source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies, which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing for extracting spatial information about sources independently of the source signal.

  20. Mapping of sound scattering objects in the northern part of the Barents Sea and their geological interpretation

    Science.gov (United States)

    Sokolov, S. Yu.; Moroz, E. A.; Abramova, A. S.; Zarayskaya, Yu. A.; Dobrolubova, K. O.

    2017-07-01

    On cruises 25 (2007) and 28 (2011) of the R/V Akademik Nikolai Strakhov in the northern part of the Barents Sea, the Geological Institute, Russian Academy of Sciences, conducted comprehensive research on the bottom relief and the upper part of the sedimentary cover under the auspices of the International Polar Year program. One of the instrument components was the SeaBat 8111 shallow-water multibeam echo sounder, which can map the acoustic field similarly to a side-scan sonar, recording the response both from the bottom and from the water column. In the operations area, intense sound-scattering objects produced by the discharge of deep fluid flows were detected in the water column. The sound-scattering objects and pockmarks in the bottom relief are related to anomalies in hydrocarbon gas concentrations in the bottom sediments. The sound-scattering objects are localized over Triassic sequences outcropping at the bottom. The most intense degassing processes manifest themselves near the contact of the Triassic sequences with Jurassic clay deposits, as well as over deep depressions in the field of Bouguer anomalies related to the basement of the Jurassic-Cretaceous rift system.