WorldWideScience

Sample records for sound signal required

  1. 33 CFR 67.20-10 - Sound signal.

    Science.gov (United States)

    2010-07-01

... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signal. 67.20-10 Section 67... AIDS TO NAVIGATION ON ARTIFICIAL ISLANDS AND FIXED STRUCTURES Class “A” Requirements § 67.20-10 Sound signal. (a) The owner of a Class “A” structure shall: (1) Install a sound signal that has a rated range...

  2. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

During the recording of lung sound (LS) signals from the chest wall of a subject, the heart sound (HS) signal always interferes with them. This obscures the features of the lung sound signals and creates confusion about any pathological state of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference in the desired lung sound signals. In this method, the mixed signal is split into several components. Some of these components contain larger proportions of interfering signals, such as heart sound and environmental noise, and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart sound and lung sound. The proposed method is found to be superior in terms of time-domain, frequency-domain, and time-frequency-domain representations, and also in a listening test performed by a pulmonologist.
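
A rough sketch of the decompose-and-discard idea (not the authors' implementation): a minimal EMD in numpy, with linear-interpolation envelopes standing in for the cubic splines normally used, applied to a synthetic fast-plus-slow mixture. Keeping only the first IMF suppresses the slow, heart-like interference.

```python
import numpy as np

def sift_one_imf(x, n_sift=10):
    """Extract one IMF by sifting. Envelopes use linear interpolation of
    extrema (a simplification; full EMD uses cubic splines)."""
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(n_sift):
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 2 or len(minima) < 2:
            break
        upper = np.interp(t, maxima, h[maxima])
        lower = np.interp(t, minima, h[minima])
        h = h - (upper + lower) / 2.0  # subtract the local envelope mean
    return h

def emd(x, n_imfs=3):
    """Decompose x into IMFs plus a residual."""
    imfs, residual = [], x.copy()
    for _ in range(n_imfs):
        imf = sift_one_imf(residual)
        imfs.append(imf)
        residual = residual - imf
    return imfs, residual

# toy mixture: a fast lung-like component plus slow heart-like interference
fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)
lung = 0.5 * np.sin(2 * np.pi * 150 * t)   # fast component
heart = np.sin(2 * np.pi * 1.5 * t)        # slow interference
mixed = lung + heart

imfs, residual = emd(mixed, n_imfs=3)
recovered = imfs[0]  # the first IMF holds the fastest oscillation
```

On this toy signal the first IMF correlates strongly with the lung-like component, which is the effect the paper exploits by discarding the interference-dominated components.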

  3. Analysis of acoustic sound signal for ONB measurement

    International Nuclear Information System (INIS)

    Park, S. J.; Kim, H. I.; Han, K. Y.; Chai, H. T.; Park, C.

    2003-01-01

The onset of nucleate boiling (ONB) was measured in a test fuel bundle composed of several fuel element simulators (FES) by analysing the acoustic sound signals. In order to measure ONB, a hydrophone, a pre-amplifier, and a data acquisition system to acquire and process the acoustic signal were prepared. The acoustic signal generated in the coolant is converted to a current signal by the hydrophone. When the signal is analyzed in the frequency domain, each sound signal can be identified according to the origin of its sound source. As the power is increased beyond a certain level, nucleate boiling begins. The frequent formation and collapse of void bubbles produce a sound signal, and by measuring this sound signal one can pinpoint the ONB. Since the signal characteristics are identical for different mass flow rates, this method is applicable for ascertaining ONB.
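
The frequency-domain identification step can be illustrated with a short numpy sketch. The frequencies and band edges below are hypothetical (the actual bubble-noise band depends on the facility); the point is that a jump of power in the bubble band marks the onset of boiling.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in the band [f_lo, f_hi] Hz, from the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum() / len(signal)

fs = 10_000
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
flow_noise = 0.1 * rng.standard_normal(t.size)   # broadband pump/flow noise
bubble_tone = np.sin(2 * np.pi * 3000 * t)       # stand-in for bubble noise

quiet = flow_noise                  # before ONB
boiling = flow_noise + bubble_tone  # after ONB

p_quiet = band_power(quiet, fs, 2500, 3500)
p_boiling = band_power(boiling, fs, 2500, 3500)
```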

  4. 33 CFR 81.20 - Lights and sound signal appliances.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Lights and sound signal appliances. 81.20 Section 81.20 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY... appliances. Each vessel under the 72 COLREGS, except the vessels of the Navy, is exempt from the requirements...

  5. Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA

    Science.gov (United States)

    Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng

    2011-12-01

The impact of a high pressure water-jet on targets of different materials produces different reflective mixed sounds. In order to reconstruct the distribution of the reflective sound signals on the linear detecting line accurately and to separate the environment noise effectively, the mixed sound signals acquired by a linear microphone array were processed by ICA. The basic principle of ICA and the FastICA algorithm are described in detail. An emulation experiment was designed: the environment noise was simulated by band-limited white noise, and the reflective sound signal was simulated by a pulse signal. The attenuation of the reflective sound signal over different transmission distances was simulated by weighting the sound signal with different coefficients. The mixed sound signals acquired by the linear microphone array were synthesized from the above simulated signals and were whitened and separated by ICA. The final results verified that the separation of the environment noise and the reconstruction of the sound distribution on the detecting line can be realized effectively.
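
A minimal numpy rendering of the separation idea, with a pulse train standing in for the reflected jet sound and white noise for the environment, as in the paper's emulation. The FastICA details here (tanh nonlinearity, symmetric decorrelation) are one common variant, not necessarily the authors' exact algorithm.

```python
import numpy as np

def fastica_2src(X, n_iter=100, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity) for a 2 x N mixture X."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # whitening: rotate and scale so the mixture has identity covariance
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])
    Xw = (E / np.sqrt(d)) @ E.T @ X
    W = rng.standard_normal((2, 2))
    for _ in range(n_iter):
        G = np.tanh(W @ Xw)
        # fixed-point update: E[g(Wx) x^T] - diag(E[g'(Wx)]) W
        W = (G @ Xw.T) / Xw.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)   # symmetric decorrelation
        W = U @ Vt
    return W @ Xw

fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
pulse = np.zeros_like(t)
pulse[::100] = 1.0                      # "reflected jet" pulse train
rng = np.random.default_rng(1)
noise = rng.standard_normal(t.size)     # "environment" noise
S = np.vstack([pulse, noise])
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # mixing at two microphones
Y = fastica_2src(A @ S)                 # recovered components (up to sign/order)
```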

  6. Sounds of Modified Flight Feathers Reliably Signal Danger in a Pigeon.

    Science.gov (United States)

    Murray, Trevor G; Zeil, Jochen; Magrath, Robert D

    2017-11-20

In his book on sexual selection, Darwin [1] devoted equal space to non-vocal and vocal communication in birds. Since then, vocal communication has become a model for studies of neurobiology, learning, communication, evolution, and conservation [2, 3]. In contrast, non-vocal "instrumental music," as Darwin called it, has only recently become subject to sustained inquiry [4, 5]. In particular, outstanding work reveals how feathers, often highly modified, produce distinctive sounds [6-9], and suggests that these sounds have evolved at least 70 times, in many orders [10]. It remains to be shown, however, that such sounds are signals used in communication. Here we show that crested pigeons (Ocyphaps lophotes) signal alarm with specially modified wing feathers. We used video and feather-removal experiments to demonstrate that the highly modified 8th primary wing feather (P8) produces a distinct note during each downstroke. The sound changes with wingbeat frequency, so that birds fleeing danger produce wing sounds with a higher tempo. Critically, a playback experiment revealed that only if P8 is present does the sound of escape flight signal danger. Our results therefore indicate, nearly 150 years after Darwin's book, that modified feathers can be used for non-vocal communication, and they reveal an intrinsically reliable alarm signal. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. A homology sound-based algorithm for speech signal interference

    Science.gov (United States)

    Jiang, Yi-jiao; Chen, Hou-jin; Li, Ju-peng; Zhang, Zhan-song

    2015-12-01

Aiming at secure analog speech communication, a homology sound-based algorithm for speech signal interference is proposed in this paper. We first split the speech signal into phonetic fragments by a short-term energy method and establish an interference noise cache library from the phonetic fragments. Then we implement the homology sound interference by mixing randomly selected interferential fragments with the original speech in real time. The computer simulation results indicate that the interference produced by this algorithm has the advantages of real-time operation, randomness, and high correlation with the original signal, compared with traditional noise interference methods such as white noise interference. After further study, the proposed algorithm may be readily used in secure speech communication.
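
The two steps, short-term-energy segmentation and fragment remixing, might be sketched as follows. Frame sizes and the energy threshold are illustrative choices, not taken from the paper.

```python
import numpy as np

def short_term_energy(x, frame=256, hop=128):
    """Frame-wise energy of the signal."""
    n = 1 + (len(x) - frame) // hop
    return np.array([np.sum(x[i * hop:i * hop + frame] ** 2) for i in range(n)])

def split_fragments(x, frame=256, hop=128, thresh=0.1):
    """(start, end) sample ranges whose short-term energy exceeds
    `thresh` times the maximum frame energy."""
    e = short_term_energy(x, frame, hop)
    active = e > thresh * e.max()
    fragments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i * hop
        elif not a and start is not None:
            fragments.append((start, i * hop + frame))
            start = None
    if start is not None:
        fragments.append((start, len(x)))
    return fragments

# toy "speech": two voiced bursts separated by silence
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
speech = np.where((t < 0.3) | (t > 0.6), np.sin(2 * np.pi * 300 * t), 0.0)
frags = split_fragments(speech)

# interfere by adding a randomly chosen fragment back onto the signal
rng = np.random.default_rng(0)
s, e = frags[rng.integers(len(frags))]
interference = np.zeros_like(speech)
interference[:e - s] = speech[s:e]
jammed = speech + 0.8 * interference
```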

  8. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    Science.gov (United States)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, obtaining the corrected acoustic propagation time delay and path. The corrected time delay and path, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way in 3D space to reconstruct the signal of a sound source in an environment with airflow, instead of the numerical TR. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and superiority of the proposed method. Moreover, a comparison, both theoretical and experimental, between the AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.

  9. Applications of Hilbert Spectral Analysis for Speech and Sound Signals

    Science.gov (United States)

    Huang, Norden E.

    2003-01-01

A new method for analyzing nonlinear and nonstationary data has been developed, and its natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same number of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. An IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identifications of embedded structures. This invention can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound signal enhancement and filtering. Additionally, the acoustical signals from machinery are essentially the way the machines talk to us: the acoustical signals from machines, whether sound through air or vibration on the machines, can tell us the operating conditions of the machines. Thus, we can use the acoustic signal to diagnose the problems of machines.
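
The core computation, instantaneous frequency from the Hilbert transform of an IMF, can be sketched in a few lines of numpy, shown here on a pure tone standing in for one IMF.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (what scipy.signal.hilbert computes)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
imf = np.sin(2 * np.pi * 80 * t)   # a pure tone standing in for one IMF

# instantaneous frequency = derivative of the unwrapped analytic phase
phase = np.unwrap(np.angle(analytic_signal(imf)))
inst_freq_hz = np.diff(phase) * fs / (2 * np.pi)
```

For a nonstationary IMF (e.g. a chirp) the same computation tracks the frequency as it changes over time, which is the basis of the Hilbert spectrum.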

  10. Effect of sound on gap-junction-based intercellular signaling: Calcium waves under acoustic irradiation.

    Science.gov (United States)

    Deymier, P A; Swinteck, N; Runge, K; Deymier-Black, A; Hoying, J B

    2015-01-01

    We present a previously unrecognized effect of sound waves on gap-junction-based intercellular signaling such as in biological tissues composed of endothelial cells. We suggest that sound irradiation may, through temporal and spatial modulation of cell-to-cell conductance, create intercellular calcium waves with unidirectional signal propagation associated with nonconventional topologies. Nonreciprocity in calcium wave propagation induced by sound wave irradiation is demonstrated in the case of a linear and a nonlinear reaction-diffusion model. This demonstration should be applicable to other types of gap-junction-based intercellular signals, and it is thought that it should be of help in interpreting a broad range of biological phenomena associated with the beneficial therapeutic effects of sound irradiation and possibly the harmful effects of sound waves on health.

  11. Fish protection at water intakes using a new signal development process and sound system

    International Nuclear Information System (INIS)

    Loeffelman, P.H.; Klinect, D.A.; Van Hassel, J.H.

    1991-01-01

American Electric Power Company, Inc., is exploring the feasibility of using a patented signal development process and sound system to guide aquatic animals with underwater sound. Sounds from animals such as chinook salmon, steelhead trout, striped bass, freshwater drum, largemouth bass, and gizzard shad can be used to synthesize a new signal that stimulates the animal in the most sensitive portion of its hearing range. AEP's field tests demonstrate that adult chinook salmon, steelhead trout, and warmwater fish, as well as steelhead trout and chinook salmon smolts, can be repelled with a properly tuned system. The signal development process and sound system is designed to be transportable and to use animals at the site, so as to incorporate site-specific factors known to affect underwater sound, e.g., bottom shape and type, water current, and temperature. This paper reports that, because the overall goal of this research was to determine the feasibility of using sound to divert fish, it was essential that the approach use a signal development process which could be customized to the animals and site conditions at any hydropower plant site.

  12. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals) and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter...... is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many...... of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...
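
The size-versus-wavelength argument rests on the elementary relation lambda = c/f; a two-line sketch, assuming sound in air at room temperature:

```python
# Speed of sound in air at ~20 °C (assumed constant for this sketch).
C_AIR = 343.0  # m/s

def wavelength_m(frequency_hz):
    """Wavelength of a sound wave: lambda = c / f."""
    return C_AIR / frequency_hz

# a 1 kHz signal is ~34 cm long: an emitter much smaller than this
# cannot efficiently launch a propagating sound pressure wave
lam_1khz = wavelength_m(1000.0)
```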

  13. Constructions complying with tightened Danish sound insulation requirements for new housing

    OpenAIRE

    Rasmussen, Birgit; Hoffmeyer, Dan

    2010-01-01

New sound insulation requirements in Denmark in 2008: New Danish Building Regulations with tightened sound insulation requirements were introduced in 2008 (and in 2010 with unchanged acoustic requirements). Compared to the Building Regulations from 1995, the airborne sound insulation requirements were 2–3 dB stricter and the impact sound insulation requirements 5 dB stricter. The limit values are given using the descriptors R’w and L’n,w as before. For the first time, acoustic requirements fo...

  14. Sound insulation requirements in the Nordic countries

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

All Nordic countries have sound insulation requirements for housing and sound classification schemes originating from a common INSTA-proposal in the mid-90s, but these have unfortunately become increasingly diversified since then. The present situation impedes development and creates barriers for trade and e...

  15. Constructions complying with tightened Danish sound insulation requirements for new housing

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Hoffmeyer, Dan

New sound insulation requirements in Denmark in 2008: New Danish Building Regulations with tightened sound insulation requirements were introduced in 2008 (and in 2010 with unchanged acoustic requirements). Compared to the Building Regulations from 1995, the airborne sound insulation requirements...... were 2–3 dB stricter and the impact sound insulation requirements 5 dB stricter. The limit values are given using the descriptors R’w and L’n,w as before. For the first time, acoustic requirements for dwellings are not found as figures in the Building Regulations. Instead, it is stated......), Denmark. [2] "Lydisolering mellem boliger – Nybyggeri" (Sound insulation between dwellings – Newbuild)". Publication expected in April 2011. The guideline is a part of a series of seven new SBi acoustic guidelines. Project leader Birgit Rasmussen. The series shall replace the existing guidelines 1984...

  16. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions about how to approach sound as aesthetic form and material. During the past decade an increased attention...... has been paid to, for instance, a category such as ‘sound art’ together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term musical sound – a recurring example being ‘noise’....

  17. Root phonotropism: Early signalling events following sound perception in Arabidopsis roots.

    Science.gov (United States)

    Rodrigo-Moreno, Ana; Bazihizina, Nadia; Azzarello, Elisa; Masi, Elisa; Tran, Daniel; Bouteau, François; Baluska, Frantisek; Mancuso, Stefano

    2017-11-01

Sound is a fundamental form of energy, and it has been suggested that plants can make use of acoustic cues to obtain information about their environments and to alter and fine-tune their growth and development. Despite an increasing body of evidence indicating that sound can influence plant growth and physiology, many questions concerning the effect of sound waves on plant growth and the underlying signalling mechanisms remain open. Here we show that in Arabidopsis thaliana, exposure to sound waves (200 Hz) for 2 weeks induced positive phonotropism in roots, which grew towards the sound source. We found that sound waves very quickly (within minutes) triggered an increase in cytosolic Ca2+, possibly mediated by an influx through the plasma membrane and a release from internal stores. Sound waves likewise elicited rapid reactive oxygen species (ROS) production and K+ efflux. Taken together, these results suggest that changes in ion fluxes (Ca2+ and K+) and an increase in superoxide production are involved in sound perception in plants, as previously established in animals. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Evaluating signal-to-noise ratios, loudness, and related measures as indicators of airborne sound insulation.

    Science.gov (United States)

    Park, H K; Bradley, J S

    2009-09-01

Subjective ratings of the audibility, annoyance, and loudness of music and speech sounds transmitted through 20 different simulated walls were used to identify better single-number ratings of airborne sound insulation. The first part of this research considered standard measures such as the sound transmission class, the weighted sound reduction index (R(w)), and variations of these measures [H. K. Park and J. S. Bradley, J. Acoust. Soc. Am. 126, 208-219 (2009)]. This paper considers a number of other measures, including signal-to-noise ratios related to the intelligibility of speech and measures related to the loudness of sounds. An exploration of the importance of the included frequencies showed that the optimum ranges of included frequencies were different for speech and music sounds. Measures related to speech intelligibility were useful indicators of responses to speech sounds but were not as successful for music sounds. A-weighted level differences, signal-to-noise ratios, and an A-weighted sound transmission loss measure were good predictors of responses when the included frequencies were optimized for each type of sound. The addition of new spectrum adaptation terms to R(w) values was found to be the most practical approach for achieving more accurate predictions of subjective ratings of transmitted speech and music sounds.
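
For readers unfamiliar with A-weighted level differences, the sketch below combines per-band levels using the standard IEC 61672 A-weighting curve. The band frequencies and levels are made-up inputs, not data from the study.

```python
import math

def a_weight_db(f):
    """IEC 61672 A-weighting curve, in dB relative to 1 kHz."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00

def a_weighted_level(band_freqs_hz, band_levels_db):
    """Combine per-band levels into one A-weighted level (energy sum)."""
    total = sum(10.0 ** ((L + a_weight_db(f)) / 10.0)
                for f, L in zip(band_freqs_hz, band_levels_db))
    return 10.0 * math.log10(total)

# made-up transmitted octave-band levels on the receiving side
freqs = [125.0, 250.0, 500.0, 1000.0, 2000.0]
levels = [52.0, 48.0, 44.0, 40.0, 36.0]
receiving_level_dba = a_weighted_level(freqs, levels)
```

An A-weighted level difference is then simply the A-weighted source-side level minus the A-weighted receiving-side level.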

  19. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  20. On the sound field requirements in the hearing protector standard ISO 4869-1

    DEFF Research Database (Denmark)

    Jensen, N. S.; Poulsen, Torben

    1999-01-01

The sound field requirements in the ISO 4869-1 standard for hearing protector attenuation measurements comprise two parts: 1) a sound level difference requirement for positions around the head of the listener (i.e., at positions 15 cm from a reference point: up-down, front-back and left-right) and 2) a directivity requirement for the sound incidence at the reference point, measured with a directional microphone, to ensure an approximately diffuse sound field. The level difference requirement (1) is not difficult to fulfil but the directivity requirement (2) may lead to contradicting results if the measurement

  1. Stochastic Signal Processing for Sound Environment System with Decibel Evaluation and Energy Observation

    Directory of Open Access Journals (Sweden)

    Akira Ikuta

    2014-01-01

In real sound environment systems, a specific signal shows various types of probability distribution, and the observation data are usually contaminated by external noise (e.g., background noise) of a non-Gaussian distribution type. Furthermore, various nonlinear correlations potentially exist in addition to the linear correlation between input and output time series. Consequently, the input-output relationship of the real phenomenon often cannot be represented by a simple model using only the linear correlation and lower-order statistics. In this study, complex sound environment systems that are difficult to analyze by the usual structural methods are considered. By introducing an estimation method for the system parameters reflecting the correlation information of the conditional probability distribution under existence of the external noise, a prediction method for the output response probability of sound environment systems is theoretically proposed, in a form suited to the additive property of the energy variable and to evaluation on the decibel scale. The effectiveness of the proposed stochastic signal processing method is experimentally confirmed by applying it to observed data from sound environment systems.
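
The "additive property of the energy variable" and the decibel evaluation it supports amount to summing energies rather than decibels; a minimal sketch:

```python
import math

def combine_levels_db(levels_db):
    """Energy-based combination of levels: L = 10 log10(sum 10^(Li/10))."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_db))

# two equal 60 dB sources give ~63 dB: energies add, decibels do not
combined = combine_levels_db([60.0, 60.0])
```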

  2. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

Previous works in the literature about one tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases......, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings together with microphone positions and source directivity cues allows to get information about source position and bearing. Moreover, sound sources have been included into sensor systems together...... The estimated orientations, one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor......

  3. A new signal development process and sound system for diverting fish from water intakes

    International Nuclear Information System (INIS)

Klinect, D.A.; Loeffelman, P.H.; van Hassel, J.H.

    1992-01-01

This paper reports that American Electric Power Service Corporation has explored the feasibility of using a patented signal development process and underwater sound system to divert fish away from water intake areas. The effect of water intakes on fish is being closely scrutinized as hydropower projects are re-licensed. The overall goal of this four-year research project was to develop an underwater guidance system which is biologically effective, reliable, and cost-effective compared to other proposed methods of diversion, such as physical screens. Because different fish species have various hearing ranges, it was essential to the success of this experiment that the sound system have a great amount of flexibility. Assuming a fish's sounds are heard by the same kind of fish, it was necessary to develop a procedure and acquire instrumentation to properly analyze the sounds that the target fish species create to communicate, as well as any artificial signals being generated for diversion.

  4. Sparse representation of Gravitational Sound

    Science.gov (United States)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local measure of sparsity is shown to render meaningful information about the variation of a signal over time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to divulge Gravitational Sound in the form of a ring tone.
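
A toy version of the sparse-approximation idea: greedy matching pursuit over a DCT dictionary (the actual dictionary and pursuit strategy used by the authors may differ). A clip that is sparse in the dictionary is captured by very few atoms, leaving only the noise floor in the residual.

```python
import numpy as np

def dct_dictionary(n):
    """Orthonormal DCT-II basis; columns are the atoms."""
    k, i = np.meshgrid(np.arange(n), np.arange(n))
    D = np.cos(np.pi * (i + 0.5) * k / n) * np.sqrt(2.0 / n)
    D[:, 0] /= np.sqrt(2.0)
    return D

def matching_pursuit(x, D, n_atoms):
    """Greedily pick the atom most correlated with the residual."""
    residual, approx = x.copy(), np.zeros_like(x)
    for _ in range(n_atoms):
        c = D.T @ residual
        k = int(np.argmax(np.abs(c)))
        approx += c[k] * D[:, k]
        residual -= c[k] * D[:, k]
    return approx, residual

n = 512
D = dct_dictionary(n)
rng = np.random.default_rng(0)
# a clip that is 2-sparse in the dictionary, plus a little noise
clip = 2.0 * D[:, 26] + 1.0 * D[:, 80] + 0.01 * rng.standard_normal(n)

approx, residual = matching_pursuit(clip, D, n_atoms=2)
```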

  5. Sound card based digital correlation detection of weak photoelectrical signals

    International Nuclear Information System (INIS)

    Tang Guanghui; Wang Jiangcheng

    2005-01-01

A simple and low-cost digital correlation method is proposed to investigate weak photoelectrical signals, using a high-speed photodiode as the detector, directly connected to a programmably triggered sound card analogue-to-digital converter and a personal computer. Two test experiments were performed: autocorrelation detection of weak flickering signals from a computer monitor against a background of noisy outdoor stray light, and cross-correlation measurement of the surface velocity of a moving tape. The results show that the method is reliable and easy to implement.
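
The velocity measurement can be mimicked with numpy's cross-correlation: the lag of the correlation peak between two detector channels gives the transit time. The spot distance below is an assumed value, not taken from the paper.

```python
import numpy as np

def delay_samples(a, b):
    """Lag (in samples) at which b best matches a, from the
    cross-correlation peak."""
    corr = np.correlate(b, a, mode="full")
    return int(np.argmax(corr)) - (len(a) - 1)

fs = 48_000                       # sound-card sampling rate
rng = np.random.default_rng(0)
a = rng.standard_normal(4096)     # intensity fluctuation at spot 1
b = np.roll(a, 37)                # same pattern seen 37 samples later at spot 2

lag = delay_samples(a, b)
spot_distance_m = 0.01            # assumed 1 cm between the two spots
velocity = spot_distance_m * fs / lag   # tape surface velocity, m/s
```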

  6. A Signal Processing Module for the Analysis of Heart Sounds and Heart Murmurs

    International Nuclear Information System (INIS)

    Javed, Faizan; Venkatachalam, P A; H, Ahmad Fadzil M

    2006-01-01

In this paper a Signal Processing Module (SPM) for the computer-aided analysis of heart sounds is developed. The module reveals important information about cardiovascular disorders and can assist general physicians in arriving at more accurate and reliable diagnoses at early stages. It can compensate for the shortage of expert doctors in rural as well as urban clinics and hospitals. The module has five main blocks: Data Acquisition and Pre-processing, Segmentation, Feature Extraction, Murmur Detection, and Murmur Classification. The heart sounds are first acquired using an electronic stethoscope, which can transfer these signals to a nearby workstation over a wireless link. The signals are then segmented into individual cycles as well as individual components using spectral analysis of the heart sound, without any reference signal such as an ECG. Features are then extracted from the individual components using the spectrogram and are used as input to an MLP (Multi-Layer Perceptron) neural network trained to detect the presence of heart murmurs. Once a murmur is detected, it is classified into one of seven classes depending on its timing within the cardiac cycle, using the smoothed pseudo Wigner-Ville distribution. The module has been tested with real heart sounds from 40 patients and has proved to be quite efficient and robust in dealing with a large variety of pathological conditions.

  7. A Signal Processing Module for the Analysis of Heart Sounds and Heart Murmurs

    Energy Technology Data Exchange (ETDEWEB)

    Javed, Faizan; Venkatachalam, P A; H, Ahmad Fadzil M [Signal and Imaging Processing and Tele-Medicine Technology Research Group, Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, 31750 Tronoh, Perak (Malaysia)

    2006-04-01

In this paper a Signal Processing Module (SPM) for the computer-aided analysis of heart sounds is developed. The module reveals important information about cardiovascular disorders and can assist general physicians in arriving at more accurate and reliable diagnoses at early stages. It can compensate for the shortage of expert doctors in rural as well as urban clinics and hospitals. The module has five main blocks: Data Acquisition and Pre-processing, Segmentation, Feature Extraction, Murmur Detection, and Murmur Classification. The heart sounds are first acquired using an electronic stethoscope, which can transfer these signals to a nearby workstation over a wireless link. The signals are then segmented into individual cycles as well as individual components using spectral analysis of the heart sound, without any reference signal such as an ECG. Features are then extracted from the individual components using the spectrogram and are used as input to an MLP (Multi-Layer Perceptron) neural network trained to detect the presence of heart murmurs. Once a murmur is detected, it is classified into one of seven classes depending on its timing within the cardiac cycle, using the smoothed pseudo Wigner-Ville distribution. The module has been tested with real heart sounds from 40 patients and has proved to be quite efficient and robust in dealing with a large variety of pathological conditions.

  8. Cuffless and Continuous Blood Pressure Estimation from the Heart Sound Signals

    Directory of Open Access Journals (Sweden)

    Rong-Chao Peng

    2015-09-01

Cardiovascular disease, like hypertension, is one of the leading causes of death, and early detection of cardiovascular disease is of great importance. However, traditional medical devices are often bulky and expensive, and unsuitable for home healthcare. In this paper, we propose an easy and inexpensive technique to estimate continuous blood pressure from heart sound signals acquired by the microphone of a smartphone. A cold-pressor experiment was performed on 32 healthy subjects, with a smartphone to acquire heart sound signals and a commercial device to measure continuous blood pressure. The Fourier spectrum of the second heart sound and the blood pressure were regressed using a support vector machine, and the accuracy of the regression was evaluated using 10-fold cross-validation. Statistical analysis showed that the mean correlation coefficients between the values predicted by the regression model and those measured by the commercial device were 0.707, 0.712, and 0.748 for systolic, diastolic, and mean blood pressure, respectively, and that the mean errors were less than 5 mmHg, with standard deviations less than 8 mmHg. These results suggest that this technique is of potential use for cuffless and continuous blood pressure monitoring and has promising application in home healthcare services.

  9. Cognitive Bias for Learning Speech Sounds From a Continuous Signal Space Seems Nonlinguistic

    Directory of Open Access Journals (Sweden)

    Sabine van der Ham

    2015-10-01

When learning language, humans have a tendency to produce more extreme distributions of speech sounds than those observed most frequently: in rapid, casual speech, vowel sounds are centralized, yet cross-linguistically, peripheral vowels occur almost universally. We investigate whether adults’ generalization behavior reveals selective pressure for communication when they learn skewed distributions of speech-like sounds from a continuous signal space. The domain-specific hypothesis predicts that the emergence of sound categories is driven by a cognitive bias to make these categories maximally distinct, resulting in more skewed distributions in participants’ reproductions. However, our participants showed more centered distributions, which goes against this hypothesis, indicating that there are no strong innate linguistic biases affecting the learning of these speech-like sounds. The centralization behavior can be explained by a lack of communicative pressure to maintain categories.

  10. Sound insulation of dwellings - Legal requirements in Europe and subjective evaluation of acoustical comfort

    DEFF Research Database (Denmark)

    Rasmussen, B.; Rindel, Jens Holger

    2003-01-01

    Acoustical comfort is a concept that can be characterised by absence of unwanted sound and by opportunities for acoustic activities without annoying other people. In order to achieve acoustical comfort in dwellings certain requirements have to be fulfilled concerning the airborne sound insulation, the impact sound insulation and the noise level from traffic and building services. For road traffic noise it is well established that an outdoor noise level LAeq, 24 h below 55 dB in a housing area means that approximately 15-20% of the occupants are annoyed by the noise. However, for sound insulation ... requirement for sound insulation. The findings can also be used as a guide to specify acoustic requirements for dwellings in the future.

  11. Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter

    Science.gov (United States)

    Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.

    2017-04-01

    The development of science and technology in healthcare increasingly simplifies the diagnosis of respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and then stored in an audio file. These breath sounds are analyzed by health practitioners to diagnose the symptoms of disease or illness. However, breath sound recordings are not free from interference signals. Therefore, a noise filter, or signal interference reduction system, is required so that the breath sound component carrying the information signal can be clarified. In this study, we designed a wavelet transform based filter using a Daubechies wavelet with four wavelet transform coefficients. Based on tests with ten types of breath sound data, the largest SNR, 74.3685 dB, was obtained for bronchial sounds.
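
The study uses a multilevel Daubechies (four-coefficient) decomposition. As a simplified illustration of the same idea, the sketch below implements a one-level Haar wavelet with soft thresholding — the simplest orthogonal wavelet, not the paper's db4 — and compares SNR in decibels before and after denoising a synthetic signal. All signal parameters here are assumptions.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar DWT, soft-threshold the detail band, inverse transform."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients (mostly noise)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft thresholding
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def snr_db(clean, estimate):
    """SNR of an estimate against the clean reference, in decibels."""
    return 10 * np.log10(np.sum(clean**2) / np.sum((clean - estimate)**2))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)                  # slow "breath sound" component
noisy = clean + rng.normal(scale=0.3, size=t.size)
denoised = haar_denoise(noisy, thresh=0.3)
```

A multilevel db4 decomposition, as used in the paper, refines this by thresholding detail bands at several scales.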

  12. 78 FR 2797 - Federal Motor Vehicle Safety Standards; Minimum Sound Requirements for Hybrid and Electric Vehicles

    Science.gov (United States)

    2013-01-14

    ... Sound Requirements for Hybrid and Electric Vehicles; Draft Environmental Assessment for Rulemaking To Establish Minimum Sound Requirements for Hybrid and Electric Vehicles; Proposed Rules. Federal Register ...-0148] RIN 2127-AK93 Federal Motor Vehicle Safety Standards; Minimum Sound Requirements for Hybrid and...

  13. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  14. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    ... properties can be modified by sound absorption, refraction, and interference from multiple paths caused by reflections. The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms to influence communication sounds for airborne acoustics, and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals.

  15. 78 FR 2868 - Draft Environmental Assessment for Rulemaking To Establish Minimum Sound Requirements for Hybrid...

    Science.gov (United States)

    2013-01-14

    ... require hybrid and electric passenger cars, light trucks, medium and heavy duty trucks and buses, low... Sound Requirements for Hybrid and Electric Vehicles AGENCY: National Highway Traffic Safety... minimum sound requirements for hybrid and electric vehicles. DATES: Comments must be received on or before...

  16. Contralateral routing of signals disrupts monaural level and spectral cues to sound localisation on the horizontal plane.

    Science.gov (United States)

    Pedley, Adam J; Kitterick, Pádraig T

    2017-09-01

    Contra-lateral routing of signals (CROS) devices re-route sound between the deaf and hearing ears of unilaterally-deaf individuals. This rerouting would be expected to disrupt access to monaural level cues that can support monaural localisation in the horizontal plane. However, such a detrimental effect has not been confirmed by clinical studies of CROS use. The present study aimed to exercise strict experimental control over the availability of monaural cues to localisation in the horizontal plane and the fitting of the CROS device to assess whether signal routing can impair the ability to locate sources of sound and, if so, whether CROS selectively disrupts monaural level or spectral cues to horizontal location, or both. Unilateral deafness and CROS device use were simulated in twelve normal hearing participants. Monaural recordings of broadband white noise presented from three spatial locations (-60°, 0°, and +60°) were made in the ear canal of a model listener using a probe microphone with and without a CROS device. The recordings were presented to participants via an insert earphone placed in their right ear. The recordings were processed to disrupt either monaural level or spectral cues to horizontal sound location by roving presentation level or the energy across adjacent frequency bands, respectively. Localisation ability was assessed using a three-alternative forced-choice spatial discrimination task. Participants localised above chance levels in all conditions. Spatial discrimination accuracy was poorer when participants only had access to monaural spectral cues compared to when monaural level cues were available. CROS use impaired localisation significantly regardless of whether level or spectral cues were available. For both cues, signal re-routing had a detrimental effect on the ability to localise sounds originating from the side of the deaf ear (-60°). CROS use also impaired the ability to use level cues to localise sounds originating from

  17. Plant acoustics: in the search of a sound mechanism for sound signaling in plants.

    Science.gov (United States)

    Mishra, Ratnesh Chandra; Ghosh, Ritesh; Bae, Hanhong

    2016-08-01

    Being sessile, plants continuously deal with their dynamic and complex surroundings, identifying important cues and reacting with appropriate responses. Consequently, the sensitivity of plants has evolved to perceive a myriad of external stimuli, which ultimately ensures their successful survival. Research over past centuries has established that plants respond to environmental factors such as light, temperature, moisture, and mechanical perturbations (e.g. wind, rain, touch, etc.) by suitably modulating their growth and development. However, sound vibrations (SVs) as a stimulus have only started receiving attention relatively recently. SVs have been shown to increase the yields of several crops and strengthen plant immunity against pathogens. These vibrations can also prime plants so as to make them more tolerant to impending drought. Plants can recognize the chewing sounds of insect larvae and the buzz of a pollinating bee, and respond accordingly. It is thus plausible that SVs may serve as a long-range stimulus that evokes ecologically relevant signaling mechanisms in plants. Studies have suggested that SVs increase the transcription of certain genes and soluble protein content, and support enhanced growth and development in plants. At the cellular level, SVs can change the secondary structure of plasma membrane proteins, affect microfilament rearrangements, produce Ca2+ signatures, cause increases in protein kinases, protective enzymes, peroxidases, antioxidant enzymes, amylase, and H+-ATPase/K+ channel activities, and enhance levels of polyamines, soluble sugars, and auxin. In this paper, we propose a signaling model to account for the molecular episodes that SVs induce within the cell, and in so doing we uncover a number of interesting questions that need to be addressed by future research in plant acoustics. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved.

  18. Design, development and test of the gearbox condition monitoring system using sound signal processing

    Directory of Open Access Journals (Sweden)

    M Zamani

    2016-09-01

    Introduction One of the ways used to minimize the cost of maintenance and repair of rotating industrial equipment is condition monitoring using acoustic analysis. One of the most important concerns in the application of industrial equipment is reliability. Every dynamic, electrical, hydraulic, or thermal system has certain characteristics that indicate the normal condition of the machine during operation. Any change in these characteristics can signal a problem in the machine. The aim of condition monitoring is to determine the system's condition by measuring these characteristic signals and to use this information to predict system impairment. There are many methods for condition monitoring of different systems, but sound analysis is widely accepted and used for investigating the condition of rotating machines. The aim of this research is the design and construction of the considered gearbox and the use of the acquired data, in the frequency and time domains, for sound analysis and fault diagnosis. Materials and Methods This research was conducted at the biosystems mechanics workshop of Aboureihan College, Tehran University, on February 15, 2015. In order to investigate the diagnosis procedure and gearbox condition, a system was designed and then constructed. The sound of intact and damaged gearboxes was captured with an audiometer and stored on a computer for analysis. Sound measurements were made at three pinion speeds (749, 1050, and 1496 rpm) for an intact gearbox, a gearbox with a fractured tooth, and a gearbox with a worn tooth. Gearbox design and construction: In order to conduct the research, a gearbox with simple gearwheels was designed according to the project requirements. The gearbox and its accessories were modeled in CATIA V5-R20 software, and the system was then constructed.
A gearbox is a machine used for mechanical power transmission

  19. How male sound pressure level influences phonotaxis in virgin female Jamaican field crickets (Gryllus assimilis)

    Directory of Open Access Journals (Sweden)

    Karen Pacheco

    2014-06-01

    Understanding female mate preference is important for determining the strength and direction of sexual trait evolution. The sound pressure level (SPL) acoustic signalers use is often an important predictor of mating success because higher sound pressure levels are detectable at greater distances. If females are more attracted to signals produced at higher sound pressure levels, then the potential fitness impacts of signalling at higher sound pressure levels should be elevated beyond what would be expected from detection distance alone. Here we manipulated the sound pressure level of cricket mate attraction signals to determine how female phonotaxis was influenced. We examined female phonotaxis using two common experimental methods: spherical treadmills and open arenas. Both methods showed similar results, with females exhibiting the greatest phonotaxis towards loud sound pressure levels relative to the standard signal (69 vs. 60 dB SPL) but showing reduced phonotaxis towards very loud sound pressure level signals relative to the standard (77 vs. 60 dB SPL). Reduced female phonotaxis towards supernormal stimuli may signify an acoustic startle response, an absence of other required sensory cues, or perceived increases in predation risk.

  20. Automated signal quality assessment of mobile phone-recorded heart sound signals.

    Science.gov (United States)

    Springer, David B; Brennan, Thomas; Ntusi, Ntobeko; Abdelrahman, Hassan Y; Zühlke, Liesl J; Mayosi, Bongani M; Tarassenko, Lionel; Clifford, Gari D

    Mobile phones, due to their audio processing capabilities, have the potential to facilitate the diagnosis of heart disease through automated auscultation. However, such a platform is likely to be used by non-experts, and hence, it is essential that such a device is able to automatically differentiate poor quality from diagnostically useful recordings since non-experts are more likely to make poor-quality recordings. This paper investigates the automated signal quality assessment of heart sound recordings performed using both mobile phone-based and commercial medical-grade electronic stethoscopes. The recordings, each 60 s long, were taken from 151 random adult individuals with varying diagnoses referred to a cardiac clinic and were professionally annotated by five experts. A mean voting procedure was used to compute a final quality label for each recording. Nine signal quality indices were defined and calculated for each recording. A logistic regression model for classifying binary quality was then trained and tested. The inter-rater agreement level for the stethoscope and mobile phone recordings was measured using Conger's kappa for multiclass sets and found to be 0.24 and 0.54, respectively. One-third of all the mobile phone-recorded phonocardiogram (PCG) signals were found to be of sufficient quality for analysis. The classifier was able to distinguish good- and poor-quality mobile phone recordings with 82.2% accuracy, and those made with the electronic stethoscope with an accuracy of 86.5%. We conclude that our classification approach provides a mechanism for substantially improving auscultation recordings by non-experts. This work is the first systematic evaluation of a PCG signal quality classification algorithm (using a separate test dataset) and assessment of the quality of PCG recordings captured by non-experts, using both a medical-grade digital stethoscope and a mobile phone.
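
The classification step described above — signal quality indices fed to a logistic regression — can be sketched as follows. This is a hedged illustration: the three toy indices and the synthetic recordings below are assumptions, not the nine indices or the annotated data used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def quality_indices(x):
    """Three toy signal-quality indices (the paper defines nine)."""
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2       # zero-crossing rate
    p = np.abs(np.fft.rfft(x)) ** 2
    p = p / p.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))             # spectral entropy
    kurtosis = np.mean(x ** 4) / np.mean(x ** 2) ** 2    # heaviness of tails
    return [zcr, entropy, kurtosis]

def make_recording(good, fs=2000):
    """Synthetic 1 s 'recording': tone bursts plus weak or overwhelming noise."""
    t = np.arange(fs) / fs
    bursts = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)
    return bursts + rng.normal(scale=0.1 if good else 2.0, size=fs)

X = np.array([quality_indices(make_recording(g)) for g in [True, False] * 100])
y = np.array([1, 0] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
accuracy = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
```

With real phonocardiograms the labels would come from expert annotation, as in the study, and accuracy would be reported on a held-out test set.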

  1. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    Heart sound is one of the most important physiological signals. However, the process of acquiring a heart sound signal can be disturbed by many external factors. The heart sound is a weak signal, and even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. As a result, removing the noise mixed with the heart sound is a key step. In this paper, a systematic study of heart sound denoising based on MATLAB is presented. Noisy heart sound signals are first transformed into the wavelet domain and decomposed at multiple levels. Soft thresholding is then applied to the detail coefficients to eliminate noise, significantly improving the denoising of the signal. The denoised signal is then reconstructed stepwise from the processed coefficients. Lastly, 50 Hz power-line interference and 35 Hz electromechanical interference are eliminated using a notch filter.
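
The final notch-filtering step can be illustrated with SciPy's `iirnotch`. The sampling rate, filter Q, and test signal below are illustrative assumptions, not values from the paper; the zero-phase `filtfilt` call is one common way to apply the filter without distorting the heart sound's timing.

```python
import numpy as np
from scipy.signal import filtfilt, iirnotch

fs = 1000.0                     # sampling rate in Hz (assumed)
f0, Q = 50.0, 30.0              # notch frequency and quality factor
b, a = iirnotch(f0, Q, fs=fs)

t = np.arange(0, 2, 1 / fs)
heart = np.sin(2 * np.pi * 8 * t)        # stand-in for low-frequency heart sounds
hum = 0.5 * np.sin(2 * np.pi * 50 * t)   # 50 Hz power-line interference
filtered = filtfilt(b, a, heart + hum)   # zero-phase filtering

# Check attenuation in the spectrum: 50 Hz should vanish, 8 Hz should survive.
spec = np.abs(np.fft.rfft(filtered)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
a50 = spec[np.argmin(np.abs(freqs - 50))]
a8 = spec[np.argmin(np.abs(freqs - 8))]
```

A second notch at 35 Hz would be designed the same way for the electromechanical interference mentioned in the abstract.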

  2. Exploring the perceived harshness of cello sounds by morphing and synthesis techniques.

    Science.gov (United States)

    Rozé, Jocelyn; Aramaki, Mitsuko; Kronland-Martinet, Richard; Ystad, Sølvi

    2017-03-01

    Cello bowing requires a very fine control of the musicians' gestures to ensure the quality of the perceived sound. When the interaction between the bow hair and the string is optimal, the sound is perceived as broad and round. On the other hand, when the gestural control becomes more approximate, the sound quality deteriorates and often becomes harsh, shrill, and quavering. In this study, such a timbre degradation, often described by French cellists as harshness (décharnement), is investigated from both signal and perceptual perspectives. Harsh sounds were obtained from experienced cellists subjected to a postural constraint. A signal approach based on Gabor masks enabled us to capture the main dissimilarities between round and harsh sounds. Two complementary methods perceptually validated these signal features: First, a predictive regression model of the perceived harshness was built from sound continua obtained by a morphing technique. Next, the signal structures identified by the model were validated within a perceptual timbre space, obtained by multidimensional scaling analysis on pairs of synthesized stimuli controlled in harshness. The results revealed that the perceived harshness was due to a combination between a more chaotic harmonic behavior, a formantic emergence, and a weaker attack slope.

  3. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    Sounds in the natural environment form an important class of biologically relevant nonstationary signals. We propose a dynamic spectral measure to characterize the spectral dynamics of such non-stationary sound signals and classify them based on rate of change of spectral dynamics. We categorize sounds with slowly ...

  4. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common for a physical system to resonate at a particular frequency, which depends on physical parameters that may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to use standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, second sound signals were used to probe turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuations when the tracking system is used.
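
The lock-in demodulation at the heart of such a system can be sketched in a few lines: the received signal is mixed with quadrature references at the drive frequency and averaged, which recovers the amplitude and phase of the tracked tone while rejecting broadband noise. All parameters below are illustrative assumptions, not values from the apparatus.

```python
import numpy as np

fs, f_ref = 100_000.0, 1_000.0   # sample rate and drive frequency (assumed)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(3)

# Received signal: a weak, phase-shifted tone at the drive frequency,
# buried in broadband noise much larger than the tone itself.
sig = 0.02 * np.sin(2 * np.pi * f_ref * t + 0.7) + rng.normal(scale=0.5, size=t.size)

# Lock-in demodulation: mix with quadrature references and average (low-pass).
in_phase = 2 * np.mean(sig * np.sin(2 * np.pi * f_ref * t))
quadrature = 2 * np.mean(sig * np.cos(2 * np.pi * f_ref * t))
amplitude = np.hypot(in_phase, quadrature)
phase = np.arctan2(quadrature, in_phase)
```

In a tracking system the reference frequency would follow the resonance via feedback; here it is fixed for clarity.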

  5. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduate students.

  6. Identification of Mobile Phone and Analysis of Original Version of Videos through a Delay Time Analysis of Sound Signals from Mobile Phone Videos.

    Science.gov (United States)

    Hwang, Min Gu; Har, Dong Hwan

    2017-11-01

    This study designs a method of identifying the camera model used to take videos that are distributed through mobile phones and determines the original version of a mobile phone video for use as legal evidence. For this analysis, an experiment was conducted to find the unique characteristics of each mobile phone. The videos recorded by mobile phones were analyzed to establish the delay time of sound signals, and the differences between the delay times of sound signals for different mobile phones were traced by classifying their characteristics. Furthermore, the sound input signals of mobile phone videos used as legal evidence were analyzed to ascertain whether they have the unique characteristics of the original version. The objective of this study was to find a method for validating mobile phone videos as legal evidence through differences in the delay times of sound input signals. © 2017 American Academy of Forensic Sciences.
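
Estimating a delay between two sound signals is commonly done from the peak of their cross-correlation; this is assumed here as an illustration, not the paper's published procedure. The sampling rate and delay below are hypothetical.

```python
import numpy as np

fs = 8000                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(4)
x = rng.normal(size=2000)                   # reference sound segment
true_delay = 37                             # hypothetical device delay, in samples
y = np.concatenate([np.zeros(true_delay), x])[:x.size]
y = y + rng.normal(scale=0.1, size=x.size)  # delayed copy plus recording noise

# The lag of the cross-correlation peak estimates the delay between the signals.
corr = np.correlate(y, x, mode="full")
lag = int(np.argmax(corr)) - (x.size - 1)
delay_ms = 1000 * lag / fs
```

Repeating this per device yields the delay-time signatures that the study uses to distinguish phone models.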

  7. Sound insulation between dwellings in multi-storey housing in Greenland - Need and feasibility of increased requirements?

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Thysell, Erik

    2014-01-01

    goals, one of them being a check of fulfilment of the current regulatory sound insulation requirements, the other being an evaluation of the possibilities for strengthening the requirements in the next building regulations. Sound insulation measurements between dwellings were made in three newly...

  8. Fluctuations of radio occultation signals in sounding the Earth's atmosphere

    Directory of Open Access Journals (Sweden)

    V. Kan

    2018-02-01

    We discuss the relationships that link the observed fluctuation spectra of the amplitude and phase of signals used for the radio occultation sounding of the Earth's atmosphere with the spectra of atmospheric inhomogeneities. Our analysis employs the approximations of the phase screen and of weak fluctuations. We make our estimates for the following characteristic inhomogeneity types: (1) isotropic Kolmogorov turbulence and (2) anisotropic saturated internal gravity waves. We obtain expressions for the variances of the amplitude and phase fluctuations of radio occultation signals, as well as estimates for the typical parameters of the inhomogeneity models. From GPS/MET observations, we evaluate the spectra of amplitude and phase fluctuations in the altitude interval from 4 to 25 km at middle and polar latitudes. As indicated by theoretical and experimental estimates, the main contribution to the radio signal fluctuations comes from internal gravity waves; the influence of Kolmogorov turbulence is negligible. We derive simple relationships that link the parameters of internal gravity waves to the statistical characteristics of the radio signal fluctuations. These results may serve as a basis for global monitoring of wave activity in the stratosphere and upper troposphere.

  9. Neuromimetic Sound Representation for Percept Detection and Manipulation

    Directory of Open Access Journals (Sweden)

    Chi Taishih

    2005-01-01

    The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create the sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal). Work on bringing the algorithms into the real-time processing domain is ongoing.

  10. Sound response of superheated drop bubble detectors to neutrons

    International Nuclear Information System (INIS)

    Gao Size; Chen Zhe; Liu Chao; Ni Bangfa; Zhang Guiying; Zhao Changfa; Xiao Caijin; Liu Cunxiong; Nie Peng; Guan Yongjing

    2012-01-01

    The sound response of bubble detectors to neutrons from a 252Cf neutron source is described. Sound signals were acquired and filtered using a sound card and a PC. The short-time signal energy, FFT spectrum, power spectrum, and decay time constant were obtained to determine whether a sound signal was truly produced by a bubble. (authors)
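
The quantities named above (short-time energy, FFT spectrum, decay time constant) can be estimated as sketched below for a synthetic, exponentially decaying "bubble" click. The frame length, decay constant, and log-linear fit are illustrative assumptions, not the study's parameters.

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
tau = 0.01                                   # true decay constant in seconds (assumed)
pop = np.exp(-t / tau) * np.sin(2 * np.pi * 3000 * t)   # synthetic bubble "click"

# Short-time signal energy in 1 ms frames.
frame = int(0.001 * fs)
n_frames = t.size // frame
energy = np.array([np.sum(pop[i * frame:(i + 1) * frame] ** 2)
                   for i in range(n_frames)])

# Decay time constant from a log-linear fit: energy decays as exp(-2 t / tau).
t_frames = (np.arange(n_frames) + 0.5) * frame / fs
slope, _ = np.polyfit(t_frames, np.log(energy), 1)
tau_est = -2 / slope

# Dominant frequency from the FFT magnitude spectrum.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_freq = freqs[np.argmax(np.abs(np.fft.rfft(pop)))]
```

A real detector pipeline would compare such features against thresholds to accept or reject each candidate bubble event.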

  11. Neural processing of auditory signals and modular neural control for sound tropism of walking machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Pasemann, Frank; Fischer, Joern

    2005-01-01

    and a neural preprocessing system together with a modular neural controller are used to generate a sound tropism of a four-legged walking machine. The neural preprocessing network acts as a low-pass filter and is followed by a network which discerns between signals coming from the left or the right. The parameters of these networks are optimized by an evolutionary algorithm. In addition, a simple modular neural controller then generates the desired walking patterns such that the machine walks straight, turns towards a switched-on sound source, and stops near it.

  12. Sound insulation of dwellings - Legal requirements in Europe and subjective evaluation of acoustical comfort

    OpenAIRE

    Rasmussen, Birgit; Rindel, Jens Holger

    2003-01-01

    Acoustical comfort is a concept that can be characterised by absence of unwanted sound and by opportunities for acoustic activities without annoying other people. In order to achieve acoustical comfort in dwellings certain requirements have to be fulfilled concerning the airborne sound insulation, the impact sound insulation and the noise level from traffic and building services.For road traffic noise it is well established that an outdoor noise level LAeq, 24 h below 55 dB in a housing area ...

  13. Time domain acoustic contrast control implementation of sound zones for low-frequency input signals

    DEFF Research Database (Denmark)

    Schellekens, Daan H. M.; Møller, Martin Bo; Olsen, Martin

    2016-01-01

    Sound zones are two or more regions within a listening space where listeners are provided with personal audio. Acoustic contrast control (ACC) is a sound zoning method that maximizes the average squared sound pressure in one zone constrained to constant pressure in other zones. State-of-the-art time domain broadband acoustic contrast control (BACC) methods are designed for anechoic environments. These methods are not able to realize a flat frequency response in a limited frequency range within a reverberant environment. Sound field control in a limited frequency range is a requirement to accommodate the effective working range of the loudspeakers. In this paper, a new BACC method is proposed which results in an implementation realizing a flat frequency response in the target zone. This method is applied in a bandlimited low-frequency scenario where the loudspeaker layout surrounds two...
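
For context, the classic frequency-domain ACC solution (not the time-domain BACC method this paper proposes) maximizes the ratio of bright-zone to dark-zone energy; the optimum is the leading generalized eigenvector of the two spatial correlation matrices. The toy transfer functions below are random stand-ins for measured room responses.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)
n_src, n_bright, n_dark = 8, 10, 10   # loudspeakers, mics per zone (toy setup)

# Random complex "transfer functions" standing in for measured room responses.
Gb = rng.normal(size=(n_bright, n_src)) + 1j * rng.normal(size=(n_bright, n_src))
Gd = rng.normal(size=(n_dark, n_src)) + 1j * rng.normal(size=(n_dark, n_src))

Rb = Gb.conj().T @ Gb                          # spatial correlation, bright zone
Rd = Gd.conj().T @ Gd + 1e-6 * np.eye(n_src)   # dark zone, lightly regularized

# Acoustic contrast is the generalized Rayleigh quotient w^H Rb w / w^H Rd w;
# it is maximized by the leading generalized eigenvector of (Rb, Rd).
vals, vecs = eigh(Rb, Rd)
w = vecs[:, -1]
contrast_db = 10 * np.log10((w.conj() @ Rb @ w).real / (w.conj() @ Rd @ w).real)
```

Time-domain BACC formulations, like the one in this paper, instead optimize FIR filter coefficients, which allows additional constraints such as the flat in-band response discussed above.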

  14. Automatic Bowel Motility Evaluation Technique for Noncontact Sound Recordings

    Directory of Open Access Journals (Sweden)

    Ryunosuke Sato

    2018-06-01

    Information on bowel motility can be obtained via magnetic resonance imaging (MRI) and X-ray imaging. However, these approaches require expensive medical instruments and are unsuitable for frequent monitoring. Bowel sounds (BS) can be conveniently obtained using electronic stethoscopes and have recently been employed for the evaluation of bowel motility. More recently, our group proposed a novel method to evaluate bowel motility on the basis of BS acquired using a noncontact microphone. However, that method required manually detecting BS in the sound recordings, and manual segmentation is inconvenient and time consuming. To address this issue, herein we propose a new method to automatically evaluate bowel motility from noncontact sound recordings. Using sound recordings obtained from 20 human participants, we showed that the proposed method achieves an accuracy of approximately 90% in automatic bowel sound detection when power-normalized cepstral coefficients are used as acoustic-feature inputs to artificial neural networks. Furthermore, we showed that bowel motility can be evaluated based on three acoustic features in the time domain extracted by our method: BS per minute, signal-to-noise ratio, and sound-to-sound interval. The proposed method has the potential to contribute towards the development of noncontact evaluation methods for bowel motility.
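
Two of the time-domain features named above (BS per minute and sound-to-sound interval) can be sketched with a simple short-time-energy event detector. The synthetic recording, frame size, and threshold rule are assumptions for illustration; the paper uses neural networks on cepstral features rather than this energy rule.

```python
import numpy as np

fs = 8000
rng = np.random.default_rng(6)
t = np.arange(0, 10, 1 / fs)
sig = rng.normal(scale=0.01, size=t.size)        # background noise
true_times = [1.0, 3.5, 6.0, 8.2]                # hypothetical bowel-sound onsets (s)
for t0 in true_times:
    i = int(t0 * fs)                             # insert a short 300 Hz burst
    sig[i:i + 400] += np.hanning(400) * np.sin(2 * np.pi * 300 * np.arange(400) / fs)

# Detect events where short-time energy exceeds a noise-based threshold.
frame = 200
energy = np.array([np.sum(sig[i:i + frame] ** 2)
                   for i in range(0, sig.size - frame, frame)])
thresh = 5 * np.median(energy)
active = energy > thresh
onsets = np.flatnonzero(active & ~np.roll(active, 1)) * frame / fs

duration_s = sig.size / fs
bs_per_minute = len(onsets) * 60 / duration_s
intervals = np.diff(onsets)                      # sound-to-sound intervals (s)
```

On real recordings the detector's hits would also feed an SNR estimate, the third feature listed in the abstract.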

  15. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: First, the filtered responses should generate an acoustic separation between the control regions. Secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...

  16. Fetus Sound Stimulation: Cilia Memristor Effect of Signal Transduction

    Directory of Open Access Journals (Sweden)

    Svetlana Jankovic-Raznatovic

    2014-01-01

    Background. This experimental study evaluates fetal middle cerebral artery (MCA) circulation after defined prenatal acoustical stimulation (PAS) and the role of cilia in hearing and memory, and could explain signal transduction and memory in terms of the optical-acoustical properties of cilia. Methods. PAS was performed twice on 119 no-risk term pregnancies. We analyzed fetal MCA circulation before, after the first, and after the second PAS. Results. Comparison of the basic Pulsatility index (PIB) before PAS with the reactive Pulsatility index after the first PAS (PIR 1) shows a highly statistically significant difference, representing a strong influence on brain circulation. Comparison of PIB with the reactive Pulsatility index after the second PAS (PIR 2) shows no statistical difference. Cilia as nanoscale structures possess a magnetic flux linkage that depends on the amount of charge that has passed between the two terminals of the variable resistors of cilia. Microtubule resistance, as a function of the current through and voltage across the structure, leads to the appearance of cilia memory with the "memristor" property. Conclusion. The acoustical and optical properties of cilia play a crucial role in hearing and memory processes. We suggest that fetuses become accustomed to sound, developing a kind of memory pattern involving acoustical and electromagnetic waves, cilia, and microtubules, and we attempt to explain the signal transduction involved.

  17. Sound localization with head movement: implications for 3-d audio displays.

    Directory of Open Access Journals (Sweden)

    Ken Ian McAnally

    2014-08-01

    Full Text Available Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows of 2°, 4°, 8°, 16°, 32°, or 64° of azimuth. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: The utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy.

  18. Towards parameter-free classification of sound effects in movies

    Science.gov (United States)

    Chu, Selina; Narayanan, Shrikanth; Kuo, C.-C. J.

    2005-08-01

    The problem of identifying intense events via multimedia data mining in films is investigated in this work. Movies are mainly characterized by dialog, music, and sound effects. We begin our investigation with detecting interesting events through sound effects. Sound effects are neither speech nor music, but are closely associated with interesting events such as car chases and gun shots. In this work, we utilize low-level audio features, including MFCCs and energy, to identify sound effects. It was shown in previous work that the hidden Markov model (HMM) works well for speech/audio signals. However, this technique requires care in designing the model and choosing correct parameters. In this work, we introduce a framework that avoids this necessity and works well with semi- and non-parametric learning algorithms.

  19. METHODOLOGY FOR DETERMINATION OF SOUND INSULATION OF APARTMENTS’ ENCLOSING STRUCTURES TO MEET NOISE PROTECTION REQUIREMENTS

    Directory of Open Access Journals (Sweden)

    Giyasov Botir Iminzhonovich

    2017-10-01

    Full Text Available Subject: an important task in the design of internal enclosing structures of apartments is the establishment of their required soundproofing ability. At present, there is no reliable method for determining the required sound insulation and in this regard internal enclosures are designed without proper justification for noise protection. Research objectives: development of a technique for determining the required sound insulation of apartment’s internal enclosures to ensure an acceptable noise regime in the apartments’ rooms under the action of intra-apartment noise sources. Materials and methods: the methodology was developed on the basis of a statistical method for noise calculation in the apartments, treated as systems of acoustically coupled proportionate rooms, and with the help of a computer program that implements this method. Results: the technique makes it possible to generate, with the use of computer technologies, a targeted selection of internal enclosures of the apartment to meet their soundproofing requirements. Conclusions: the technique proposed in the article can be used at the design stage of apartments when determining the required soundproofing of partitions and doors. Using this technique, it is possible to harmonize the sound insulation ratio of individual elements among themselves and thereby guarantee a selection of internal structures for their acoustic and economic efficiency.
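
    The targeted selection of internal enclosures ultimately rests on combining the sound reduction indices of the individual elements (partition, door). A minimal sketch of the standard composite transmission-loss relation follows; the function name, areas, and R values are illustrative, not taken from the article:

```python
import math

def composite_R(elements):
    """Composite sound reduction index R (dB) of a partition built from
    several elements, each given as (area_m2, R_dB).  This is the standard
    area-weighted average of the transmission coefficients tau = 10^(-R/10)."""
    total_area = sum(area for area, _ in elements)
    tau = sum(area * 10 ** (-R / 10) for area, R in elements) / total_area
    return -10 * math.log10(tau)

# A 10 m2 wall section (R = 50 dB) containing a 2 m2 door (R = 30 dB):
R = composite_R([(8.0, 50.0), (2.0, 30.0)])
# The weak door dominates: the composite R falls to about 37 dB, far below
# the 50 dB of the wall alone, which is why the article stresses harmonizing
# the sound insulation of the individual elements.
```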

  20. Second sound scattering in superfluid helium

    International Nuclear Information System (INIS)

    Rosgen, T.

    1985-01-01

    Focusing cavities are used to study the scattering of second sound in liquid helium II. The special geometries reduce wall interference effects and allow measurements in very small test volumes. In a first experiment, a double elliptical cavity is used to focus a second sound wave onto a small wire target. A thin-film bolometer measures the side-scattered wave component. The agreement with a theoretical estimate is reasonable, although some problems arise from the small measurement volume and the associated alignment requirements. A second cavity is based on confocal parabolas, thus enabling the use of large planar sensors. A cylindrical heater again produces a focused second sound wave. Three sensors monitor the transmitted wave component as well as the side scatter in two different directions. The side-looking sensors have very high sensitivities due to their large size and resistance. Specially developed cryogenic amplifiers are used to match them to the signal cables. In one case, a second auxiliary heater is used to set up a strong counterflow in the focal region; the second sound wave then scatters from the induced fluid disturbances.

  1. 12 CFR 1.5 - Safe and sound banking practices; credit information required.

    Science.gov (United States)

    2010-01-01

    ... interest rate, credit, liquidity, price, foreign exchange, transaction, compliance, strategic, and... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Safe and sound banking practices; credit information required. 1.5 Section 1.5 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE...

  2. Time-frequency peak filtering for random noise attenuation of magnetic resonance sounding signal

    Science.gov (United States)

    Lin, Tingting; Zhang, Yang; Yi, Xiaofeng; Fan, Tiehu; Wan, Ling

    2018-05-01

    When measuring in a geomagnetic field, the method of magnetic resonance sounding (MRS) is often limited because of the notably low signal-to-noise ratio (SNR). Most current studies focus on discarding spiky noise and power-line harmonic noise cancellation. However, the effects of random noise should not be underestimated. The common method for random noise attenuation is stacking, but collecting multiple recordings merely to suppress random noise is time-consuming. Moreover, stacking is insufficient to suppress high-level random noise. Here, we propose the use of time-frequency peak filtering for random noise attenuation, which is performed after the traditional de-spiking and power-line harmonic removal method. By encoding the noisy signal with frequency modulation and estimating the instantaneous frequency using the peak of the time-frequency representation of the encoded signal, the desired MRS signal can be acquired from only one stack. The performance of the proposed method is tested on synthetic envelope signals and field data from different surveys. Good estimations of the signal parameters are obtained at different SNRs. Moreover, an attempt to use the proposed method to handle a single recording provides better results compared to 16 stacks. Our results suggest that the number of stacks can be appropriately reduced to shorten the measurement time and improve the measurement efficiency.
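
    The core of time-frequency peak filtering, as described above, is to frequency-encode the noisy record and read the instantaneous frequency back off the peak of a time-frequency representation. A minimal numpy sketch of that idea follows; the function name, window length, and the 0.1-0.4 scaling band are illustrative choices, not the paper's parameters:

```python
import numpy as np

def tfpf(noisy, win=32):
    """Time-frequency peak filtering sketch: map the signal onto an
    instantaneous frequency via FM encoding, then estimate that frequency
    from the peak of a sliding-window spectrum."""
    n = len(noisy)
    lo, hi = noisy.min(), noisy.max()
    # Scale the signal into a usable normalized-frequency band (0.1..0.4).
    scaled = 0.1 + 0.3 * (noisy - lo) / (hi - lo + 1e-12)
    # Frequency-modulation encoding: instantaneous frequency = scaled signal.
    z = np.exp(2j * np.pi * np.cumsum(scaled))
    freqs = np.fft.fftfreq(win)
    taper = np.hanning(win)
    half = win // 2
    zp = np.pad(z, (half, half), mode='edge')
    est = np.empty(n)
    for i in range(n):
        seg = zp[i:i + win] * taper
        k = np.argmax(np.abs(np.fft.fft(seg)))      # peak of the TF slice
        est[i] = lo + (freqs[k] - 0.1) / 0.3 * (hi - lo + 1e-12)
    return est
```

    Random noise perturbs the encoded phase but barely moves the spectral peak, so the instantaneous-frequency readout recovers a denoised copy of the envelope from a single stack.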

  3. ROS signalling – Specificity is required

    DEFF Research Database (Denmark)

    Møller, Ian Max; Sweetlove, Lee J

    2011-01-01

    The production of reactive oxygen species (ROS) increases in plants under stress. ROS can damage cellular components, but they can also act in signal transduction to help the cell counteract the oxidative damage in the stressed compartment. H2O2 may induce a general stress response, but it does not have the required specificity to selectively regulate nuclear genes required for dealing with localized stress, e.g., in chloroplasts or mitochondria. We here argue that peptides deriving from proteolytic breakdown of oxidatively damaged proteins have the requisite specificity to act as secondary ROS messengers and regulate source-specific genes and in this way contribute to retrograde ROS signalling during oxidative stress. (This is a new project funded by FNU.) Reference: Møller, I.M. & Sweetlove, L.J. 2010. ROS signalling – Specificity is required. Trends Plant Sci. 15: 370-374.

  4. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity. A European COST Action TU0901, "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings, and to initiate – where needed – improvement of the sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and society. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24

  5. Assessing signal-driven mechanism in neonates: brain responses to temporally and spectrally different sounds

    Directory of Open Access Journals (Sweden)

    Yasuyo eMinagawa-Kawai

    2011-06-01

    Full Text Available Past studies have found that in adults, acoustic properties of sound signals (such as fast vs. slow temporal features) differentially activate the left and right hemispheres, and some have hypothesized that left-lateralization for speech processing may follow from left-lateralization to rapidly changing signals. Here, we tested whether newborns' brains show some evidence of signal-specific lateralization responses using near-infrared spectroscopy (NIRS) and auditory stimuli that elicit lateralized responses in adults, composed of segments that vary in duration and spectral diversity. We found significantly greater bilateral responses of oxygenated hemoglobin (oxy-Hb) in the temporal areas for stimuli with a minimum segment duration of 21 ms than for stimuli with a minimum segment duration of 667 ms. However, we found no evidence for hemispheric asymmetries dependent on the stimulus characteristics. We hypothesize that acoustic-based functional brain asymmetries may develop throughout early infancy, and discuss their possible relationship with brain asymmetries for language.

  6. Acoustic analysis of trill sounds.

    Science.gov (United States)

    Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri

    2012-04-01

    In this paper, the acoustic-phonetic characteristics of steady apical trills--trill sounds produced by the periodic vibration of the apex of the tongue--are studied. Signal processing methods, namely, zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect the effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are glottal epochs, strength of impulses at the glottal epochs, and instantaneous fundamental frequency of the glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of vocal tract system during the production of trill sounds. Qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help spotting trills in continuous speech are discussed.

  7. Low complexity lossless compression of underwater sound recordings.

    Science.gov (United States)

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
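
    The abstract does not spell out the algorithm, but the general recipe behind such low-complexity lossless audio compressors (flac included) is linear prediction followed by Rice coding of the residual. A hedged sketch of that recipe, with a first-order (delta) predictor and an illustrative Rice parameter k:

```python
def zigzag(r):                     # 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4
    return (r << 1) if r >= 0 else -(r << 1) - 1

def unzigzag(u):
    return (u >> 1) if u % 2 == 0 else -((u + 1) >> 1)

def rice_encode(vals, k):
    """Rice code: unary quotient + k-bit remainder per zigzag-mapped value."""
    bits = []
    for v in vals:
        u = zigzag(v)
        bits += [1] * (u >> k) + [0]                      # unary quotient
        bits += [(u >> b) & 1 for b in range(k - 1, -1, -1)]  # remainder
    return bits

def rice_decode(bits, n, k):
    vals, i = [], 0
    for _ in range(n):
        q = 0
        while bits[i] == 1:
            q += 1; i += 1
        i += 1                                            # terminating 0
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[i]; i += 1
        vals.append(unzigzag((q << k) | r))
    return vals

def compress(samples, k=4):
    """Delta prediction + Rice coding: smooth audio gives small residuals,
    which the Rice code stores in few bits."""
    res = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    return rice_encode(res, k)

def decompress(bits, n, k=4):
    res = rice_decode(bits, n, k)
    out = [res[0]]
    for r in res[1:]:
        out.append(out[-1] + r)
    return out
```

    Encoding and decoding are a handful of shifts and additions per sample, which is what makes sub-milliwatt operation plausible on a low-power microprocessor.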

  8. [Mechanism of the constant representation of the position of a sound signal source by the cricket cercal system neurons].

    Science.gov (United States)

    Rozhkova, G I; Polishcuk, N A

    1976-01-01

    Previously it has been shown that some abdominal giant neurones of the cricket have constant preferred directions of sound stimulation relative not to the cerci (the organs bearing the sound receptors) but to the insect body (Fig. 1) [1]. It is now found that the independence of the directional sensitivity of the giant neurones from the cerci position disappears after cutting all structures connecting the cerci to the body except the cercal nerves (Fig. 2). Therefore, the constancy of the directional sensitivity of the giant neurones is provided by proprioceptive signals about cerci position.

  9. Heart Sound Localization and Reduction in Tracheal Sounds by Gabor Time-Frequency Masking

    OpenAIRE

    SAATCI, Esra; Akan, Aydın

    2018-01-01

    Background and aim: Respiratory sounds, i.e. tracheal and lung sounds, have been of great interest due to their diagnostic values as well as the potential of their use in the estimation of respiratory dynamics (mainly airflow). Thus, the aim of the study is to present a new method to filter the heart sound interference from the tracheal sounds. Materials and methods: Tracheal sounds and airflow signals were collected by using an accelerometer from 10 healthy subjects. Tracheal sounds were then pr...

  10. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic sounds, to judge the presence or absence of abnormality based on the magnitude of the synchronized components. A synchronized-component sampling means can remove resonance sounds and other acoustic sounds generated only approximately synchronously with the rotation, based on the knowledge that the acoustic components generated in a normal state are a sort of resonance sound and are not precisely synchronized with the rotation rate. On the other hand, abnormal sounds of a rotating body are often caused by a compulsory force accompanying the rotation as their generation source, and such abnormal sounds can be detected by extracting only the rotation-synchronized components. Since the components of normal acoustic sounds currently generated are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing can be avoided and, as a result, abnormal sound detection sensitivity can be improved. Further, since the device discriminates the occurrence of the abnormal sound from the actually detected sounds, other frequency components which are forecast but not actually generated are not removed, which further improves detection sensitivity. (N.H.)
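
    Extracting only the rotation-synchronized components can be sketched as synchronous (time-domain) averaging over the rotation period: rotation-locked components add coherently, while everything else averages toward zero. The function names and the RMS-based abnormality score below are illustrative, not from the patent:

```python
import numpy as np

def synchronous_average(signal, period, n_revs):
    """Average n_revs successive rotation periods (period in samples).
    Asynchronous content is attenuated by roughly 1/sqrt(n_revs)."""
    frames = signal[:period * n_revs].reshape(n_revs, period)
    return frames.mean(axis=0)

def synchronized_magnitude(signal, period, n_revs):
    """RMS of the rotation-synchronized component: a simple score that can
    be compared against a threshold to flag abnormal sounds."""
    avg = synchronous_average(signal, period, n_revs)
    return float(np.sqrt(np.mean(avg ** 2)))
```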

  11. Visualization of Broadband Sound Sources

    OpenAIRE

    Sukhanov Dmitry; Erzakova Nadezhda

    2016-01-01

    In this paper, a method of imaging wideband audio sources is proposed, based on 2D microphone array measurements of the sound field taken at the same time in all microphones. The designed microphone array consists of 160 microphones and digitizes signals at a frequency of 7200 Hz. The measured signals are processed using a special algorithm that makes it possible to obtain a flat image of wideband sound sources. It is shown experimentally that the visualization is not dependent on the...

  12. The Sound Quality of Cochlear Implants: Studies With Single-sided Deaf Patients.

    Science.gov (United States)

    Dorman, Michael F; Natale, Sarah Cook; Butts, Austin M; Zeitler, Daniel M; Carlson, Matthew L

    2017-09-01

    The goal of the present study was to assess the sound quality of a cochlear implant (CI) for single-sided deaf (SSD) patients fit with a CI. One of the fundamental, unanswered questions in CI research is "what does an implant sound like?" Conventional CI patients must use the memory of a clean signal, often decades old, to judge the sound quality of their CIs. In contrast, SSD-CI patients can rate the similarity of a clean signal presented to the CI ear and candidate, CI-like signals presented to the ear with normal hearing. For Experiment 1, four types of stimuli were created for presentation to the normal-hearing ear: noise-vocoded signals, sine-vocoded signals, frequency-shifted sine-vocoded signals, and band-pass filtered natural speech signals. Listeners rated the similarity of these signals to unmodified signals sent to the CI on a scale of 0 to 10, with 10 being a complete match to the CI signal. For Experiment 2, multitrack signal mixing was used to create natural speech signals that varied along multiple dimensions. In Experiment 1, for eight adult SSD-CI listeners, the best median similarity rating to the sound of the CI was 1.9 for noise-vocoded signals, 2.9 for sine-vocoded signals, 1.9 for frequency-upshifted signals, and 5.5 for band-pass filtered signals. In Experiment 2, for three young listeners, combinations of band-pass filtering and spectral smearing led to ratings of 10. The sound quality of noise and sine vocoders does not generally correspond to the sound quality of cochlear implants fit to SSD patients. Our preliminary conclusion is that natural speech signals that have been muffled to one degree or another by band-pass filtering and/or spectral smearing provide a close, but incomplete, match to CI sound quality for some patients.
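
    A noise vocoder of the kind used in Experiment 1 can be sketched as follows. To keep the sketch dependency-free it uses FFT brick-wall bands rather than the usual analog-style filter banks, and the band edges and envelope cutoff are illustrative choices, not the study's parameters:

```python
import numpy as np

def noise_vocode(x, fs, edges=(100, 400, 1000, 2400, 6000), env_cut=30.0):
    """Noise-vocoder sketch: split the signal into bands, extract each
    band's slow amplitude envelope, and use it to modulate band-limited
    noise.  The fine structure is destroyed; only envelopes survive."""
    n = len(x)
    f = np.fft.rfftfreq(n, 1.0 / fs)
    X = np.fft.rfft(x)
    rng = np.random.default_rng(0)
    N = np.fft.rfft(rng.standard_normal(n))
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (f >= lo) & (f < hi)
        band = np.fft.irfft(X * band_mask, n)        # analysis band
        carrier = np.fft.irfft(N * band_mask, n)     # band-limited noise
        # Envelope: rectify, then low-pass by zeroing fast spectral content.
        env = np.fft.irfft(np.fft.rfft(np.abs(band)) * (f <= env_cut), n)
        out += np.clip(env, 0.0, None) * carrier
    return out
```

    Listening to such output next to the clean signal is exactly the comparison the SSD-CI listeners performed; their low ratings suggest this classic simulation does not capture what an implant sounds like.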

  13. Applying cybernetic technology to diagnose human pulmonary sounds.

    Science.gov (United States)

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) lie mostly below 120 Hz, where the human ear is not sensitive, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and a data acquisition card to acquire the PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, decomposing the PS signals into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vector of a neural network. We proposed a two-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy over a single network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To extend traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds; various PS waveforms, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
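
    The wavelet-subband feature extraction stage can be sketched as follows. The paper's exact wavelet and 17-element feature set are not given in the abstract, so this sketch substitutes a Haar transform and two statistics per subband as stand-ins:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: approximation + detail."""
    x = x[:len(x) - len(x) % 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def subband_features(x, levels=4):
    """Statistical features (mean |.| and std) per wavelet subband -- the
    kind of fixed-length vector fed to a BP/LVQ classifier stage."""
    feats = []
    for _ in range(levels):
        x, d = haar_step(x)
        feats.extend([np.mean(np.abs(d)), np.std(d)])
    feats.extend([np.mean(np.abs(x)), np.std(x)])
    return np.asarray(feats)
```

    Each recorded segment is thus reduced to a short, fixed-length vector regardless of its duration, which is what makes neural-network classification of the six PS classes tractable.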

  14. Constraints on decay of environmental sound memory in adult rats.

    Science.gov (United States)

    Sakai, Masashi

    2006-11-27

    When adult rats are pretreated with a 48-h-long 'repetitive nonreinforced sound exposure', performance in two-sound discriminative operant conditioning transiently improves. We have already proven that this 'sound exposure-enhanced discrimination' depends on an enhancement of the perceptual capacity of the auditory cortex. This study investigated the principles governing the decay of sound exposure-enhanced discrimination. Sound exposure-enhanced discrimination disappeared within approximately 72 h if animals were deprived of environmental sounds after sound exposure, and this shortened to less than approximately 60 h if they were exposed to the environmental sounds of the animal room. Sound deprivation itself exerted no clear effects. These findings suggest that the memory of a passively exposed, behaviorally irrelevant sound signal does not merely fade over an intrinsic lifetime but is also degraded by other incoming signals.

  15. ROS signalling - specificity is required

    DEFF Research Database (Denmark)

    Møller, Ian M; Sweetlove, Lee J

    2010-01-01

    Reactive oxygen species (ROS) production increases in plants under stress. ROS can damage cellular components, but they can also act in signal transduction to help the cell counteract the oxidative damage in the stressed compartment. H2O2 might induce a general stress response, but it does not have the required specificity to selectively regulate nuclear genes required for dealing with localized stress, e.g. in chloroplasts or mitochondria. Here we argue that peptides deriving from proteolytic breakdown of oxidatively damaged proteins have the requisite specificity to act as secondary ROS messengers and regulate source-specific genes and in this way contribute to retrograde ROS signalling during oxidative stress. Likewise, unmodified peptides deriving from the breakdown of redundant proteins could help coordinate organellar and nuclear gene expression.

  16. Effects of sounds of locomotion on speech perception

    Directory of Open Access Journals (Sweden)

    Matz Larsson

    2015-01-01

    Full Text Available Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in pace or walking out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal (the "just follow conversation" or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.
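
    Reading the reported medians against a common masker level shows how the JFC thresholds translate into the stated SNR benefit. The 60 dBA level below is one of the study's masker levels, chosen here purely for illustration:

```python
# Masker fixed at an illustrative 60 dBA; speech level adjusted by the
# listener to the "just follow conversation" (JFC) point.
masker = 60.0
jfc_sync, jfc_unsync = 38.5, 41.2    # median JFC speech levels (dBA)

snr_sync = jfc_sync - masker         # speech comprehensible 21.5 dB below masker
snr_unsync = jfc_unsync - masker     # only 18.8 dB below masker when out of step
benefit = snr_unsync - snr_sync      # 2.7 dB median SNR benefit of walking in step
```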

  17. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    Science.gov (United States)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated by an infinite vibrating plate attenuates exponentially with increasing distance from the surface of the plate. In practice, however, a discontinuity of the sound field exists at the edge of a finite vibrating plate, which broadens the wavenumber spectrum, and a sound wave radiates beyond the evanescent sound field because of this broadening. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on window functions of the kind utilized in signal analysis for reducing the broadening of a frequency spectrum. An optimization calculation is necessary to design a window function that suppresses sound radiation while securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. We therefore investigated a suitable method for calculating the sound pressure level in the far field, to confirm how the distribution of the sound pressure level varies with the window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinite far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by calculating the distribution of the sound pressure level at an infinite far field from the wavenumber spectrum on the vibrating plate. According to the result of comparing the distributions of the sound pressure level in the cases with and without the window function, it was confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was
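
    The role of the window function here parallels its role in spectral analysis: tapering the velocity distribution on the plate trades main-lobe width for sidelobe suppression in the wavenumber spectrum, and it is the sidelobes that leak into propagating (far-field) wavenumbers. A small numpy check of the sidelobe levels (the comparison of a rectangular and a Hann-shaped aperture is illustrative; the paper optimizes its own window):

```python
import numpy as np

def peak_sidelobe_db(aperture):
    """Peak sidelobe level (dB re. main lobe) of an aperture's wavenumber
    spectrum, from a heavily zero-padded FFT of the velocity distribution."""
    spec = np.abs(np.fft.fft(aperture, 64 * len(aperture)))
    db = 20 * np.log10(spec / spec.max() + 1e-12)
    i = 1                      # walk down the main lobe to its first null...
    while i < len(db) // 2 and db[i] < db[i - 1]:
        i += 1
    return db[i:len(db) // 2].max()   # ...then take the strongest sidelobe

# Rectangular (uniform) velocity distribution: first sidelobe near -13 dB.
# Hann-shaped distribution: sidelobes below -31 dB, i.e. far less leakage
# into radiating wavenumbers, at the cost of a wider main lobe.
```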

  18. Directional resolution of head-related transfer functions required in binaural synthesis

    DEFF Research Database (Denmark)

    Minnaar, Pauli; Plogsties, Jan; Christensen, Flemming

    2005-01-01

    In binaural synthesis a virtual sound source is implemented by convolving an anechoic signal with a pair of head-related transfer functions (HRTFs). In order to represent all possible directions of the sound source with respect to the listener, a discrete number of HRTFs are measured and interpolated [...] and moving sound sources. A criterion was found that predicts the experimental results. This criterion was used to estimate the directional resolution required in binaural synthesis for all directions on the sphere around the head.
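
    The convolution step and one naive interpolation scheme can be sketched as follows; the impulse responses below are toy placeholders, since real HRTF sets come from measurements of the kind this work relies on:

```python
import numpy as np

def binaural_synthesis(mono, hrir_left, hrir_right):
    """Render a virtual source by convolving the anechoic signal with the
    head-related impulse responses (time-domain HRTFs) of the two ears."""
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)

def interpolate_hrir(h_a, h_b, w):
    """Linear crossfade between the HRIRs of two neighbouring measured
    directions (w in [0, 1]).  This is the simplest interpolation scheme;
    better renderers interpolate magnitude and delay separately."""
    return (1.0 - w) * h_a + w * h_b
```

    The directional resolution question studied here is exactly how far apart the measured directions may lie before such interpolation becomes audible.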

  19. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption indicates the transformation of sound energy into heat. It is, for instance, employed to design the acoustics in rooms. The noise emitted by machinery and plants shall be reduced before arriving at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components at the walls with well-defined absorption characteristics, which are adjusted for corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise intensive environments into the neighbourhood.

  20. Sounds scary? Lack of habituation following the presentation of novel sounds.

    Directory of Open Access Journals (Sweden)

    Tine A Biedenweg

    Full Text Available BACKGROUND: Animals typically show less habituation to biologically meaningful sounds than to novel signals. We might therefore expect that acoustic deterrents should be based on natural sounds. METHODOLOGY: We investigated responses by western grey kangaroos (Macropus fuliginosus) towards playback of natural sounds (alarm foot stomps and Australian raven (Corvus coronoides) calls) and artificial sounds (faux snake hiss and bull whip crack). We then increased the rate of presentation to examine whether animals would habituate. Finally, we varied the frequency of playback to investigate optimal rates of delivery. PRINCIPAL FINDINGS: Nine behaviors clustered into five principal components. PC factors 1 and 2 (animals alert or looking, or hopping and moving out of area) accounted for 36% of variance. PC factor 3 (eating cessation, taking flight, movement out of area) accounted for 13% of variance. Factors 4 and 5 (relaxing, grooming and walking; 12% and 11% of variation, respectively) discontinued upon playback. The whip crack was most evocative; eating was reduced from 75% of time spent prior to playback to 6% following playback (post alarm stomp: 32%, raven call: 49%, hiss: 75%). Additionally, 24% of individuals took flight and moved out of the area (50 m radius) in response to the whip crack (foot stomp: 0%, raven call: 8% and 4%, hiss: 6%). Increasing the rate of presentation (12×/min × 2 min) caused 71% of animals to move out of the area. CONCLUSIONS/SIGNIFICANCE: The bull whip crack, an artificial sound, was as effective as the alarm stomp at eliciting aversive behaviors. Kangaroos did not fully habituate despite hearing the signal up to 20×/min. The highest rates of playback did not elicit the greatest responses, suggesting that 'more is not always better'. Ultimately, by utilizing both artificial and biological sounds, predictability may be masked or offset, so that habituation is delayed and more effective deterrents may be produced.

  1. Design of Meter-Scale Antenna and Signal Detection System for Underground Magnetic Resonance Sounding in Mines.

    Science.gov (United States)

    Yi, Xiaofeng; Zhang, Jian; Fan, Tiehu; Tian, Baofeng; Jiang, Chuandong

    2018-03-13

    Magnetic resonance sounding (MRS) is a novel geophysical method to detect groundwater directly. By applying this method to underground projects in mines and tunnels, warning information can be provided on water bodies that are hidden in front prior to excavation and thus reduce the risk of casualties and accidents. However, unlike its application to ground surfaces, the application of MRS to underground environments is constrained by the narrow space, quite weak MRS signal, and complex electromagnetic interferences with high intensities in mines. Focusing on the special requirements of underground MRS (UMRS) detection, this study proposes the use of an antenna with different turn numbers, which employs a separated transmitter and receiver. We designed a stationary coil with stable performance parameters and with a side length of 2 m, a matching circuit based on a Q-switch and a multi-stage broad/narrowband mixed filter that can cancel out most electromagnetic noise. In addition, noises in the pass-band are further eliminated by adopting statistical criteria and harmonic modeling and stacking, all of which together allow weak UMRS signals to be reliably detected. Finally, we conducted a field case study of the UMRS measurement in the Wujiagou Mine in Shanxi Province, China, with known water bodies. Our results show that the method proposed in this study can be used to obtain UMRS signals in narrow mine environments, and the inverted hydrological information generally agrees with the actual situation. Thus, we conclude that the UMRS method proposed in this study can be used for predicting hazardous water bodies at a distance of 7-9 m in front of the wall for underground mining projects.
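
    The harmonic-modelling step of the noise cancellation chain mentioned above can be sketched as a least-squares fit of power-line harmonics that is then subtracted from the record. The fundamental f0 = 50 Hz and five harmonics are typical assumptions for this sketch, not values from the paper:

```python
import numpy as np

def remove_powerline_harmonics(sig, fs, f0=50.0, n_harm=5):
    """Model power-line interference as a sum of sinusoids at multiples of
    f0, fit the amplitudes by least squares, and subtract the fit."""
    t = np.arange(len(sig)) / fs
    cols = []
    for k in range(1, n_harm + 1):
        cols.append(np.cos(2 * np.pi * k * f0 * t))
        cols.append(np.sin(2 * np.pi * k * f0 * t))
    A = np.column_stack(cols)                      # harmonic design matrix
    coef, *_ = np.linalg.lstsq(A, sig, rcond=None)
    return sig - A @ coef
```

    Because the MRS/UMRS signal sits far from the power-line harmonics in frequency, subtracting the fitted hum leaves the desired decaying envelope almost untouched.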

  2. Design of Meter-Scale Antenna and Signal Detection System for Underground Magnetic Resonance Sounding in Mines

    Directory of Open Access Journals (Sweden)

    Xiaofeng Yi

    2018-03-01

    Full Text Available Magnetic resonance sounding (MRS) is a novel geophysical method to detect groundwater directly. By applying this method to underground projects in mines and tunnels, warning information can be provided on water bodies that are hidden ahead of the excavation face, thus reducing the risk of casualties and accidents. However, unlike its application on the ground surface, the application of MRS to underground environments is constrained by the narrow space, the very weak MRS signal, and the complex, high-intensity electromagnetic interference in mines. Focusing on the special requirements of underground MRS (UMRS) detection, this study proposes the use of an antenna with different turn numbers, which employs a separated transmitter and receiver. We designed a stationary coil with stable performance parameters and a side length of 2 m, a matching circuit based on a Q-switch, and a multi-stage broad/narrowband mixed filter that cancels out most electromagnetic noise. In addition, noise in the pass-band is further eliminated by adopting statistical criteria, harmonic modeling, and stacking, all of which together allow weak UMRS signals to be reliably detected. Finally, we conducted a field case study of UMRS measurement in the Wujiagou Mine in Shanxi Province, China, with known water bodies. Our results show that the method proposed in this study can be used to obtain UMRS signals in narrow mine environments, and the inverted hydrological information generally agrees with the actual situation. Thus, we conclude that the UMRS method proposed in this study can be used for predicting hazardous water bodies at a distance of 7–9 m in front of the wall for underground mining projects.

  3. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
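The binaural timing cue this model builds on is, at its rawest, the interaural time difference (ITD). A minimal sketch, deliberately simpler than the paper's spiking model, estimating an ITD by cross-correlating the two ear signals (signal length, sample rate and delay are arbitrary choices):

```python
import numpy as np

fs = 44100
rng = np.random.default_rng(1)
source = rng.normal(size=4096)            # broadband source signal

true_delay = 20                           # ITD in samples (hypothetical)
left = source
right = np.concatenate([np.zeros(true_delay), source])[: source.size]

# The lag of the cross-correlation peak estimates the interaural time difference
corr = np.correlate(right, left, mode="full")
lag = int(np.argmax(corr)) - (left.size - 1)
print(lag)
```

A positive lag here means the right-ear signal lags the left, i.e. the source is closer to the left ear in this toy geometry.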

  4. Vibrotactile Identification of Signal-Processed Sounds from Environmental Events Presented by a Portable Vibrator: A Laboratory Study

    Directory of Open Access Journals (Sweden)

    Parivash Ranjbar

    2008-09-01

    Full Text Available Objectives: To evaluate different signal-processing algorithms for tactile identification of environmental sounds in a monitoring aid for the deafblind. Subjects: Two men and three women, sensorineurally deaf or profoundly hearing impaired with experience of vibratory experiments, aged 22-36 years. Methods: A closed set of 45 representative environmental sounds was processed using two transposing (TRHA, TR1/3) and three modulating (AM, AMFM, AMMC) algorithms and presented as tactile stimuli using a portable vibrator in three experiments. The algorithms TRHA, TR1/3, AMFM and AMMC had two alternatives (with and without adaptation to vibratory thresholds). In Exp. 1, the sounds were preprocessed and directly fed to the vibrator. In Exp. 2 and 3, the sounds were presented in an acoustic test room, without or with background noise (SNR = +5 dB), and processed in real time. Results: In Exp. 1, algorithms AMFM and AMFM(A) consistently had the lowest identification scores and were thus excluded from Exp. 2 and 3. TRHA, AM, AMMC, and AMMC(A) showed comparable identification scores (30%-42%), and the addition of noise did not deteriorate performance. Discussion: Algorithms TRHA, AM, AMMC, and AMMC(A) showed good performance in all three experiments and were robust in noise; they can therefore be used in further testing in real environments.

  5. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine the auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are determined to be the same or different. Rhesus macaques seem to have relatively poor short-term memory with auditory stimuli, and we examine whether particular sound types are more favorable for memory performance. Experiment 1 suggests that memory performance with vocalization sound types (particularly monkey vocalizations) is significantly better than with non-vocalization sound types, and that male monkeys outperform female monkeys overall. Experiment 2, controlling for the number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, for species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting that the biological and/or ethological meaning of these sounds makes them more effective for auditory memory.

  6. Vibrotactile Detection, Identification and Directional Perception of signal-Processed Sounds from Environmental Events: A Pilot Field Evaluation in Five Cases

    Directory of Open Access Journals (Sweden)

    Parivash Ranjbar

    2008-09-01

    Full Text Available Objectives: To conduct field tests of a vibrotactile aid for deaf/deafblind persons for detection, identification and directional perception of environmental sounds. Methods: Five deaf individuals (3F/2M, 22–36 years) tested the aid separately in a home environment (kitchen) and in a traffic environment. Their eyes were blindfolded, and they wore a headband and held a vibrator for sound identification. In the headband, three microphones were mounted, along with two vibrators for signalling the direction of the sound source. The sounds originated from events typical of the home environment and traffic. The subjects were tested both inexperienced (events unknown) and experienced (events known). They identified the events in the home and traffic environments, but perceived sound-source direction only in traffic. Results: The detection scores were higher than 98% both in the home and in the traffic environment. In the home environment, identification scores varied between 25%-58% when the subjects were inexperienced and between 33%-83% when they were experienced. In traffic, identification scores varied between 20%-40% when the subjects were inexperienced and between 22%-56% when they were experienced. The directional perception scores varied between 30%-60% when inexperienced and between 61%-83% when experienced. Discussion: The vibratory aid consistently improved all participants’ detection, identification and directional perception ability.

  7. Locating and classification of structure-borne sound occurrence using wavelet transformation

    International Nuclear Information System (INIS)

    Winterstein, Martin; Thurnreiter, Martina

    2011-01-01

    For the surveillance of nuclear facilities with respect to detached or loose parts within the pressure boundary, structure-borne sound detector systems are used. The impact of a loose part on the wall causes an energy transfer to the wall that is measured as a so-called singular sound event. The run-time differences of the sound signals allow a rough localization of the loose part. The authors performed a finite-element-based simulation of structure-borne sound measurements using real geometries. New knowledge on sound wave propagation, signal analysis and processing, neural networks and hidden Markov models was considered. Using the wavelet transformation, it is possible to improve the localization of structure-borne sound events.

  8. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as quadratic, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
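The pitch extractor named here, the harmonic product spectrum, multiplies the magnitude spectrum by downsampled copies of itself so that a fundamental's harmonics all reinforce the fundamental bin. A minimal sketch (window length, sample rate and harmonic count are arbitrary choices, not the paper's configuration):

```python
import numpy as np

def hps_pitch(x, fs, n_harmonics=3):
    """Estimate pitch with the harmonic product spectrum (HPS)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    hps = spec.copy()
    for h in range(2, n_harmonics + 1):
        down = spec[::h]                 # spectrum compressed by factor h
        hps[: down.size] *= down         # harmonics line up on the fundamental bin
    peak = int(np.argmax(hps[1:])) + 1   # skip the DC bin
    return peak * fs / x.size

fs = 8000
t = np.arange(2048) / fs
# Harmonic-rich test tone with a 220 Hz fundamental
x = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in (1, 2, 3))
print(hps_pitch(x, fs))
```

The resolution of the estimate is one FFT bin (fs divided by the window length), which is why the returned value is only approximately 220 Hz.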

  9. Visualization of Broadband Sound Sources

    Directory of Open Access Journals (Sweden)

    Sukhanov Dmitry

    2016-01-01

    Full Text Available In this paper a method for imaging wideband audio sources is proposed, based on 2D microphone array measurements of the sound field taken at the same time at all microphones. The designed microphone array consists of 160 microphones and digitizes signals at a frequency of 7200 Hz. The measured signals are processed using a special algorithm that makes it possible to obtain a flat image of wideband sound sources. It is shown experimentally that the visualization does not depend on the waveform, but is determined by the bandwidth. The developed system can visualize sources with a resolution of up to 10 cm.

  10. Acoustic cardiac signals analysis: a Kalman filter–based approach

    Directory of Open Access Journals (Sweden)

    Salleh SH

    2012-06-01

    Full Text Available Sheik Hussain Salleh,1 Hadrina Sheik Hussain,2 Tan Tian Swee,2 Chee-Ming Ting,2 Alias Mohd Noor,2 Surasak Pipatsart,3 Jalil Ali,4 Preecha P Yupapin3; 1Department of Biomedical Instrumentation and Signal Processing, Universiti Teknologi Malaysia, Skudai, Malaysia; 2Centre for Biomedical Engineering Transportation Research Alliance, Universiti Teknologi Malaysia, Johor Bahru, Malaysia; 3Nanoscale Science and Engineering Research Alliance, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand; 4Institute of Advanced Photonics Science, Universiti Teknologi Malaysia, Johor Bahru, Malaysia. Abstract: Auscultation of the heart is accompanied by both electrical activity and sound. Heart auscultation provides clues to diagnose many cardiac abnormalities. Unfortunately, detection of relevant symptoms and diagnosis based on heart sound through a stethoscope is difficult. The reason GPs find this difficult is that the heart sounds are of short duration and separated from one another by less than 30 ms. In addition, the cost of false positives constitutes wasted time and emotional anxiety for both patient and GP. Many heart diseases cause changes in heart sound, waveform, and additional murmurs before other signs and symptoms appear. Heart-sound auscultation is the primary test conducted by GPs. These sounds are generated primarily by turbulent flow of blood in the heart. Analysis of heart sounds requires a quiet environment with minimum ambient noise. In order to address such issues, a technique for denoising and estimating the heart sound signal is proposed in this investigation. The performance of such a filter naturally depends on prior information about the statistical properties of the signal and the background noise. This paper proposes Kalman filtering for denoising the heart sound signal, with the cycles of heart sounds assumed to follow a first-order Gauss–Markov process. These cycles are observed with additional noise
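A first-order Gauss–Markov process observed in additive noise, the model this record assumes for heart-sound cycles, is exactly the setting where a scalar Kalman filter is the optimal estimator. A minimal denoising sketch under that assumption (the AR coefficient and noise variances are hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
a, q, r = 0.95, 0.1, 1.0      # AR(1) coefficient, process and measurement noise variances

# Simulate a first-order Gauss-Markov signal and its noisy observation
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(0.0, np.sqrt(q))
z = x + rng.normal(0.0, np.sqrt(r), n)

# Scalar Kalman filter: predict with the AR(1) model, correct with each measurement
xhat = np.zeros(n)
p = 1.0                        # estimate variance
for k in range(1, n):
    x_pred = a * xhat[k - 1]
    p_pred = a * a * p + q
    gain = p_pred / (p_pred + r)
    xhat[k] = x_pred + gain * (z[k] - x_pred)
    p = (1.0 - gain) * p_pred

raw_mse = np.mean((z - x) ** 2)
kf_mse = np.mean((xhat - x) ** 2)
print(raw_mse, kf_mse)
```

The filtered error should be well below the raw measurement error, since the predict step exploits the assumed AR(1) structure of the signal.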

  11. Earth Observing System (EOS)/ Advanced Microwave Sounding Unit-A (AMSU-A): Special Test Equipment. Software Requirements

    Science.gov (United States)

    Schwantje, Robert

    1995-01-01

    This document defines the functional, performance, and interface requirements for the Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) Special Test Equipment (STE) software used in the test and integration of the instruments.

  12. A review of intelligent systems for heart sound signal analysis.

    Science.gov (United States)

    Nabih-Ali, Mohammed; El-Dahshan, El-Sayed A; Yahia, Ashraf S

    2017-10-01

    Intelligent computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of physicians and reduce the time required for accurate diagnosis. CAD systems can provide physicians with a suggested diagnosis of heart diseases. The objective of this paper is to review recently published preprocessing, feature extraction and classification techniques and the state of the art of phonocardiogram (PCG) signal analysis. The literature reviewed in this paper shows the potential of machine learning techniques as a design tool in PCG CAD systems and reveals that CAD systems for PCG signal analysis are still an open problem. Related studies are compared with respect to their datasets, feature extraction techniques and the classifiers they used. Current achievements and limitations in developing CAD systems for PCG signal analysis using machine learning techniques are presented and discussed. In the light of this review, a number of future research directions for PCG signal analysis are provided.

  13. Sound-contingent visual motion aftereffect

    Directory of Open Access Journals (Sweden)

    Kobayashi Maori

    2011-05-01

    Full Text Available Abstract Background After a prolonged exposure to a paired presentation of different types of signals (e.g., color and motion), one of the signals (color) becomes a driver for the other signal (motion). This phenomenon, known as contingent motion aftereffect, indicates that the brain can establish new neural representations even in the adult brain. However, contingent motion aftereffect has been reported only in the visual or auditory domain. Here, we demonstrate that a visual motion aftereffect can be contingent on a specific sound. Results Dynamic random dots moving in an alternating right or left direction were presented to the participants. Each direction of motion was accompanied by an auditory tone of a unique and specific frequency. After a 3-minute exposure, the tones began to exert a marked influence on visual motion perception, and the percentage of dots required to trigger motion perception systematically changed depending on the tones. Furthermore, this effect lasted for at least 2 days. Conclusions These results indicate that a new neural representation can be rapidly established between the auditory and visual modalities.

  14. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state-space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.
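The front end of this method is a Takens delay embedding of the sound signal followed by recurrence-plot features. A minimal sketch of the embedding and one common such feature, the recurrence rate; the embedding dimension, delay and threshold below are hypothetical, not the paper's values:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay embedding: rows are [x[i], x[i+tau], ..., x[i+(dim-1)*tau]]."""
    n = x.size - (dim - 1) * tau
    return np.stack([x[i : i + n] for i in range(0, dim * tau, tau)], axis=1)

def recurrence_rate(x, dim=3, tau=5, eps=0.2):
    """Fraction of state-space point pairs closer than eps (a recurrence-plot feature)."""
    emb = delay_embed(x, dim, tau)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return float(np.mean(d < eps))

t = np.linspace(0, 4 * np.pi, 400)
periodic = np.sin(t)                              # structured, breath-like oscillation
noise = np.random.default_rng(3).normal(size=400) # unstructured signal

rr_periodic = recurrence_rate(periodic)
rr_noise = recurrence_rate(noise)
print(rr_periodic, rr_noise)
```

Structured signals revisit the same regions of state space and so score a much higher recurrence rate than noise, which is what makes such features usable as observations for the hidden Markov models.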

  15. Swallowing sound detection using hidden markov modeling of recurrence plot features

    Energy Technology Data Exchange (ETDEWEB)

    Aboofazeli, Mohammad [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: umaboofa@cc.umanitoba.ca; Moussavi, Zahra [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: mousavi@ee.umanitoba.ca

    2009-01-30

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state-space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.

  16. Computer soundcard as an AC signal generator and oscilloscope for the physics laboratory

    Science.gov (United States)

    Sinlapanuntakul, Jinda; Kijamnajsuk, Puchong; Jetjamnong, Chanthawut; Chotikaprakhan, Sutharat

    2018-01-01

    The purpose of this paper is to develop both an AC signal generator and a dual-channel oscilloscope based on a standard personal computer equipped with a sound card, as part of the laboratory for fundamental physics and introductory electronics classes. The setup turns the computer into a two-channel measurement device which provides the sample rate, simultaneous sampling, frequency range, filters and other essential capabilities required to perform amplitude, phase and frequency measurements of AC signals. The AC signal is also generated from the same computer sound card output simultaneously, in any waveform such as sine, square, triangle, sawtooth, pulsed, swept sine and white noise. This converts an inexpensive PC sound card into a powerful device, which allows students to measure physical phenomena with their own PCs either at home or at the university. A graphical user interface was developed for control and analysis, including facilities for data recording, signal processing and real-time measurement display. The result is an expanded utility for self-learning by students in the field of electronics, for both AC and DC circuits, including sound and vibration experiments.
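The signal-generator half of such a setup can be emulated with nothing but the standard library: synthesize a sine waveform and write it as 16-bit PCM samples that a sound card output can play. A minimal sketch (frequency, duration, level and file name are arbitrary choices):

```python
import math
import struct
import wave

fs = 44100                    # sound-card sample rate
freq, dur = 440.0, 0.5        # test-tone frequency (Hz) and duration (s)

# 16-bit PCM sine samples at 80% of full scale
samples = [int(32767 * 0.8 * math.sin(2 * math.pi * freq * n / fs))
           for n in range(int(fs * dur))]

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)         # mono
    w.setsampwidth(2)         # 16-bit PCM
    w.setframerate(fs)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

Playing the resulting file through the sound card output and looping it back into the line input is the usual way to check such a soundcard oscilloscope end to end.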

  17. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2008-01-01

    In the present study, a novel multichannel loudspeaker-based virtual auditory environment (VAE) is introduced. The VAE aims at providing a versatile research environment for investigating auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The environment is based on the ODEON room acoustic simulation software to render the acoustical scene. ODEON outputs are processed using a combination of different-order Ambisonic techniques to calculate multichannel room impulse responses (mRIR). Auralization is then obtained by the convolution … During the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid “artifacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29-loudspeaker setup using both objective and subjective measurement techniques.

  18. Ultrasound sounding in air by fast-moving receiver

    Science.gov (United States)

    Sukhanov, D.; Erzakova, N.

    2018-05-01

    A method of ultrasound imaging in air with a fast-moving receiver is proposed. We consider the case when the speed of movement of the receiver cannot be neglected with respect to the speed of sound. In this case, the Doppler effect is significant, making matched filtering of the backscattered signal difficult. The proposed method does not use a continuous repetitive noise-sounding signal. A generalized approach applies spatial matched filtering in the time domain to recover the ultrasonic tomographic images.
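The baseline operation this record refers to, matched filtering, correlates the received echo with the known sounding waveform so that the correlation peak marks the propagation delay. A stationary-receiver sketch of that baseline (the paper's contribution is handling the moving-receiver Doppler case, which this toy example deliberately ignores; all parameters are hypothetical):

```python
import numpy as np

fs = 48000
t = np.arange(0, 0.01, 1 / fs)
# Linear chirp as the sounding waveform (sharp autocorrelation peak)
probe = np.sin(2 * np.pi * (2000.0 + 3e5 * t) * t)

delay = 300                               # propagation delay in samples (hypothetical)
echo = np.zeros(2048)
echo[delay : delay + probe.size] += probe
echo += np.random.default_rng(4).normal(0.0, 0.3, echo.size)  # additive noise

# Matched filter: correlate the received signal with the known probe;
# the peak of the correlation marks the echo delay
mf = np.correlate(echo, probe, mode="valid")
est_delay = int(np.argmax(mf))
print(est_delay)
```

With significant receiver motion, the echo is time-scaled by the Doppler effect and no longer matches the stored probe, which is exactly the failure mode the paper's spatial matched filtering addresses.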

  19. Rainforests as concert halls for birds: Are reverberations improving sound transmission of long song elements?

    DEFF Research Database (Denmark)

    Nemeth, Erwin; Dabelsteen, Torben; Pedersen, Simon Boel

    2006-01-01

    In forests, reverberations probably have both detrimental and beneficial effects on avian communication. They constrain signal discrimination by masking fast repetitive sounds, and they improve signal detection by elongating sounds. This ambivalence of reflections for animal signals in forests is similar to the influence of reverberations on speech or music in indoor sound transmission. Since comparisons of the sound fields of forests and concert halls have demonstrated that reflections can contribute, in both environments, a considerable part of the energy of a received sound, it is here assumed that reverberations … that longer sounds are less attenuated. The results indicate that the higher sound pressure level is caused by superimposed reflections. It is suggested that this beneficial effect of reverberations explains interspecific birdsong differences in element length. Transmission paths with stronger reverberations …

  20. Development of a Finite-Difference Time Domain (FDTD) Model for Propagation of Transient Sounds in Very Shallow Water.

    Science.gov (United States)

    Sprague, Mark W; Luczkovich, Joseph J

    2016-01-01

    This finite-difference time domain (FDTD) model for sound propagation in very shallow water uses pressure and velocity grids with both 3-dimensional Cartesian and 2-dimensional cylindrical implementations. Parameters, including water and sediment properties, can vary in each dimension. Steady-state and transient signals from discrete and distributed sources, such as the surface of a vibrating pile, can be used. The cylindrical implementation uses less computation but requires axial symmetry. The Cartesian implementation allows asymmetry. FDTD calculations compare well with those of a split-step parabolic equation. Applications include modeling the propagation of individual fish sounds, fish aggregation sounds, and distributed sources.
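The FDTD scheme described here time-steps interleaved pressure and particle-velocity grids. A minimal 1-D sketch of those two update equations; the paper's model is 2-D/3-D with spatially varying water and sediment properties, and all constants below are hypothetical:

```python
import numpy as np

# 1-D staggered-grid acoustic FDTD: pressure p on cell centers,
# particle velocity u on cell faces.
c, rho = 1500.0, 1000.0            # sound speed (m/s) and density (kg/m^3), water-like
dx = 0.01
dt = 0.5 * dx / c                  # Courant number 0.5 keeps the scheme stable

n = 400
x = np.arange(n) * dx
p = np.exp(-((x - x[n // 2]) ** 2) / (2 * (5 * dx) ** 2))  # Gaussian pressure pulse
u = np.zeros(n + 1)

steps = 150
for _ in range(steps):
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])   # momentum equation
    p -= dt * rho * c * c / dx * (u[1:] - u[:-1])   # continuity equation

# The initial pulse splits in two; each half travels c*dt*steps = 75 cells
peak = n // 2 + int(np.argmax(p[n // 2 :]))
print(peak)
```

The half-step staggering of p and u in both space and time is what gives the leapfrog scheme its second-order accuracy; the CFL condition on dt is what keeps it from blowing up.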

  1. Toward Inverse Control of Physics-Based Sound Synthesis

    Science.gov (United States)

    Pfalz, A.; Berdahl, E.

    2017-05-01

    Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.

  2. Sound Cross-synthesis and Morphing Using Dictionary-based Methods

    DEFF Research Database (Denmark)

    Collins, Nick; Sturm, Bob L.

    2011-01-01

    Dictionary-based methods (DBMs) provide rich possibilities for new sound transformations; as the analysis dual to granular synthesis, audio signals are decomposed into `atoms', allowing interesting manipulations. We present various approaches to audio signal cross-synthesis and cross-analysis via atomic decomposition using scale-time-frequency dictionaries. DBMs naturally provide high-level descriptions of a signal and its content, which can allow for greater control over what is modified and how. Through these models, we can make one signal decomposition influence that of another to create cross-synthesized sounds. We present several examples of these techniques both theoretically and practically, and present ongoing and further work.
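The atomic decomposition behind DBMs is typically computed with a greedy algorithm such as matching pursuit: repeatedly project the residual onto the dictionary and subtract the best-matching atom. A minimal sketch over a toy orthogonal cosine dictionary (real systems use large, overcomplete scale-time-frequency dictionaries, for which the greedy recovery is only approximate):

```python
import numpy as np

def matching_pursuit(x, dictionary, n_atoms):
    """Greedy decomposition of x over unit-norm dictionary columns."""
    residual = x.copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        proj = dictionary.T @ residual
        k = int(np.argmax(np.abs(proj)))  # best-matching atom
        coeffs[k] += proj[k]
        residual -= proj[k] * dictionary[:, k]
    return coeffs, residual

n = 256
t = np.arange(n)
# Toy dictionary: unit-norm cosines at integer frequencies (mutually orthogonal)
freqs = np.arange(1, 33)
D = np.stack([np.cos(2 * np.pi * f * t / n) for f in freqs], axis=1)
D /= np.linalg.norm(D, axis=0)

x = 3.0 * D[:, 4] + 1.5 * D[:, 20]        # signal built from two known atoms
coeffs, residual = matching_pursuit(x, D, 2)
print(np.flatnonzero(np.abs(coeffs) > 1e-6), np.linalg.norm(residual))
```

Cross-synthesis in the DBM sense then amounts to decomposing one signal and resynthesizing it with atoms, or atom weights, taken from the analysis of another.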

  3. Optical Reading and Playing of Sound Signals from Vinyl Records

    OpenAIRE

    Hensman, Arnold; Casey, Kevin

    2007-01-01

    While advanced digital music systems such as compact disk players and MP3 have become the standard in sound reproduction technology, critics claim that conversion to digital often results in a loss of sound quality and richness. For this reason, vinyl records remain the medium of choice for many audiophiles involved in specialist areas. The waveform cut into a vinyl record is an exact replica of the analogue version from the original source. However, while some perceive this media as reproduc...

  4. Developing a reference of normal lung sounds in healthy Peruvian children.

    Science.gov (United States)

    Ellington, Laura E; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H; Tielsch, James M; Chavez, Miguel A; Marin-Concha, Julio; Figueroa, Dante; West, James; Checkley, William

    2014-10-01

    Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope; 151 (81%) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected, and features extracted from spectral and temporal signal representations contributed to profiling of lung sounds. Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively; 47% were boys. We identified ten distinct spectral and spectro-temporal signal parameters; most demonstrated linear relationships with age, height, and weight, while no differences between genders were noted. Older children had a faster-decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Extracted lung sound features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments.
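Two of the simplest spectral descriptors used in this kind of acoustic profiling are the spectral centroid and the roll-off frequency. A minimal sketch of both, computed on synthetic frames; this is an illustration of the feature type, not the authors' actual feature set:

```python
import numpy as np

def spectral_features(frame, fs):
    """Spectral centroid and 95% roll-off frequency of one audio frame."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(frame.size))) ** 2
    freqs = np.fft.rfftfreq(frame.size, 1 / fs)
    centroid = np.sum(freqs * spec) / np.sum(spec)           # power-weighted mean freq
    rolloff = freqs[np.searchsorted(np.cumsum(spec), 0.95 * np.sum(spec))]
    return centroid, rolloff

fs = 4000
t = np.arange(1024) / fs
low = np.sin(2 * np.pi * 150 * t)     # frame dominated by low frequencies
high = np.sin(2 * np.pi * 800 * t)    # frame dominated by higher frequencies

c_low, _ = spectral_features(low, fs)
c_high, _ = spectral_features(high, fs)
print(c_low, c_high)
```

A frame whose energy sits at higher frequencies yields a higher centroid, which is the kind of systematic shift the study relates to age, height, weight and recording site.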

  5. Different Types of Sounds and Their Relationship With the Electrocardiographic Signals and the Cardiovascular System – Review

    Directory of Open Access Journals (Sweden)

    Ennio H. Idrobo-Ávila

    2018-05-01

    Full Text Available Background: For some time now, the effects of sound, noise, and music on the human body have been studied. However, despite the research done over time, it is still not completely clear what influence, interaction, and effects sounds have on the human body. That is why it is necessary to conduct new research on this topic. Thus, in this paper, a systematic review is undertaken in order to integrate research related to several types of sound, both pleasant and unpleasant, specifically noise and music. In addition, it includes as much research as possible to give stakeholders a more general vision of relevant elements regarding methodologies, study subjects, stimuli, analysis, and experimental designs in general. This study has been conducted in order to make a genuine contribution to this area and perhaps to raise the quality of future research about sound and its effects on ECG signals. Methods: This review was carried out by independent researchers, through three search equations, in four different databases covering engineering, medicine, and psychology. Inclusion and exclusion criteria were applied, and studies published between 1999 and 2017 were considered. The selected documents were read and analyzed independently by each group of researchers, and conclusions were subsequently established among all of them. Results: Despite the differences between the outcomes of the selected studies, some common factors were found among them. Thus, in noise studies where both BP and HR increased or tended to increase, it was noted that HRV (HF and LF/HF) changes with both sound and noise stimuli, whereas GSR changes with sound and musical stimuli. Furthermore, LF also showed changes with exposure to noise. Conclusion: In many cases, samples displayed a limitation in experimental design, and in diverse studies there was a lack of a control group. There was a lot of variability in the presented stimuli, providing a wide overview of the effects they could

  6. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can
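The segmentation step described above, deciding where sound events begin and end, can be illustrated in its crudest form as short-time energy thresholding; the dissertation's dynamic Bayesian network tracks changes in acoustic features far more robustly. Everything in this sketch (frame length, threshold factor, synthetic event) is hypothetical:

```python
import numpy as np

def segment_events(x, fs, frame_s=0.05, k=3.0):
    """Flag frames whose short-time energy exceeds k times the median frame energy."""
    n = int(frame_s * fs)
    frames = x[: x.size // n * n].reshape(-1, n)
    energy = (frames ** 2).mean(axis=1)
    return energy > k * np.median(energy)

fs = 8000
rng = np.random.default_rng(5)
x = rng.normal(0.0, 0.05, fs * 2)                       # 2 s of quiet background
event = np.sin(2 * np.pi * 440 * np.arange(1600) / fs)  # 0.2 s tonal event
x[8000:9600] += event                                   # event begins at t = 1 s

mask = segment_events(x, fs)
print(np.flatnonzero(mask))                             # indices of "event" frames
```

Energy thresholding fails for events that are quieter than the background or for slowly varying noise floors, which is precisely why a model that tracks feature dynamics over time is preferred for continuous recordings.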

  7. Sound source measurement by using a passive sound insulation and a statistical approach

    Science.gov (United States)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

    This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background-noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves, at low frequencies, on the sound insulation provided by the passive system alone. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method, and on the measurement error related to its application, are reported as well.

  8. Dementias show differential physiological responses to salient sounds.

    Science.gov (United States)

    Fletcher, Phillip D; Nicholas, Jennifer M; Shakespeare, Timothy J; Downey, Laura E; Golden, Hannah L; Agustus, Jennifer L; Clark, Camilla N; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching ("looming") or less salient withdrawing sounds. Pupil dilatation responses and behavioral rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n = 10; behavioral variant frontotemporal dementia, n = 16, progressive nonfluent aphasia, n = 12; amnestic Alzheimer's disease, n = 10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioral response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer's disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases.

  9. Digital servo control of random sound fields

    Science.gov (United States)

    Nakich, R. B.

    1973-01-01

    It is necessary to place a number of sensors at different positions in the sound field to determine the actual sound intensities to which the test object is subjected, and thereby to determine whether the specification is adequately met or exceeded. Since the excitation is random in nature, the signals are essentially incoherent, and it is impossible to obtain a true average.

  10. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    Science.gov (United States)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some earthquake and other disaster sites, rescuers search for people buried under rubble by listening for the sounds they make. Developing a technique to localize sound sources amidst loud noise would therefore support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique that searches for imperceptible sounds in loud noise environments. In an outdoor space where cicadas were making noise, two speakers simultaneously played generator noise and a voice attenuated by 20 dB (1/100 of the power) relative to that noise. The sound was received by a horizontally set linear microphone array, 1.05 m in length and consisting of 15 microphones. The direction and distance of the voice were computed, and the voice was extracted and played back as an audible sound by array signal processing.
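
The abstract does not detail the array processing, so the following is only a minimal illustration of the core idea behind such localization: estimating the time difference of arrival (TDOA) between two microphones by cross-correlation. The signals and the 5-sample delay are synthetic; direction then follows from the delay, the microphone spacing, and the speed of sound.

```python
import numpy as np

def tdoa_delay(x1, x2):
    """Estimate the integer-sample delay of x2 relative to x1 by
    cross-correlation; the core of simple array-based direction finding."""
    corr = np.correlate(x2, x1, mode='full')
    return np.argmax(corr) - (len(x1) - 1)

# Simulated microphone pair: the second mic hears the same noise 5 samples
# later, as it would for a source off to one side of the array.
rng = np.random.default_rng(0)
s = rng.standard_normal(4000)
delay = 5
x1 = s
x2 = np.concatenate([np.zeros(delay), s[:-delay]])
est = tdoa_delay(x1, x2)
```

With sampling rate `fs`, mic spacing `d`, and sound speed `c`, the bearing would follow from `sin(theta) = est * c / (fs * d)`.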

  11. A noise reduction technique based on nonlinear kernel function for heart sound analysis.

    Science.gov (United States)

    Mondal, Ashok; Saxena, Ishan; Tang, Hong; Banerjee, Poulami

    2017-02-13

    The main difficulty encountered in the interpretation of cardiac sounds is interference from noise. The contaminating noise obscures relevant information that is useful for recognizing heart diseases. The unwanted signals are produced mainly by the lungs and the surrounding environment. In this paper, a novel heart sound de-noising technique is introduced, based on a combined framework of the wavelet packet transform (WPT) and singular value decomposition (SVD). The most informative node of the wavelet tree is selected on the criterion of a mutual information measurement. Next, the coefficients corresponding to the selected node are processed by the SVD technique to suppress the noise component of the heart sound signal. To justify the efficacy of the proposed technique, several experiments were conducted with a heart sound dataset, including normal and pathological cases, at different signal-to-noise ratios. The significance of the method is validated by statistical analysis of the results. The biological information preserved in the de-noised heart sound (HS) signal is evaluated by a k-means clustering algorithm and a Fit Factor calculation. The overall results show that the proposed method is superior to the baseline methods.
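
The node-selection-by-mutual-information step is not reproduced here, but the SVD stage of such a pipeline can be sketched: embed the signal in a Hankel matrix, keep the dominant singular components, and average back along the anti-diagonals. The rank, embedding dimension, and test signal below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def svd_denoise(x, dim=50, rank=2):
    """Rank-truncated SVD on a Hankel embedding of the signal; keeps the
    dominant quasi-periodic structure and discards broadband noise."""
    N = len(x)
    H = np.array([x[i:i + dim] for i in range(N - dim + 1)])  # Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Average over anti-diagonals to map the low-rank matrix back to 1-D.
    y = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(Hr.shape[0]):
        y[i:i + dim] += Hr[i]
        cnt[i:i + dim] += 1
    return y / cnt

rng = np.random.default_rng(1)
t = np.arange(2000) / 1000.0
clean = np.sin(2 * np.pi * 30 * t)          # a 30 Hz heart-sound-like component
noisy = clean + 0.5 * rng.standard_normal(t.size)
den = svd_denoise(noisy)
```

A rank of 2 suffices here because a single sinusoid spans a two-dimensional subspace in the delay embedding; real heart sounds would need a higher rank.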

  12. Developing a Reference of Normal Lung Sounds in Healthy Peruvian Children

    Science.gov (United States)

    Ellington, Laura E.; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H.; Tielsch, James M.; Chavez, Miguel A.; Marin-Concha, Julio; Figueroa, Dante; West, James

    2018-01-01

    Purpose: Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. Methods: 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope; 151 (81%) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected, and features extracted from spectral and temporal signal representations contributed to the profiling of lung sounds. Results: Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively, and 47% were boys. We identified ten distinct spectral and spectro-temporal signal parameters; most demonstrated linear relationships with age, height, and weight, while no differences between sexes were noted. Older children had a faster-decaying spectrum than younger ones. Features such as spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also varied with recording site. Conclusions: Extracted lung sound features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments. PMID:24943262

  13. Two models of the sound-signal frequency dependence on the animal body size as exemplified by the ground squirrels of Eurasia (mammalia, rodentia).

    Science.gov (United States)

    Nikol'skii, A A

    2017-11-01

    Dependence of the sound-signal frequency on the animal body length was studied in 14 ground squirrel species (genus Spermophilus) of Eurasia. Regression analysis of the total sample yielded a low determination coefficient (R² = 26%), because the total sample proved to be heterogeneous in terms of signal frequency within the dimension classes of animals. When the total sample was divided into two groups according to signal frequency, two statistically significant models (regression equations) were obtained in which signal frequency depended on the body size at high determination coefficients (R² = 73% and 94% versus 26% for the total sample). Thus, the problem of correlation between animal body size and the frequency of their vocal signals does not have a unique solution.
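
The effect reported above, a low pooled R² that rises sharply once the sample is split by signal frequency, can be reproduced on made-up numbers. All values below are hypothetical; only the analysis pattern follows the abstract.

```python
import numpy as np

# Hypothetical body lengths (cm) and call frequencies (kHz) for two
# frequency groups of ground squirrels; illustrative values only.
length = np.array([18, 20, 22, 25, 27, 30, 19, 21, 24, 26, 29, 31])
freq   = np.array([9.8, 9.1, 8.5, 7.6, 7.0, 6.2,   # high-frequency group
                   5.0, 4.7, 4.2, 3.8, 3.3, 3.0])  # low-frequency group

def r_squared(x, y):
    """Coefficient of determination of a least-squares line y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

r2_total = r_squared(length, freq)          # pooled, heterogeneous sample
r2_high  = r_squared(length[:6], freq[:6])  # within-group fits
r2_low   = r_squared(length[6:], freq[6:])
```

The pooled fit is weak because the two groups sit on parallel but offset trends; each within-group regression is nearly exact.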

  14. Analysis of adventitious lung sounds originating from pulmonary tuberculosis.

    Science.gov (United States)

    Becker, K W; Scheffer, C; Blanckenberg, M M; Diacon, A H

    2013-01-01

    Tuberculosis is a common and potentially deadly infectious disease, usually affecting the respiratory system and causing the sound properties of symptomatic infected lungs to differ from those of non-infected lungs. Auscultation is often ruled out as a reliable diagnostic technique for TB due to the random distribution of the infection and the varying severity of damage to the lungs. However, advancements in signal processing techniques for respiratory sounds can improve the potential of auscultation far beyond the capabilities of the conventional mechanical stethoscope. Though computer-based signal analysis of respiratory sounds has produced a significant body of research, there have not been any recent investigations into the computer-aided analysis of lung sounds associated with pulmonary tuberculosis (TB), despite the severity of the disease in many countries. In this paper, respiratory sounds were recorded from 14 locations around the posterior and anterior chest walls of healthy volunteers and patients infected with pulmonary TB. The most significant signal features in both the time and frequency domains associated with the presence of TB were identified by using the statistical overlap factor (SOF). These features were then employed to train a neural network to automatically classify the auscultation recordings into their respective healthy or TB-origin categories. The neural network yielded a diagnostic accuracy of 73%, but it is believed that automated filtering of the noise in the clinics, more training samples, and perhaps other signal processing methods can improve the results of future studies. This work demonstrates the potential of computer-aided auscultation as an aid for the diagnosis and treatment of TB.
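
The exact formula of the statistical overlap factor (SOF) is not given in the abstract, so the sketch below uses a Fisher-style separability ratio as an explicitly hypothetical stand-in for ranking features by how well they separate the healthy and TB classes.

```python
import numpy as np

def separability(a, b):
    """Simple class-separability score (Fisher-style ratio): large when the
    class means are far apart relative to the within-class spread. A
    stand-in for the paper's SOF, whose formula is not reproduced here."""
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

rng = np.random.default_rng(2)
healthy = rng.normal(0.0, 1.0, 200)   # hypothetical feature values
tb      = rng.normal(2.0, 1.0, 200)   # well-separated feature
weak    = rng.normal(0.1, 1.0, 200)   # barely informative feature

good_score = separability(healthy, tb)
weak_score = separability(healthy, weak)
```

Features scoring highest would then be fed to the classifier, mirroring the feature-selection role the SOF plays in the paper.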

  15. Arctic Ocean Model Intercomparison Using Sound Speed

    Science.gov (United States)

    Dukhovskoy, D. S.; Johnson, M. A.

    2002-05-01

    The monthly and annual means from three Arctic ocean - sea ice climate model simulations are compared for the period 1979-1997. Sound speed is used to integrate model outputs of temperature and salinity along a section between Barrow and Franz Josef Land. A statistical approach is used to test for differences among the three models for two basic data subsets. We integrated and then analyzed an upper layer between 2 m and 50 m, and also a deep layer from 500 m to the bottom. The deep layer is characterized by low time-variability. No high-frequency signals appear in the deep layer; they have been filtered out in the upper layer. There is no seasonal signal in the deep layer, and the monthly means oscillate insignificantly about the long-period mean. For the deep ocean the long-period mean can be considered quasi-constant, at least within the 19-year period of our analysis. Thus we assumed that the deep ocean would be the best choice for comparing the means of the model outputs. The upper (mixed) layer was chosen to contrast the deep-layer dynamics. There are distinct seasonal and interannual signals in the sound speed time series in this layer. The mixed layer is a major link in the ocean - air interaction mechanism. Thus, different mean states of the upper layer in the models might cause different responses in other components of the Arctic climate system. The upper layer also strongly reflects any differences in atmosphere forcing. To compare data from the three models we have used a one-sample t-test for the population mean, the Wilcoxon one-sample signed-rank test (when the requirement of normality of the tested data is violated), and the one-way ANOVA method with an F-test to verify our hypothesis that the model outputs have the same mean sound speed. The different statistical approaches have shown that all models have different mean characteristics of the deep and upper layers of the Arctic Ocean.
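
The three statistical tests named above are available in scipy.stats; a sketch on synthetic "model output" series (all values hypothetical, roughly deep-Arctic sound speeds) might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical deep-layer mean sound speeds (m/s), one value per year
# of the 1979-1997 period, for three model runs.
model_a = rng.normal(1462.0, 0.5, 19)
model_b = rng.normal(1463.5, 0.5, 19)
model_c = rng.normal(1462.2, 0.5, 19)

# One-sample t-test: does model_a match a reference mean of 1462 m/s?
t_stat, t_p = stats.ttest_1samp(model_a, 1462.0)

# Wilcoxon signed-rank test: same question without assuming normality.
w_stat, w_p = stats.wilcoxon(model_a - 1462.0)

# One-way ANOVA with its F-test: do the three models share a common mean?
f_stat, f_p = stats.f_oneway(model_a, model_b, model_c)
```

Here the ANOVA rejects a common mean because `model_b` is offset, mirroring the paper's finding that the models differ.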

  16. Sound field separation with sound pressure and particle velocity measurements

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-01-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array; thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance…

  17. Acoustic signal analysis in the creeping discharge

    International Nuclear Information System (INIS)

    Nakamiya, T; Sonoda, Y; Tsuda, R; Ebihara, K; Ikegami, T

    2008-01-01

    We have previously succeeded in measuring the acoustic signal due to the dielectric barrier discharge and discriminating its dominant frequency components, which appear above 20 kHz. Recently, surface-discharge control technology has attracted attention for practical applications such as ozonizers, NOx reactors, light sources, and displays. Fundamental experiments were carried out to examine the creeping discharge using the acoustic signal. When a high voltage (6 kV, f = 10 kHz) is applied to the electrode, a discharge current flows and an acoustic sound is generated. The current and voltage waveforms of the creeping discharge and the sound signal detected by a condenser microphone are stored in a digital memory scope. In this scheme, the continuous wavelet transform (CWT) is applied to discriminate the acoustic sound of the micro-discharge, and the dominant frequency components are studied. CWT results for the sound signal show a wideband frequency spectrum up to 100 kHz. In addition, the energy distributions of the acoustic signal are examined by CWT.
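
A continuous wavelet transform of a burst-like signal can be sketched without specialized libraries by convolving with scaled Ricker (Mexican-hat) wavelets. The 25 kHz wave packet and 1 MHz sampling rate below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet of scale a, energy-normalized."""
    t = np.arange(points) - (points - 1) / 2
    A = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return A * (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt(x, widths):
    """Continuous wavelet transform: convolve with wavelets at each scale."""
    out = np.empty((len(widths), len(x)))
    for i, w in enumerate(widths):
        wav = ricker(min(10 * int(w), len(x)), w)
        out[i] = np.convolve(x, wav, mode='same')
    return out

# Synthetic "discharge" burst: a 25 kHz wave packet in a 1 MHz-sampled trace.
fs = 1_000_000
t = np.arange(5000) / fs
x = np.exp(-((t - 2.5e-3) / 3e-4) ** 2) * np.sin(2 * np.pi * 25e3 * t)
widths = np.arange(1, 40)
C = cwt(x, widths)
energy_per_scale = np.sum(C ** 2, axis=1)
best_width = widths[np.argmax(energy_per_scale)]
```

For the Ricker wavelet the peak response scale relates to frequency as `a ≈ sqrt(2) * fs / (2 * pi * f)`, about 9 for this burst, so the energy-per-scale profile localizes the dominant component.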

  18. Vocal Noise Cancellation From Respiratory Sounds

    National Research Council Canada - National Science Library

    Moussavi, Zahra

    2001-01-01

    Although background noise cancellation for speech or electrocardiographic recording is well established, when the background noise contains vocal noises and the main signal is a breath sound...

  19. The effect of frequency-specific sound signals on the germination of maize seeds.

    Science.gov (United States)

    Vicient, Carlos M

    2017-07-25

    The effects of sound treatments on the germination of maize seeds were determined. White noise and bass sounds (300 Hz) had a positive effect on the germination rate: a treatment of only 3 h increased germination by about 8%, and 5 h by about 10%. Fast-green staining showed that at least part of the effect of sound is due to a physical alteration of the integrity of the pericarp, increasing the porosity of the pericarp and facilitating oxygen availability and water uptake. Accordingly, when the pericarp was removed from the seeds, the positive effect of sound on germination disappeared.

  20. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications that are "uninformed'' about the target sound content; however, a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise-free sound signal of the target talker via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...

  1. Neuroanatomic organization of sound memory in humans.

    Science.gov (United States)

    Kraut, Michael A; Pitcock, Jeffery A; Calhoun, Vince; Li, Juan; Freeman, Thomas; Hart, John

    2006-11-01

    The neural interface between sensory perception and memory is a central issue in neuroscience, particularly the initial memory organization that follows perceptual analysis. We used functional magnetic resonance imaging to identify anatomic regions extracting initial auditory semantic memory information related to environmental sounds. Two distinct anatomic foci were detected in the right superior temporal gyrus when subjects identified sounds representing either animals or threatening items. Threatening animal stimuli elicited signal changes in both foci, suggesting a distributed neural representation. Our results demonstrate both category- and feature-specific responses to nonverbal sounds in early stages of extracting semantic memory information from these sounds. This organization allows these category-feature detection nodes to extract early semantic memory information for efficient processing of transient sound stimuli. The neural regions selective for threatening sounds are similar to those of nonhuman primates, demonstrating that semantic memory organization for basic biological/survival primitives is present across species.

  2. Binaural Processing of Multiple Sound Sources

    Science.gov (United States)

    2016-08-18

    Final performance report AFRL-AFOSR-VA-TR-2016-0298, "Binaural Processing of Multiple Sound Sources," William Yost, Arizona State University, 660 S Mill Ave Ste 312, Tempe, AZ 85281. Report type: final performance; dates covered: 15 Jul 2012 to 14 Jul 2016. "… the three topics cited above are entirely within the scope of the AFOSR grant." Subject terms: binaural hearing, sound localization, interaural signal.

  3. Portable system for auscultation and lung sound analysis.

    Science.gov (United States)

    Nabiev, Rustam; Glazova, Anna; Olyinik, Valery; Makarenkova, Anastasiia; Makarenkov, Anatolii; Rakhimov, Abdulvosid; Felländer-Tsai, Li

    2014-01-01

    A portable system for auscultation and lung sound analysis has been developed, comprising an original electronic stethoscope coupled with mobile devices and special algorithms for the automated analysis of pulmonary sound signals. The developed system is intended for monitoring the health status of patients with various pulmonary diseases.

  4. The influence of ski helmets on sound perception and sound localisation on the ski slope

    Directory of Open Access Journals (Sweden)

    Lana Ružić

    2015-04-01

    Objectives: The aim of the study was to investigate whether a ski helmet interferes with sound localization and the time of sound perception in the frontal plane. Material and Methods: Twenty-three participants (age 30.7±10.2) were tested on the slope under 2 conditions, with and without a ski helmet, using 6 spatially distributed sound stimuli per condition. Each subject had to react as soon as possible upon hearing a sound and to indicate the side from which it arrived. Results: The results showed a significant difference in the ability to localize the specific ski sounds: 72.5±15.6% correct answers without a helmet vs. 61.3±16.2% with a helmet (p < 0.01). However, performance on this test did not depend on whether participants were used to wearing a helmet (p = 0.89). In identifying the time at which the sound was first perceived, the results were also in favor of the subjects not wearing a helmet. The subjects reported hearing the ski sound cues at 73.4±5.56 m without a helmet vs. 60.29±6.34 m with a helmet (p < 0.001). In that case the results did depend on previous helmet use (p < 0.05), meaning that regular use of helmets might help to diminish the attenuation of sound identification that helmets cause. Conclusions: Ski helmets might limit the ability of a skier to localize the direction of sounds of danger and might delay the moment at which a sound is first heard.

  5. Dementias show differential physiological responses to salient sounds

    Directory of Open Access Journals (Sweden)

    Phillip David Fletcher

    2015-03-01

    Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching ('looming') or less salient withdrawing sounds. Pupil dilatation responses and behavioural rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n=10; behavioural variant frontotemporal dementia, n=16; progressive non-fluent aphasia, n=12; amnestic Alzheimer's disease, n=10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals, but this behavioural response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive non-fluent aphasia and Alzheimer's disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases.

  7. Active structural acoustic control for reduction of radiated sound from structure

    International Nuclear Information System (INIS)

    Hong, Jin Seok; Oh, Jae Eung

    2001-01-01

    Active control of sound radiation from a rectangular plate vibrating under a steady-state harmonic point-force disturbance is experimentally studied. Structural excitation is achieved by two piezoceramic actuators mounted on the panel, and two accelerometers are implemented as error sensors. Radiated sound signals estimated using the vibro-acoustic path transfer function, which represents the system between the accelerometers and microphones, are used as error signals. The approach is based on a multi-channel filtered-x LMS algorithm. The results show that sound-level attenuations of 11 dB and 10 dB are achieved.
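
The paper's controller is multi-channel, but the filtered-x LMS idea can be illustrated in a single-channel simulation. The primary and secondary paths below are arbitrary toy filters, and the secondary-path model is assumed exact; none of this reflects the paper's plant.

```python
import numpy as np

def fxlms(ref, disturbance, S, L=16, mu=0.005, S_hat=None):
    """Single-channel filtered-x LMS: adapt control filter W so that the
    control signal, after passing through secondary path S, cancels the
    disturbance at the error sensor. S_hat is the secondary-path model."""
    if S_hat is None:
        S_hat = S
    W = np.zeros(L)
    xbuf = np.zeros(L)              # reference history for the control filter
    ybuf = np.zeros(len(S))         # control-output history through S
    fxbuf = np.zeros(L)             # filtered-reference history
    xhist = np.zeros(len(S_hat))    # reference history for S_hat filtering
    err = np.zeros(len(ref))
    for n in range(len(ref)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = ref[n]
        y = W @ xbuf                            # anti-noise sample
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e = disturbance[n] + S @ ybuf           # error-sensor signal
        xhist = np.roll(xhist, 1); xhist[0] = ref[n]
        fx = S_hat @ xhist                      # filtered reference
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
        W -= mu * e * fxbuf                     # LMS update
        err[n] = e
    return err

# Tonal disturbance through a toy primary path; secondary path is a delay.
fs = 2000
n = np.arange(8000)
ref = np.sin(2 * np.pi * 120 * n / fs)
P = np.array([0.0, 0.9, 0.4])                  # primary path
d = np.convolve(ref, P)[:len(ref)]
S = np.array([0.0, 0.0, 1.0])                  # secondary path: 2-sample delay
err = fxlms(ref, d, S)
```

The residual at the error sensor decays by orders of magnitude once `W` converges, the behavior the paper's dB attenuation figures quantify.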

  8. Development of an Amplifier for Electronic Stethoscope System and Heart Sound Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. J.; Kang, D. K. [Chongju University, Chongju (Korea)

    2001-05-01

    A conventional stethoscope cannot store the sounds it picks up, so a doctor must diagnose a patient from the instantaneous stethoscopic sounds heard at that moment and cannot recall their exact character at the next examination. This prevents accurate and objective diagnosis. An electronic stethoscope that can store the stethoscopic sound would greatly improve auscultation. This study describes an amplifier for an electronic stethoscope system that can extract the heart sounds of a fetus as well as an adult and allows the sounds to be heard and recorded. Using the developed stethoscopic amplifier, clean heart sounds of fetus and adult can be heard in noisy environments, such as a consultation room of a university hospital or a university laboratory. Remarkably, the heart sound of a 22-week fetus was heard through the developed electronic stethoscope. Pitch-detection experiments using the detected heart sounds showed that the signal exhibits distinct periodicity. The developed electronic stethoscope can be expected to substitute for conventional stethoscopes, and if a proper analysis method for the stethoscopic signal is developed, a good electronic stethoscope system can be produced. (author). 17 refs., 6 figs.
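
The periodicity observed in the pitch-detection experiments can be illustrated with a standard autocorrelation-based period estimator; the synthetic beat train below (1.2 Hz beats at a 500 Hz sampling rate) is purely illustrative, not the study's data.

```python
import numpy as np

def period_by_autocorrelation(x, min_lag):
    """Estimate the dominant period as the lag of the largest
    autocorrelation peak beyond min_lag (a classic pitch-detection step)."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]
    return min_lag + np.argmax(ac[min_lag:])

# A synthetic heart-sound-like train: one shaped pulse per beat.
fs = 500
t = np.arange(0, 8, 1 / fs)
beat_period = int(fs / 1.2)                       # ~417 samples per beat
x = np.zeros(t.size)
x[::beat_period] = 1.0
x = np.convolve(x, np.hanning(40), mode='same')   # give each beat a shape
lag = period_by_autocorrelation(x, min_lag=100)
```

Dividing `fs` by the estimated lag recovers the beat rate, which is how distinct periodicity in a recorded heart sound would be quantified.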

  9. Audio-visual interactions in product sound design

    NARCIS (Netherlands)

    Özcan, E.; Van Egmond, R.

    2010-01-01

    Consistent product experience requires congruity between product properties such as visual appearance and sound. Therefore, for designing appropriate product sounds by manipulating their spectral-temporal structure, product sounds should preferably not be considered in isolation but as an integral

  10. A basic study on universal design of auditory signals in automobiles.

    Science.gov (United States)

    Yamauchi, Katsuya; Choi, Jong-dae; Maiguma, Ryo; Takada, Masayuki; Iwamiya, Shin-ichiro

    2004-11-01

    In this paper, the impressions given by various kinds of auditory signals currently used in automobiles, together with a comprehensive evaluation, were measured by the semantic differential method, and the desirable acoustic characteristics were examined for each type of auditory signal. Sharp sounds with dominant high-frequency components were not suitable for auditory signals in automobiles; this tendency also favors the aged, whose auditory sensitivity in the high-frequency region is lower. When intermittent sounds were used, a longer OFF time was suitable. Generally, "dull (not sharp)" and "calm" sounds were appropriate for auditory signals. Furthermore, a comparison between the frequency spectrum of interior noise in automobiles and that of sounds suitable for various auditory signals indicates that the suitable sounds are not easily masked. Providing suitable auditory signals for the various purposes is a good solution from the viewpoint of universal design.

  11. PREFACE: Aerodynamic sound Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    reduction of bluff-body noise. Xiaoyu Wang and Xiaofeng Sun discuss the interaction of fan stator and acoustic treatments using the transfer element method. S Saito and his colleagues in JAXA report the development of active devices for reducing helicopter noise. The paper by A Tamura and M Tsutahara proposes a brand new methodology for aerodynamic sound by applying the lattice Boltzmann finite difference method. As the method solves the fluctuation of air density directly, it has the advantage of not requiring modeling of the sound generation. M A Langthjem and M Nakano solve the hole-tone feedback cycle in jet flow by a numerical method. Y Ogami and S Akishita propose the application of a line-vortex method to the three-dimensional separated flow from a bluff body. I hope that a second issue on aerodynamic sound will be published in FDR in the not too distant future.

  12. Synthesis of vibroarthrographic signals in knee osteoarthritis diagnosis training.

    Science.gov (United States)

    Shieh, Chin-Shiuh; Tseng, Chin-Dar; Chang, Li-Yun; Lin, Wei-Chun; Wu, Li-Fu; Wang, Hung-Yu; Chao, Pei-Ju; Chiu, Chien-Liang; Lee, Tsair-Fwu

    2016-07-19

    Vibroarthrographic (VAG) signals are useful indicators of knee osteoarthritis (OA) status. The objective was to build a template database of knee crepitus sounds on which trainees can practice, shortening the time needed to train for the diagnosis of OA. Knee sound signals were obtained using an innovative stethoscope device with a goniometer, and each was recorded with a Kellgren-Lawrence (KL) grade. The sound signal was segmented according to the goniometer data, Fourier transformed on the correlated frequency segment, and inverse Fourier transformed to recover the time-domain signal; a Haar wavelet transform was then applied. The median and the mean of the wavelet coefficients were used to inverse-transform a synthesized signal for each KL category. The quality of the synthesized signals was assessed by a clinician, and the sample signals were evaluated using the two algorithms (median and mean). The accuracy rate of the median-coefficient algorithm (93%) was better than that of the mean-coefficient algorithm (88%) in the clinician's cross-validation of the synthesized VAG signals. The artificial signals we synthesized have the potential to form a learning system for medical students, interns, and paramedical personnel for the diagnosis of OA. Our method therefore provides a feasible way to evaluate crepitus sounds that may assist in the diagnosis of knee OA.
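
The median-of-wavelet-coefficients synthesis can be sketched with a hand-rolled one-level Haar transform (avoiding any wavelet-library dependency). The crepitus-like template and noise levels below are hypothetical; only the median-coefficient idea follows the paper.

```python
import numpy as np

def haar_fwd(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_inv(a, d):
    """Inverse of haar_fwd: perfect reconstruction."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Several noisy crepitus-like recordings from one (hypothetical) KL grade.
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 256)
template_true = np.exp(-40 * (t - 0.5) ** 2) * np.sin(2 * np.pi * 20 * t)
signals = [template_true + 0.3 * rng.standard_normal(256) for _ in range(15)]

# Median of Haar coefficients across recordings -> synthesized template.
coeffs = [haar_fwd(s) for s in signals]
a_med = np.median([a for a, _ in coeffs], axis=0)
d_med = np.median([d for _, d in coeffs], axis=0)
synth = haar_inv(a_med, d_med)
```

The median across recordings suppresses outliers and noise, which is consistent with the paper's finding that the median-coefficient variant validated better than the mean.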

  13. A system for heart sounds classification.

    Directory of Open Access Journals (Sweden)

    Grzegorz Redlarski

    Full Text Available The future of quick and efficient disease diagnosis lies in the development of reliable non-invasive methods. For cardiac diseases - one of the major causes of death around the globe - an electronic stethoscope equipped with an automatic heart tone identification system appears to be the best solution. Thanks to advances in technology, the quality of phonocardiography signals is no longer an issue. However, algorithms for auto-diagnosis of heart diseases capable of distinguishing most known pathological states have not yet been developed. The main obstacles are the non-stationary character of phonocardiography signals and the wide range of distinguishable pathological heart sounds. In this paper a new heart sound classification technique, which might find use in medical diagnostic systems, is presented. It is shown that by combining Linear Predictive Coding coefficients, used for feature extraction, with a classifier built upon a Support Vector Machine and the Modified Cuckoo Search algorithm, the performance of the diagnostic system can be improved in terms of accuracy, complexity, and range of distinguishable heart sounds. The developed system achieved accuracy above 93% for all considered cases, including simultaneous identification of twelve different heart sound classes. The system is compared with four major classification methods, demonstrating its reliability.
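The feature-extraction stage above relies on Linear Predictive Coding coefficients. As a hedged illustration (not the authors' implementation), LPC coefficients can be computed with the autocorrelation method and the Levinson-Durbin recursion in plain numpy:

```python
import numpy as np

def lpc(signal, order):
    """LPC coefficients via the autocorrelation method and the
    Levinson-Durbin recursion; returns a with a[0] == 1 such that
    sum_k a[k] * x[t-k] approximates the prediction error."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    # biased autocorrelation r[0..order]
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[1:i][::-1]
        k = -acc / err                       # reflection coefficient
        a[1:i] = a[1:i] + k * a[1:i][::-1]   # update previous coefficients
        a[i] = k
        err *= (1.0 - k * k)                 # updated prediction error
    return a
```

On the impulse response of a known all-pole filter, the recursion recovers the filter's coefficients, which is a convenient sanity check.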

  14. Moth hearing and sound communication

    DEFF Research Database (Denmark)

    Nakano, Ryo; Takanashi, Takuma; Surlykke, Annemarie

    2015-01-01

    Active echolocation enables bats to orient and hunt the night sky for insects. As a counter-measure against this severe predation pressure, many nocturnal insects have evolved ears sensitive to ultrasonic bat calls. In moths, bat detection was the principal purpose of hearing, as evidenced by comparable hearing physiology with best sensitivity in the bat echolocation range, 20–60 kHz, across moths in spite of diverse ear morphology. Some eared moths subsequently developed sound-producing organs to warn/startle/jam attacking bats and/or to communicate intraspecifically with sound. Not only the sounds for interaction with bats, but also mating signals are within the frequency range where bats echolocate, indicating that sound communication developed after hearing by “sensory exploitation”. Recent findings on moth sound communication reveal that close-range (~ a few cm) communication with low…

  15. Sound generating flames of a gas turbine burner observed by laser-induced fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Hubschmid, W.; Inauen, A.; Bombach, R.; Kreutner, W.; Schenker, S.; Zajadatz, M. [Alstom (Switzerland)]; Motz, C. [Alstom (Switzerland)]; Haffner, K. [Alstom (Switzerland)]; Paschereit, C.O. [Alstom (Switzerland)]

    2002-03-01

    We performed 2-D OH LIF measurements to investigate the sound emission of a gas turbine combustor. The measured LIF signal was averaged over pulses at constant phase of the dominant acoustic oscillation. A periodic variation in intensity and position of the signal is observed and it is related to the measured sound intensity. (author)

  16. A framework for automatic heart sound analysis without segmentation

    Directory of Open Access Journals (Sweden)

    Tungpimolrut Kanokvate

    2011-02-01

    Full Text Available Abstract Background A new framework for heart sound analysis is proposed. One of the most difficult steps in heart sound analysis is segmentation, due to interference from murmurs. Method Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from the envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several online databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion The proposed method showed promising results and high noise robustness for a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients and evaluating the method on it.
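The cycle-length step described above - autocorrelation of an envelope signal - can be sketched roughly as follows. This is a simplified stand-in for the paper's method; the rectified moving-average envelope and the peak-picking rule are assumptions:

```python
import numpy as np

def cycle_length(sound, fs, smooth=0.05):
    """Estimate the cardiac cycle length (in samples) from the
    autocorrelation of a rectified, smoothed envelope."""
    win = max(1, int(smooth * fs))
    env = np.convolve(np.abs(sound), np.ones(win) / win, mode="same")
    env = env - env.mean()
    # autocorrelation for non-negative lags
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    trough = np.argmax(ac < 0)               # first lag where ac turns negative
    return trough + np.argmax(ac[trough:])   # dominant peak after that lag
```

On a synthetic train of sound bursts the estimate lands on the burst period, independent of where individual heart sounds begin.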

  17. Method for measuring violin sound radiation based on bowed glissandi and its application to sound synthesis.

    Science.gov (United States)

    Perez Carrillo, Alfonso; Bonada, Jordi; Patynen, Jukka; Valimaki, Vesa

    2011-08-01

    This work presents a method for measuring and computing violin-body directional frequency responses, which are used for violin sound synthesis. The approach is based on a frame-weighted deconvolution of excitation and response signals. The excitation, consisting of bowed glissandi, is measured with piezoelectric transducers built into the bridge. Radiation responses are recorded in an anechoic chamber with multiple microphones placed at different angles around the violin. The proposed deconvolution algorithm computes impulse responses that, when convolved with any source signal captured with the same transducer, produce a highly realistic violin sound very similar to that of a microphone recording. Motion sensors allow the violin's movements to be tracked; combining this information with the directional responses in a dynamic convolution algorithm improves the listening experience by incorporating the effect of the violinist's motion in stereo.
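The deconvolution step at the heart of the method can be illustrated, in simplified form, by regularized spectral division. This sketch omits the paper's frame weighting, and `eps` is an assumed regularization constant:

```python
import numpy as np

def estimate_impulse_response(excitation, response, eps=1e-3):
    """Estimate an impulse response by regularized spectral division:
    H(f) = R(f) X*(f) / (|X(f)|^2 + eps)."""
    n = len(excitation) + len(response) - 1   # zero-pad to linear-convolution length
    X = np.fft.rfft(excitation, n)
    R = np.fft.rfft(response, n)
    H = R * np.conj(X) / (np.abs(X) ** 2 + eps)
    return np.fft.irfft(H, n)
```

With a broadband excitation, the estimate reproduces a known filter; in practice `eps` trades noise amplification against bias at weakly excited frequencies.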

  18. Radial Basis Function Networks for Conversion of Sound Spectra

    Directory of Open Access Journals (Sweden)

    Carlo Drioli

    2001-03-01

    Full Text Available In many advanced signal processing tasks, such as pitch shifting, voice conversion, or sound synthesis, accurate spectral processing is required. Here, the use of Radial Basis Function Networks (RBFNs) is proposed for modeling the spectral changes (or conversions) related to the control of important sound parameters, such as pitch or intensity. The identification of such conversion functions is based on a procedure that learns the shape of the conversion from a few pairs of target spectra from a data set. The generalization properties of RBFNs provide interpolation with respect to the pitch range. In the construction of the training set, mel-cepstral encoding of the spectrum is used to capture the perceptually most relevant spectral changes. Moreover, a singular value decomposition (SVD) approach is used to reduce the dimension of the conversion functions. The RBFN conversion functions introduced are characterized by a perceptually based fast training procedure, desirable interpolation properties, and computational efficiency.

  19. A note on measurement of sound pressure with intensity probes

    DEFF Research Database (Denmark)

    Juhl, Peter; Jacobsen, Finn

    2004-01-01

    The effect of scattering and diffraction on measurement of sound pressure with "two-microphone" sound intensity probes is examined using an axisymmetric boundary element model of the probe. Whereas it was shown a few years ago that the sound intensity estimated with a two-microphone probe is reliable up to 10 kHz when using 0.5 in. microphones in the usual face-to-face arrangement separated by a 12 mm spacer, the sound pressure measured with the same instrument will typically be underestimated at high frequencies. It is shown in this paper that the estimate of the sound pressure can be improved under a variety of realistic sound field conditions by applying a different weighting of the two pressure signals from the probe. The improved intensity probe can measure the sound pressure more accurately at high frequencies than an ordinary sound intensity probe or an ordinary sound level meter.

  20. Potyviruses differ in their requirement for TOR signalling.

    Science.gov (United States)

    Ouibrahim, Laurence; Rubio, Ana Giner; Moretti, André; Montané, Marie-Hélène; Menand, Benoît; Meyer, Christian; Robaglia, Christophe; Caranta, Carole

    2015-09-01

    Potyviruses are important plant pathogens that rely on many plant cellular processes for successful infection. TOR (target of rapamycin) signalling is a key eukaryotic energy-signalling pathway controlling many cellular processes such as translation and autophagy. The dependence of potyviruses on active TOR signalling was examined. Arabidopsis lines downregulated for TOR by RNAi were challenged with the potyviruses watermelon mosaic virus (WMV) and turnip mosaic virus (TuMV). WMV accumulation was found to be severely altered while TuMV accumulation was only slightly delayed. In another approach, using AZD-8055, an active site inhibitor of the TOR kinase, WMV infection was found to be strongly affected. Moreover, AZD-8055 application can cure WMV infection. In contrast, TuMV infection was not affected by AZD-8055. This suggests that potyviruses have different cellular requirements for active plant TOR signalling.

  1. Subjective Evaluation of Audiovisual Signals

    Directory of Open Access Journals (Sweden)

    F. Fikejz

    2010-01-01

    Full Text Available This paper deals with the subjective evaluation of audiovisual signals, with emphasis on the interaction between acoustic and visual quality. The subjective test is realized by a simple rating method. The audiovisual signals used in this test combine images compressed with the JPEG codec and sound samples compressed with MPEG-1 Layer III. Images and sounds have various contents. This simulates a real situation in which the subject listens to compressed music and watches compressed pictures without access to the original, i.e., uncompressed, signals.

  2. Requirement for nuclear calcium signaling in Drosophila long-term memory.

    Science.gov (United States)

    Weislogel, Jan-Marek; Bengtson, C Peter; Müller, Michaela K; Hörtzsch, Jan N; Bujard, Martina; Schuster, Christoph M; Bading, Hilmar

    2013-05-07

    Calcium is used throughout evolution as an intracellular signal transducer. In the mammalian central nervous system, calcium mediates the dialogue between the synapse and the nucleus that is required for transcription-dependent persistent neuronal adaptations. A role for nuclear calcium signaling in similar processes in the invertebrate brain has yet to be investigated. Here, we show by in vivo calcium imaging of adult brain neurons of the fruit fly Drosophila melanogaster, that electrical foot shocks used in olfactory avoidance conditioning evoked transient increases in cytosolic and nuclear calcium concentrations in neurons. These calcium signals were detected in Kenyon cells of the flies' mushroom bodies, which are sites of learning and memory related to smell. Acute blockade of nuclear calcium signaling during conditioning selectively and reversibly abolished the formation of long-term olfactory avoidance memory, whereas short-term, middle-term, or anesthesia-resistant olfactory memory remained unaffected. Thus, nuclear calcium signaling is required in flies for the progression of memories from labile to transcription-dependent long-lasting forms. These results identify nuclear calcium as an evolutionarily conserved signal needed in both invertebrate and vertebrate brains for transcription-dependent memory consolidation.

  3. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    scientists with that of numerical mathematicians studying sonification, psychologists, linguists, bioacousticians, and musicians to illuminate the structure of sound from different angles. Each of these disciplines deals with the use of sound to carry a different sort of information, under different requirements and constraints. By combining their insights, we can learn to understand the structure of sound in general.

  4. Review of sound card photogates

    International Nuclear Information System (INIS)

    Gingl, Zoltan; Mingesz, Robert; Mellar, Janos; Makra, Peter

    2011-01-01

    Photogates are probably the most commonly used electronic instruments for experiments in the field of mechanics. Although they are offered by many manufacturers, they can be too expensive to use widely in every classroom, in multiple experiments, or in home experimentation. Today all computers have a sound card - an interface for analogue signals. It is possible to make very simple yet highly accurate photogates for a few cents, while much more sophisticated solutions are also available at still very low cost. In this paper we show several experimentally tested ways of implementing sound card photogates in detail, and we also provide full-featured, free, open-source photogate software that is a much more efficient experimentation tool than the sound recording programs usually used. Further information is provided on a dedicated web page, www.noise.physx.u-szeged.hu/edudev.
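A sound card photogate ultimately reduces to timing threshold crossings in the sampled input. A minimal sketch of that processing step, assuming the gate signal has already been captured as an array of samples (this is not the article's software):

```python
import numpy as np

def gate_crossings(samples, fs, threshold=0.5):
    """Times (s) at which the photogate signal crosses the threshold
    upward, i.e. when the light beam state changes."""
    s = np.asarray(samples, dtype=float)
    above = s > threshold
    idx = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return idx / fs

def interval(samples, fs, threshold=0.5):
    """Time between the first two upward crossings, e.g. a pendulum
    interrupting the beam on two successive passes."""
    t = gate_crossings(samples, fs, threshold)
    return t[1] - t[0]
```

At a 44.1 kHz sampling rate the timing resolution is about 23 microseconds, which is what makes sound card photogates competitive with commercial units.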

  5. Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.

    Science.gov (United States)

    Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael

    2014-04-01

    The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.
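For reference, the spatial coherence of an ideal diffuse sound field between two points a distance d apart follows sin(kd)/(kd); this is the baseline against which a synthesized sending-room field can be compared. A short numpy sketch:

```python
import numpy as np

def diffuse_coherence(freq, distance, c=343.0):
    """Spatial coherence of an ideal diffuse sound field between two
    points separated by `distance`: sin(kd)/(kd), k = 2*pi*f/c."""
    kd = 2 * np.pi * np.asarray(freq, dtype=float) * distance / c
    return np.sinc(kd / np.pi)   # np.sinc(x) = sin(pi*x)/(pi*x)
```

Coherence is perfect at zero frequency and first vanishes where the separation equals half a wavelength, which is why widely spaced reference sensors capture largely independent disturbance information.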

  6. Physiological phenotyping of dementias using emotional sounds.

    Science.gov (United States)

    Fletcher, Phillip D; Nicholas, Jennifer M; Shakespeare, Timothy J; Downey, Laura E; Golden, Hannah L; Agustus, Jennifer L; Clark, Camilla N; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-06-01

    Emotional behavioral disturbances are hallmarks of many dementias but their pathophysiology is poorly understood. Here we addressed this issue using the paradigm of emotionally salient sounds. Pupil responses and affective valence ratings for nonverbal sounds of varying emotional salience were assessed in patients with behavioral variant frontotemporal dementia (bvFTD) (n = 14), semantic dementia (SD) (n = 10), progressive nonfluent aphasia (PNFA) (n = 12), and AD (n = 10) versus healthy age-matched individuals (n = 26). Referenced to healthy individuals, overall autonomic reactivity to sound was normal in Alzheimer's disease (AD) but reduced in other syndromes. Patients with bvFTD, SD, and AD showed altered coupling between pupillary and affective behavioral responses to emotionally salient sounds. Emotional sounds are a useful model system for analyzing how dementias affect the processing of salient environmental signals, with implications for defining pathophysiological mechanisms and novel biomarker development.

  7. Measuring the speed of sound in air using smartphone applications

    Science.gov (United States)

    Yavuz, A.

    2015-05-01

    This study presents a revised version of an old experiment available in many textbooks for measuring the speed of sound in air. A signal-generator application in a smartphone is used to produce the desired sound frequency. Nodes of sound waves in a glass pipe, of which one end is immersed in water, are more easily detected, so results can be obtained more quickly than from traditional acoustic experiments using tuning forks.
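The underlying calculation: for a pipe closed at one end, consecutive resonance lengths differ by half a wavelength, so the speed of sound follows from v = 2f(L2 - L1). A minimal helper (the example numbers are illustrative, not from the article):

```python
def speed_of_sound(freq_hz, l1_m, l2_m):
    """Speed of sound from two consecutive resonance lengths of a tube
    closed at one end: successive resonances are half a wavelength
    apart, so v = f * lambda = 2 * f * (L2 - L1)."""
    return 2.0 * freq_hz * (l2_m - l1_m)

# e.g. a 512 Hz tone resonating at L1 = 16.5 cm and L2 = 50.0 cm
v = speed_of_sound(512.0, 0.165, 0.500)
```

Using length differences cancels the end correction of the tube, so no correction term is needed.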

  8. Quantifying sound quality in loudspeaker reproduction

    NARCIS (Netherlands)

    Beerends, John G.; van Nieuwenhuizen, Kevin; van den Broek, E.L.

    2016-01-01

    We present PREQUEL: Perceptual Reproduction Quality Evaluation for Loudspeakers. Instead of quantifying the loudspeaker system itself, PREQUEL quantifies the overall loudspeakers' perceived sound quality by assessing their acoustic output using a set of music signals. This approach introduces a

  9. Requirement of Dopamine Signaling in the Amygdala and Striatum for Learning and Maintenance of a Conditioned Avoidance Response

    Science.gov (United States)

    Darvas, Martin; Fadok, Jonathan P.; Palmiter, Richard D.

    2011-01-01

    Two-way active avoidance (2WAA) involves learning Pavlovian (association of a sound cue with a foot shock) and instrumental (shock avoidance) contingencies. To identify regions where dopamine (DA) is involved in mediating 2WAA, we restored DA signaling in specific brain areas of dopamine-deficient (DD) mice by local reactivation of conditionally…

  10. Wavelet-based ground vehicle recognition using acoustic signals

    Science.gov (United States)

    Choe, Howard C.; Karlsen, Robert E.; Gerhart, Grant R.; Meitzler, Thomas J.

    1996-03-01

    We present, in this paper, a wavelet-based acoustic signal analysis to remotely recognize military vehicles using their sound intercepted by acoustic sensors. Since expedited signal recognition is imperative in many military and industrial situations, we developed an algorithm that provides an automated, fast signal recognition once implemented in a real-time hardware system. This algorithm consists of wavelet preprocessing, feature extraction and compact signal representation, and a simple but effective statistical pattern matching. The current status of the algorithm does not require any training. The training is replaced by human selection of reference signals (e.g., squeak or engine exhaust sound) distinctive to each individual vehicle based on human perception. This allows a fast archiving of any new vehicle type in the database once the signal is collected. The wavelet preprocessing provides time-frequency multiresolution analysis using discrete wavelet transform (DWT). Within each resolution level, feature vectors are generated from statistical parameters and energy content of the wavelet coefficients. After applying our algorithm on the intercepted acoustic signals, the resultant feature vectors are compared with the reference vehicle feature vectors in the database using statistical pattern matching to determine the type of vehicle from where the signal originated. Certainly, statistical pattern matching can be replaced by an artificial neural network (ANN); however, the ANN would require training data sets and time to train the net. Unfortunately, this is not always possible for many real world situations, especially collecting data sets from unfriendly ground vehicles to train the ANN. Our methodology using wavelet preprocessing and statistical pattern matching provides robust acoustic signal recognition. We also present an example of vehicle recognition using acoustic signals collected from two different military ground vehicles. 
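The per-level energy features and statistical matching described above can be sketched as follows (a simplified stand-in using Haar detail energies and nearest-reference matching; not the authors' exact feature set):

```python
import numpy as np

def haar_level_energies(x, levels=4):
    """Relative energy of Haar detail coefficients at each resolution
    level - a compact feature vector for an acoustic signature."""
    x = np.asarray(x, dtype=float)
    feats = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail
        feats.append(np.sum(d ** 2))
        x = a
    feats = np.array(feats)
    return feats / feats.sum()

def classify(signal, references):
    """Nearest reference by Euclidean distance between feature vectors."""
    f = haar_level_energies(signal)
    names = list(references)
    dists = [np.linalg.norm(f - haar_level_energies(references[n])) for n in names]
    return names[int(np.argmin(dists))]
```

Normalizing the energies makes the features insensitive to overall loudness, so a quieter recording of the same source still matches its reference.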

  11. Artificial neural networks for breathing and snoring episode detection in sleep sounds

    International Nuclear Information System (INIS)

    Emoto, Takahiro; Akutagawa, Masatake; Kinouchi, Yohsuke; Abeyratne, Udantha R; Chen, Yongjian; Kawata, Ikuji

    2012-01-01

    Obstructive sleep apnea (OSA) is a serious disorder characterized by intermittent events of upper airway collapse during sleep. Snoring is the most common nocturnal symptom of OSA. Almost all OSA patients snore, but not all snorers have the disease. Recently, researchers have attempted to develop automated snore analysis technology for the purpose of OSA diagnosis. These technologies commonly require, as the first step, the automated identification of snore/breathing episodes (SBE) in sleep sound recordings. Snore intensity may occupy a wide dynamic range (>95 dB) spanning from the barely audible to loud sounds. Low-intensity SBE sounds are sometimes seen buried within the background noise floor, even in high-fidelity sound recordings made within a sleep laboratory. The complexity of SBE sounds makes it a challenging task to develop automated snore segmentation algorithms, especially in the presence of background noise. In this paper, we propose a fundamentally novel approach based on artificial neural network (ANN) technology to detect SBEs. Working on clinical data, we show that the proposed method can detect SBE at a sensitivity and specificity exceeding 0.892 and 0.874 respectively, even when the signal is completely buried in background noise (SNR <0 dB). We compare the performance of the proposed technology with those of the existing methods (short-term energy, zero-crossing rates) and illustrate that the proposed method vastly outperforms conventional techniques. (paper)

  12. Statistical inference of seabed sound-speed structure in the Gulf of Oman Basin.

    Science.gov (United States)

    Sagers, Jason D; Knobles, David P

    2014-06-01

    Addressed is the statistical inference of the sound-speed depth profile of a thick soft seabed from broadband sound propagation data recorded in the Gulf of Oman Basin in 1977. The acoustic data are in the form of time series signals recorded on a sparse vertical line array and generated by explosive sources deployed along a 280 km track. The acoustic data offer a unique opportunity to study a deep-water bottom-limited thickly sedimented environment because of the large number of time series measurements, very low seabed attenuation, and auxiliary measurements. A maximum entropy method is employed to obtain a conditional posterior probability distribution (PPD) for the sound-speed ratio and the near-surface sound-speed gradient. The multiple data samples allow for a determination of the average error constraint value required to uniquely specify the PPD for each data sample. Two complicating features of the statistical inference study are addressed: (1) the need to develop an error function that can both utilize the measured multipath arrival structure and mitigate the effects of data errors and (2) the effect of small bathymetric slopes on the structure of the bottom interacting arrivals.

  13. Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation

    Directory of Open Access Journals (Sweden)

    Enrique A Lopez-Poveda

    2013-07-01

    Full Text Available Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects.
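The vocoder experiment above can be caricatured in a few lines: each "afferent" keeps samples with probability proportional to instantaneous amplitude, and aggregating more afferents yields a more faithful waveform. The sampling rule and rate below are assumptions, not the paper's exact procedure:

```python
import numpy as np

def stochastic_sampler(x, rate, rng):
    """One 'afferent': keep each sample with probability proportional
    to its instantaneous amplitude (scaled by `rate`), zero elsewhere."""
    p = np.clip(rate * np.abs(x) / (np.abs(x).max() + 1e-12), 0.0, 1.0)
    keep = rng.random(len(x)) < p
    return np.where(keep, x, 0.0)

def aggregate(x, n_fibers, rate=0.3, seed=0):
    """Average the outputs of n_fibers independent samplers; more
    fibers give a more faithful representation of the waveform."""
    rng = np.random.default_rng(seed)
    return np.mean([stochastic_sampler(x, rate, rng) for _ in range(n_fibers)], axis=0)
```

Comparing the correlation between the aggregate and the original waveform for one versus many fibers mimics the deafferentation effect: fewer fibers, noisier representation.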

  14. Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation

    Science.gov (United States)

    Lopez-Poveda, Enrique A.; Barrios, Pablo

    2013-01-01

    Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176

  15. Interactive physically-based sound simulation

    Science.gov (United States)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further with algorithmic improvements that exploit the physical as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a moving listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation.

  16. Binaural loudness for artificial-head measurements in directional sound fields

    DEFF Research Database (Denmark)

    Sivonen, Ville Pekka; Ellermeier, Wolfgang

    2008-01-01

    The effect of the sound incidence angle on loudness was investigated for fifteen listeners who matched the loudness of sounds coming from five different incidence angles in the horizontal plane to that of the same sound with frontal incidence. The stimuli were presented via binaural synthesis using head-related transfer functions measured for an artificial head. The results, which exhibited marked individual differences, show that loudness depends on the direction from which a sound reaches the listener. The average results suggest a relatively simple rule for combining the two signals at the ears of an artificial head for binaural loudness predictions.

  17. Understanding Animal Detection of Precursor Earthquake Sounds.

    Science.gov (United States)

    Garstang, Michael; Kelley, Michael C

    2017-08-31

    We use recent research to provide an explanation of how animals might detect earthquakes before they occur. While the intrinsic value of such warnings is immense, we show that the complexity of the process may result in inconsistent responses of animals to the possible precursor signal. Using the results of our research, we describe a logical but complex sequence of geophysical events triggered by precursor earthquake crustal movements that ultimately result in a sound signal detectable by animals. The sound heard by animals occurs only when metal or other surfaces (glass) respond to vibrations produced by electric currents induced by distortions of the earth's electric fields caused by the crustal movements. A combination of existing measurement systems combined with more careful monitoring of animal response could nevertheless be of value, particularly in remote locations.

  18. Masking release by combined spatial and masker-fluctuation effects in the open sound field.

    Science.gov (United States)

    Middlebrooks, John C

    2017-12-01

    In a complex auditory scene, signals of interest can be distinguished from masking sounds by differences in source location [spatial release from masking (SRM)] and by differences between masker-alone and masker-plus-signal envelopes. This study investigated interactions between those factors in release of masking of 700-Hz tones in an open sound field. Signal and masker sources were colocated in front of the listener, or the signal source was shifted 90° to the side. In Experiment 1, the masker contained a 25-Hz-wide on-signal band plus flanking bands having envelopes that were either mutually uncorrelated or were comodulated. Comodulation masking release (CMR) was largely independent of signal location at a higher masker sound level, but at a lower level CMR was reduced for the lateral signal location. In Experiment 2, a brief signal was positioned at the envelope maximum (peak) or minimum (dip) of a 50-Hz-wide on-signal masker. Masking was released in dip more than in peak conditions only for the 90° signal. Overall, open-field SRM was greater in magnitude than binaural masking release reported in comparable closed-field studies, and envelope-related release was somewhat weaker. Mutual enhancement of masking release by spatial and envelope-related effects tended to increase with increasing masker level.

  19. Noise detection during heart sound recording using periodicity signatures

    International Nuclear Information System (INIS)

    Kumar, D; Carvalho, P; Paiva, R P; Henriques, J; Antunes, M

    2011-01-01

    Heart sound is a valuable biosignal for diagnosis of a large set of cardiac diseases. Ambient and physiological noise interference is one of the most usual and highly probable incidents during heart sound acquisition. It tends to change the morphological characteristics of heart sound that may carry important information for heart disease diagnosis. In this paper, we propose a new method applicable in real time to detect ambient and internal body noises manifested in heart sound during acquisition. The algorithm is developed on the basis of the periodic nature of heart sounds and physiologically inspired criteria. A small segment of uncontaminated heart sound exhibiting periodicity in time as well as in the time-frequency domain is first detected and applied as a reference signal in discriminating noise from the sound. The proposed technique has been tested with a database of heart sounds collected from 71 subjects with several types of heart disease inducing several noises during recording. The achieved average sensitivity and specificity are 95.88% and 97.56%, respectively
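
    The method above keys on the periodicity of heart sound to tell it apart from aperiodic contamination. A minimal numpy sketch of that idea scores a segment by the height of its strongest autocorrelation peak inside a physiologically plausible beat-rate range; the function name, rate limits, and synthetic pulse train are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def periodicity_score(x, fs, min_bpm=40, max_bpm=180):
    """Score in [0, 1]: height of the strongest normalized autocorrelation
    peak at lags corresponding to plausible heart-beat periods."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]                        # lag-0 correlation becomes 1
    lo = int(fs * 60 / max_bpm)        # shortest plausible beat period
    hi = int(fs * 60 / min_bpm)        # longest plausible beat period
    return float(np.max(ac[lo:hi]))

fs = 500
t = np.arange(0, 5, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t) ** 20     # sharp pulses, ~144 beats/min
noisy = clean + 2.0 * np.random.default_rng(0).standard_normal(len(t))
```

A contaminated segment scores low because broadband noise decorrelates after lag zero, while clean heart cycles produce a strong peak at the beat period.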

  20. Zero sound and quasiwave: separation in the magnetic field

    International Nuclear Information System (INIS)

    Bezuglyj, E.V.; Bojchuk, A.V.; Burma, N.G.; Fil', V.D.

    1995-01-01

    Theoretical and experimental results on the behavior of the longitudinal and transverse electron sound in a weak magnetic field are presented. It is shown theoretically that the effects of the magnetic field on zero sound velocity and ballistic transfer are opposite in sign and have sufficiently different dependences on the sample width, excitation frequency and relaxation time. This permits us to separate experimentally the Fermi-liquid and ballistic contributions in the electron sound signals. For the first time the ballistic transfer of the acoustic excitation by the quasiwave has been observed in zero magnetic field

  1. Sustained Magnetic Responses in Temporal Cortex Reflect Instantaneous Significance of Approaching and Receding Sounds.

    Directory of Open Access Journals (Sweden)

    Dominik R Bach

    Rising sound intensity often signals an approaching sound source and can serve as a powerful warning cue, eliciting phasic attention, perception biases and emotional responses. How the evaluation of approaching sounds unfolds over time remains elusive. Here, we capitalised on the temporal resolution of magnetoencephalography (MEG) to investigate the dynamic encoding of approaching and receding sounds in humans. We compared magnetic responses to intensity envelopes of complex sounds to those of white noise sounds, in which intensity change is not perceived as approaching. Sustained magnetic fields over temporal sensors tracked intensity change in complex sounds in an approximately linear fashion, an effect not seen for intensity change in white noise sounds, or for overall intensity. Hence, these fields are likely to track approach/recession, but not the apparent (instantaneous) distance of the sound source, or its intensity as such. As a likely source of this activity, the bilateral inferior temporal gyrus and right temporo-parietal junction emerged. Our results indicate that discrete temporal cortical areas parametrically encode behavioural significance in moving sound sources, with the signal unfolding in a manner reminiscent of evidence accumulation. This may further an understanding of how acoustic percepts are evaluated as behaviourally relevant, and our results highlight a crucial role of these cortical areas.

  2. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Throughout several audio and audio-video tests we have compared both Foley and real sounds originated by an identical action. The main purpose was to evaluate if sound effects...

  3. Beacons of Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2018-01-01

    The chapter discusses expectations and imaginations vis-à-vis the concert hall of the twenty-first century. It outlines some of the central historical implications of western culture’s haven for sounding music. Based on the author’s study of the Icelandic concert-house Harpa, the chapter considers how these implications, together with the prime mover’s visions, have been transformed as private investors and politicians took over. The chapter furthermore investigates the objectives regarding musical sound and the far-reaching demands concerning acoustics that modern concert halls are required...

  4. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  5. Surface return direction-of-arrival analysis for radar ice sounding surface clutter suppression

    DEFF Research Database (Denmark)

    Nielsen, Ulrik; Dall, Jørgen

    2015-01-01

    Airborne radar ice sounding is challenged by surface clutter masking the depth signal of interest. Surface clutter may even be prohibitive for potential space-based ice sounding radars. To some extent the radar antenna suppresses the surface clutter, and a multi-phase-center antenna in combination with coherent signal processing techniques can improve the suppression, in particular if the direction of arrival (DOA) of the clutter signal is estimated accurately. This paper deals with data-driven DOA estimation. By using P-band data from the ice shelf in Antarctica it is demonstrated that a varying...

  6. High frequency components of tracheal sound are emphasized during prolonged flow limitation

    International Nuclear Information System (INIS)

    Tenhunen, M; Huupponen, E; Saastamoinen, A; Kulkas, A; Himanen, S-L; Rauhala, E

    2009-01-01

    A nasal pressure transducer, which is used to study nocturnal airflow, also provides information about the inspiratory flow waveform. A round flow shape is presented during normal breathing. A flattened, non-round shape is found during hypopneas and it can also appear in prolonged episodes. The significance of this prolonged flow limitation is still not established. A tracheal sound spectrum has been analyzed further in order to obtain additional information about breathing during sleep. Increased sound frequencies over 500 Hz have been connected to obstruction of the upper airway. The aim of the present study was to examine the tracheal sound signal content of prolonged flow limitation and to find out whether prolonged flow limitation would consist of abundant high-frequency activity. Sleep recordings of 36 consecutive patients were examined. The tracheal sound spectral analysis was performed on 10 min episodes of prolonged flow limitation, normal breathing and periodic apnea-hypopnea breathing. The highest total spectral amplitude, indicating the loudest sounds, occurred during flow-limited breathing, which also presented the loudest sounds in all frequency bands above 100 Hz. In addition, the tracheal sound signal during flow-limited breathing contained proportionally more high-frequency activity than normal breathing and even periodic apnea-hypopnea breathing.
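
    Comparing spectral amplitude across fixed frequency bands, as in the analysis above, can be sketched with a plain windowed FFT. The band edges, toy "breath" signal, and function name below are illustrative assumptions, not the study's processing chain:

```python
import numpy as np

def band_amplitudes(x, fs, bands):
    """Mean spectral amplitude of x within each (lo, hi) Hz band."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return {b: float(spec[(freqs >= b[0]) & (freqs < b[1])].mean())
            for b in bands}

fs = 4000
t = np.arange(0, 2, 1 / fs)
# toy "flow-limited" sound: broadband base plus an extra component above 500 Hz
rng = np.random.default_rng(1)
breath = 0.2 * rng.standard_normal(len(t)) + np.sin(2 * np.pi * 800 * t)
amps = band_amplitudes(breath, fs, [(100, 500), (500, 1200)])
```

With the added high-frequency component, the 500–1200 Hz band shows a larger mean amplitude than the 100–500 Hz band, mirroring the paper's contrast between flow-limited and normal breathing.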

  7. 33 CFR 83.36 - Signals to attract attention (Rule 36).

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Signals to attract attention... SECURITY INLAND NAVIGATION RULES RULES Sound and Light Signals § 83.36 Signals to attract attention (Rule 36). If necessary to attract the attention of another vessel, any vessel may make light or sound...

  8. Exploring science with sound: sonification and the use of sonograms as data analysis tool

    CERN Multimedia

    CERN. Geneva; Williams, Genevieve

    2017-01-01

    Resonances, periodicity, patterns and spectra are well-known notions that play crucial roles in particle physics, and that have always been at the junction between sound/music analysis and scientific exploration. Detecting the shape of a particular energy spectrum, studying the stability of a particle beam in a synchrotron, and separating signals from a noisy background are just a few examples where the connection with sound can be very strong, all sharing the same concepts of oscillations, cycles and frequency. This seminar will focus on analysing data and their relations by translating measurements into audible signals and using the natural capability of the ear to distinguish, characterise and analyse waveform shapes, amplitudes and relations. This process is called data sonification, and one of the main tools to investigate the structure of the sound is the sonogram (sometimes also called a spectrogram). A sonogram is a visual representation of how the spectrum of a certain sound signal changes with time...

  9. Adaptive RD Optimized Hybrid Sound Coding

    NARCIS (Netherlands)

    Schijndel, N.H. van; Bensa, J.; Christensen, M.G.; Colomes, C.; Edler, B.; Heusdens, R.; Jensen, J.; Jensen, S.H.; Kleijn, W.B.; Kot, V.; Kövesi, B.; Lindblom, J.; Massaloux, D.; Niamut, O.A.; Nordén, F.; Plasberg, J.H.; Vafin, R.; Virette, D.; Wübbolt, O.

    2008-01-01

    Traditionally, sound codecs have been developed with a particular application in mind, their performance being optimized for specific types of input signals, such as speech or audio (music), and application constraints, such as low bit rate, high quality, or low delay. There is, however, an

  10. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

    Few studies have dealt with students’ preconceptions of sounds. The current research employs Reiner et al. (2000) substance schema to reveal new insights about students’ difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students’ thinking, but also examines how students use the schema’s properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more global coherent consistencies among the properties that the students use to explain the sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students’ conceptions of sound do not seem to fit Reiner et al.’s schema in all respects. Our results also indicate that students’ conceptualization of sound lack internal consistency. Analyzing our results with respect to local and global coherence, we found students’ conception of sound is close to diSessa’s “loosely connected, fragmented collection of ideas.” The notion that sound is perceived only as a “sort of a material,” we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion concerning the implications of the results for instruction.

  11. Measurement and classification of heart and lung sounds by using LabView for educational use.

    Science.gov (United States)

    Altrabsheh, B

    2010-01-01

    This study presents the design, development and implementation of a simple low-cost method of phonocardiography signal detection. Human heart and lung signals are detected by using a simple microphone through a personal computer; the signals are recorded and analysed using LabView software. Amplitude and frequency analyses are carried out for various phonocardiography pathological cases. Methods for automatic classification of normal and abnormal heart sounds, murmurs and lung sounds are presented. Various cases of heart and lung sound measurement are recorded and analysed. The measurements can be saved for further analysis. The method in this study can be used by doctors as an aid to detection and may be useful for teaching purposes at medical and nursing schools.

  12. A mathematical model for source separation of MMG signals recorded with a coupled microphone-accelerometer sensor pair.

    Science.gov (United States)

    Silva, Jorge; Chau, Tom

    2005-09-01

    Recent advances in sensor technology for muscle activity monitoring have resulted in the development of a coupled microphone-accelerometer sensor pair for physiological acoustic signal recording. This sensor can be used to eliminate interfering sources in practical settings where the contamination of an acoustic signal by ambient noise confounds detection but cannot be easily removed [e.g., mechanomyography (MMG), swallowing sounds, respiration, and heart sounds]. This paper presents a mathematical model for the coupled microphone-accelerometer vibration sensor pair, specifically applied to muscle activity monitoring (i.e., MMG) and noise discrimination in externally powered prostheses for below-elbow amputees. While the model provides a simple and reliable source separation technique for MMG signals, it can also be easily adapted to other applications where the recording of low-frequency (< 1 kHz) physiological vibration signals is required.

  13. 33 CFR 117.309 - Nassau Sound.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Nassau Sound. 117.309 Section 117.309 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.309 Nassau Sound. The draw of the Fernandina Port...

  14. Blast noise classification with common sound level meter metrics.

    Science.gov (United States)

    Cvengros, Robert M; Valente, Dan; Nykaza, Edward T; Vipperman, Jeffrey S

    2012-08-01

    A common set of signal features measurable by a basic sound level meter are analyzed, and the quality of information carried in subsets of these features are examined for their ability to discriminate military blast and non-blast sounds. The analysis is based on over 120 000 human classified signals compiled from seven different datasets. The study implements linear and Gaussian radial basis function (RBF) support vector machines (SVM) to classify blast sounds. Using the orthogonal centroid dimension reduction technique, intuition is developed about the distribution of blast and non-blast feature vectors in high dimensional space. Recursive feature elimination (SVM-RFE) is then used to eliminate features containing redundant information and rank features according to their ability to separate blasts from non-blasts. Finally, the accuracy of the linear and RBF SVM classifiers is listed for each of the experiments in the dataset, and the weights are given for the linear SVM classifier.
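
    The study's linear SVM stage can be illustrated with a minimal numpy implementation of the Pegasos stochastic sub-gradient method on synthetic two-class feature vectors. The data, hyperparameters, and function names below are stand-ins, not the paper's 120 000-signal dataset or its SVM-RFE pipeline:

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, epochs=100, seed=0):
    """Linear SVM (hinge loss, L2 regularization) trained with the
    Pegasos stochastic sub-gradient method. X already carries a bias
    column; y is in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w, t = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:                   # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

# synthetic stand-ins for sound-level-meter feature vectors
rng = np.random.default_rng(1)
blast = rng.normal(3.0, 0.5, (100, 2))
other = rng.normal(0.0, 0.5, (100, 2))
X = np.hstack([np.vstack([blast, other]), np.ones((200, 1))])  # bias column
y = np.array([1] * 100 + [-1] * 100)
w = pegasos_train(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
```

On well-separated classes like these the learned hyperplane classifies essentially all training points correctly; the paper's harder problem is feature selection, for which it layers SVM-RFE on top of such a classifier.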

  15. Background noise exerts diverse effects on the cortical encoding of foreground sounds.

    Science.gov (United States)

    Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E

    2017-08-01

    In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal in noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood. We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may

  16. An open access database for the evaluation of heart sound algorithms.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.

  17. Digital servo control of random sound test excitation. [in reverberant acoustic chamber

    Science.gov (United States)

    Nakich, R. B. (Inventor)

    1974-01-01

    A digital servocontrol system for random noise excitation of a test object in a reverberant acoustic chamber employs a plurality of sensors spaced in the sound field to produce signals in separate channels which are decorrelated and averaged. The average signal is divided into a plurality of adjacent frequency bands cyclically sampled by a time division multiplex system, converted into digital form, and compared to a predetermined spectrum value stored in digital form. The results of the comparisons are used to control a time-shared up-down counter to develop gain control signals for the respective frequency bands in the spectrum of random sound energy picked up by the microphones.
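
    The time-shared up-down counter described above amounts to a per-band bang-bang level controller: each cycle, the measured band level is compared to the stored spectrum value and the band's gain counter steps up or down. A toy sketch, assuming a hypothetical static chamber response and 0.5 dB gain steps (all names and values illustrative):

```python
import numpy as np

def servo_step(gains, measured, reference, step_db=0.5):
    """One multiplexed comparator cycle: nudge each band's gain up or
    down by one step depending on the sign of the level error (dB)."""
    return gains + step_db * np.sign(reference - measured)

# toy plant: chamber level per band = drive gain + fixed band response
band_response = np.array([-3.0, 0.0, 2.0, -1.0])   # dB, unknown to the servo
reference = np.full(4, 90.0)                        # target spectrum, dB
gains = np.zeros(4)
for _ in range(300):
    measured = gains + band_response
    gains = servo_step(gains, measured, reference)
final_error = np.abs(reference - (gains + band_response))
```

Because the controller only ever applies the sign of the error, it needs no model of the chamber; it simply walks each band's gain until the measured spectrum sits within one step of the target.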

  18. Heart Sound Biometric System Based on Marginal Spectrum Analysis

    Science.gov (United States)

    Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin

    2013-01-01

    This work presents a heart sound biometric system based on marginal spectrum analysis, which is a new feature extraction technique for identification purposes. This heart sound identification system is comprised of signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of the optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients result in a significant increase in the recognition rate of 94.40% compared with that of the traditional Fourier spectrum (84.32%) based on a database of 280 heart sounds from 40 participants. PMID:23429515
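
    The marginal spectrum comes from Hilbert-Huang analysis: instantaneous amplitude is accumulated as a function of instantaneous frequency, rather than assuming fixed sinusoidal components as the Fourier spectrum does. A single-component numpy sketch follows; a full implementation would first decompose the heart sound into intrinsic mode functions with EMD, and the bin count and test tone here are illustrative:

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (numpy-only Hilbert transform)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1
    h[1:(len(x) + 1) // 2] = 2          # double positive frequencies
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1              # keep Nyquist bin
    return np.fft.ifft(X * h)

def marginal_spectrum(x, fs, nbins=64):
    """Accumulate instantaneous amplitude over instantaneous frequency."""
    z = analytic(x)
    amp = np.abs(z)[:-1]
    inst_f = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
    edges = np.linspace(0, fs / 2, nbins + 1)
    spec, _ = np.histogram(inst_f, bins=edges, weights=amp)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, spec

fs = 1000
t = np.arange(0, 1, 1 / fs)
centers, spec = marginal_spectrum(np.sin(2 * np.pi * 50 * t), fs)
```

For a pure 50 Hz tone the accumulated amplitude concentrates in the bin containing 50 Hz; feature vectors for identification can then be drawn from such a spectrum.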

  19. On the Perception of Speech Sounds as Biologically Significant Signals

    Science.gov (United States)

    Pisoni, David B.

    2012-01-01

    This paper reviews some of the major evidence and arguments currently available to support the view that human speech perception may require the use of specialized neural mechanisms for perceptual analysis. Experiments using synthetically produced speech signals with adults are briefly summarized and extensions of these results to infants and other organisms are reviewed with an emphasis towards detailing those aspects of speech perception that may require some need for specialized species-specific processors. Finally, some comments on the role of early experience in perceptual development are provided as an attempt to identify promising areas of new research in speech perception. PMID:399200

  20. Sound signatures and production mechanisms of three species of pipefishes (Family: Syngnathidae)

    Directory of Open Access Journals (Sweden)

    Adam Chee Ooi Lim

    2015-12-01

    Background. Syngnathid fishes produce three kinds of sounds, named click, growl and purr. These sounds are generated by different mechanisms to give a consistent signal pattern or signature which is believed to play a role in intraspecific and interspecific communication. Commonly known sounds are produced when the fish feeds (click, purr) or is under duress (growl). While there are more acoustic studies on seahorses, pipefishes have not received much attention. Here we document the differences in feeding click signals between three species of pipefishes and relate them to cranial morphology and kinesis, or the sound-producing mechanism. Methods. The feeding clicks of two species of freshwater pipefishes, Doryichthys martensii and Doryichthys deokhathoides, and one species of estuarine pipefish, Syngnathoides biaculeatus, were recorded by a hydrophone in acoustically dampened tanks. The acoustic signals were analysed using a time-scale distribution (or scalogram) based on the wavelet transform. A detailed time-varying analysis of the spectral contents of the localized acoustic signal was obtained by jointly interpreting the oscillogram, scalogram and power spectrum. The heads of both Doryichthys species were prepared for microtomographical scans which were analysed using 3D imaging software. Additionally, the cranial bones of all three species were examined using a clearing and double-staining method for histological studies. Results. The sound characteristics of the feeding click of the pipefish are species-specific, appearing to be dependent on three bones: the supraoccipital, 1st postcranial plate and 2nd postcranial plate. The sounds are generated when the head of the Doryichthys pipefishes flexes backward during the feeding strike, as the supraoccipital slides backwards, striking and pushing the 1st postcranial plate against (and striking) the 2nd postcranial plate. In the Syngnathoides pipefish, in the absence of the 1st postcranial plate, the...

  1. Hear where we are: sound, ecology, and sense of place

    CERN Document Server

    Stocker, Michael

    2013-01-01

    Throughout history, hearing and sound perception have been typically framed in the context of how sound conveys information and how that information influences the listener. Hear Where We Are inverts this premise and examines how humans and other hearing animals use sound to establish acoustical relationships with their surroundings. This simple inversion reveals a panoply of possibilities by which we can re-evaluate how hearing animals use, produce, and perceive sound. Nuance in vocalizations become signals of enticement or boundary setting; silence becomes a field ripe in auditory possibilities; predator/prey relationships are infused with acoustic deception, and sounds that have been considered territorial cues become the fabric of cooperative acoustical communities. This inversion also expands the context of sound perception into a larger perspective that centers on biological adaptation within acoustic habitats. Here, the rapid synchronized flight patterns of flocking birds and the tight maneuvering of s...

  2. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine Signoret

    2011-07-01

    While it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  3. Estimating the seismotelluric current required for observable electromagnetic ground signals

    Directory of Open Access Journals (Sweden)

    J. Bortnik

    2010-08-01

    We use a relatively simple model of an underground current source co-located with the earthquake hypocenter to estimate the magnitude of the seismotelluric current required to produce observable ground signatures. The Alum Rock earthquake of 31 October 2007 is used as an archetype of a typical California earthquake, and the effects of varying the ground conductivity and length of the current element are examined. Results show that for an observed 30 nT pulse at 1 Hz, the expected seismotelluric current magnitudes fall in the range ~10–100 kA. By setting the detectability threshold to 1 pT, we show that even when large values of ground conductivity are assumed, magnetic signals are readily detectable within a range of 30 km from the epicenter. When typical values of ground conductivity are assumed, the minimum current required to produce an observable signal within a 30 km range was found to be ~1 kA, which is a surprisingly low value. Furthermore, we show that deep nulls in the signal power develop in the non-cardinal directions relative to the orientation of the source current, indicating that a magnetometer station located in those regions may not observe a signal even though it is well within the detectable range. This result underscores the importance of using a network of magnetometers when searching for preseismic electromagnetic signals.
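
    A zeroth-order version of this estimate is the free-space Biot-Savart field of a short horizontal current element, B = μ0·I·L/(4π·r²) at broadside, solved for I. This deliberately ignores the conductive attenuation and source geometry the paper models, which is precisely why the full treatment requires far larger currents than this formula suggests; the element length and distance below are illustrative assumptions:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def current_for_field(B, r, L):
    """Current (A) in a short element of length L (m) producing field
    B (T) at broadside distance r (m); free-space estimate only."""
    return 4 * math.pi * r**2 * B / (MU0 * L)

def field_from_current(I, r, L):
    """Inverse relation: broadside field (T) of a short current element."""
    return MU0 * I * L / (4 * math.pi * r**2)

# e.g. a 1 pT detectability threshold at 30 km for a 10 km element
I_req = current_for_field(1e-12, 30e3, 10e3)
```

The free-space answer is under an ampere at these parameters, several orders of magnitude below the paper's ~1 kA figure, which quantifies how strongly ground conduction attenuates the signal on its way to the surface.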

  4. The benefit of limb cloud imaging for infrared limb sounding of tropospheric trace gases

    OpenAIRE

    G. Heinemann; P. Preusse; R. Spang; S. Adams

    2009-01-01

    Advances in detector technology enable a new generation of infrared limb sounders to measure 2-D images of the atmosphere. A proposed limb cloud imager (LCI) mode will detect clouds with a spatial resolution unprecedented for limb sounding. For the inference of temperature and trace gas distributions, detector pixels of the LCI have to be combined into super-pixels which provide the required signal-to-noise and information content for the retrievals. This study examines the extent to which tr...

  5. Using K-Nearest Neighbor Classification to Diagnose Abnormal Lung Sounds

    Directory of Open Access Journals (Sweden)

    Chin-Hsing Chen

    2015-06-01

    A reported 30% of people worldwide have abnormal lung sounds, including crackles, rhonchi, and wheezes. To date, the traditional stethoscope remains the most popular tool used by physicians to diagnose such abnormal lung sounds; however, many problems arise with the use of a stethoscope, including the effects of environmental noise, the inability to record and store lung sounds for follow-up or tracking, and the physician’s subjective diagnostic experience. This study has developed a digital stethoscope to help physicians overcome these problems when diagnosing abnormal lung sounds. In this digital system, mel-frequency cepstral coefficients (MFCCs) were used to extract the features of lung sounds, and then the K-means algorithm was used for feature clustering, to reduce the amount of data for computation. Finally, the K-nearest neighbor method was used to classify the lung sounds. The proposed system can also be used for home care: if the percentage of abnormal lung sound frames is > 30% of the whole test signal, the system can automatically warn the user to visit a physician for diagnosis. We also used bend sensors together with an amplification circuit, Bluetooth, and a microcontroller to implement a respiration detector. The respiratory signal extracted by the bend sensors can be transmitted to the computer via Bluetooth to calculate the respiratory cycle, for real-time assessment. If an abnormal status is detected, the device will warn the user automatically. Experimental results indicated that the error in respiratory cycles between measured and actual values was only 6.8%, illustrating the potential of our detector for home care applications.
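
    The classification stage above (per-frame features, majority vote among nearest neighbours, then a 30% abnormal-frame warning rule) can be sketched in a few lines of numpy. The 4-dimensional "MFCC" vectors here are synthetic stand-ins, not real lung-sound features:

```python
import numpy as np

def knn_predict(train_X, train_y, queries, k=5):
    """Classify each query vector by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    preds = []
    for q in queries:
        idx = np.argsort(np.linalg.norm(train_X - q, axis=1))[:k]
        preds.append(np.bincount(train_y[idx]).argmax())
    return np.array(preds)

# toy per-frame feature vectors standing in for MFCCs
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.3, (60, 4))
wheeze = rng.normal(1.5, 0.3, (60, 4))
X = np.vstack([normal, wheeze])
y = np.array([0] * 60 + [1] * 60)        # 0 = normal frame, 1 = abnormal

test_frames = np.vstack([rng.normal(0.0, 0.3, (10, 4)),
                         rng.normal(1.5, 0.3, (10, 4))])
pred = knn_predict(X, y, test_frames)
abnormal_ratio = pred.mean()
warn = abnormal_ratio > 0.3              # the paper's home-care rule
```

With half the test frames abnormal, the ratio exceeds the 30% threshold and the sketch raises the same kind of warning the digital stethoscope would.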

  6. Sound stream segregation: a neuromorphic approach to solve the "cocktail party problem" in real-time.

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André

    2015-01-01

The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and
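The temporal-coherence principle the abstract describes, grouping channels whose envelopes are strongly positively correlated with an attended channel, can be illustrated in a few lines. The envelopes, the noise level, and the 0.5 correlation threshold below are arbitrary choices for demonstration, not the FPGA implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)

# two sources with distinct envelope modulation rates; each drives two channels
env_a = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # source A: 4 Hz modulation
env_b = 0.5 * (1 + np.sin(2 * np.pi * 7 * t))   # source B: 7 Hz modulation
channels = np.vstack([env_a, env_a, env_b, env_b]) + 0.05 * rng.normal(size=(4, t.size))

corr = np.corrcoef(channels)   # pairwise envelope correlations between channels
attended = 0                   # the attention signal selects channel 0
mask = corr[attended] > 0.5    # channels coherent with the attended stream
print(mask)                    # channels 0 and 1 join the target stream; 2 and 3 do not
```

Channels driven by the same source are strongly positively correlated and end up in the same stream; the resulting boolean mask is the analogue of the mask-generation stage used to reconstruct the targeted sound.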

  7. Sound insulation between dwellings - Descriptors applied in building regulations in Europe

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Rindel, Jens Holger

    2010-01-01

    Regulatory sound insulation requirements for dwellings have existed since the 1950s in some countries and descriptors for evaluation of sound insulation have existed for nearly as long. However, the descriptors have changed considerably over time, from simple arithmetic averaging of frequency bands...... was carried out of legal sound insulation requirements in 24 countries in Europe. The comparison of requirements for sound insulation between dwellings revealed significant differences in descriptors as well as levels. This paper focuses on descriptors and summarizes the history of descriptors, the problems...... of the present situation and the benefits of consensus concerning descriptors for airborne and impact sound insulation between dwellings. The descriptors suitable for evaluation should be well-defined under practical situations in buildings and be measurable. Measurement results should be reproducible...

  8. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! Series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: one of the coolest activities is whacking a spinning metal rod...

  9. A Neural Network Model for Prediction of Sound Quality

    DEFF Research Database (Denmark)

Nielsen, Lars Bramsløw

    An artificial neural network structure has been specified, implemented and optimized for the purpose of predicting the perceived sound quality for normal-hearing and hearing-impaired subjects. The network was implemented by means of commercially available software and optimized to predict results...... obtained in subjective sound quality rating experiments based on input data from an auditory model. Various types of input data and data representations from the auditory model were used as input data for the chosen network structure, which was a three-layer perceptron. This network was trained by means...... the physical signal parameters and the subjectively perceived sound quality. No simple objective-subjective relationship was evident from this analysis....

  10. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS, such as filled pauses, will require future research.
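The HMM decoding step behind such frame-level LSS/NLSS labelling can be sketched with a minimal Viterbi decoder. The two-state setup, transition probabilities, and per-frame observation likelihoods below are invented for illustration and are not the authors' model:

```python
import numpy as np

def viterbi(obs_ll, log_trans, log_init):
    """Most likely state path given per-frame log-likelihoods (frames x states)."""
    n, k = obs_ll.shape
    delta = log_init + obs_ll[0]
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + log_trans      # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + obs_ll[t]
    path = [int(delta.argmax())]
    for t in range(n - 1, 0, -1):                # backtrack from the best final state
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# states: 0 = language speech (LSS), 1 = non-language sound (NLSS, e.g. a breath)
log_trans = np.log([[0.9, 0.1], [0.2, 0.8]])    # sticky states model sound continuity
log_init = np.log([0.5, 0.5])
# per-frame likelihoods from some acoustic feature model (toy values)
obs_ll = np.log(np.array([[0.8, 0.2], [0.7, 0.3], [0.2, 0.8], [0.1, 0.9], [0.6, 0.4]]))
print(viterbi(obs_ll, log_trans, log_init))     # → [0, 0, 1, 1, 1]
```

Note how the sticky transition matrix keeps the final frame labelled NLSS even though its observation likelihood slightly favours speech; this smoothing over frames is the practical reason to decode with an HMM rather than classify frames independently.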

  11. 12 CFR Appendix A to Part 1720 - Policy Guidance; Minimum Safety and Soundness Requirements

    Science.gov (United States)

    2010-01-01

    ..., DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT SAFETY AND SOUNDNESS SAFETY AND SOUNDNESS Pt. 1720, App. A... effectively and to model the effect of differing interest rate scenarios on the Enterprise's financial... are implemented effectively, and that the Enterprise's organization structure and assignment of...

  12. Physically based sound synthesis and control of jumping sounds on an elastic trampoline

    DEFF Research Database (Denmark)

    Turchet, Luca; Pugliese, Roberto; Takala, Tapio

    2013-01-01

This paper describes a system to interactively sonify the foot-floor contacts resulting from jumping on an elastic trampoline. The sonification was achieved by means of a synthesis engine based on physical models reproducing the sounds of jumping on several surface materials. The engine was controlled in real-time by processing the signal captured by a contact microphone which was attached to the membrane of the trampoline in order to detect each jump. A user study was conducted to evaluate the quality of the interactive sonification. Results proved the success of the proposed algorithms...

  13. Artificial intelligence techniques used in respiratory sound analysis--a systematic review.

    Science.gov (United States)

    Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian

    2014-02-01

    Artificial intelligence (AI) has recently been established as an alternative method to many conventional methods. The implementation of AI techniques for respiratory sound analysis can assist medical professionals in the diagnosis of lung pathologies. This article highlights the importance of AI techniques in the implementation of computer-based respiratory sound analysis. Articles on computer-based respiratory sound analysis using AI techniques were identified by searches conducted on various electronic resources, such as the IEEE, Springer, Elsevier, PubMed, and ACM digital library databases. Brief descriptions of the types of respiratory sounds and their respective characteristics are provided. We then analyzed each of the previous studies to determine the specific respiratory sounds/pathology analyzed, the number of subjects, the signal processing method used, the AI techniques used, and the performance of the AI technique used in the analysis of respiratory sounds. A detailed description of each of these studies is provided. In conclusion, this article provides recommendations for further advancements in respiratory sound analysis.

  14. A description of externally recorded womb sounds in human subjects during gestation.

    Science.gov (United States)

    Parga, Joanna J; Daland, Robert; Kesavan, Kalpashri; Macey, Paul M; Zeltzer, Lonnie; Harper, Ronald M

    2018-01-01

Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants, and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500-5,000 Hz) and mid-frequency (100-500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10-100 Hz) with gestational age. Variation appeared between mothers, suggesting a resonant chamber role for intra
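The band-wise long-term average spectrum analysis described above can be sketched as follows. The band edges (10-100, 100-500, 500-5,000 Hz) come from the abstract; the synthetic signal and sampling rate are illustrative assumptions:

```python
import numpy as np

def band_energies(x, fs, bands=((10, 100), (100, 500), (500, 5000))):
    """Fraction of total spectral power falling in each frequency band."""
    spec = np.abs(np.fft.rfft(x)) ** 2               # long-term power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    total = spec.sum()
    return [spec[(freqs >= lo) & (freqs < hi)].sum() / total for lo, hi in bands]

fs = 16000
t = np.arange(2 * fs) / fs  # 2 s of signal
# synthetic "womb-like" signal: a dominant low-frequency component plus
# progressively weaker mid- and high-frequency tones
x = (np.sin(2 * np.pi * 50 * t)
     + 0.3 * np.sin(2 * np.pi * 300 * t)
     + 0.1 * np.sin(2 * np.pi * 2000 * t))

low, mid, high = band_energies(x, fs)
print(low > mid > high)  # → True: the low-frequency band dominates
```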

  15. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which 'acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ, with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
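The Lorentz factor the abstract refers to has the familiar form γ = 1/√(1 − v²/c²), only with c the speed of sound rather than of light. A quick numerical illustration (343 m/s for sound in air is an assumption for concreteness; the paper's condensed-matter models would use their own sound speed):

```python
import math

def lorentz_factor(v, c=343.0):
    """γ = 1 / sqrt(1 - v²/c²), with c the speed of sound (343 m/s assumed)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# a sound-clock chain moving at 60% of the speed of sound
g = lorentz_factor(0.6 * 343.0)
print(round(g, 3))  # → 1.25: moving sound clocks tick slower by this factor
```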

  16. Categorization of common sounds by cochlear implanted and normal hearing adults.

    Science.gov (United States)

    Collett, E; Marx, M; Gaillard, P; Roby, B; Fraysse, B; Deguine, O; Barone, P

    2016-05-01

    Auditory categorization involves grouping of acoustic events along one or more shared perceptual dimensions which can relate to both semantic and physical attributes. This process involves both high level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI) device. The goal of this study was twofold: I) compare the categorization strategies of CI users and normal hearing listeners (NHL) II) investigate if any characteristics of the raw acoustic signal could explain the results. 16 experienced CI users and 20 NHL were tested using a Free-Sorting Task of 16 common sounds divided into 3 predefined categories of environmental, musical and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) show that CI users followed a similar categorization strategy to that of NHL and were able to discriminate between the three different types of sounds. However results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted the average pitch salience and average autocorrelation peak as being important for the perception and categorization of the sounds. The results therefore show that on a broad level of categorization CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however the perception of individual sounds remains challenging. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Vespertilionid bats control the width of their biosonar sound beam dynamically during prey pursuit

    DEFF Research Database (Denmark)

    Jakobsen, Lasse; Surlykke, Annemarie

    2010-01-01

    Animals using sound for communication emit directional signals, focusing most acoustic energy in one direction. Echolocating bats are listening for soft echoes from insects. Therefore, a directional biosonar sound beam greatly increases detection probability in the forward direction and decreases...

  18. Climate Change and Requirement of Transfer of Environmentally Sound Technology

    DEFF Research Database (Denmark)

    Uddin, Mahatab

    that developed the technology, to another that adopts, adapts, and uses it. As different kinds of threats posed by climate change are continuously increasing all over the world the issue of “technology transfer” especially the transfer of environmentally sound technologies has become one of the key topics...

  19. Control of Sound Transmission with Active-Passive Tiles

    OpenAIRE

    Goldstein, Andre L.

    2006-01-01

    Nowadays, numerous applications of active sound transmission control require lightweight partitions with high transmission loss over a broad frequency range and simple control strategies. In this work an active-passive sound transmission control approach is investigated that potentially addresses these requirements. The approach involves the use of lightweight stiff panels, or tiles, attached to a radiating base structure through active-passive soft mounts and covering the structure surface. ...

  2. Airborne sound insulation of new composite wall structures

    Directory of Open Access Journals (Sweden)

    Ivanova Yonka

    2018-01-01

Full Text Available Protection against noise is one of the essential requirements of the European Construction Products Directive. In buildings, airborne sound insulation is used to define the acoustical quality between rooms. In order to develop wall structures with optimal sound insulation, an understanding of the physical origins of sound transmission is necessary. The motive behind this study is to develop knowledge that is applicable to the improvement of real walls and room barriers. The purpose of the work is to study the sound insulation of new composite wall structures.

  3. Evaluative conditioning induces changes in sound valence

    Directory of Open Access Journals (Sweden)

    Anna C. Bolders

    2012-04-01

Full Text Available Evaluative Conditioning (EC) has hardly been tested in the auditory domain, but it is a potentially valuable research tool. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruence effects on an affective priming task (APT) for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US or whether extinction occurs. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results warrant the use of EC to study processing of short environmental sounds with acquired valence, even if this requires repeated stimulus presentations. This paves the way for studying processing of affective environmental sounds while effectively controlling low-level stimulus properties.

  4. An experimental study on the sound and frequency of the Chinese ancient variable bell

    International Nuclear Information System (INIS)

    Chen Dongsheng; Hu Haining; Xing Lirong; Liu Yongsheng

    2009-01-01

This paper describes an interesting sound phenomenon from a modern copy of the Chinese ancient variable bell, which can emit distinctly different sounds at different temperatures. By means of audition, spectrum-analyser software, and a PC, the sound signals of the variable bell are collected and the fundamental spectra are shown on the PC. The configuration is simple and cheap, suitable for demonstration and laboratory exercises.

  6. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps) we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
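The contrast the study manipulates, gradual sinusoidal modulation versus abrupt square-wave modulation, can be illustrated numerically. The sampling rate and 2 Hz modulation frequency below are arbitrary; the largest sample-to-sample jump serves as a simple proxy for how transient each modulation is:

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs   # 1 s of samples
f_mod = 2.0              # 2 Hz modulation rate

sine_mod = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))            # gradual changes
square_mod = (np.sin(2 * np.pi * f_mod * t) > 0).astype(float)  # abrupt transients

# largest sample-to-sample jump in each modulation signal
print(np.abs(np.diff(sine_mod)).max(),    # tiny: the sine changes smoothly
      np.abs(np.diff(square_mod)).max())  # 1.0: the square wave jumps instantly
```

Only the square wave contains the abrupt onsets and offsets that, per the study's conclusion, allow synchrony-driven audiovisual binding.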

  7. Design and Calibration Tests of an Active Sound Intensity Probe

    Directory of Open Access Journals (Sweden)

    Thomas Kletschkowski

    2008-01-01

    Full Text Available The paper presents an active sound intensity probe that can be used for sound source localization in standing wave fields. The probe consists of a sound hard tube that is terminated by a loudspeaker and an integrated pair of microphones. The microphones are used to decompose the standing wave field inside the tube into its incident and reflected part. The latter is cancelled by an adaptive controller that calculates proper driving signals for the loudspeaker. If the open end of the actively controlled tube is placed close to a vibrating surface, the radiated sound intensity can be determined by measuring the cross spectral density between the two microphones. A one-dimensional free field can be realized effectively, as first experiments performed on a simplified test bed have shown. Further tests proved that a prototype of the novel sound intensity probe can be calibrated.
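The intensity estimate from the cross spectral density between the two microphones can be sketched with the standard two-microphone (p-p) formula. The formula, sign convention, and simplified normalization below are textbook assumptions for a plane wave, not the calibration procedure of the paper:

```python
import numpy as np

def pp_intensity(p1, p2, fs, spacing, rho=1.21):
    """Two-microphone (p-p) estimate of active sound intensity per frequency bin:
    I(f) = -Im{G12(f)} / (rho * 2*pi*f * spacing), with G12 the cross spectrum."""
    G12 = np.conj(np.fft.rfft(p1)) * np.fft.rfft(p2) / len(p1)  # cross spectrum
    f = np.fft.rfftfreq(len(p1), d=1 / fs)
    I = np.zeros_like(f)
    nz = f > 0
    I[nz] = -G12.imag[nz] / (rho * 2 * np.pi * f[nz] * spacing)
    return f, I

# plane wave travelling from microphone 1 towards microphone 2 (12 mm apart)
fs, c, d, f0 = 48000, 343.0, 0.012, 1000.0
t = np.arange(fs) / fs
p1 = np.sin(2 * np.pi * f0 * t)
p2 = np.sin(2 * np.pi * f0 * (t - d / c))   # same wave, delayed at microphone 2
f, I = pp_intensity(p1, p2, fs, d)
peak = int(np.argmax(np.abs(I)))
print(f[peak], I[peak] > 0)  # intensity peaks at 1000 Hz, flowing from mic 1 to mic 2
```

The sign of the imaginary part of the cross spectrum tells the direction of energy flow between the two microphones, which is what lets the probe distinguish incident from reflected energy in a standing wave.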

  8. Sensory illusions: Common mistakes in physics regarding sound, light and radio waves

    Science.gov (United States)

    Briles, T. M.; Tabor-Morris, A. E.

    2013-03-01

Optical illusions are well known as effects that we see that are not representative of reality. Sensory illusions are similar but can involve other senses than sight, such as hearing or touch. One mistake commonly noted among instructors is that students often mis-identify radio signals as sound waves and not as part of the electromagnetic spectrum. A survey of physics students from multiple high schools highlights the frequency of this common misconception, as well as other nuances on this misunderstanding. Many students appear to conclude that, since they experience radio broadcasts as sound, then sound waves are the actual transmission of radio signals and not, as is actually true, a representation of those waves as produced by the translator box, the radio. Steps to help students identify and correct sensory illusion misconceptions are discussed.

  9. Stromal Indian hedgehog signaling is required for intestinal adenoma formation in mice

    NARCIS (Netherlands)

    Büller, Nikè V J A; Rosekrans, Sanne L.; Metcalfe, Ciara; Heijmans, Jarom; Van Dop, Willemijn A.; Fessler, Evelyn; Jansen, Marnix; Ahn, Christina; Vermeulen, Jacqueline L M; Westendorp, B. Florien; Robanus-Maandag, Els C.; Offerhaus, G. Johan; Medema, Jan Paul; D'Haens, Geert R A M; Wildenberg, Manon E.; De Sauvage, Frederic J.; Muncan, Vanesa; Van Den Brink, Gijs R.

    2015-01-01

    BACKGROUND & AIMS: Indian hedgehog (IHH) is an epithelial-derived signal in the intestinal stroma, inducing factors that restrict epithelial proliferation and suppress activation of the immune system. In addition to these rapid effects of IHH signaling, IHH is required to maintain a stromal

  10. SNMP is a signaling component required for pheromone sensitivity in Drosophila.

    Science.gov (United States)

    Jin, Xin; Ha, Tal Soo; Smith, Dean P

    2008-08-05

    The only known volatile pheromone in Drosophila, 11-cis-vaccenyl acetate (cVA), mediates a variety of behaviors including aggregation, mate recognition, and sexual behavior. cVA is detected by a small set of olfactory neurons located in T1 trichoid sensilla on the antennae of males and females. Two components known to be required for cVA reception are the odorant receptor Or67d and the extracellular pheromone-binding protein LUSH. Using a genetic screen for cVA-insensitive mutants, we have identified a third component required for cVA reception: sensory neuron membrane protein (SNMP). SNMP is a homolog of CD36, a scavenger receptor important for lipoprotein binding and uptake of cholesterol and lipids in vertebrates. In humans, loss of CD36 is linked to a wide range of disorders including insulin resistance, dyslipidemia, and atherosclerosis, but how CD36 functions in lipid transport and signal transduction is poorly understood. We show that SNMP is required in pheromone-sensitive neurons for cVA sensitivity but is not required for sensitivity to general odorants. Using antiserum to SNMP infused directly into the sensillum lymph, we show that SNMP function is required on the dendrites of cVA-sensitive neurons; this finding is consistent with a direct role in cVA signal transduction. Therefore, pheromone perception in Drosophila should serve as an excellent model to elucidate the role of CD36 members in transmembrane signaling.

  11. Alternative Paths to Hearing (A Conjecture). Photonic and Tactile Hearing Systems Displaying the Frequency Spectrum of Sound

    Directory of Open Access Journals (Sweden)

    E. H. Hara

    2006-01-01

Full Text Available In this article, the hearing process is considered from a system engineering perspective. For those with total hearing loss, a cochlear implant is the only direct remedy. It first acts as a spectrum analyser and then electronically stimulates the neurons in the cochlea with a number of electrodes. Each electrode carries information on a separate frequency band (i.e., part of the spectrum) of the original sound signal. The neurons then relay the signals in a parallel manner to the section of the brain where sound signals are processed. Photonic and tactile hearing systems displaying the spectrum of sound are proposed as alternative paths to the section of the brain that processes sound. In view of the plasticity of the brain, which can rewire itself, the following conjectures are offered. After a certain period of training, a person without the ability to hear should be able to decipher the patterns of photonic or tactile displays of the sound spectrum and learn to ‘hear’. This is very similar to the case of a blind person learning to ‘read’ by recognizing the patterns created by the series of bumps as their fingers scan the Braille writing. The conjectures are yet to be tested. Designs of photonic and tactile systems displaying the sound spectrum are outlined.

  12. Search for fourth sound propagation in supersolid 4He

    International Nuclear Information System (INIS)

    Aoki, Y.; Kojima, H.; Lin, X.

    2008-01-01

    A systematic study is carried out to search for fourth sound propagation in solid 4He samples below 500 mK, down to 40 mK, between 25 and 56 bar, using a heat pulse generator and a titanium superconducting transition-edge bolometer. If solid 4He is endowed with superfluidity below 200 mK, as indicated by recent torsional oscillator experiments, theories predict fourth sound propagation in such a supersolid state. If found, fourth sound would provide convincing evidence for superfluidity and a new tool for studying the new phase. The search for a fourth-sound-like mode is based on the response of the bolometers to heat pulses traveling through cylindrical samples of solids grown with different crystal qualities. Bolometers with increasing sensitivity are constructed. The heat pulse generator amplitude is reduced to the sensitivity limit to search for any critical velocity effects. The fourth sound velocity is expected to vary as ∝√(ρs/ρ). Searches for a signature in the bolometer response with such a characteristic temperature dependence are made. The measured response signal has so far not revealed any signature of a new propagating mode within a temperature excursion of 5 μK from the background signal shape. Possible reasons for this negative result are discussed. Prior to the fourth sound search, the temperature dependence of heat pulse propagation was studied as it transformed from 'second sound' in the normal solid 4He to transverse ballistic phonon propagation. Our work extends the studies of [V. Narayanamurti and R. C. Dynes, Phys. Rev. B 12, 1731 (1975)] to higher pressures and lower temperatures. The measured transverse ballistic phonon propagation velocity is found to remain constant (within the 0.3% scatter of the data) below 100 mK at all pressures and reveals no indication of an onset of supersolidity. The overall dynamic thermal response of the solid to heat input is found to depend strongly on the sample preparation procedure.

  13. Segmentation of heart sound recordings by a duration-dependent hidden Markov model

    International Nuclear Information System (INIS)

    Schmidt, S E; Graff, C; Toft, E; Struijk, J J; Holst-Hansen, C

    2010-01-01

    Digital stethoscopes offer new opportunities for computerized analysis of heart sounds. Segmentation of heart sound recordings into periods related to the first and second heart sound (S1 and S2) is fundamental in the analysis process. However, segmentation of heart sounds recorded with handheld stethoscopes in clinical environments is often complicated by background noise. A duration-dependent hidden Markov model (DHMM) is proposed for robust segmentation of heart sounds. The DHMM identifies the most likely sequence of physiological heart sounds, based on duration of the events, the amplitude of the signal envelope and a predefined model structure. The DHMM model was developed and tested with heart sounds recorded bedside with a commercially available handheld stethoscope from a population of patients referred for coronary angiography. The DHMM identified 890 S1 and S2 sounds out of 901, which corresponds to 98.8% (CI: 97.8–99.3%) sensitivity in 73 test patients, and 13 misplaced sounds out of 903 identified sounds, which corresponds to 98.6% (CI: 97.6–99.1%) positive predictivity. These results indicate that the DHMM is an appropriate model of the heart cycle and suitable for segmentation of clinically recorded heart sounds.
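The reported sensitivity and positive predictivity follow directly from the counts stated in the abstract; a quick sanity check in pure Python:

```python
# Verify the DHMM performance figures quoted in the abstract.
true_sounds = 901        # reference S1/S2 sounds in the test set
identified_total = 903   # sounds the DHMM reported
correct = 890            # correctly identified S1/S2 sounds
misplaced = 13           # identified sounds that were misplaced

sensitivity = correct / true_sounds
positive_predictivity = (identified_total - misplaced) / identified_total

print(f"sensitivity           = {sensitivity:.1%}")            # 98.8%
print(f"positive predictivity = {positive_predictivity:.1%}")  # 98.6%
```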

  14. A method for estimating the orientation of a directional sound source from source directivity and multi-microphone recordings: principles and application

    DEFF Research Database (Denmark)

    Guarato, Francesco; Jakobsen, Lasse; Vanderelst, Dieter

    2011-01-01

    Taking into account directivity of real sound sources makes it possible to try solving an interesting and biologically relevant problem: estimating the orientation in three-dimensional space of a directional sound source. The source, of known directivity, produces a broadband signal (in the ultra…

  15. RASS sound speed profile (SSP) measurements for use in outdoor sound propagation models

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, S G [Physics Department, University of Auckland (New Zealand); Huenerbein, S v; Waddington, D [Research Institute for the Built and Human Environment, University of Salford (United Kingdom)], E-mail: s.vonhunerbein@salford.ac.uk

    2008-05-01

    The performance of outdoor sound propagation models depends to a great extent on meteorological input parameters. In an effort to improve speed and accuracy, model output synthetic sound speed profiles (SSP) are commonly used depending on meteorological classification schemes. In order to use SSP measured by RASS in outdoor sound propagation models, the complex profiles need to be simplified. In this paper we extend an investigation on the spatial and temporal characteristics of the meteorological data set required to yield adequate comparisons between models and field measurements, so that the models can be fairly judged. Vertical SSP from RASS, SODAR wind profiles as well as mast wind and temperature data from a flat terrain site and measured over a period of several months are used to evaluate applicability of the logarithmic approximation for a stability classification scheme proposed by the HARMONOISE working group.

  16. RASS sound speed profile (SSP) measurements for use in outdoor sound propagation models

    International Nuclear Information System (INIS)

    Bradley, S G; Huenerbein, S v; Waddington, D

    2008-01-01

    The performance of outdoor sound propagation models depends to a great extent on meteorological input parameters. In an effort to improve speed and accuracy, model output synthetic sound speed profiles (SSP) are commonly used depending on meteorological classification schemes. In order to use SSP measured by RASS in outdoor sound propagation models, the complex profiles need to be simplified. In this paper we extend an investigation on the spatial and temporal characteristics of the meteorological data set required to yield adequate comparisons between models and field measurements, so that the models can be fairly judged. Vertical SSP from RASS, SODAR wind profiles as well as mast wind and temperature data from a flat terrain site and measured over a period of several months are used to evaluate applicability of the logarithmic approximation for a stability classification scheme proposed by the HARMONOISE working group

  17. Wnt Signaling Is Required for Long-Term Memory Formation

    Directory of Open Access Journals (Sweden)

    Ying Tan

    2013-09-01

    Full Text Available Wnt signaling regulates synaptic plasticity and neurogenesis in the adult nervous system, suggesting a potential role in behavioral processes. Here, we probed the requirement for Wnt signaling during olfactory memory formation in Drosophila using an inducible RNAi approach. Interfering with β-catenin expression in adult mushroom body neurons specifically impaired long-term memory (LTM) without altering short-term memory. The impairment was reversible, being rescued by expression of a wild-type β-catenin transgene, and correlated with disruption of a cellular LTM trace. Inhibition of wingless, a Wnt ligand, and arrow, a Wnt coreceptor, also impaired LTM. Wingless expression in wild-type flies was transiently elevated in the brain after LTM conditioning. Thus, inhibiting three key components of the Wnt signaling pathway in adult mushroom bodies impairs LTM, indicating that this pathway mechanistically underlies this specific form of memory.

  18. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

    Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From analysis of the heart sound S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician diagnose heart problems. The results show that most known heart sounds were successfully detected. There are some murmur cases where detection failed. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.

  19. Stromal Indian hedgehog signaling is required for intestinal adenoma formation in mice

    NARCIS (Netherlands)

    Büller, Nikè V. J. A.; Rosekrans, Sanne L.; Metcalfe, Ciara; Heijmans, Jarom; van Dop, Willemijn A.; Fessler, Evelyn; Jansen, Marnix; Ahn, Christina; Vermeulen, Jacqueline L. M.; Westendorp, B. Florien; Robanus-Maandag, Els C.; Offerhaus, G. Johan; Medema, Jan Paul; D'Haens, Geert R. A. M.; Wildenberg, Manon E.; de Sauvage, Frederic J.; Muncan, Vanesa; van den Brink, Gijs R.

    2015-01-01

    Indian hedgehog (IHH) is an epithelial-derived signal in the intestinal stroma, inducing factors that restrict epithelial proliferation and suppress activation of the immune system. In addition to these rapid effects of IHH signaling, IHH is required to maintain a stromal phenotype in which

  20. Objective Scaling of Sound Quality for Normal-Hearing and Hearing-Impaired Listeners

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    A new method for the objective estimation of sound quality for both normal-hearing and hearing-impaired listeners has been presented: OSSQAR (Objective Scaling of Sound Quality and Reproduction). OSSQAR is based on three main parts, which have been carried out and documented separately: 1) Subjective sound quality ratings of clean and distorted speech and music signals, by normal-hearing and hearing-impaired listeners, to provide reference data, 2) An auditory model of the ear, including the effects of hearing loss, based on existing psychoacoustic knowledge, coupled to 3) An artificial neural network, which was trained to predict the sound quality ratings. OSSQAR predicts the perceived sound quality on two independent perceptual rating scales: Clearness and Sharpness. These two scales were shown to be the most relevant for assessment of sound quality, and they were interpreted the same way…

  1. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically…
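The auralization step described in this record — convolving an anechoic signal with a per-ear BRIR — is plain discrete convolution. A minimal sketch in pure Python (the BRIR taps below are placeholders, not measured data; real systems use long measured BRIRs and FFT-based partitioned convolution):

```python
def convolve(x, h):
    """Direct-form discrete convolution: y[n] = sum_k h[k] * x[n-k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

# Hypothetical 3-tap BRIRs for each ear (illustrative placeholders).
brir_left  = [1.0, 0.5, 0.25]
brir_right = [0.8, 0.6, 0.3]
anechoic   = [1.0, 0.0, -1.0]   # toy anechoic input signal

left_ear  = convolve(anechoic, brir_left)
right_ear = convolve(anechoic, brir_right)
```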

  2. A Coincidental Sound Track for "Time Flies"

    Science.gov (United States)

    Cardany, Audrey Berger

    2014-01-01

    Sound tracks serve a valuable purpose in film and video by helping tell a story, create a mood, and signal coming events. Holst's "Mars" from "The Planets" yields a coincidental soundtrack to Eric Rohmann's Caldecott-winning book, "Time Flies." This pairing provides opportunities for upper elementary and…

  3. Brain regions for sound processing and song release in a small grasshopper.

    Science.gov (United States)

    Balvantray Bhavsar, Mit; Stumpner, Andreas; Heinrich, Ralf

    2017-05-01

    We investigated brain regions - mostly neuropils - that process auditory information relevant for the initiation of response songs of female grasshoppers Chorthippus biguttulus during bidirectional intraspecific acoustic communication. Male-female acoustic duets in the species Ch. biguttulus require the perception of sounds, their recognition as a species- and gender-specific signal and the initiation of commands that activate thoracic pattern generating circuits to drive the sound-producing stridulatory movements of the hind legs. To study sensory-to-motor processing during acoustic communication we used multielectrodes that allowed simultaneous recordings of acoustically stimulated electrical activity from several ascending auditory interneurons or local brain neurons and subsequent electrical stimulation of the recording site. Auditory activity was detected in the lateral protocerebrum (where most of the described ascending auditory interneurons terminate), in the superior medial protocerebrum and in the central complex, that has previously been implicated in the control of sound production. Neural responses to behaviorally attractive sound stimuli showed no or only poor correlation with behavioral responses. Current injections into the lateral protocerebrum, the central complex and the deuto-/tritocerebrum (close to the cerebro-cervical fascicles), but not into the superior medial protocerebrum, elicited species-typical stridulation with high success rate. Latencies and numbers of phrases produced by electrical stimulation were different between these brain regions. Our results indicate three brain regions (likely neuropils) where auditory activity can be detected with two of these regions being potentially involved in song initiation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Properties of sound attenuation around a two-dimensional underwater vehicle with a large cavitation number

    International Nuclear Information System (INIS)

    Ye Peng-Cheng; Pan Guang

    2015-01-01

    Due to the high speed of underwater vehicles, cavitation is generated inevitably along with the sound attenuation when the sound signal traverses through the cavity region around the underwater vehicle. The linear wave propagation is studied to obtain the influence of bubbly liquid on the acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients with various vapor volume fractions, the attenuation of sound intensity is calculated under large cavitation number conditions. The result shows that the sound intensity attenuation is fairly small in a certain condition. Consequently, the intensity attenuation can be neglected in engineering. (paper)
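The low-frequency sound speed of a bubbly liquid referred to above is commonly estimated with Wood's equation (shown here for reference; the paper's specific attenuation-coefficient formula is not reproduced in this abstract):

\[
\frac{1}{\rho c^{2}} \;=\; \frac{\alpha}{\rho_g c_g^{2}} \;+\; \frac{1-\alpha}{\rho_l c_l^{2}},
\qquad
\rho \;=\; \alpha\,\rho_g + (1-\alpha)\,\rho_l,
\]

where \(\alpha\) is the vapor volume fraction, \(c\) the mixture sound speed, and the subscripts \(g\) and \(l\) denote the gas and liquid phases. Even a small \(\alpha\) drives \(c\) far below both pure-phase sound speeds, which is why the cavity region matters for acoustic propagation.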

  5. Heart sounds: are you listening? Part 1.

    Science.gov (United States)

    Reimer-Kent, Jocelyn

    2013-01-01

    All nurses should have an understanding of heart sounds and be proficient in cardiac auscultation. Unfortunately, this skill is not part of many nursing school curricula, nor is it necessarily a required skill for employment. Yet, being able to listen and accurately describe heart sounds has tangible benefits to the patient, as it is an integral part of a complete cardiac assessment. In this two-part article, I will review the fundamentals of cardiac auscultation, how cardiac anatomy and physiology relate to heart sounds, and describe the various heart sounds. Whether you are a beginner or a seasoned nurse, it is never too early or too late to add this important diagnostic skill to your assessment tool kit.

  6. Signal quality measures for unsupervised blood pressure measurement

    International Nuclear Information System (INIS)

    Abdul Sukor, J; Redmond, S J; Lovell, N H; Chan, G S H

    2012-01-01

    Accurate systolic and diastolic pressure estimation, using automated blood pressure measurement, is difficult to achieve when the transduced signals are contaminated with noise or interference, such as movement artifact. This study presents an algorithm for automated signal quality assessment in blood pressure measurement by determining the feasibility of accurately detecting systolic and diastolic pressures when corrupted with various levels of movement artifact. The performance of the proposed algorithm is compared to a manually annotated reference scoring (RS). Based on visual representations and audible playback of Korotkoff sounds, the creation of the RS involved two experts identifying sections of the recorded sounds and annotating sections of noise contamination. The experts determined the systolic and diastolic pressure in 100 recorded Korotkoff sound recordings, using a simultaneous electrocardiograph as a reference signal. The recorded Korotkoff sounds were acquired from 25 healthy subjects (16 men and 9 women) with a total of four measurements per subject. Two of these measurements contained purposely induced noise artifact caused by subject movement. Morphological changes in the cuff pressure signal and the width of the Korotkoff pulse were extracted features which were believed to be correlated with the noise presence in the recorded Korotkoff sounds. Verification of reliable Korotkoff pulses was also performed using extracted features from the oscillometric waveform as recorded from the inflatable cuff. The time between an identified noise section and a verified Korotkoff pulse was the key feature used to determine the validity of possible systolic and diastolic pressures in noise contaminated Korotkoff sounds. The performance of the algorithm was assessed based on the ability to: verify if a signal was contaminated with any noise; the accuracy, sensitivity and specificity of this noise classification, and the systolic and diastolic pressure

  7. Detecting the temporal structure of sound sequences in newborn infants

    NARCIS (Netherlands)

    Háden, G.P.; Honing, H.; Török, M.; Winkler, I.

    2015-01-01

    Most high-level auditory functions require one to detect the onset and offset of sound sequences as well as registering the rate at which sounds are presented within the sound trains. By recording event-related brain potentials to onsets and offsets of tone trains as well as to changes in the

  8. Deformation of a sound field caused by a manikin

    DEFF Research Database (Denmark)

    Weinrich, Søren G.

    1981-01-01

    around the head at distances of 1 cm to 2 m, measured from the tip of the nose. The signals were pure tones at 1, 2, 4, 6, 8, and 10 kHz. It was found that the presence of the manikin caused changes in the SPL of the sound field of at most ±2.5 dB at a distance of 1 m from the surface of the manikin. Only over an interval of approximately 20° behind the manikin (i.e., opposite the sound source) did the manikin cause much larger changes, up to 9 dB. These changes are caused by destructive interference between sounds coming from opposite sides of the manikin. In front of the manikin, the changes…

  9. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

    Full Text Available As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, sound collected directly in an industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by wavelet transform (WT) to obtain the coefficients of each level. As the wavelet threshold functions proposed by Donoho are discontinuous, many modified functions with continuous first- and second-order derivatives have been presented to realize adaptive denoising. However, the function-based denoising process is time-consuming and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced to the process. Moreover, to avoid falling into local extremes, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. Then, the sound signal of a motor was recorded in a soundproof laboratory, and Gaussian white noise was added to the signal. The simulation results illustrated the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer in a coal mining working face was performed to demonstrate the practical effect.

  10. Sounding rockets explore the ionosphere

    International Nuclear Information System (INIS)

    Mendillo, M.

    1990-01-01

    It is suggested that small, expendable, solid-fuel rockets used to explore ionospheric plasma can offer insight into all the processes and complexities common to space plasma. NASA's sounding rocket program for ionospheric research focuses on the flight of instruments to measure parameters governing the natural state of the ionosphere. Parameters include input functions, such as photons, particles, and composition of the neutral atmosphere; resultant structures, such as electron and ion densities, temperatures and drifts; and emerging signals such as photons and electric and magnetic fields. Systematic study of the aurora is also conducted by these rockets, allowing sampling at relatively high spatial and temporal rates as well as investigation of parameters, such as energetic particle fluxes, not accessible to ground based systems. Recent active experiments in the ionosphere are discussed, and future sounding rocket missions are cited

  11. Effect of echolocation behavior-related constant frequency-frequency modulation sound on the frequency tuning of inferior collicular neurons in Hipposideros armiger.

    Science.gov (United States)

    Tang, Jia; Fu, Zi-Ying; Wei, Chen-Xue; Chen, Qi-Cai

    2015-08-01

    In constant frequency-frequency modulation (CF-FM) bats, the CF-FM echolocation signals include both CF and FM components, yet the role of such complex acoustic signals in frequency resolution by bats remains unknown. Using CF and CF-FM echolocation signals as acoustic stimuli, the responses of inferior collicular (IC) neurons of Hipposideros armiger were obtained by extracellular recordings. We tested the effect of preceding CF or CF-FM sounds on the shape of the frequency tuning curves (FTCs) of IC neurons. Results showed that both CF-FM and CF sounds reduced the number of FTCs with a tailed lower-frequency side in IC neurons. However, more IC neurons experienced such conversion after adding CF-FM sound compared with CF sound. We also found that the Q20 value of the FTC of IC neurons experienced the largest increase with the addition of CF-FM sound. Moreover, only CF-FM sound could cause an increase in the slope of the neurons' FTCs, and such increase occurred mainly at the lower-frequency edge. These results suggest that CF-FM sound can increase the accuracy of frequency analysis of the echo and suppress low-frequency elements from the bats' habitat more effectively than CF sound.

  12. Metagenomic profiling of microbial composition and antibiotic resistance determinants in Puget Sound.

    Science.gov (United States)

    Port, Jesse A; Wallace, James C; Griffith, William C; Faustman, Elaine M

    2012-01-01

    Human-health relevant impacts on marine ecosystems are increasing on both spatial and temporal scales. Traditional indicators for environmental health monitoring and microbial risk assessment have relied primarily on single species analyses and have provided only limited spatial and temporal information. More high-throughput, broad-scale approaches to evaluate these impacts are therefore needed to provide a platform for informing public health. This study uses shotgun metagenomics to survey the taxonomic composition and antibiotic resistance determinant content of surface water bacterial communities in the Puget Sound estuary. Metagenomic DNA was collected at six sites in Puget Sound in addition to one wastewater treatment plant (WWTP) that discharges into the Sound and pyrosequenced. A total of ~550 Mbp (1.4 million reads) were obtained, 22 Mbp of which could be assembled into contigs. While the taxonomic and resistance determinant profiles across the open Sound samples were similar, unique signatures were identified when comparing these profiles across the open Sound, a nearshore marina and WWTP effluent. The open Sound was dominated by α-Proteobacteria (in particular Rhodobacterales sp.), γ-Proteobacteria and Bacteroidetes while the marina and effluent had increased abundances of Actinobacteria, β-Proteobacteria and Firmicutes. There was a significant increase in the antibiotic resistance gene signal from the open Sound to marina to WWTP effluent, suggestive of a potential link to human impacts. Mobile genetic elements associated with environmental and pathogenic bacteria were also differentially abundant across the samples. This study is the first comparative metagenomic survey of Puget Sound and provides baseline data for further assessments of community composition and antibiotic resistance determinants in the environment using next generation sequencing technologies. In addition, these genomic signals of potential human impact can be used to guide initial

  13. Metagenomic profiling of microbial composition and antibiotic resistance determinants in Puget Sound.

    Directory of Open Access Journals (Sweden)

    Jesse A Port

    Full Text Available Human-health relevant impacts on marine ecosystems are increasing on both spatial and temporal scales. Traditional indicators for environmental health monitoring and microbial risk assessment have relied primarily on single species analyses and have provided only limited spatial and temporal information. More high-throughput, broad-scale approaches to evaluate these impacts are therefore needed to provide a platform for informing public health. This study uses shotgun metagenomics to survey the taxonomic composition and antibiotic resistance determinant content of surface water bacterial communities in the Puget Sound estuary. Metagenomic DNA was collected at six sites in Puget Sound in addition to one wastewater treatment plant (WWTP) that discharges into the Sound and pyrosequenced. A total of ~550 Mbp (1.4 million reads) were obtained, 22 Mbp of which could be assembled into contigs. While the taxonomic and resistance determinant profiles across the open Sound samples were similar, unique signatures were identified when comparing these profiles across the open Sound, a nearshore marina and WWTP effluent. The open Sound was dominated by α-Proteobacteria (in particular Rhodobacterales sp.), γ-Proteobacteria and Bacteroidetes while the marina and effluent had increased abundances of Actinobacteria, β-Proteobacteria and Firmicutes. There was a significant increase in the antibiotic resistance gene signal from the open Sound to marina to WWTP effluent, suggestive of a potential link to human impacts. Mobile genetic elements associated with environmental and pathogenic bacteria were also differentially abundant across the samples. This study is the first comparative metagenomic survey of Puget Sound and provides baseline data for further assessments of community composition and antibiotic resistance determinants in the environment using next generation sequencing technologies. In addition, these genomic signals of potential human impact can be used

  14. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    Science.gov (United States)

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face various hardships with shopping, reading, finding objects, etc. Therefore, we developed a portable auditory guidance system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit, and an earphone, working with built-in customizable auditory coding algorithms. Employing environment-understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals for the blind user through the earphone. The user is able to recognize the type, motion state, and location of objects of interest with the help of SoundView. Compared with other visual assistance techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption, and easy customization.

  15. Full-Band Quasi-Harmonic Analysis and Synthesis of Musical Instrument Sounds with Adaptive Sinusoids

    Directory of Open Access Journals (Sweden)

    Marcelo Caetano

    2016-05-01

    Full Text Available Sinusoids are widely used to represent the oscillatory modes of musical instrument sounds in both analysis and synthesis. However, musical instrument sounds feature transients and instrumental noise that are poorly modeled with quasi-stationary sinusoids, requiring spectral decomposition and further dedicated modeling. In this work, we propose a full-band representation that fits sinusoids across the entire spectrum. We use the extended adaptive Quasi-Harmonic Model (eaQHM) to iteratively estimate amplitude- and frequency-modulated (AM–FM) sinusoids able to capture challenging features such as sharp attacks, transients, and instrumental noise. We use the signal-to-reconstruction-error ratio (SRER) as the objective measure for the analysis and synthesis of 89 musical instrument sounds from different instrumental families. We compare against quasi-stationary sinusoids and exponentially damped sinusoids. First, we show that the SRER increases with adaptation in eaQHM. Then, we show that full-band modeling with eaQHM captures partials at the higher frequency end of the spectrum that are neglected by spectral decomposition. Finally, we demonstrate that a frame size equal to three periods of the fundamental frequency results in the highest SRER with AM–FM sinusoids from eaQHM. A listening test confirmed that the musical instrument sounds resynthesized from full-band analysis with eaQHM are virtually perceptually indistinguishable from the original recordings.
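The signal-to-reconstruction-error ratio used as the objective measure here is the standard log energy ratio between the original signal and the reconstruction residual. A minimal sketch (variable names are illustrative, not from the paper):

```python
import math

def srer_db(original, reconstruction):
    """SRER in dB: 20*log10( RMS(original) / RMS(original - reconstruction) )."""
    residual = [a - b for a, b in zip(original, reconstruction)]
    rms = lambda s: math.sqrt(sum(v * v for v in s) / len(s))
    return 20.0 * math.log10(rms(original) / rms(residual))

x    = [1.0, -1.0, 1.0, -1.0]
xhat = [0.9, -0.9, 0.9, -0.9]   # toy reconstruction with 10% amplitude error
print(round(srer_db(x, xhat), 1))  # 20.0
```

A higher SRER means the resynthesis leaves less residual energy, which is why it increases as the eaQHM sinusoids adapt to the signal.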

  16. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  17. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

    Science.gov (United States)

    Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert

    2005-12-01

    A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.

  18. Heart sounds analysis using probability assessment

    Czech Academy of Sciences Publication Activity Database

    Plešinger, Filip; Viščor, Ivo; Halámek, Josef; Jurčo, Juraj; Jurák, Pavel

    2017-01-01

    Roč. 38, č. 8 (2017), s. 1685-1700 ISSN 0967-3334 R&D Projects: GA ČR GAP102/12/2034; GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01 Institutional support: RVO:68081731 Keywords : heart sounds * FFT * machine learning * signal averaging * probability assessment Subject RIV: FS - Medical Facilities ; Equipment OBOR OECD: Medical engineering Impact factor: 2.058, year: 2016

  19. Bottom-up driven involuntary auditory evoked field change: constant sound sequencing amplifies but does not sharpen neural activity.

    Science.gov (United States)

    Okamoto, Hidehiko; Stracke, Henning; Lagemann, Lothar; Pantev, Christo

    2010-01-01

    The capability of involuntarily tracking certain sound signals during the simultaneous presence of noise is essential in human daily life. Previous studies have demonstrated that top-down auditory focused attention can enhance excitatory and inhibitory neural activity, resulting in sharpening of frequency tuning of auditory neurons. In the present study, we investigated bottom-up driven involuntary neural processing of sound signals in noisy environments by means of magnetoencephalography. We contrasted two sound signal sequencing conditions: "constant sequencing" versus "random sequencing." Based on a pool of 16 different frequencies, either identical (constant sequencing) or pseudorandomly chosen (random sequencing) test frequencies were presented blockwise together with band-eliminated noises to nonattending subjects. The results demonstrated that the auditory evoked fields elicited in the constant sequencing condition were significantly enhanced compared with the random sequencing condition. However, the enhancement was not significantly different between different band-eliminated noise conditions. Thus the present study confirms that by constant sound signal sequencing under nonattentive listening the neural activity in human auditory cortex can be enhanced, but not sharpened. Our results indicate that bottom-up driven involuntary neural processing may mainly amplify excitatory neural networks, but may not effectively enhance inhibitory neural circuits.

  20. Concepts for evaluation of sound insulation of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Rindel, Jens Holger

    2005-01-01

Legal sound insulation requirements have existed for more than 50 years in some countries, and single-number quantities for the evaluation of sound insulation have existed nearly as long. However, the concepts have changed considerably over time from simple arithmetic averaging of frequency bands... A comparison of requirements and classification schemes revealed significant differences of concepts. The paper summarizes the history of concepts, the disadvantages of the present chaos, and the benefits of consensus concerning concepts for airborne and impact sound insulation between dwellings and airborne sound insulation of facades... with a trend towards light-weight constructions are contradictory and challenging. This calls for exchange of data and experience, implying a need for harmonized concepts, including use of spectrum adaptation terms. The paper will provide input for future discussions in EAA TC-RBA WG4: "Sound insulation...

  1. Measurement of sound velocity profiles in fluids for process monitoring

    International Nuclear Information System (INIS)

    Wolf, M; Kühnicke, E; Lenz, M; Bock, M

    2012-01-01

    In ultrasonic measurements, the time of flight to the object interface is often the only information that is analysed. Conventionally it is only possible to determine distances or sound velocities if the other value is known. The current paper deals with a novel method to measure the sound propagation path length and the sound velocity in media with moving scattering particles simultaneously. Since the focal position also depends on sound velocity, it can be used as a second parameter. Via calibration curves it is possible to determine the focal position and sound velocity from the measured time of flight to the focus, which is correlated to the maximum of averaged echo signal amplitude. To move focal position along the acoustic axis, an annular array is used. This allows measuring sound velocity locally resolved without any previous knowledge of the acoustic media and without a reference reflector. In previous publications the functional efficiency of this method was shown for media with constant velocities. In this work the accuracy of these measurements is improved. Furthermore first measurements and simulations are introduced for non-homogeneous media. Therefore an experimental set-up was created to generate a linear temperature gradient, which also causes a gradient of sound velocity.
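The ambiguity described above stems from the conventional pulse-echo relation, in which time of flight alone yields a distance only if the sound velocity is already known (a minimal illustration; the numeric values are examples, not measurements from the paper):

```python
def echo_distance(time_of_flight_s, sound_velocity_m_s):
    """Conventional pulse-echo ranging: the pulse travels to the
    reflector and back, so the one-way distance is c * t / 2."""
    return sound_velocity_m_s * time_of_flight_s / 2.0

# In water (c of roughly 1480 m/s), an echo arriving after 100 us means:
d = echo_distance(100e-6, 1480.0)
print(d)  # 0.074 m, i.e. 74 mm
```

The method in the paper breaks this one-equation/two-unknowns deadlock by using the focal position, which also depends on sound velocity, as a second measurable parameter.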

  2. Tinnitus retraining therapy for patients with tinnitus and decreased sound tolerance.

    Science.gov (United States)

    Jastreboff, Pawel J; Jastreboff, Margaret M

    2003-04-01

Our experience has revealed the following: (1) TRT is applicable for all types of tinnitus, as well as for decreased sound tolerance, with significant improvement of tinnitus occurring in over 80% of the cases, and an at least equal success rate for decreased sound tolerance. (2) TRT can provide a cure for decreased sound tolerance. (3) TRT does not require frequent clinic visits and has no side effects; however, (4) special training of the health providers involved is required for the treatment to be effective.

  3. Visualizing Sound Directivity via Smartphone Sensors

    Science.gov (United States)

    Hawley, Scott H.; McClain, Robert E.

    2018-02-01

    When Yang-Hann Kim received the Rossing Prize in Acoustics Education at the 2015 meeting of the Acoustical Society of America, he stressed the importance of offering visual depictions of sound fields when teaching acoustics. Often visualization methods require specialized equipment such as microphone arrays or scanning apparatus. We present a simple method for visualizing angular dependence in sound fields, made possible via the confluence of sensors available via a new smartphone app that the authors have developed.

  4. Coherent Surface Clutter Suppression Techniques with Topography Estimation for Multi-Phase-Center Radar Ice Sounding

    DEFF Research Database (Denmark)

    Nielsen, Ulrik; Dall, Jørgen; Kristensen, Steen Savstrup

    2012-01-01

    Radar ice sounding enables measurement of the thickness and internal structures of the large ice sheets on Earth. Surface clutter masking the signal of interest is a major obstacle in ice sounding. Algorithms for surface clutter suppression based on multi-phase-center radars are presented. These ...

  5. From acoustic descriptors to evoked quality of car door sounds.

    Science.gov (United States)

    Bezat, Marie-Céline; Kronland-Martinet, Richard; Roussarie, Vincent; Ystad, Sølvi

    2014-07-01

    This article describes the first part of a study aiming at adapting the mechanical car door construction to the drivers' expectancies in terms of perceived quality of cars deduced from car door sounds. A perceptual cartography of car door sounds is obtained from various listening tests aiming at revealing both ecological and analytical properties linked to evoked car quality. In the first test naive listeners performed absolute evaluations of five ecological properties (i.e., solidity, quality, weight, closure energy, and success of closure). Then experts in the area of automobile doors categorized the sounds according to organic constituents (lock, joints, door panel), in particular whether or not the lock mechanism could be perceived. Further, a sensory panel of naive listeners identified sensory descriptors such as classical descriptors or onomatopoeia that characterize the sounds, hereby providing an analytic description of the sounds. Finally, acoustic descriptors were calculated after decomposition of the signal into a lock and a closure component by the Empirical Mode Decomposition (EMD) method. A statistical relationship between the acoustic descriptors and the perceptual evaluations of the car door sounds could then be obtained through linear regression analysis.

  6. [A focused sound field measurement system by LabVIEW].

    Science.gov (United States)

    Jiang, Zhan; Bai, Jingfeng; Yu, Ying

    2014-05-01

In this paper, according to the requirements of focused sound field measurement, a focused sound field measurement system was established based on the LabVIEW virtual instrument platform. The system can automatically search for the focus position of the sound field and adjust the scanning path according to the size of the focal region. Three-dimensional sound field scanning time was reduced from 888 hours with a uniform step to 9.25 hours with a variable step, improving the efficiency of focused sound field measurement. There is a certain deviation between the measured and theoretically calculated results: the focal-plane -6 dB width difference rate was 3.691%, and the beam-axis -6 dB length difference rate was 12.937%.

  7. Phonocardiography Signal Processing

    CERN Document Server

    Abbas, Abbas K

    2009-01-01

The auscultation method is an important diagnostic indicator for hemodynamic anomalies. Heart sound classification and analysis play an important role in auscultative diagnosis. The term phonocardiography refers to the tracing technique of heart sounds and the recording of cardiac acoustic vibrations by means of a microphone-transducer. Understanding the nature and source of this signal is therefore important for developing competent tools for further analysis and processing, in order to enhance and optimize the cardiac clinical diagnostic approach. This book gives the

  8. Meaning From Environmental Sounds: Types of Signal-Referent Relations and Their Effect on Recognizing Auditory Icons

    Science.gov (United States)

    Keller, Peter; Stevens, Catherine

    2004-01-01

    This article addresses the learnability of auditory icons, that is, environmental sounds that refer either directly or indirectly to meaningful events. Direct relations use the sound made by the target event whereas indirect relations substitute a surrogate for the target. Across 3 experiments, different indirect relations (ecological, in which…

  9. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
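The sparsification step described above can be sketched on any time-frequency matrix: keep only the largest energy peaks and zero everything else (a simplified stand-in for the paper's peak-picking algorithm; the toy spectrogram is illustrative):

```python
def sketch(tf_matrix, n_peaks):
    """Keep only the n_peaks largest time-frequency bins; zero the rest.
    tf_matrix is a list of frames, each frame a list of bin magnitudes."""
    flat = [(value, t, f)
            for t, frame in enumerate(tf_matrix)
            for f, value in enumerate(frame)]
    keep = {(t, f) for _, t, f in sorted(flat, reverse=True)[:n_peaks]}
    return [[value if (t, f) in keep else 0.0
             for f, value in enumerate(frame)]
            for t, frame in enumerate(tf_matrix)]

spectrogram = [[0.1, 0.9, 0.2],
               [0.8, 0.1, 0.1],
               [0.1, 0.2, 0.7]]
print(sketch(spectrogram, 3))  # keeps only the 0.9, 0.8 and 0.7 bins
```

At the sparsity levels studied in the paper this corresponds to keeping on the order of 10 such peaks per second of audio.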

  10. Analysis of sound data streamed over the network

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2013-01-01

In this paper we inspect the difference between an original sound recording and the signal captured after streaming the original recording over a network loaded with heavy traffic. Several kinds of failures occur in the captured recording, caused by network congestion. We try to find a method to evaluate the correctness of streamed audio. Usually the metrics are based on human perception of the signal, such as "the signal is clear, without audible failures", "the signal has some failures but is understandable", or "the signal is inarticulate". These approaches need to be statistically evaluated on a broad set of respondents, which is time and resource consuming. Instead, we propose metrics based on signal properties that allow us to compare the original and captured recordings. In this paper we use an algorithm called Dynamic Time Warping (Müller, 2007), commonly used for time series comparison. Some other time series exploration approaches can be found in (Fejfar, 2011) and (Fejfar, 2012). The data were acquired in our network laboratory, simulating network traffic by downloading files and streaming audio and video simultaneously. Our former experiment inspected Quality of Service (QoS) and its impact on failures of the received audio data stream; this experiment focuses on the comparison of sound recordings rather than network mechanisms. We concentrate on a real-time audio stream such as a telephone call, where it is not possible to stream audio in advance to a "pool"; instead it is necessary to achieve as small a delay as possible between speaker voice recording and listener voice replay. We use the RTP protocol for streaming audio.
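Dynamic Time Warping, mentioned above, scores the similarity of two sequences by minimizing the cumulative pointwise distance over all monotone alignments; a compact textbook implementation (not the authors' code):

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW with the
    absolute difference as the local cost."""
    inf = float("inf")
    # cost[i][j] = best cumulative cost aligning a[:i] with b[:j]
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            step = abs(a[i - 1] - b[j - 1])
            cost[i][j] = step + min(cost[i - 1][j],      # insertion
                                    cost[i][j - 1],      # deletion
                                    cost[i - 1][j - 1])  # match
    return cost[len(a)][len(b)]

# A time-stretched copy of a ramp still aligns perfectly:
print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 3]))  # 0.0
```

This tolerance to local time shifts is exactly what makes DTW suitable for comparing a reference recording against a streamed copy whose samples arrive with jitter and delay.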

  11. Car audio using DSP for active sound control. DSP ni yoru active seigyo wo mochiita audio

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, K.; Asano, S.; Furukawa, N. (Mitsubishi Motor Corp., Tokyo (Japan))

    1993-06-01

In the automobile cabin there are some unique problems which spoil the quality of sound reproduction from audio equipment, such as the narrow space and the background noise. Audio signal processing using a DSP (digital signal processor) enables a solution to these problems, and a car audio system with high amenity has been successfully made by active sound control using a DSP. The DSP consists of an adder, coefficient multiplier, delay unit, and connections. The actual DSP processing uses functions such as sound field correction, response to and processing of noise during driving, surround reproduction, and graphic equalizer processing. The high effectiveness of the method was confirmed through an actual driving evaluation test. The present paper describes the actual method of sound control technology using a DSP; in particular, the dynamic processing of noise during driving is discussed in detail. 1 ref., 12 figs., 1 tab.

  12. Lung and Heart Sounds Analysis: State-of-the-Art and Future Trends.

    Science.gov (United States)

    Padilla-Ortiz, Ana L; Ibarra, David

    2018-01-01

    Lung sounds, which include all sounds that are produced during the mechanism of respiration, may be classified into normal breath sounds and adventitious sounds. Normal breath sounds occur when no respiratory problems exist, whereas adventitious lung sounds (wheeze, rhonchi, crackle, etc.) are usually associated with certain pulmonary pathologies. Heart and lung sounds that are heard using a stethoscope are the result of mechanical interactions that indicate operation of cardiac and respiratory systems, respectively. In this article, we review the research conducted during the last six years on lung and heart sounds, instrumentation and data sources (sensors and databases), technological advances, and perspectives in processing and data analysis. Our review suggests that chronic obstructive pulmonary disease (COPD) and asthma are the most common respiratory diseases reported on in the literature; related diseases that are less analyzed include chronic bronchitis, idiopathic pulmonary fibrosis, congestive heart failure, and parenchymal pathology. Some new findings regarding the methodologies associated with advances in the electronic stethoscope have been presented for the auscultatory heart sound signaling process, including analysis and clarification of resulting sounds to create a diagnosis based on a quantifiable medical assessment. The availability of automatic interpretation of high precision of heart and lung sounds opens interesting possibilities for cardiovascular diagnosis as well as potential for intelligent diagnosis of heart and lung diseases.

  13. Sound level measurements using smartphone "apps": Useful or inaccurate?

    Directory of Open Access Journals (Sweden)

    Daniel R Nast

    2014-01-01

Many recreational activities are accompanied by loud concurrent sounds and decisions regarding the hearing hazards associated with these activities depend on accurate sound measurements. Sound level meters (SLMs) are designed for this purpose, but these are technical instruments that are not typically available in recreational settings and require training to use properly. Mobile technology has made such sound level measurements more feasible for even inexperienced users. Here, we assessed the accuracy of sound level measurements made using five mobile phone applications or "apps" on an Apple iPhone 4S, one of the most widely used mobile phones. Accuracy was assessed by comparing application-based measurements to measurements made using a calibrated SLM. Whereas most apps erred by reporting higher sound levels, one application measured levels within 5 dB of a calibrated SLM across all frequencies tested.

  14. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time-domain solution of the KZK equation.

  15. Development of Optophone with No Diaphragm and Application to Sound Measurement in Jet Flow

    Directory of Open Access Journals (Sweden)

    Yoshito Sonoda

    2012-01-01

The optophone with no diaphragm, which can detect sound waves without disturbing the flow of air or the sound field, is presented as a novel sound measurement technique, and the present status of its development is reviewed in this paper. The method is principally based on Fourier optics, and the sound signal is obtained by detecting the ultrasmall diffraction light generated by phase modulation by sounds. The principle and theory, which were originally developed as a plasma diagnostic technique to measure electron density fluctuations in nuclear fusion research, are briefly introduced. Based on the theoretical analysis, the properties and merits of wave-optical sound detection are presented, and the fundamental experiments and results obtained so far are reviewed. It is shown that sounds from about 100 Hz to 100 kHz can be simultaneously detected by a visible laser beam, and that the method is very useful for sound measurement in aeroacoustics. Finally, the main remaining problems of the optophone for practical use in sound and/or noise measurement, and an outlook on the technology expected in the future, are briefly presented.

  16. Vectorial signalling mechanism required for cell-cell communication during sporulation in Bacillus subtilis.

    Science.gov (United States)

    Diez, Veronica; Schujman, Gustavo E; Gueiros-Filho, Frederico J; de Mendoza, Diego

    2012-01-01

    Spore formation in Bacillus subtilis takes place in a sporangium consisting of two chambers, the forespore and the mother cell, which are linked by pathways of cell-cell communication. One pathway, which couples the proteolytic activation of the mother cell transcription factor σ(E) to the action of a forespore synthesized signal molecule, SpoIIR, has remained enigmatic. Signalling by SpoIIR requires the protein to be exported to the intermembrane space between forespore and mother cell, where it will interact with and activate the integral membrane protease SpoIIGA. Here we show that SpoIIR signal activity as well as the cleavage of its N-terminal extension is strictly dependent on the prespore fatty acid biosynthetic machinery. We also report that a conserved threonine residue (T27) in SpoIIR is required for processing, suggesting that signalling of SpoIIR is dependent on fatty acid synthesis probably because of acylation of T27. In addition, SpoIIR localization in the forespore septal membrane depends on the presence of SpoIIGA. The orchestration of σ(E) activation in the intercellular space by an acylated signal protein provides a new paradigm to ensure local transmission of a weak signal across the bilayer to control cell-cell communication during development. © 2011 Blackwell Publishing Ltd.

  17. Visualization of the hot chocolate sound effect by spectrograms

    Science.gov (United States)

    Trávníček, Z.; Fedorchenko, A. I.; Pavelka, M.; Hrubý, J.

    2012-12-01

We present an experimental and a theoretical analysis of the hot chocolate effect. The sound effect is evaluated using time-frequency signal processing, resulting in a quantitative visualization by spectrograms. This method allows us to capture the whole phenomenon, namely to quantify the dynamics of the rising pitch. A general form of the time dependence of the bubble volume fraction is proposed. We show that the effect occurs due to the nonlinear dependence of the speed of sound in the gas/liquid mixture on the volume fraction of the bubbles and the nonlinear time dependence of the volume fraction of the bubbles.
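The nonlinear dependence of the mixture sound speed on gas volume fraction is commonly described by Wood's equation (our illustration of the standard model, not necessarily the exact formulation used in the paper), in which even a tiny bubble fraction collapses the sound speed far below that of either pure phase:

```python
import math

def wood_sound_speed(void_fraction, c_gas=340.0, c_liq=1480.0,
                     rho_gas=1.2, rho_liq=1000.0):
    """Wood's equation: mixture density and compressibility are
    volume-fraction-weighted averages, and c = 1/sqrt(rho_mix * K_mix),
    where K = 1/(rho * c**2) is the compressibility of each phase."""
    x = void_fraction
    rho_mix = x * rho_gas + (1 - x) * rho_liq
    k_mix = x / (rho_gas * c_gas ** 2) + (1 - x) / (rho_liq * c_liq ** 2)
    return 1.0 / math.sqrt(rho_mix * k_mix)

# Just 1% of bubbles by volume drops the mixture sound speed well
# below the speed of sound in air alone:
print(round(wood_sound_speed(0.01)))
```

As the bubbles dissolve and the void fraction shrinks back toward zero, the sound speed climbs steeply; this is the mechanism behind the rising pitch the spectrograms visualize.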

  18. Sound recovery via intensity variations of speckle pattern pixels selected with variance-based method

    Science.gov (United States)

    Zhu, Ge; Yao, Xu-Ri; Qiu, Peng; Mahmood, Waqas; Yu, Wen-Kai; Sun, Zhi-Bin; Zhai, Guang-Jie; Zhao, Qing

    2018-02-01

In general, sound waves cause vibration of the objects they encounter along their traveling path. If a laser beam illuminates the rough surface of such an object, it is scattered into a speckle pattern that vibrates with the sound waves. Here, an efficient variance-based method is proposed to recover the sound information from speckle patterns captured by a high-speed camera. The method selects, from a small region of the speckle patterns, the pixels whose gray-value variations over time have large variances. The gray-value variations of these pixels are summed together according to a simple model to recover the sound with a high signal-to-noise ratio. Meanwhile, our method significantly simplifies the computation compared with the traditional digital-image-correlation technique. The effectiveness of the proposed method has been verified on a variety of objects. The experimental results illustrate that the proposed method is robust to the quality of the speckle patterns and takes over an order of magnitude less time to process the same number of speckle patterns. In our experiment, a sound signal of 1.876 s duration was recovered from various objects with a time consumption of only 5.38 s.
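The variance-based selection can be sketched as: rank pixels by the temporal variance of their gray values, then sum the mean-removed traces of the top-ranked pixels frame by frame (a schematic of the idea; the data structure and toy numbers are ours):

```python
from statistics import mean, pvariance

def recover_sound(pixel_traces, n_pixels):
    """pixel_traces: dict mapping pixel id -> gray values over frames.
    Select the n_pixels traces with the largest temporal variance and
    sum their mean-removed variations frame by frame."""
    ranked = sorted(pixel_traces,
                    key=lambda p: pvariance(pixel_traces[p]),
                    reverse=True)
    selected = ranked[:n_pixels]
    n_frames = len(next(iter(pixel_traces.values())))
    return [sum(pixel_traces[p][t] - mean(pixel_traces[p])
                for p in selected)
            for t in range(n_frames)]

# Two vibrating pixels carry the signal; the static pixel is ignored.
traces = {"a": [10, 14, 10, 14], "b": [20, 24, 20, 24], "c": [7, 7, 7, 7]}
print(recover_sound(traces, 2))  # recovers the oscillation [-4, 4, -4, 4]
```

Because it only ranks variances and sums a handful of traces, this avoids the per-frame correlation search of digital-image-correlation approaches, which is where the reported speedup comes from.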

  19. Tracheal sound parameters of respiratory cycle phases show differences between flow-limited and normal breathing during sleep

    International Nuclear Information System (INIS)

    Kulkas, A; Huupponen, E; Virkkala, J; Saastamoinen, A; Rauhala, E; Tenhunen, M; Himanen, S-L

    2010-01-01

The objective of the present work was to develop new computational parameters to examine the characteristics of respiratory cycle phases from the tracheal breathing sound signal during sleep. Tracheal sound data from 14 patients (10 males and 4 females) were examined. From each patient, a 10 min long section of normal and a 10 min section of flow-limited breathing during sleep were analysed. The computationally determined proportional durations of the respiratory phases were first investigated. Moreover, the phase durations and breathing sound amplitude levels were used to calculate the area under the breathing sound envelope signal during the inspiration and expiration phases. An inspiratory sound index was then developed to provide the percentage of this area during the inspiratory phase with respect to the combined area of the inspiratory and expiratory phases. The proportional duration of the inspiratory phase showed statistically significantly higher values during flow-limited breathing than during normal breathing, and the inspiratory pause displayed an opposite difference. The inspiratory sound index showed statistically significantly higher values during flow-limited breathing than during normal breathing. The presented novel computational parameters could contribute to the examination of sleep-disordered breathing or serve as a screening tool.
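The inspiratory sound index defined above can be sketched as the envelope area during inspiration expressed as a percentage of the combined inspiratory-plus-expiratory area (a paraphrase of the paper's description; variable names and sample values are ours):

```python
def inspiratory_sound_index(envelope, inspiration_mask):
    """envelope: breathing-sound envelope samples of one respiratory cycle.
    inspiration_mask: True where a sample belongs to the inspiratory phase.
    Returns the inspiratory area as a percentage of the total area."""
    insp_area = sum(e for e, m in zip(envelope, inspiration_mask) if m)
    total_area = sum(envelope)
    return 100.0 * insp_area / total_area

env = [1.0, 2.0, 2.0, 1.0, 1.0, 1.0]            # envelope samples
mask = [True, True, True, False, False, False]  # first half = inspiration
print(inspiratory_sound_index(env, mask))  # 62.5
```

During flow-limited breathing the inspiratory phase lengthens and its envelope area grows, so the index rises, which is the difference the study found statistically significant.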

  20. Acoustic quality and sound insulation between dwellings

    DEFF Research Database (Denmark)

    Rindel, Jens Holger

    1998-01-01

During the years there have been several large field investigations in different countries with the aim to find a relationship between sound insulation between dwellings and the subjective degree of annoyance. This paper presents an overview of the results, and the difficulties in comparing the different findings are discussed. It is tried to establish dose-response relationships between airborne sound insulation or impact sound pressure level according to ISO 717 and the percentage of people being annoyed by noise from neighbours. The slopes of the dose-response curves vary from one investigation to another; however, several of the results show a slope around 4 % per dB. The results may be used to evaluate the acoustic quality level of a certain set of sound insulation requirements, or they may be used as a basis for specifying the desired acoustic quality of future buildings.

  1. Emotional cues, emotional signals, and their contrasting effects on listener valence

    DEFF Research Database (Denmark)

    Christensen, Justin

    2015-01-01

...and of benefit to both the sender and the receiver of the signal; otherwise they would cease to have the intended effect of communication. In contrast with signals, animal cues are much more commonly unimodal, as they are unintentional by the sender. In my research, I investigate whether subjects exhibit... are more emotional cues (e.g. sadness or calmness). My hypothesis is that musical and sound stimuli that are mimetic of emotional signals should combine to elicit a stronger response when presented as a multimodal stimulus as opposed to a unimodal stimulus, whereas musical or sound stimuli that are mimetic of emotional cues interact in less clear and less cohesive manners with their corresponding haptic signals. For my investigations, subjects listen to samples from the International Affective Digital Sounds Library[2] and selected musical works on speakers in combination with a tactile transducer...

  2. Modulation of apical constriction by Wnt signaling is required for lung epithelial shape transition.

    Science.gov (United States)

    Fumoto, Katsumi; Takigawa-Imamura, Hisako; Sumiyama, Kenta; Kaneiwa, Tomoyuki; Kikuchi, Akira

    2017-01-01

    In lung development, the apically constricted columnar epithelium forms numerous buds during the pseudoglandular stage. Subsequently, these epithelial cells change shape into the flat or cuboidal pneumocytes that form the air sacs during the canalicular and saccular (canalicular-saccular) stages, yet the impact of cell shape on tissue morphogenesis remains unclear. Here, we show that the expression of Wnt components is decreased in the canalicular-saccular stages, and that genetically constitutive activation of Wnt signaling impairs air sac formation by inducing apical constriction in the epithelium as seen in the pseudoglandular stage. Organ culture models also demonstrate that Wnt signaling induces apical constriction through apical actomyosin cytoskeletal organization. Mathematical modeling reveals that apical constriction induces bud formation and that loss of apical constriction is required for the formation of an air sac-like structure. We identify MAP/microtubule affinity-regulating kinase 1 (Mark1) as a downstream molecule of Wnt signaling and show that it is required for apical cytoskeletal organization and bud formation. These results suggest that Wnt signaling is required for bud formation by inducing apical constriction during the pseudoglandular stage, whereas loss of Wnt signaling is necessary for air sac formation in the canalicular-saccular stages. © 2017. Published by The Company of Biologists Ltd.

  3. [Realization of Heart Sound Envelope Extraction Implemented on LabVIEW Based on Hilbert-Huang Transform].

    Science.gov (United States)

    Tan, Zhixiang; Zhang, Yi; Zeng, Deping; Wang, Hua

    2015-04-01

We propose a heart sound envelope extraction system in this paper. The system was implemented in LabVIEW based on the Hilbert-Huang transform (HHT). We first used a sound card to collect the heart sound, and then implemented the complete program for signal acquisition, pretreatment, and envelope extraction in LabVIEW based on the theory of the HHT. Finally, we used a case study to show that the system can easily collect heart sound, preprocess it, and extract the envelope. The system retains and displays the characteristics of the heart sound envelope well, and its program and methods are relevant to other research, such as that on vibration and voice.
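As a much simpler stand-in for the HHT-based processing (for illustration only; this is not the authors' algorithm), a heart sound envelope can be approximated by full-wave rectification followed by a moving-average smoother:

```python
def envelope(signal, window):
    """Full-wave rectification followed by a centered moving average.
    window is the (odd) averaging length in samples."""
    rectified = [abs(s) for s in signal]
    half = window // 2
    out = []
    for i in range(len(rectified)):
        segment = rectified[max(0, i - half):i + half + 1]
        out.append(sum(segment) / len(segment))
    return out

# Two bursts separated by silence yield two envelope humps:
x = [0, 1, -1, 0, 0, 0, 2, -2, 0]
print(envelope(x, 3))
```

The HHT replaces the fixed averaging window with data-driven intrinsic mode functions, which is why it tracks the nonstationary S1/S2 heart sound bursts more faithfully than this fixed-window sketch.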

  4. A Generalized Model for Indoor Location Estimation Using Environmental Sound from Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2018-02-01

    The indoor location of individuals is a key contextual variable for commercial and assisted location-based services and applications. Commercial centers and medical buildings (e.g., hospitals) require location information about their users/patients to offer the services that are needed at the correct moment. Several approaches have been proposed to tackle this problem. In this paper, we present the development of an indoor location system that relies on human activity recognition, using sound as an information source to infer the indoor location from the contextual information of the activity being performed at the moment. Features extracted from the sound signal feed a random forest algorithm that generates a model to estimate the location of the user. We evaluate the quality of the resulting model in terms of sensitivity and specificity for each location, and we also perform out-of-bag error estimation. Our experiments were carried out in five representative residential homes, each with four individual indoor rooms. Eleven activities (brewing coffee, cooking eggs, taking a shower, etc.) were performed to provide the contextual information. Experimental results show that an indoor location system (ILS) that uses contextual information from human activities (identified with data provided from the environmental sound) can achieve an estimation that is 95% correct.
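    The feature-extraction-plus-random-forest pipeline described above can be sketched with scikit-learn. Everything below (the three audio features, the synthetic "room" recordings, the forest size) is an assumption for illustration, not the authors' implementation; the out-of-bag score mirrors the paper's out-of-bag error estimation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def sound_features(frame):
    # Hypothetical feature vector: energy, zero-crossing rate, spectral centroid
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
    spec = np.abs(np.fft.rfft(frame))
    centroid = np.sum(np.arange(spec.size) * spec) / np.sum(spec)
    return [energy, zcr, centroid]

# Synthetic frames standing in for sound recorded in four rooms, each with a
# different noise level and tonal component
X, y = [], []
for room in range(4):
    for _ in range(50):
        frame = rng.normal(scale=1 + room, size=256) + np.sin(
            2 * np.pi * (room + 1) * np.arange(256) / 256)
        X.append(sound_features(frame))
        y.append(room)

clf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
clf.fit(X, y)
print(round(clf.oob_score_, 2))  # out-of-bag accuracy of the room classifier
```

    In the paper the features come from real environmental sound and the classes are actual rooms; this sketch only shows the shape of the pipeline.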

  5. Layer specific and general requirements for ERK/MAPK signaling in the developing neocortex

    Science.gov (United States)

    Xing, Lei; Larsen, Rylan S; Bjorklund, George Reed; Li, Xiaoyan; Wu, Yaohong; Philpot, Benjamin D; Snider, William D; Newbern, Jason M

    2016-01-01

    Aberrant signaling through the Raf/MEK/ERK (ERK/MAPK) pathway causes pathology in a family of neurodevelopmental disorders known as 'RASopathies' and is implicated in autism pathogenesis. Here, we have determined the functions of ERK/MAPK signaling in developing neocortical excitatory neurons. Our data reveal a critical requirement for ERK/MAPK signaling in the morphological development and survival of large Ctip2+ neurons in layer 5. Loss of Map2k1/2 (Mek1/2) led to deficits in corticospinal tract formation and subsequent corticospinal neuron apoptosis. ERK/MAPK hyperactivation also led to reduced corticospinal axon elongation, but was associated with enhanced arborization. ERK/MAPK signaling was dispensable for axonal outgrowth of layer 2/3 callosal neurons. However, Map2k1/2 deletion led to reduced expression of Arc and enhanced intrinsic excitability in both layers 2/3 and 5, in addition to imbalanced synaptic excitation and inhibition. These data demonstrate selective requirements for ERK/MAPK signaling in layer 5 circuit development and general effects on cortical pyramidal neuron excitability. DOI: http://dx.doi.org/10.7554/eLife.11123.001 PMID:26848828

  6. 46 CFR 32.15-10 - Sounding machines-T/OCL.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Sounding machines-T/OCL. 32.15-10 Section 32.15-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS SPECIAL EQUIPMENT, MACHINERY, AND HULL REQUIREMENTS Navigation Equipment § 32.15-10 Sounding machines—T/OCL. All mechanically propelled vessels in...

  7. Equivalent threshold sound pressure levels for acoustic test signals of short duration

    DEFF Research Database (Denmark)

    Poulsen, Torben; Daugaard, Carsten

    1998-01-01

    The measurements were performed with two types of headphones, Telephonics TDH-39 and Sennheiser HDA-200. The sound pressure levels were measured in an IEC 318 ear simulator with a Type 1 adapter (a flat plate) and a conical ring. The audiometric methods used in the experiments were the ascending method (ISO 8253...

  8. Experiments on the use of sound as a fish deterrent

    International Nuclear Information System (INIS)

    Turnpenny, A.W.H.; Thatcher, K.P.; Wood, R.; Loeffelman, P.H.

    1993-01-01

    This report describes a series of experimental studies into the potential use of acoustic stimuli to deter fish from water intakes at thermal and hydroelectric power stations. The aim was to enlarge the range of candidate signals for testing, and to apply these in more rigorous laboratory trials and to a wider range of estuarine and marine fish species than was possible in previous preliminary studies. The trials were also required to investigate the degree to which fish might become habituated to the sound signals, consequently reducing their effectiveness. The species of fish which were of interest in this study were the Atlantic salmon (Salmo salar), sea trout (Salmo trutta), the shads (Alosa fallax, A. alosa), the European eel (Anguilla anguilla), bass (Dicentrarchus labrax), herring (Clupea harengus), whiting (Merlangius merlangus) and cod (Gadus morhua). All of these species are considered to be of conservation and/or commercial importance in Britain today and are potentially vulnerable to capture by nuclear, fossil-fuelled and tidal generating stations. Based on the effectiveness of the signals observed in these trials, a properly developed and sited acoustic fish deterrent system is expected to reduce fish impingement significantly at water intakes. Field trials at an estuarine power station are recommended. (author)

  9. Effect of Listening to the Al-Quran on Heart Sound

    Science.gov (United States)

    Daud, N. F.; Sharif, Z.

    2018-03-01

    This paper investigates the effect of listening to chosen verses of the Al-Quran on heart sounds. A heart sound signal is extracted using Thinklabs Phonocardiography software, and its frequency components are then extracted using MATLAB 7.11.0. Frequency components during diastole are compared for two sessions: before and during listening. Diastole is the period during which the chambers of the heart fill with blood while the heart muscle is relaxed. This study found that the frequency of the heart sound during listening to the Al-Quran is lower than before listening. This indicates that a state of calmness can be achieved by listening to these selected verses of the Al-Quran.
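    The before/during frequency comparison can be illustrated with a simple FFT in NumPy. The sampling rate and the two synthetic segments below are assumptions (not the paper's recordings), constructed so the "during" segment has a lower dominant frequency:

```python
import numpy as np

fs = 1000  # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1 / fs)
# Synthetic stand-ins for a diastolic segment before and during listening
before = np.sin(2 * np.pi * 40 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
during = np.sin(2 * np.pi * 30 * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)

def dominant_frequency(x, fs):
    # Frequency bin with the largest magnitude in the one-sided spectrum
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return freqs[np.argmax(spectrum)]

print(dominant_frequency(before, fs), dominant_frequency(during, fs))
```

    With a 2 s segment the frequency resolution is 0.5 Hz, comfortably enough to resolve the kind of shift the paper reports.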

  10. Acoustic quality and sound insulation between dwellings

    DEFF Research Database (Denmark)

    Rindel, Jens Holger

    1999-01-01

    During the years there have been several large field investigations in different countries with the aim to find a relationship between sound insulation between dwellings and the subjective degree of annoyance. This paper presents an overview of the results, and the difficulties in comparing the different findings are discussed. It is tried to establish dose-response relationships between airborne sound insulation or impact sound pressure level according to ISO 717 and the percentage of people being annoyed by noise from neighbours. The slopes of the dose-response curves vary from one investigation to another; however, several of the results show a slope around 4% per dB. The results may be used to evaluate the acoustic quality level of a certain set of sound insulation requirements, or they may be used as a basis for specifying the desired acoustic quality of future buildings.

  11. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology.

  12. Device for precision measurement of speed of sound in a gas

    Science.gov (United States)

    Kelner, Eric; Minachi, Ali; Owen, Thomas E.; Burzynski, Jr., Marion; Petullo, Steven P.

    2004-11-30

    A sensor for measuring the speed of sound in a gas. The sensor has a helical coil, through which the gas flows before entering an inner chamber. Flow through the coil brings the gas into thermal equilibrium with the test chamber body. After the gas enters the chamber, a transducer produces an ultrasonic pulse, which is reflected from each of two faces of a target. The time difference between the two reflected signals is used to determine the speed of sound in the gas.
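    The sensor's two-echo principle reduces to one formula: the second reflection travels an extra round trip across the target, so the speed of sound is twice the face separation divided by the measured delay between the echoes. The numbers below are illustrative assumptions, not values from the patent:

```python
# Hypothetical numbers: the two target faces are 25 mm apart along the
# propagation path, so the second echo travels an extra 2 * L.
L = 0.025      # m, assumed face separation
dt = 145.0e-6  # s, assumed measured delay between the two reflected signals

c = 2 * L / dt  # speed of sound in the gas
print(round(c, 1))  # m/s
```

    With these assumed values the estimate comes out near typical speeds of sound in air; in the actual device the helical inlet coil first brings the gas to thermal equilibrium so that the measured speed corresponds to a known temperature.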

  13. Developmental change in children's sensitivity to sound symbolism.

    Science.gov (United States)

    Tzeng, Christina Y; Nygaard, Lynne C; Namy, Laura L

    2017-08-01

    The current study examined developmental change in children's sensitivity to sound symbolism. Three-, five-, and seven-year-old children heard sound symbolic novel words and foreign words meaning round and pointy and chose which of two pictures (one round and one pointy) best corresponded to each word they heard. Task performance varied as a function of both word type and age group such that accuracy was greater for novel words than for foreign words, and task performance increased with age for both word types. For novel words, children in all age groups reliably chose the correct corresponding picture. For foreign words, 3-year-olds showed chance performance, whereas 5- and 7-year-olds showed reliably above-chance performance. Results suggest increased sensitivity to sound symbolic cues with development and imply that although sensitivity to sound symbolism may be available early and facilitate children's word-referent mappings, sensitivity to subtler sound symbolic cues requires greater language experience. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli may be processed or integrated earlier, despite the auditory stimuli being task-irrelevant. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

  15. Unsupervised Feature Learning for Heart Sounds Classification Using Autoencoder

    Science.gov (United States)

    Hu, Wei; Lv, Jiancheng; Liu, Dongbo; Chen, Yao

    2018-04-01

    Cardiovascular disease seriously threatens the health of many people. It is usually diagnosed during cardiac auscultation, which is a fast and efficient method of cardiovascular disease diagnosis. In recent years, deep learning based on unsupervised feature learning has made significant breakthroughs in many fields. However, to our knowledge, deep learning has not yet been used for heart sound classification. In this paper, we first use the average Shannon energy to extract the envelope of the heart sounds, then find the highest point of S1 to extract the cardiac cycle. We convert the time-domain signals of the cardiac cycle into spectrograms and apply principal component analysis whitening to reduce the dimensionality of the spectrogram. Finally, we apply a two-layer autoencoder to extract the features of the spectrogram. The experimental results demonstrate that the features from the autoencoder are suitable for heart sound classification.
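    The average Shannon energy envelope mentioned above can be sketched as follows; the frame length, sampling rate and synthetic S1-like burst are assumptions for illustration. Sample-wise Shannon energy, −x²·log(x²), emphasizes medium-intensity samples over both low-level noise and sharp peaks, which is why it is popular for heart sound segmentation:

```python
import numpy as np

def shannon_energy_envelope(x, frame_len=64):
    # Normalize to [-1, 1] so x**2 * log(x**2) is well behaved
    x = x / np.max(np.abs(x))
    eps = 1e-12  # avoids log(0)
    se = -(x ** 2) * np.log(x ** 2 + eps)   # sample-wise Shannon energy
    n_frames = len(se) // frame_len
    frames = se[: n_frames * frame_len].reshape(n_frames, frame_len)
    return frames.mean(axis=1)              # average Shannon energy per frame

fs = 2000  # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)
# Synthetic S1-like burst centered at t = 0.3 s
heart = np.exp(-((t - 0.3) ** 2) / 0.0005) * np.sin(2 * np.pi * 50 * t)
env = shannon_energy_envelope(heart)
print(env.argmax())  # index of the frame containing the burst
```

    In the paper's pipeline, the highest point of this envelope around S1 is then used to segment out one cardiac cycle before the spectrogram and autoencoder stages.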

  16. Second Sound for Heat Source Localization

    CERN Document Server

    Vennekate, Hannes; Uhrmacher, Michael; Quadt, Arnulf; Grosse-Knetter, Joern

    2011-01-01

    Defects on the surface of superconducting cavities can limit their accelerating gradient by localized heating, resulting in a phase transition to the normal conducting state: a quench. A new application, involving Oscillating Superleak Transducers (OST) to locate such quench-inducing heat spots on the surface of the cavities, was developed by D. Hartill et al. at Cornell University in 2008. The OSTs enable the detection of heat transfer via second sound in superfluid helium. This thesis presents new results on the analysis of their signal. Its behavior has been studied under different circumstances at setups at the University of Göttingen and at CERN. New approaches for automated signal processing have been developed. Furthermore, a first test setup for a single-cell Superconducting Proton Linac (SPL) cavity has been prepared. Recommendations for better signal retrieval during its operation are presented.

  17. Effects of small variations of speed of sound in optoacoustic tomographic imaging

    International Nuclear Information System (INIS)

    Deán-Ben, X. Luís; Ntziachristos, Vasilis; Razansky, Daniel

    2014-01-01

    Purpose: Speed of sound difference in the imaged object and surrounding coupling medium may reduce the resolution and overall quality of optoacoustic tomographic reconstructions obtained by assuming a uniform acoustic medium. In this work, the authors investigate the effects of acoustic heterogeneities and discuss potential benefits of accounting for those during the reconstruction procedure. Methods: The time shift of optoacoustic signals in an acoustically heterogeneous medium is studied theoretically by comparing different continuous and discrete wave propagation models. A modification of filtered back-projection reconstruction is subsequently implemented by considering a straight acoustic rays model for ultrasound propagation. The results obtained with this reconstruction procedure are compared numerically and experimentally to those obtained assuming a heuristically fitted uniform speed of sound in both full-view and limited-view optoacoustic tomography scenarios. Results: The theoretical analysis showcases that the errors in the time-of-flight of the signals predicted by considering the straight acoustic rays model tend to be generally small. When using this model for reconstructing simulated data, the resulting images accurately represent the theoretical ones. On the other hand, significant deviations in the location of the absorbing structures are found when using a uniform speed of sound assumption. The experimental results obtained with tissue-mimicking phantoms and a mouse postmortem are found to be consistent with the numerical simulations. Conclusions: Accurate analysis of effects of small speed of sound variations demonstrates that accounting for differences in the speed of sound allows improving optoacoustic reconstruction results in realistic imaging scenarios involving acoustic heterogeneities in tissues and surrounding media
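    The straight-acoustic-rays model discussed above amounts to summing a path-length-over-speed term for each medium the ray crosses. The sketch below contrasts that with the uniform-speed assumption, using illustrative (assumed) distances and speeds for tissue and a water coupling medium:

```python
# Straight-acoustic-ray time of flight: path length in each medium divided
# by that medium's speed of sound. All values below are assumed for
# illustration, not taken from the paper.
d_tissue, c_tissue = 0.010, 1570.0   # m, m/s
d_water,  c_water  = 0.030, 1480.0   # m, m/s

t_straight_ray = d_tissue / c_tissue + d_water / c_water
t_uniform = (d_tissue + d_water) / c_water  # naive uniform-medium assumption

shift = t_uniform - t_straight_ray
print(f"{shift * 1e9:.0f} ns")  # timing error of the uniform assumption
```

    A timing error of a few hundred nanoseconds corresponds, at these speeds, to a positional error of several hundred micrometers in the back-projection, which is the kind of structure mislocation the authors report for the uniform-speed reconstruction.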

  18. Non-stationarity of resonance signals from magnetospheric and ionospheric plasmas

    International Nuclear Information System (INIS)

    Higel, Bernard

    1975-01-01

    Rocket observations of resonance signals from ionospheric plasma were made during EIDI relaxation sounding experiments. It appeared that their amplitude, phase, and frequency characteristics are not stationary as a function of the receipt time. The measurement of these nonstationary signals increases the interest presented by resonance phenomena in spatial plasma diagnostics, but this measurement is not easy for frequency non-stationarities. A new, entirely numerical method is proposed for automatic recognition of these signals. It will be used for the selection and real-time processing of signals of the same type to be observed during relaxation sounding experiments on board the future GEOS satellite. In this method, a statistical discrimination is performed on the values taken by several parameters associated with the non-stationarities of the observed resonance signals [fr

  19. Wnt signaling requires retromer-dependent recycling of MIG-14/Wntless in Wnt-producing cells.

    NARCIS (Netherlands)

    Yang, P.T.; Lorenowicz, M.J.; Silhankova, M.; Coudreuse, D.Y.M.; Betist, M.C.; Korswagen, H.C.

    2008-01-01

    Wnt proteins are secreted signaling molecules that play a central role in development and adult tissue homeostasis. We have previously shown that Wnt signaling requires retromer function in Wnt-producing cells. The retromer is a multiprotein complex that mediates endosome-to-Golgi transport of

  20. Wing, tail, and vocal contributions to the complex acoustic signals of courting Calliope hummingbirds

    Directory of Open Access Journals (Sweden)

    Christopher James CLARK

    2011-04-01

    Multi-component signals contain multiple signal parts expressed in the same physical modality. One way to identify individual components is if they are produced by different physical mechanisms. Here, I studied the mechanisms generating acoustic signals in the courtship displays of the Calliope hummingbird Stellula calliope. Display dives consisted of three synchronized sound elements: a high-frequency tone (hft), a low-frequency tone (lft), and atonal sound pulses (asp), which were then followed by a frequency-modulated fall. Manipulating any of the rectrices (tail feathers) of wild males impaired production of the lft and asp but not the hft or fall, which are apparently vocal. I tested the sound production capabilities of the rectrices in a wind tunnel. Single rectrices could generate the lft but not the asp, whereas multiple rectrices tested together produced sounds similar to the asp when they fluttered and collided with their neighbors percussively, representing a previously unknown mechanism of sound production. During the shuttle display, a trill is generated by the wings during pulses in which the wingbeat frequency is elevated to 95 Hz, 40% higher than the typical hovering wingbeat frequency. The Calliope hummingbird courtship displays include sounds produced by three independent mechanisms, and thus include a minimum of three acoustic signal components. These acoustic mechanisms have different constraints and thus potentially contain different messages. Producing multiple acoustic signals via multiple mechanisms may be a way to escape the constraints present in any single mechanism [Current Zoology 57(2): 187-196, 2011].

  1. Directivity of Spherical Polyhedron Sound Source Used in Near-Field HRTF Measurements

    International Nuclear Information System (INIS)

    Yu Guang-Zheng; Xie Bo-Sun; Rao Dan

    2010-01-01

    The omnidirectional character is one of the important requirements for a sound source used in near-field head-related transfer function (HRTF) measurements. Based on an analysis of the radiated sound pressure and directivity of various spherical polyhedron sound sources, a spherical dodecahedral sound source with a radius of 0.035 m is proposed and manufactured. Theoretical and measured results indicate that the sound source is approximately omnidirectional below 8 kHz. In addition, the sound source has a reasonable magnitude response from 350 Hz to 20 kHz and linear phase characteristics. It is therefore suitable for near-field HRTF measurements.

  2. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

    The method presented in this thesis combines ultrasound techniques with magnetic resonance tomography (MRT). In absorbing media, an ultrasonic wave generates a static force in the sound-propagation direction. At sound intensities of a few W/cm² and sound frequencies in the lower MHz range, this force produces a tissue displacement in the micrometer range, which depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence of Siemens Healthcare AG was modified so that it measures the tissue displacement (indirectly), codes it as grey values, and presents it as a 2D image. By means of the grey values, the path of the sound beam in the tissue can be visualized, and so sound obstacles (changes of the sound impedance) can additionally be detected. From the acquired MRT images, spatial changes of the tissue parameters sound absorption and elasticity can be detected. This thesis presents measurements which show the feasibility and future potential of this method, especially for mammary-cancer diagnostics. [de

  3. The influence of signal parameters on the sound source localization ability of a harbor porpoise (Phocoena phocoena)

    NARCIS (Netherlands)

    Kastelein, R.A.; Haan, D.de; Verboom, W.C.

    2007-01-01

    It is unclear how well harbor porpoises can locate sound sources, and thus how well they can locate acoustic alarms on gillnets. Therefore, the ability of a porpoise to determine the location of a sound source was measured. The animal was trained to indicate the active one of 16 transducers in a 16-m-diam

  4. 46 CFR 28.875 - Radar, depth sounding, and auto-pilot.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Radar, depth sounding, and auto-pilot. 28.875 Section 28.875 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY UNINSPECTED VESSELS REQUIREMENTS FOR COMMERCIAL FISHING INDUSTRY VESSELS Aleutian Trade Act Vessels § 28.875 Radar, depth sounding, and auto-pilot...

  5. Statistical Signal Processing by Using the Higher-Order Correlation between Sound and Vibration and Its Application to Fault Detection of Rotational Machine

    Directory of Open Access Journals (Sweden)

    Hisako Masuike

    2008-01-01

    In this study, a stochastic diagnosis method based on changes in not only the linear correlation but also higher-order nonlinear correlations is proposed, in a form suitable for online time-domain signal processing on a personal computer, in order to characterize in detail the mutual relationship between the sound and vibration emitted by rotational machines. More specifically, a conditional probability that hierarchically reflects various types of correlation information is theoretically derived by introducing an expression for the multidimensional probability distribution in orthogonal expansion series form. The effectiveness of the proposed theory is experimentally confirmed by applying it to data observed from a rotational machine driven by an electric motor.
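    The idea of combining linear and higher-order correlations can be illustrated with a toy example: a "sound" that depends quadratically on a "vibration" shows almost no linear correlation with it, yet a strong second-order correlation. All signals below are synthetic assumptions, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
vibration = rng.normal(size=n)
# Hypothetical sound that depends quadratically on the vibration: linearly
# uncorrelated with it, but tied to it through a higher-order moment.
sound = vibration ** 2 + 0.1 * rng.normal(size=n)

def corr(a, b):
    # Normalized correlation coefficient between two signals
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

linear = corr(sound, vibration)       # near zero: no linear relationship
higher = corr(sound, vibration ** 2)  # strong second-order correlation
print(round(linear, 2), round(higher, 2))
```

    A diagnosis scheme that monitored only the linear correlation would miss this dependency entirely, which is the motivation for the hierarchical, higher-order formulation in the paper.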

  6. The Drosophila rolled locus encodes a MAP kinase required in the sevenless signal transduction pathway.

    OpenAIRE

    Biggs, W H; Zavitz, K H; Dickson, B; van der Straten, A; Brunner, D; Hafen, E; Zipursky, S L

    1994-01-01

    Mitogen-activated protein (MAP) kinases have been proposed to play a critical role in receptor tyrosine kinase (RTK)-mediated signal transduction pathways. Although genetic and biochemical studies of RTK pathways in Caenorhabditis elegans, Drosophila melanogaster and mammals have revealed remarkable similarities, a genetic requirement for MAP kinases in RTK signaling has not been established. During retinal development in Drosophila, the sevenless (Sev) RTK is required for development of the ...

  7. Effects of Interaural Level and Time Differences on the Externalization of Sound

    DEFF Research Database (Denmark)

    Dau, Torsten; Catic, Jasmina; Santurette, Sébastien

    Distant sound sources in our environment are perceived as externalized and are thus properly localized in both direction and distance. This is due to the acoustic filtering by the head, torso, and external ears, which provides frequency-dependent shaping of binaural cues, such as interaural level differences (ILDs) and interaural time differences (ITDs). Further, the binaural cues provided by reverberation in an enclosed space may also contribute to externalization. While these spatial cues are available in their natural form when listening to real-world sound sources, hearing-aid signal processing ... is consistent with the physical analysis that showed that a decreased distance to the sound source also reduced the fluctuations in ILDs...

  8. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High-frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For a more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of laminate structures and handy ...

  9. Neural dynamics of learning sound-action associations.

    Directory of Open Access Journals (Sweden)

    Adam McNamara

    A motor component is a prerequisite to any communicative act, as one must inherently move to communicate. To learn to make a communicative act, the brain must be able to dynamically associate arbitrary percepts with the neural substrate underlying the prerequisite motor activity. We aimed to investigate whether brain regions involved in complex gestures (ventral premotor cortex, Brodmann Area 44) were involved in mediating associations between novel abstract auditory stimuli and novel gestural movements. In a functional magnetic resonance imaging (fMRI) study, we asked participants to learn associations between previously unrelated novel sounds and meaningless gestures inside the scanner. We use functional connectivity analysis to eliminate the often-present confound of 'strategic covert naming' when dealing with BA44 and to rule out effects of non-specific reductions in signal. Brodmann Area 44, a region incorporating Broca's region, showed strong, bilateral, negative correlation of the BOLD (blood oxygen level dependent) response with learning of sound-action associations during data acquisition. The left inferior parietal lobule (l-IPL) and bilateral loci in and around visual area V5, the right orbital frontal gyrus, right hippocampus, left parahippocampus, right head of caudate, right insula and left lingual gyrus also showed decreases in BOLD response with learning. Concurrent with these decreases in BOLD response, a psychophysiological interaction (PPI) analysis revealed increasing connectivity between areas of the imaged network, as well as the right middle frontal gyrus, with rising learning performance. The increasing connectivity therefore occurs within an increasingly energy-efficient network as learning proceeds. The strongest learning-related connectivity between regions was found when analysing BA44 and l-IPL seeds. The results clearly show that BA44 and l-IPL are dynamically involved in linking gesture and sound and therefore provide evidence that one of

  10. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  11. Insulin signaling is acutely required for long-term memory in Drosophila.

    Science.gov (United States)

    Chambers, Daniel B; Androschuk, Alaura; Rosenfelt, Cory; Langer, Steven; Harding, Mark; Bolduc, Francois V

    2015-01-01

    Memory formation has recently been shown to be dependent on energy status in Drosophila. A well-established energy sensor is the insulin signaling (InS) pathway. Previous studies in various animal models, including humans, have revealed the role of insulin levels in short-term memory, but its role in long-term memory remains less clear. We therefore investigated genetically the spatial and temporal role of InS using the olfactory learning and long-term memory model in Drosophila. We found that InS is involved in both learning and memory. InS in the mushroom body is required for learning and long-term memory, whereas long-term memory specifically is impaired after InS disruption in the ellipsoid body, where it regulates the level of p70s6k, a downstream target of InS and a marker of protein synthesis. Finally, we show that InS is acutely required for long-term memory formation in adult flies.

  12. Ultra-thin smart acoustic metasurface for low-frequency sound insulation

    Science.gov (United States)

    Zhang, Hao; Xiao, Yong; Wen, Jihong; Yu, Dianlong; Wen, Xisen

    2016-04-01

    Insulating low-frequency sound is a conventional challenge due to the high areal mass required by the mass law. In this letter, we propose a smart acoustic metasurface consisting of an ultra-thin aluminum foil bonded with piezoelectric resonators. Numerical and experimental results show that the metasurface can break the conventional mass law of sound insulation by 30 dB in the low-frequency regime. The sound insulation performance is attributed to the infinite effective dynamic mass density produced by the smart resonators. It is also demonstrated that the excellent sound insulation property can be conveniently tuned by simply adjusting the external circuits instead of modifying the structure of the metasurface.
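    The mass law the authors set out to break can be sketched numerically. The sketch uses the normal-incidence approximation TL ≈ 20·log10(f·m) − 47 dB and an assumed 0.1 mm aluminium foil (values chosen for illustration, not taken from the letter) to show why an ultra-thin panel provides essentially no passive low-frequency insulation:

```python
import math

def mass_law_tl(f_hz, m_kg_m2):
    # Normal-incidence mass law approximation: TL ~ 20*log10(f*m) - 47 dB
    return 20 * math.log10(f_hz * m_kg_m2) - 47

# Assumed 0.1 mm aluminium foil: surface density ~ 2700 kg/m^3 * 1e-4 m
m_foil = 2700 * 1e-4  # kg/m^2

for f in (100, 1000, 10000):
    print(f, "Hz:", round(mass_law_tl(f, m_foil), 1), "dB")
```

    Transmission loss rises only 20 dB per decade of frequency (or of surface mass), so a passive foil is nearly transparent below 1 kHz; adding 30 dB there without adding mass is what the piezoelectric resonators achieve.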

  13. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  15. A stethoscope with wavelet separation of cardiac and respiratory sounds for real time telemedicine implemented on field-programmable gate array

    Science.gov (United States)

    Castro, Víctor M.; Muñoz, Nestor A.; Salazar, Antonio J.

    2015-01-01

    Auscultation is one of the most utilized physical examination procedures for listening to lung, heart and intestinal sounds during routine consults and emergencies. Heart and lung sounds overlap in the thorax. An algorithm was used to separate them based on the discrete wavelet transform with multi-resolution analysis, which decomposes the signal into approximations and details. The algorithm was implemented in software and in hardware to achieve real-time signal separation. The heart signal was found in detail eight and the lung signal in approximation six. The hardware separated the signals with a delay of 256 ms. Sending the wavelet decomposition data, instead of the separated full signal, allows telemedicine applications to function in real time over low-bandwidth communication channels.
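The multi-resolution decomposition described above can be sketched in plain NumPy. The paper's mother wavelet and sampling rate are not stated here, so this illustration uses the Haar wavelet; the band assignments (heart in detail level 8, lung in approximation level 6) depend on those choices and are quoted only as the paper's finding:

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / SQRT2
    d = (x[0::2] - x[1::2]) / SQRT2
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / SQRT2
    x[1::2] = (a - d) / SQRT2
    return x

def wavedec(x, levels):
    """Multi-resolution analysis: (final approximation, [d1 .. d_levels])."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    return a, details

def reconstruct(a, details, keep="approx", level=None):
    """Rebuild the signal from the approximation alone or one detail band."""
    if keep == "approx":
        dets = [np.zeros_like(d) for d in details]
    else:
        dets = [d if i + 1 == level else np.zeros_like(d)
                for i, d in enumerate(details)]
        a = np.zeros_like(a)
    x = a
    for d in reversed(dets):
        x = haar_idwt(x, d)
    return x
```

Because the transform is orthogonal, the approximation reconstruction plus all detail reconstructions sums back to the original mixture, which is what makes transmitting only selected coefficient bands lossless for the bands kept.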

  16. Emission of sound from the mammalian inner ear

    Science.gov (United States)

    Reichenbach, Tobias; Stefanovic, Aleksandra; Nin, Fumiaki; Hudspeth, A. J.

    2013-03-01

    The mammalian inner ear, or cochlea, not only acts as a detector of sound but can also produce tones itself. These otoacoustic emissions are a striking manifestation of the mechanical active process that sensitizes the cochlea and sharpens its frequency discrimination. It remains uncertain how these signals propagate back to the middle ear, from which they are emitted as sound. Although reverse propagation might occur through waves on the cochlear basilar membrane, experiments suggest the existence of a second component in otoacoustic emissions. We have combined theoretical and experimental studies to show that mechanical signals can also be transmitted by waves on Reissner's membrane, a second elastic structure within the cochlea. We have developed a theoretical description of wave propagation on the parallel Reissner's and basilar membranes and its role in the emission of distortion products. By scanning laser interferometry we have measured traveling waves on Reissner's membrane in the gerbil, guinea pig, and chinchilla. The results accord with the theory and thus support a role for Reissner's membrane in otoacoustic emission. T. R. holds a Career Award at the Scientific Interface from the Burroughs Wellcome Fund; A. J. H. is an Investigator of Howard Hughes Medical Institute.

  17. Fractal dimension to classify the heart sound recordings with KNN and fuzzy c-mean clustering methods

    Science.gov (United States)

    Juniati, D.; Khotimah, C.; Wardani, D. E. K.; Budayasa, K.

    2018-01-01

    Heart abnormalities can be detected from heart sound. A heart sound can be heard directly with a stethoscope or indirectly by a phonocardiograph, a machine for recording heart sounds. This paper presents the implementation of fractal dimension theory to classify phonocardiograms into a normal heart sound, a murmur, or an extrasystole. The main algorithm used to calculate the fractal dimension was Higuchi's algorithm. There were two steps in classifying the phonocardiograms: feature extraction and classification. For feature extraction, we used the Discrete Wavelet Transform to decompose the heart sound signal into several sub-bands depending on the selected level. After the decomposition process, the signal was processed using the Fast Fourier Transform (FFT) to determine the spectral frequency. The fractal dimension of the FFT output was calculated using Higuchi's algorithm. The fractal dimensions of all phonocardiograms were then classified with the KNN and fuzzy c-means clustering methods. Based on the research results, the best accuracy obtained was 86.17%, with feature extraction by DWT decomposition at level 3, kmax = 50, 5-fold cross validation, and 5 neighbors in the K-NN algorithm. Meanwhile, for fuzzy c-means clustering, the accuracy was 78.56%.
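Higuchi's algorithm, the core of the feature extraction above, estimates the fractal dimension as the slope of log L(k) versus log(1/k), where L(k) is the mean normalized length of the curve subsampled at interval k. A minimal sketch (the parameter defaults are illustrative, not the paper's settings):

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Estimate the fractal dimension of a 1-D signal (Higuchi, 1988)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    log_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled series x[m], x[m+k], ...
            if idx.size < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((idx.size - 1) * k)   # Higuchi normalisation
            lengths.append(dist * norm / k)
        log_k.append(np.log(1.0 / k))
        log_l.append(np.log(np.mean(lengths)))
    # FD is the slope of log L(k) versus log(1/k)
    slope, _ = np.polyfit(log_k, log_l, 1)
    return slope
```

A smooth, densely sampled curve yields a dimension near 1, while white noise approaches 2, which is what makes the value useful for separating signal classes.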

  18. MO-FG-BRA-02: A Feasibility Study of Integrating Breathing Audio Signal with Surface Surrogates for Respiratory Motion Management

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Y; Zhu, X; Zheng, D; Li, S; Ma, R; Zhang, M; Fan, Q; Wang, X; Verma, V; Zhou, S [University of Nebraska Medical Center, Omaha, NE (United States); Tang, X [Memorial Sloan Kettering Cancer Center, West Harrison, NY (United States)

    2016-06-15

    Purpose: Tracking a surrogate placed on the patient's skin surface sometimes leads to problematic signals for certain patients, such as shallow breathers. This in turn impairs 4D CT image quality and dosimetric accuracy. In this pilot study, we explored the feasibility of monitoring human breathing motion by integrating the breathing sound signal with surface surrogates. Methods: The breathing sound signals were acquired through a microphone attached adjacent to the volunteer's nostrils, and the breathing curves were analyzed using a low-pass filter. Simultaneously, the Real-time Position Management™ (RPM) system from Varian was employed on a volunteer to monitor respiratory motion in both shallow and deep breathing modes. A similar experiment was performed using the Calypso system, with three beacons taped on the volunteer's abdominal region to capture breathing motion. The period of each breathing curve was calculated with autocorrelation functions. The coherence and consistency between breathing signals acquired with the different methods were examined. Results: Clear breathing patterns were revealed by the sound signal, which was coherent with the signals obtained from both the RPM and Calypso systems. For shallow breathing, the periods of the breathing cycle were 3.00±0.19 sec (sound) and 3.00±0.21 sec (RPM); for deep breathing, the periods were 3.49±0.11 sec (sound) and 3.49±0.12 sec (RPM). Compared with the 4.54±0.66 sec period recorded by the Calypso system, the sound measured 4.64±0.54 sec. The additional sound signal could supplement surface monitoring and provide new parameters to model hysteresis in lung motion. Conclusion: Our preliminary study shows that the breathing sound signal provides a way comparable to the RPM system to evaluate respiratory motion. Its instantaneous and robust characteristics make it suitable either as an independent or as an auxiliary method to manage respiratory motion in radiotherapy.
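The period calculation via autocorrelation used above can be sketched as follows. The sampling rate, search window and synthetic breathing signal are illustrative assumptions, not the study's data:

```python
import numpy as np

def breathing_period(signal, fs, min_period=1.0, max_period=10.0):
    """Estimate the dominant breathing period (s) from the autocorrelation peak."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]   # non-negative lags
    lo, hi = int(min_period * fs), int(max_period * fs)
    lag = lo + int(np.argmax(ac[lo:hi]))                # peak within plausible lags
    return lag / fs

# Synthetic 3-second breathing cycle with additive noise (illustrative only).
fs = 50.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * t / 3.0) + 0.3 * rng.standard_normal(t.size)
print(f"estimated period: {breathing_period(sig, fs):.2f} s")
```

Restricting the lag search to a physiologically plausible window keeps the zero-lag peak and long-lag artifacts from dominating the estimate.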

  20. Environmental Sound Recognition Using Time-Frequency Intersection Patterns

    Directory of Open Access Journals (Sweden)

    Xuan Guo

    2012-01-01

    Full Text Available Environmental sound recognition is an important function of robots and intelligent computer systems. In this research, we use a multistage perceptron neural network system for environmental sound recognition. The input data is a combination of time-variance pattern of instantaneous powers and frequency-variance pattern with instantaneous spectrum at the power peak, referred to as a time-frequency intersection pattern. Spectra of many environmental sounds change more slowly than those of speech or voice, so the intersectional time-frequency pattern will preserve the major features of environmental sounds but with drastically reduced data requirements. Two experiments were conducted using an original database and an open database created by the RWCP project. The recognition rate for 20 kinds of environmental sounds was 92%. The recognition rate of the new method was about 12% higher than methods using only an instantaneous spectrum. The results are also comparable with HMM-based methods, although those methods need to treat the time variance of an input vector series with more complicated computations.
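The input construction described above, a time profile of instantaneous powers joined with the spectrum of the frame where the power peaks, can be sketched roughly as follows. The frame size, hop and fixed profile length are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def intersection_pattern(x, frame_len=256, hop=128, n_power=32):
    """Build a compact time-frequency intersection feature: the frame-power
    time profile concatenated with the spectrum at the power peak."""
    x = np.asarray(x, dtype=float)
    frames = np.array([x[i:i + frame_len]
                       for i in range(0, x.size - frame_len + 1, hop)])
    power = (frames ** 2).mean(axis=1)            # instantaneous power per frame
    # Resample the power curve to a fixed length so every sound yields
    # a feature vector of the same size.
    power_profile = np.interp(np.linspace(0, power.size - 1, n_power),
                              np.arange(power.size), power)
    peak = frames[int(np.argmax(power))]
    spectrum = np.abs(np.fft.rfft(peak * np.hanning(frame_len)))
    return np.concatenate([power_profile, spectrum])
```

Compared with a full spectrogram, the feature keeps only one slow time curve and one spectral slice, which matches the abstract's point about drastically reduced data requirements.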

  1. Sound production and pectoral spine locking in a Neotropical catfish (Iheringichthys labrosus, Pimelodidae

    Directory of Open Access Journals (Sweden)

    Javier S. Tellechea

    Full Text Available Catfishes may have two sonic organs: pectoral spines for stridulation and swimbladder drumming muscles. The aim of this study was to characterize the sound production of the catfish Iheringichthys labrosus. Both male and female I. labrosus emit two different types of sounds: stridulatory sounds (655.8 ± 230 Hz) consisting of a train of pulses, and drumming sounds (220 ± 46 Hz), which are composed of single-pulse harmonic signals. Stridulatory sounds are emitted during abduction of the pectoral spine. At the base of the spine there is a dorsal process that bears a series of ridges on its latero-ventral surface, and by pressing the ridges against the groove (with an unspecialized rough surface) during a fin sweep, the animal produces a series of short pulses. Drumming sound is produced by an extrinsic sonic muscle, originating on a flat tendon of the transverse process of the fourth vertebra and inserting on the rostral and ventral surface of the swimbladder. Sounds from both mechanisms are emitted in distress situations. Distress was induced by manipulating fish in a laboratory tank while sounds were recorded. Our results indicate that the catfish initially emits a stridulatory sound, which is followed by a drumming sound. Simultaneous production of stridulatory and drumming sounds was also observed. The catfish drumming sounds were lower in dominant frequency than the stridulatory sounds and also exhibited a small degree of dominant-frequency modulation. Another behaviour observed in this catfish was pectoral spine locking, a reaction always observed before distress sound production. As other authors have outlined, our results suggest that in the catfish I. labrosus stridulatory and drumming sounds may function primarily as distress calls.

  2. Determination of the mechanical thermostat electrical contacts switching quality with sound and vibration analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rejc, Jure; Munih, Marko [University of Ljubljana, Ljubljana (Slovenia)

    2017-05-15

    A mechanical thermostat is a device that switches heating or cooling appliances on or off based on temperature. For this kind of use, electronic or mechanical switching concepts are applied. During the production of electrical contacts, several irregularities can occur, leading to improper switching events of the thermostat electrical contacts. This paper presents a non-obstructive method based on the fact that when the switching event occurs it can be heard and felt by human senses. We performed several laboratory tests with two different methods. The first method analyzes the thermostat switch sound signal during the switching event. The second method is based on sampling an accelerometer signal during the switching event. The results show that the sound analysis approach has great potential. The approach enables accurate determination of the switching event even if the sampled signal also carries the switching event of the neighbouring thermostat.

  3. Behavioral responses by Icelandic White-Beaked Dolphins (Lagenorhynchus albirostris) to playback sounds

    DEFF Research Database (Denmark)

    Rasmussen, Marianne H.; Atem, Ana; Miller, Lee A.

    2016-01-01

    The aim of this study was to investigate how wild white-beaked dolphins (Lagenorhynchus albirostris) respond to the playback of novel, anthropogenic sounds. We used amplitude-modulated tones and synthetic pulse-bursts. (Some authors in the literature use the term "burst pulse" meaning a bu...) ... a response and a change in the natural behavior of a marine mammal, in this case wild white-beaked dolphins. The estimated received levels for tonal signals were from 110 to 160 dB and for pulse-bursts were 153 to 166 dB re 1 μPa (peak-to-peak). Playback of a file with no signal served as a no-sound control in all experiments. The animals responded to all acoustic signals with nine different behavioral responses: (1) circling the array, (2) turning around and approaching the camera, (3) underwater tail slapping, (4) emitting bubbles, (5) turning their belly towards the set-up, (6) emitting pulse-bursts towards the loudspeaker, (7) an increase in swim speed, (8) a change in swim direction, and (9) jumping. A total of 157...

  4. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  5. Sound classification of dwellings in the Nordic countries

    DEFF Research Database (Denmark)

    Rindel, Jens Holger; Turunen-Rise, Iiris

    1997-01-01

    A draft standard INSTA 122:1997 on sound classification of dwellings is up for voting as a common national standard in the Nordic countries (Denmark, Norway, Sweden, Finland, Iceland) and in Estonia. The draft standard specifies a sound classification system with four classes A, B, C and D, where class C is proposed as the future minimum requirement for new dwellings. The classes B and A define criteria for dwellings with improved or very good acoustic conditions, whereas class D may be used for older, renovated dwellings in which the acoustic quality level of a new dwelling cannot reasonably be met. The classification system is based on limit values for airborne sound insulation, impact sound pressure level, reverberation time, and indoor and outdoor noise levels. The purpose of the standard is to offer a tool for specification of a standardised acoustic climate and to promote constructors...

  6. Digital signal processor for silicon audio playback devices; Silicon audio saisei kikiyo digital signal processor

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    The digital audio signal processor (DSP) TC9446F series has been developed for silicon audio playback devices with a memory medium such as flash memory, for DVD players, and for AV devices such as TV sets. It supports AAC (advanced audio coding, 2ch) and MP3 (MPEG1 Layer 3), the audio compression techniques used for transmitting music over the internet. It also supports compression formats such as Dolby Digital, DTS (digital theater system) and MPEG2 audio, as adopted for DVDs. It can carry a built-in audio signal processing program, e.g., Dolby ProLogic, equalizer, sound field control, and 3D sound. The TC9446XB has been newly added to the lineup; it adopts an FBGA (fine pitch ball grid array) package for portable audio devices. (translated by NEDO)

  7. Contemporary methods for realization and estimation of efficiency of 3Daudio technology application for sound interface improvement of an aircraft cabin

    Directory of Open Access Journals (Sweden)

    O. N. Korsun

    2014-01-01

    Full Text Available The high information load on the crew is one of the main problems of modern piloted aircraft; research on improving the form of data representation, especially in critical situations, is therefore a challenge. The article considers one opportunity to improve the interface of a modern pilot's cabin, namely the use of spatial sound (3D audio) technology. 3D audio is a technology that recreates spatially directed sound in earphones or via loudspeakers. Spatial audio alerts, which together with information on a danger also specify the direction from which it proceeds, can reduce the response time to an event and, therefore, increase the situational safety of flight. It is supposed that the alerts will be provided through the pilot's headset; realization of the technology via earphones is therefore discussed. The main hypothesis explaining the human ability to recognize the position of a sound source in space asserts that the human estimates the distortion of the sound signal's spectrum caused by interaction with the head and auricle, which depends on the arrangement of the sound source. For exactly describing these spectral variations there are the concepts of the Head Related Impulse Response (HRIR) and the Head Related Transfer Function (HRTF). HRIR is measured on humans or dummies. At present the most comprehensive public HRIR library is the CIPIC HRTF Database of the CIPIC Interface Laboratory at UC Davis. To obtain the 3D audio effect, it is necessary to convolve a mono signal with linear digital filters whose listener-dependent impulse responses (HRIR) for the left and right ear correspond to the chosen direction. The results should be combined into a stereo file and reproduced through earphones. This scheme was realized in Matlab, and the resulting software was used in experiments to estimate the quantitative characteristics of the technology.
For processing and subsequent experiments the following sound signals were chosen: a fragment of the classical music piece "Polovetsky
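The mono-to-stereo scheme described above amounts to convolving the signal with a direction-specific HRIR pair and interleaving the results. A sketch with placeholder impulse responses; real HRIRs would come from measurements such as the CIPIC database mentioned in the abstract:

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal at a virtual direction by filtering it with the
    left- and right-ear head-related impulse responses for that direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    stereo = np.stack([left, right], axis=1)        # columns: L, R
    peak = np.abs(stereo).max()
    return stereo / peak if peak > 0 else stereo    # normalise for playback

# Toy HRIRs (placeholders, not measured data): the right ear gets a delayed,
# attenuated copy, crudely mimicking a source on the listener's left.
hl = np.zeros(64); hl[0] = 1.0
hr = np.zeros(64); hr[20] = 0.5
tone = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
out = spatialize(tone, hl, hr)
```

Interaural time and level differences are exactly what the delayed, attenuated right-ear filter introduces here; measured HRIRs additionally encode the spectral cues from the head and auricle.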

  8. Acoustic Performance of a Real-Time Three-Dimensional Sound-Reproduction System

    Science.gov (United States)

    Faller, Kenneth J., II; Rizzi, Stephen A.; Aumann, Aric R.

    2013-01-01

    The Exterior Effects Room (EER) is a 39-seat auditorium at the NASA Langley Research Center and was built to support psychoacoustic studies of aircraft community noise. The EER has a real-time simulation environment which includes a three-dimensional sound-reproduction system. This system requires real-time application of equalization filters to compensate for spectral coloration of the sound reproduction due to installation and room effects. This paper describes the efforts taken to develop the equalization filters for use in the real-time sound-reproduction system and the subsequent analysis of the system's acoustic performance. The acoustic performance of the compensated and uncompensated sound-reproduction system is assessed for its crossover performance, its performance under stationary and dynamic conditions, the maximum spatialized sound pressure level it can produce from a single virtual source, and for the spatial uniformity of a generated sound field. Additionally, application examples are given to illustrate the compensated sound-reproduction system performance using recorded aircraft flyovers.

  9. Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments

    Science.gov (United States)

    Han, Wenjing; Coutinho, Eduardo; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan

    2016-01-01

    Coping with the scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce, leading to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation in sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores, and then delivers the candidates with lower scores to human annotators, while those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly fewer labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human-labeled instances is achieved in both the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances. PMID:27627768

  10. Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments.

    Science.gov (United States)

    Han, Wenjing; Coutinho, Eduardo; Ruan, Huabin; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan

    2016-01-01

    Coping with the scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce, leading to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation in sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores, and then delivers the candidates with lower scores to human annotators, while those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly fewer labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human-labeled instances is achieved in both the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances.
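The routing rule at the heart of the method above, low-confidence instances to human annotators and high-confidence instances to machine self-labeling, can be sketched in a few lines; the threshold values are illustrative assumptions:

```python
import numpy as np

def split_by_confidence(confidences, low=0.4, high=0.9):
    """Route instances: high-confidence ones are machine-labeled
    (self-training), low-confidence ones go to human annotators
    (active learning)."""
    c = np.asarray(confidences, dtype=float)
    machine = np.flatnonzero(c >= high)
    human = np.flatnonzero(c <= low)
    deferred = np.flatnonzero((c > low) & (c < high))  # neither queue this round
    return machine, human, deferred

machine, human, deferred = split_by_confidence([0.95, 0.2, 0.6, 0.99, 0.35])
```

In a full system the thresholds would be tuned so that machine-labeled instances are rarely wrong, since self-training errors propagate into the retrained model.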

  11. Hearing Loss Signals Need for Diagnosis

    Science.gov (United States)

    ... you’re talking loudly? Thinking about ordering a hearing aid or sound amplifier from a magazine or ...

  12. Metrics for Polyphonic Sound Event Detection

    Directory of Open Access Journals (Sweden)

    Annamaria Mesaros

    2016-05-01

    Full Text Available This paper presents and discusses various metrics proposed for evaluation of polyphonic sound event detection systems used in realistic situations where there are typically multiple sound sources active simultaneously. The system output in this case contains overlapping events, marked as multiple sounds detected as being active at the same time. The polyphonic system output requires a suitable procedure for evaluation against a reference. Metrics from neighboring fields such as speech recognition and speaker diarization can be used, but they need to be partially redefined to deal with the overlapping events. We present a review of the most common metrics in the field and the way they are adapted and interpreted in the polyphonic case. We discuss segment-based and event-based definitions of each metric and explain the consequences of instance-based and class-based averaging using a case study. In parallel, we provide a toolbox containing implementations of presented metrics.
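As an illustration of the segment-based definitions discussed above, micro-averaged precision, recall and F-score can be computed from boolean activity matrices; this is a simplified sketch, not the toolbox the paper provides:

```python
import numpy as np

def segment_based_prf(reference, estimated):
    """Segment-based precision/recall/F for polyphonic sound event detection.
    Both inputs are boolean (n_segments, n_classes) activity matrices."""
    ref = np.asarray(reference, dtype=bool)
    est = np.asarray(estimated, dtype=bool)
    tp = np.logical_and(ref, est).sum()     # active in both
    fp = np.logical_and(~ref, est).sum()    # detected but not in reference
    fn = np.logical_and(ref, ~est).sum()    # in reference but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```

Because overlapping events simply become multiple true entries per segment, this formulation handles polyphony without any special casing; event-based metrics instead match whole onset/offset pairs.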

  13. A study on the sound quality evaluation model of mechanical air-cleaners

    DEFF Research Database (Denmark)

    Ih, Jeong-Guon; Jang, Su-Won; Jeong, Cheol-Ho

    2009-01-01

    When operating the air-cleaner for a long time, people in a quiet enclosed space expect a low sound at low operational levels for routine cleaning of the air. However, at high operational levels of the cleaner, a powerful yet non-annoying sound is desired, which is connected to a feeling of an immediate cleaning of pollutants. In this context, it is important to evaluate and design the air-cleaner noise to satisfy such contradictory expectations from the customers. In this study, a model for evaluating the sound quality of air-cleaners of the mechanical type was developed based on objective and subjective analyses. Sound signals from various air-cleaners were recorded and edited by increasing or decreasing the loudness in three wide specific-loudness bands: 20-400 Hz (0-3.8 barks), 400-1250 Hz (3.8-10 barks), and 1.25-12.5 kHz (10-22.8 barks). Subjective tests using the edited...

  14. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released in June...

  15. Zebrafish con/disp1 reveals multiple spatiotemporal requirements for Hedgehog-signaling in craniofacial development

    Directory of Open Access Journals (Sweden)

    Schwend Tyler

    2009-11-01

    Full Text Available Abstract Background The vertebrate head skeleton is derived largely from cranial neural crest cells (CNCC). Genetic studies in zebrafish and mice have established that the Hedgehog (Hh) signaling pathway plays a critical role in craniofacial development, partly due to the pathway's role in CNCC development. Disruption of the Hh-signaling pathway in humans can lead to the spectrum disorder of Holoprosencephaly (HPE), which is often characterized by a variety of craniofacial defects including midline facial clefting and cyclopia. Previous work has uncovered a role for Hh-signaling in zebrafish dorsal neurocranium patterning and chondrogenesis; however, Hh-signaling mutants have not been described with respect to the ventral pharyngeal arch (PA) skeleton. Lipid-modified Hh-ligands require the transmembrane-spanning receptor Dispatched 1 (Disp1) for proper secretion from Hh-synthesizing cells to the extracellular field where they act on target cells. Here we study chameleon mutants, lacking a functional disp1 (con/disp1). Results con/disp1 mutants display reduced and dysmorphic mandibular and hyoid arch cartilages and lack all ceratobranchial cartilage elements. CNCC specification and migration into the PA primordia occur normally in con/disp1 mutants; however, disp1 is necessary for post-migratory CNCC patterning and differentiation. We show that disp1 is required for post-migratory CNCC to become properly patterned within the first arch, while the gene is dispensable for CNCC condensation and patterning in more posterior arches. Upon residing in well-formed pharyngeal epithelium, neural crest condensations in the posterior PA fail to maintain expression of two transcription factors essential for chondrogenesis, sox9a and dlx2a, yet continue to robustly express other neural crest markers. Histology reveals that posterior-arch-residing CNCC differentiate into fibrous connective tissue, rather than becoming chondrocytes. Treatments with Cyclopamine, to

  16. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  17. Velocity of sound in, and adiabatic compressibility of, Molten LiF-NaF, LiF-KF, NaF-KF mixtures

    International Nuclear Information System (INIS)

    Minchenko, V.I.; Konovalov, Y.V.; Smirnov, M.V.

    1986-01-01

    The authors measured the velocity of sound as a function of temperature at a frequency of 1.5 MHz in LiF-NaF, NaF-KF, and LiF-KF melts over the entire range of their compositions. The measurements were made by comparing the phases of a reference pulse signal and a signal reflected from the bottom of the crucible. The specified temperatures were maintained constant to within plus or minus 1 degree. The sound conductor consisted of a cylindrical rod of sintered beryllium oxide, which does not interact with the test melts. The study shows that the velocity of sound decreases linearly with increasing temperature. The values of the constants of the empirical equations are presented in a table, with indication of the temperature range. The dependence of the velocity of sound on the composition of the melts is shown, with isotherms for 1250 K given as an example. Variation of the composition by 1-2 mole % leads to an increase or decrease of the velocity of sound by 5-10 m/s.

  18. Detecting interferences with iOS applications to measure speed of sound

    Science.gov (United States)

    Yavuz, Ahmet; Kağan Temiz, Burak

    2016-01-01

    Traditional experiments measuring the speed of sound consist of studying harmonics by changing the length of a glass tube closed at one end. In these experiments, the sound source and observer are outside of the tube. In this paper, we propose modifying this classic experiment by studying destructive interference in a pipe using a headset, an iPhone and an iPad. The iPhone is used as the emitter, running a signal generator application, and the iPad as the receiver, running a spectrogram application. Two measurement configurations are used: the emitter inside the tube with the receiver outside, and vice versa. We conclude that it is possible to adequately and easily measure the speed of sound using a cup or a can of coke with the method described in this paper.
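In the classic closed-tube experiment that this paper modifies, resonances occur at f_n = (2n-1)·v/4L, so adjacent resonances are spaced v/2L and the speed of sound follows from the mean spacing. A sketch with synthetic frequencies, not the paper's measurements:

```python
import numpy as np

def speed_of_sound_from_resonances(resonance_freqs_hz, tube_length_m):
    """For a tube closed at one end, f_n = (2n-1)*v/(4L); adjacent resonances
    are spaced v/(2L), so v is estimated from the mean spacing."""
    f = np.sort(np.asarray(resonance_freqs_hz, dtype=float))
    spacing = np.diff(f).mean()
    return 2.0 * tube_length_m * spacing

# Synthetic check: generate resonances for v = 343 m/s, L = 0.5 m.
v_true, L = 343.0, 0.5
freqs = [(2 * n - 1) * v_true / (4 * L) for n in range(1, 6)]
print(f"recovered v = {speed_of_sound_from_resonances(freqs, L):.1f} m/s")
```

Using the spacing between several resonances, rather than a single frequency, cancels the end-correction offset that shifts every resonance by the same amount.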

  19. Dynamic Assessment of Phonological Awareness for Children with Speech Sound Disorders

    Science.gov (United States)

    Gillam, Sandra Laing; Ford, Mikenzi Bentley

    2012-01-01

    The current study was designed to examine the relationships between performance on a nonverbal phoneme deletion task administered in a dynamic assessment format with performance on measures of phoneme deletion, word-level reading, and speech sound production that required verbal responses for school-age children with speech sound disorders (SSDs).…

  20. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Full Text Available Children and adults consistently match some words (e.g., kiki to jagged shapes and other words (e.g., bouba to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba. Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01. The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  1. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn. For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.

  2. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situatedness of sound. Existing discourses on “spatial sound” privile...

  3. Domain requirements for the Dock adapter protein in growth- cone signaling.

    Science.gov (United States)

    Rao, Y; Zipursky, S L

    1998-03-03

    Tyrosine phosphorylation has been implicated in growth-cone guidance through genetic, biochemical, and pharmacological studies. Adapter proteins containing src homology 2 (SH2) domains and src homology 3 (SH3) domains provide a means of linking guidance signaling through phosphotyrosine to downstream effectors regulating growth-cone motility. The Drosophila adapter, Dreadlocks (Dock), the homolog of mammalian Nck containing three N-terminal SH3 domains and a single SH2 domain, is highly specialized for growth-cone guidance. In this paper, we demonstrate that Dock can couple signals in either an SH2-dependent or an SH2-independent fashion in photoreceptor (R cell) growth cones, and that Dock displays different domain requirements in different neurons.

  4. Analysis of Damped Mass-Spring Systems for Sound Synthesis

    Directory of Open Access Journals (Sweden)

    Don Morgan

    2009-01-01

    Full Text Available There are many ways of synthesizing sound on a computer. The method that we consider, called a mass-spring system, synthesizes sound by simulating the vibrations of a network of interconnected masses, springs, and dampers. Numerical methods are required to approximate the differential equation of a mass-spring system. The standard numerical method used in implementing mass-spring systems for use in sound synthesis is the symplectic Euler method. Implementers and users of mass-spring systems should be aware of the limitations of the numerical methods used; in particular we are interested in the stability and accuracy of the numerical methods used. We present an analysis of the symplectic Euler method that shows the conditions under which the method is stable and the accuracy of the decay rates and frequencies of the sounds produced.
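
As a rough illustration of the method analyzed above, a single damped mass-spring voice can be integrated with the symplectic Euler method; the stability condition discussed in the analysis (step size below 2/ω for the undamped case) is checked explicitly. This is a minimal sketch with arbitrary parameter values, not the authors' implementation:

```python
import math

def synthesize(freq_hz=440.0, damping=3.0, sr=44100, dur=1.0):
    # One damped mass-spring: m*x'' = -k*x - c*x'
    m = 1.0
    k = (2.0 * math.pi * freq_hz) ** 2 * m   # spring constant giving the target pitch
    c = 2.0 * m * damping                    # damper (amplitude decays ~ e^(-damping*t))
    h = 1.0 / sr                             # step size = one audio sample
    # Symplectic Euler is stable for the undamped system only if h < 2/omega:
    assert h < 2.0 / math.sqrt(k / m)
    x, v = 1.0, 0.0                          # initial displacement acts as the "pluck"
    out = []
    for _ in range(int(sr * dur)):
        v += h * (-k * x - c * v) / m        # update velocity first ...
        x += h * v                           # ... then position with the NEW velocity
        out.append(x)
    return out

samples = synthesize()
```

Updating position with the already-updated velocity is what distinguishes symplectic Euler from explicit Euler, which is unstable for undamped oscillators at any step size.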

  5. Verification of the helioseismology travel-time measurement technique and the inversion procedure for sound speed using artificial data

    Energy Technology Data Exchange (ETDEWEB)

    Parchevsky, K. V.; Zhao, J.; Hartlep, T.; Kosovichev, A. G., E-mail: akosovichev@solar.stanford.edu [Stanford University, HEPL, Stanford, CA 94305 (United States)

    2014-04-10

    We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.

  7. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results of this posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?". So a new area of research was born from this collaboration and highlights the value of these interactions and the unintended paths that can occur from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with existing techniques to asses if there was any improvement to detection capability for the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation more difficult via satellite, there have been examples of local monitoring networks and telemetry being destroyed early in the eruptive sequence. The success of local infrasound studies to identify explosions at volcanoes, and calculate plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  8. FGF signaling is required for brain left-right asymmetry and brain midline formation.

    Science.gov (United States)

    Neugebauer, Judith M; Yost, H Joseph

    2014-02-01

Early disruption of FGF signaling alters left-right (LR) asymmetry throughout the embryo. Here we uncover a role for FGF signaling that specifically disrupts brain asymmetry, independent of normal lateral plate mesoderm (LPM) asymmetry. When FGF signaling is inhibited during mid-somitogenesis, the asymmetrically expressed LPM markers southpaw and lefty2 are not affected. However, the asymmetrically expressed brain markers lefty1 and cyclops become bilateral. We show that FGF signaling controls expression of six3b and six7, two transcription factors required for repression of asymmetric lefty1 in the brain. We found that ZO-1, atypical PKC (aPKC) and β-catenin protein distribution revealed a midline structure in the forebrain that is dependent on a balance of FGF signaling. Ectopic activation of FGF signaling leads to overexpression of six3b, loss of organized midline adherens junctions and bilateral loss of lefty1 expression. Reducing FGF signaling leads to a reduction in six3b and six7 expression, an increase in cell boundary formation in the brain midline, and bilateral expression of lefty1. Together, these results suggest a novel role for FGF signaling in the brain in controlling LR asymmetry, six-family transcription factor expression, and a midline barrier structure. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Cortical processing of dynamic sound envelope transitions.

    Science.gov (United States)

    Zhou, Yi; Wang, Xiaoqin

    2010-12-08

    Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.
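
The two envelope features named above, the local average and the local rate of change of sound level, can be sketched as a sliding-window computation. The window size and the input format (a sampled envelope in dB) are illustrative assumptions, not the study's actual parameters:

```python
def envelope_features(env_db, win=5):
    # For each interior sample, compute the two features used to
    # characterize envelope transitions: the local average sound level
    # and the local rate of change (slope) across a 2*win+1 window.
    feats = []
    for n in range(win, len(env_db) - win):
        seg = env_db[n - win:n + win + 1]
        avg = sum(seg) / len(seg)                                  # local average level
        slope = (env_db[n + win] - env_db[n - win]) / (2.0 * win)  # rise/fall rate
        feats.append((avg, slope))
    return feats

# A steadily rising envelope has a constant positive slope everywhere:
feats = envelope_features([float(v) for v in range(20)])
```

A rising signal edge then shows up as a positive slope at moderate average level, and a falling edge as a negative slope, which is the kind of feature axis the reverse-correlation analysis operates on.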

  10. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  11. Sound production to electric discharge: sonic muscle evolution in progress in Synodontis spp. catfishes (Mochokidae).

    Science.gov (United States)

    Boyle, Kelly S; Colleye, Orphal; Parmentier, Eric

    2014-09-22

    Elucidating the origins of complex biological structures has been one of the major challenges of evolutionary studies. Within vertebrates, the capacity to produce regular coordinated electric organ discharges (EODs) has evolved independently in different fish lineages. Intermediate stages, however, are not known. We show that, within a single catfish genus, some species are able to produce sounds, electric discharges or both signals (though not simultaneously). We highlight that both acoustic and electric communication result from actions of the same muscle. In parallel to their abilities, the studied species show different degrees of myofibril development in the sonic and electric muscle. The lowest myofibril density was observed in Synodontis nigriventris, which produced EODs but no swim bladder sounds, whereas the greatest myofibril density was observed in Synodontis grandiops, the species that produced the longest sound trains but did not emit EODs. Additionally, S. grandiops exhibited the lowest auditory thresholds. Swim bladder sounds were similar among species, while EODs were distinctive at the species level. We hypothesize that communication with conspecifics favoured the development of species-specific EOD signals and suggest an evolutionary explanation for the transition from a fast sonic muscle to electrocytes. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  12. Sound lateralization test in adolescent blind individuals.

    Science.gov (United States)

    Yabe, Takao; Kaga, Kimitaka

    2005-06-21

Blind individuals must compensate for the lack of visual information with other sensory inputs; auditory inputs are particularly crucial to such individuals. To investigate whether blind individuals localize sound in space better than sighted individuals, we tested the auditory ability of adolescent blind individuals using a sound lateralization method. The interaural time difference discrimination thresholds of blind individuals were significantly shorter than those of blind individuals with residual vision and of sighted controls. These findings suggest that blind individuals have better auditory spatial ability than individuals with visual cues; some perceptual compensation has therefore occurred in the former.

  13. Spectral integration in speech and non-speech sounds

    Science.gov (United States)

    Jacewicz, Ewa

    2005-04-01

    Spectral integration (or formant averaging) was proposed in vowel perception research to account for the observation that a reduction of the intensity of one of two closely spaced formants (as in /u/) produced a predictable shift in vowel quality [Delattre et al., Word 8, 195-210 (1952)]. A related observation was reported in psychoacoustics, indicating that when the components of a two-tone periodic complex differ in amplitude and frequency, its perceived pitch is shifted toward that of the more intense tone [Helmholtz, App. XIV (1875/1948)]. Subsequent research in both fields focused on the frequency interval that separates these two spectral components, in an attempt to determine the size of the bandwidth for spectral integration to occur. This talk will review the accumulated evidence for and against spectral integration within the hypothesized limit of 3.5 Bark for static and dynamic signals in speech perception and psychoacoustics. Based on similarities in the processing of speech and non-speech sounds, it is suggested that spectral integration may reflect a general property of the auditory system. A larger frequency bandwidth, possibly close to 3.5 Bark, may be utilized in integrating acoustic information, including speech, complex signals, or sound quality of a violin.
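
The hypothesized 3.5 Bark limit can be made concrete with a standard Hz-to-Bark conversion. The sketch below uses Traunmüller's approximation of the Bark critical-band scale; the formant values are illustrative, not taken from the studies cited above:

```python
def hz_to_bark(f_hz):
    # Traunmueller's approximation to the Bark critical-band scale.
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def within_integration_band(f1_hz, f2_hz, limit_bark=3.5):
    # Two components closer than ~3.5 Bark are candidates for
    # spectral integration under the hypothesis discussed above.
    return abs(hz_to_bark(f2_hz) - hz_to_bark(f1_hz)) < limit_bark

close = within_integration_band(300.0, 600.0)   # back-vowel-like F1/F2: ~2.7 Bark apart
far = within_integration_band(500.0, 2500.0)    # front-vowel-like spacing: ~9.6 Bark
```

This shows why the closely spaced formants of /u/ are plausible candidates for integration while widely spaced front-vowel formants are not.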

  14. CD25 and CD69 induction by α4β1 outside-in signalling requires TCR early signalling complex proteins

    Science.gov (United States)

    Cimo, Ann-Marie; Ahmed, Zamal; McIntyre, Bradley W.; Lewis, Dorothy E.; Ladbury, John E.

    2013-01-01

    Distinct signalling pathways producing diverse cellular outcomes can utilize similar subsets of proteins. For example, proteins from the TCR (T-cell receptor) ESC (early signalling complex) are also involved in interferon-α receptor signalling. Defining the mechanism for how these proteins function within a given pathway is important in understanding the integration and communication of signalling networks with one another. We investigated the contributions of the TCR ESC proteins Lck (lymphocyte-specific kinase), ZAP-70 (ζ-chain-associated protein of 70 kDa), Vav1, SLP-76 [SH2 (Src homology 2)-domain-containing leukocyte protein of 76 kDa] and LAT (linker for activation of T-cells) to integrin outside-in signalling in human T-cells. Lck, ZAP-70, SLP-76, Vav1 and LAT were activated by α4β1 outside-in signalling, but in a manner different from TCR signalling. TCR stimulation recruits ESC proteins to activate the mitogen-activated protein kinase ERK (extracellular-signal-regulated kinase). α4β1 outside-in-mediated ERK activation did not require TCR ESC proteins. However, α4β1 outside-in signalling induced CD25 and co-stimulated CD69 and this was dependent on TCR ESC proteins. TCR and α4β1 outside-in signalling are integrated through the common use of TCR ESC proteins; however, these proteins display functionally distinct roles in these pathways. These novel insights into the cross-talk between integrin outside-in and TCR signalling pathways are highly relevant to the development of therapeutic strategies to overcome disease associated with T-cell deregulation. PMID:23758320

  15. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

What is sound? This question is posed in contradiction to the every-day understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  16. Visualizing Sound Directivity via Smartphone Sensors

    Science.gov (United States)

    Hawley, Scott H.; McClain, Robert E., Jr.

    2018-01-01

    When Yang-Hann Kim received the Rossing Prize in Acoustics Education at the 2015 meeting of the Acoustical Society of America, he stressed the importance of offering visual depictions of sound fields when teaching acoustics. Often visualization methods require specialized equipment such as microphone arrays or scanning apparatus. We present a…

  17. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  18. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    Full Text Available We have recently demonstrated that alternating left-right sound sources induce motion perception to static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception, Hidaka et al., 2009. The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in the situation where auditory positional information would have little influence on the perceived position of visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  19. 46 CFR 119.445 - Fill and sounding pipes for fuel tanks.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Fill and sounding pipes for fuel tanks. 119.445 Section... INSTALLATION Specific Machinery Requirements § 119.445 Fill and sounding pipes for fuel tanks. (a) Fill pipes for fuel tanks must be not less than 40 millimeters (1.5 inches) nominal pipe size. (b) There must be...

  20. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  1. Harmonic Frequency Lowering: Effects on the Perception of Music Detail and Sound Quality.

    Science.gov (United States)

    Kirchberger, Martin; Russo, Frank A

    2016-02-01

    A novel algorithm for frequency lowering in music was developed and experimentally tested in hearing-impaired listeners. Harmonic frequency lowering (HFL) combines frequency transposition and frequency compression to preserve the harmonic content of music stimuli. Listeners were asked to make judgments regarding detail and sound quality in music stimuli. Stimuli were presented under different signal processing conditions: original, low-pass filtered, HFL, and nonlinear frequency compressed. Results showed that participants reported perceiving the most detail in the HFL condition. In addition, there was no difference in sound quality across conditions. © The Author(s) 2016.
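
The abstract does not spell out the HFL algorithm, but the two operations it combines behave very differently on a harmonic series, which a small sketch can illustrate. The cutoff and compression ratio below are hypothetical values, not those of the published algorithm:

```python
def transpose(freqs_hz, shift_hz):
    # Frequency transposition: move components down by a fixed offset;
    # harmonic spacing (and hence harmonicity) is preserved.
    return [f - shift_hz for f in freqs_hz]

def compress(freqs_hz, cutoff_hz, ratio):
    # Nonlinear frequency compression: squeeze components above a
    # cutoff; spacing shrinks, so harmonic structure is distorted.
    return [f if f <= cutoff_hz else cutoff_hz + (f - cutoff_hz) / ratio
            for f in freqs_hz]

harmonics = [440.0 * k for k in range(1, 9)]   # harmonic series on 440 Hz
shifted = transpose(harmonics, 440.0)          # spacing still 440 Hz
squeezed = compress(harmonics, 1500.0, 3.0)    # spacing above 1.5 kHz shrinks to ~147 Hz
```

Transposition keeps the partials equally spaced while compression does not, which is the tension a harmonic-preserving combination of the two must resolve.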

  2. Performance evaluation of heart sound cancellation in FPGA hardware implementation for electronic stethoscope.

    Science.gov (United States)

    Chao, Chun-Tang; Maneetien, Nopadon; Wang, Chi-Jo; Chiou, Juing-Shian

    2014-01-01

    This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera cyclone II-EP2C70F89 shows that the proposed ALE used 45% resources of the chip. Experiments with the proposed prototype were made using DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives.
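
The adaptive line enhancer at the heart of this design can be sketched in software. Below is a plain LMS-based ALE run on a synthetic mixture (a sinusoid standing in for the quasi-periodic heart sound, white noise for the breath sound); the filter length, delay, and step size are illustrative choices, not the values used on the FPGA:

```python
import math, random

def ale(x, taps=32, delay=5, mu=0.01):
    # LMS adaptive line enhancer: predict x[n] from a delayed copy of x.
    # The periodic (predictable) part survives the prediction; the
    # broadband (unpredictable) part remains in the error signal.
    w = [0.0] * taps
    predicted, residual = [], []
    for n in range(len(x)):
        u = [x[n - delay - k] if n - delay - k >= 0 else 0.0
             for k in range(taps)]                   # delayed input vector
        y = sum(wi * ui for wi, ui in zip(w, u))     # periodic ("heart") estimate
        e = x[n] - y                                 # residual ("breath")
        for k in range(taps):
            w[k] += 2.0 * mu * e * u[k]              # LMS weight update
        predicted.append(y)
        residual.append(e)
    return predicted, residual

random.seed(0)
n = 4000
tone = [math.sin(2.0 * math.pi * 0.05 * i) for i in range(n)]   # periodic stand-in for heart sound
noise = [random.gauss(0.0, 0.3) for _ in range(n)]              # broadband stand-in for breath
mixed = [t + b for t, b in zip(tone, noise)]
predicted, residual = ale(mixed)
```

After convergence the residual carries mostly the broadband component, which is the suppression effect the paper reports in both the time and frequency domains.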

  4. Reduction of noise in the neonatal intensive care unit using sound-activated noise meters.

    Science.gov (United States)

    Wang, D; Aubertin, C; Barrowman, N; Moreau, K; Dunn, S; Harrold, J

    2014-11-01

    To determine if sound-activated noise meters providing direct audit and visual feedback can reduce sound levels in a level 3 neonatal intensive care unit (NICU). Sound levels (in dB) were compared between a 2-month period with noise meters present but without visual signal fluctuation and a subsequent 2 months with the noise meters providing direct audit and visual feedback. There was a significant increase in the percentage of time the sound level in the NICU was below 50 dB across all patient care areas (9.9%, 8.9% and 7.3%). This improvement was not observed in the desk area where there are no admitted patients. There was no change in the percentage of time the NICU was below 45 or 55 dB. Sound-activated noise meters seem effective in reducing sound levels in patient care areas. Conversations may have moved to non-patient care areas preventing a similar change there. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  5. Are occlusal characteristics, headache, parafunctional habits and clicking sounds associated with the signs and symptoms of temporomandibular disorder in adolescents?

    Science.gov (United States)

    Lauriti, Leandro; Motta, Lara Jansiski; Silva, Paula Fernanda da Costa; Leal de Godoy, Camila Haddad; Alfaya, Thays Almeida; Fernandes, Kristianne Porta Santos; Mesquita-Ferrari, Raquel Agnelli; Bussadori, Sandra Kalil

    2013-10-01

    [Purpose] To assess the association between occlusal characteristics, headache, parafunctional habits and clicking sounds and the signs/symptoms of TMD in adolescents. [Subjects] Adolescents between 14 and 18 years of age. [Methods] The participants were evaluated using the Helkimo Index and a clinical examination to track clicking sounds, parafunctional habits and other signs/symptoms of temporomandibular disorder (TMD). Subjects were classified according to the presence or absence of headache, type of occlusion, facial pattern and type of bite. Statistical analysis employed the chi-square test and Fisher's exact test, with a 5% level of significance. [Results] The sample was made up of 81 adolescents with a mean age of 15.64 years; 51.9% were male. The prevalence of signs/symptoms of TMD was 74.1%, predominantly affecting females. Signs/symptoms of TMD were significantly associated with clicking sounds, headache and nail biting. No associations were found between signs/symptoms of TMD and Angle classification, type of bite or facial pattern. [Conclusion] Headache is one of the symptoms most closely associated with TMD. Clicking sounds were found in the majority of cases. Therefore, the sum of two or more factors may be necessary for the onset and perpetuation of TMD.

  6. Memory for product sounds: the effect of sound and label type.

    Science.gov (United States)

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between the auditory, visual, and semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for sound type suggest that the amount of spectral-temporal structure in a sound can be indicative of memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and recognition of sounds that were self-labeled; the density and complexity of the visual information (i.e., pictograms) hinder memory performance (a 'visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that memory performance for product sounds is task-dependent.

  7. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false In Prince William Sound: Prince William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST....1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  8. Discrimination of fundamental frequency of synthesized vowel sounds in a noise background

    NARCIS (Netherlands)

    Scheffers, M.T.M.

    1984-01-01

    An experiment was carried out, investigating the relationship between the just noticeable difference of fundamental frequency (jndf0) of three stationary synthesized vowel sounds in noise and the signal-to-noise ratio. To this end the S/N ratios were measured at which listeners could just

  9. Social sciences in Puget Sound recovery

    Science.gov (United States)

    Katharine F. Wellman; Kelly Biedenweg; Kathleen Wolf

    2014-01-01

    Advancing the recovery of large-scale ecosystems, such as the Puget Sound in Washington State, requires improved knowledge of the interdependencies between nature and humans in that basin region. As Biedenweg et al. (this issue) illustrate, human wellbeing and human behavior do not occur independently of the biophysical environment. Natural environments contribute to...

  10. Noise detection in heart sound recordings.

    Science.gov (United States)

    Zia, Mohammad K; Griffel, Benjamin; Fridman, Vladimir; Saponieri, Cesare; Semmlow, John L

    2011-01-01

    Coronary artery disease (CAD) is the leading cause of death in the United States. Although progression of CAD can be controlled using drugs and diet, it is usually detected in advanced stages when invasive treatment is required. Current methods to detect CAD are invasive and/or costly, hence not suitable as a regular screening tool to detect CAD in early stages. Currently, we are developing a noninvasive and cost-effective system to detect CAD using the acoustic approach. This method identifies sounds generated by turbulent flow through partially narrowed coronary arteries to detect CAD. The limiting factor of this method is sensitivity to noises commonly encountered in the clinical setting. Because the CAD sounds are faint, these noises can easily obscure the CAD sounds and make detection impossible. In this paper, we propose a method to detect and eliminate noise encountered in the clinical setting using a reference channel. We show that our method is effective in detecting noise, which is essential to the success of the acoustic approach.
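
    The abstract does not give the authors' algorithm; as a hypothetical sketch of the reference-channel idea, one can flag analysis frames in which the chest signal correlates strongly with a second microphone that hears only ambient room noise (the frame length and threshold below are assumptions, not the authors' method):

```python
import numpy as np

def flag_noisy_frames(primary, reference, frame=256, corr_thresh=0.5):
    """Flag frames where the primary channel correlates with the noise
    reference, i.e. where ambient noise likely obscures the faint sounds."""
    n_frames = min(len(primary), len(reference)) // frame
    flags = np.zeros(n_frames, dtype=bool)
    for k in range(n_frames):
        p = primary[k * frame:(k + 1) * frame]
        r = reference[k * frame:(k + 1) * frame]
        flags[k] = abs(np.corrcoef(p, r)[0, 1]) > corr_thresh
    return flags

rng = np.random.default_rng(1)
faint = 0.1 * rng.standard_normal(1024)   # stand-in for faint CAD sounds
ambient = rng.standard_normal(1024)       # room noise seen by the reference mic
primary = faint.copy()
primary[512:] += ambient[512:]            # noise burst leaks in halfway through
flags = flag_noisy_frames(primary, ambient)
```

    Flagged frames would then be excluded from the diagnostic analysis rather than denoised, which matches the goal stated above of detecting noise rather than recovering the signal beneath it.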

  11. The frequency range of TMJ sounds.

    Science.gov (United States)

    Widmalm, S E; Williams, W J; Djurdjanovic, D; McKay, D C

    2003-04-01

    There are conflicting opinions about the frequency range of temporomandibular joint (TMJ) sounds. Some authors claim that the upper limit is about 650 Hz. The aim was to test the hypothesis that TMJ sounds may contain frequencies well above 650 Hz but that significant amounts of their energy are lost if the vibrations are recorded using contact sensors and/or travel far through the head tissues. Time-frequency distributions of 172 TMJ clickings (three subjects) were compared between recordings with one microphone in the ear canal and a skin contact transducer above the clicking joint and between recordings from two microphones, one in each ear canal. The energy peaks of the clickings recorded with a microphone in the ear canal on the clicking side were often well above 650 Hz and always in a significantly higher area (range 117-1922 Hz, P 375 Hz) or in microphone recordings from the opposite ear canal (range 141-703 Hz). Future studies are required to establish normative frequency range values of TMJ sounds but need methods also capable of recording the high frequency vibrations.

  12. Judging the similarity of soundscapes does not require categorization: evidence from spliced stimuli.

    Science.gov (United States)

    Aucouturier, Jean-Julien; Defreville, Boris

    2009-04-01

    This study uses an audio signal transformation, splicing, to create an experimental situation where human listeners judge the similarity of audio signals, which they cannot easily categorize. Splicing works by segmenting audio signals into 50-ms frames, then shuffling and concatenating these frames back in random order. Splicing a signal masks the identification of the categories that it normally elicits: For instance, human participants cannot easily identify the sound of cars in a spliced recording of a city street. This study compares human performance on both normal and spliced recordings of soundscapes and music. Splicing is found to degrade human similarity performance significantly less for soundscapes than for music: When two spliced soundscapes are judged similar to one another, the original recordings also tend to sound similar. This establishes that humans are capable of reconstructing consistent similarity relations between soundscapes without relying much on the identification of the natural categories associated with such signals, such as their constituent sound sources. This finding contradicts previous literature and points to new ways to conceptualize the different ways in which humans perceive soundscapes and music.
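
    The splicing transformation is described concretely enough to sketch directly: segment the signal into 50-ms frames, shuffle, and concatenate (the frame length comes from the abstract; the RNG seed is an arbitrary choice for reproducibility):

```python
import numpy as np

def splice(signal, sample_rate, frame_ms=50, seed=0):
    """Cut a signal into 50-ms frames, shuffle them, and concatenate,
    following the splicing transformation described above."""
    frame = int(sample_rate * frame_ms / 1000)
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    order = np.random.default_rng(seed).permutation(n)
    return frames[order].reshape(-1)

sample_rate = 16000
x = np.arange(sample_rate, dtype=float)  # 1 s ramp: reordering is easy to see
y = splice(x, sample_rate)               # same samples, frames in random order
```

    The spliced output preserves the short-term spectral content of each frame while destroying the long-term temporal structure that category identification relies on.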

  13. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  14. Timbral aspects of reproduced sound in small rooms. I

    DEFF Research Database (Denmark)

    Bech, Søren

    1995-01-01

    This paper reports some of the influences of individual reflections on the timbre of reproduced sound. A single loudspeaker with frequency-independent directivity characteristics, positioned in a listening room of normal size with frequency-independent absorption coefficients of the room surfaces, has been simulated using an electroacoustic setup. The model included the direct sound, 17 individual reflections, and the reverberant field. The threshold of detection and just-noticeable differences for an increase in level were measured for individual reflections using eight subjects for noise and speech. The results have shown that the first-order floor and ceiling reflections are likely to individually contribute to the timbre of reproduced speech. For a noise signal, additional reflections from the left sidewall will contribute individually. The level of the reverberant field has been found...

  15. Application of semi-supervised deep learning to lung sound analysis.

    Science.gov (United States)

    Chamberlain, Daniel; Kodgule, Rahul; Ganelin, Daniela; Miglani, Vivek; Fletcher, Richard Ribon

    2016-08-01

    The analysis of lung sounds, collected through auscultation, is a fundamental component of pulmonary disease diagnostics for primary care and general patient monitoring for telemedicine. Despite advances in computation and algorithms, the goal of automated lung sound identification and classification has remained elusive. Over the past 40 years, published work in this field has demonstrated only limited success in identifying lung sounds, with most published studies using only a small number of patients (typically N...). Here we present a semi-supervised deep learning algorithm for automatically classifying lung sounds from a relatively large number of patients (N=284). Focusing on the two most common lung sounds, wheeze and crackle, we present results from 11,627 sound files recorded from 11 different auscultation locations on these 284 patients with pulmonary disease. 890 of these sound files were labeled to evaluate the model, a significantly larger set than in previously published studies. Data were collected with a custom mobile phone application and a low-cost (US$30) electronic stethoscope. On this data set, our algorithm achieves ROC curves with AUCs of 0.86 for wheeze and 0.74 for crackle. Most importantly, this study demonstrates how semi-supervised deep learning can be used with larger data sets without requiring extensive labeling of data.

  16. A deafening flash! Visual interference of auditory signal detection.

    Science.gov (United States)

    Fassnidge, Christopher; Cecconi Marcotti, Claudia; Freeman, Elliot

    2017-03-01

    In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded 'Yes' when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual 'Morse-code' sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities.

  17. 46 CFR 167.20-17 - Bilge pumps, bilge piping and sounding arrangements.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Bilge pumps, bilge piping and sounding arrangements. 167... Ships § 167.20-17 Bilge pumps, bilge piping and sounding arrangements. The number, capacity, and arrangement of bilge pumps and bilge piping shall be in accordance with the requirements for cargo vessels...

  18. Design of Wearable Breathing Sound Monitoring System for Real-Time Wheeze Detection

    Directory of Open Access Journals (Sweden)

    Shih-Hong Li

    2017-01-01

    Full Text Available In the clinic, the wheezing sound is usually considered an indicator symptom reflecting the degree of airway obstruction. Auscultation is the most common way to diagnose wheezing sounds, but it depends subjectively on the experience of the physician. Several previous studies attempted to extract the features of breathing sounds to detect wheezing sounds automatically. However, there is still a lack of suitable monitoring systems for real-time wheeze detection in daily life. In this study, a wearable and wireless breathing sound monitoring system for real-time wheeze detection was proposed. Moreover, a breathing sound analysis algorithm was designed to continuously extract and analyze the features of breathing sounds, providing objective, quantitative information about breathing sounds to professional physicians. Here, normalized spectral integration (NSI) was also designed and applied to wheeze detection. The proposed algorithm requires only short-term data of breathing sounds and low computational complexity to perform real-time wheeze detection, and is suitable for implementation in a commercial portable device with relatively low computing power and memory. The experimental results show that the proposed system detects wheezes accurately and might be a useful assisting tool for the analysis of breathing sounds in clinical diagnosis.
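
    The abstract names normalized spectral integration (NSI) but gives no formula; one plausible reading, used here purely as an assumption, is the power integrated over a wheeze-prone band normalized by total spectral power, which rises when a tonal wheeze is present (the band edges are also assumptions):

```python
import numpy as np

def nsi(segment, fs, band=(400.0, 1600.0)):
    """Normalized spectral integration (assumed form): fraction of total
    spectral power falling inside a wheeze-prone frequency band (0..1)."""
    power = np.abs(np.fft.rfft(segment)) ** 2
    freqs = np.fft.rfftfreq(len(segment), 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power.sum()

fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(2)
normal_breath = rng.standard_normal(fs)                          # broadband noise
wheezy_breath = normal_breath + 5 * np.sin(2 * np.pi * 600 * t)  # added tonal wheeze
```

    A running NSI above a calibrated threshold would then mark wheezy segments; a short FFT over recent samples keeps the computational cost compatible with the low-power portable device described above.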

  19. Can road traffic mask sound from wind turbines? Response to wind turbine sound at different levels of road traffic sound

    International Nuclear Information System (INIS)

    Pedersen, Eja; Berg, Frits van den; Bakker, Roel; Bouma, Jelte

    2010-01-01

    Wind turbines are favoured in the switch-over to renewable energy. Suitable sites for further developments could be difficult to find as the sound emitted from the rotor blades calls for a sufficient distance to residents to avoid negative effects. The aim of this study was to explore if road traffic sound could mask wind turbine sound or, in contrast, increase annoyance due to wind turbine noise. Annoyance of road traffic and wind turbine noise was measured in the WINDFARMperception survey in the Netherlands in 2007 (n=725) and related to calculated levels of sound. The presence of road traffic sound did not in general decrease annoyance with wind turbine noise, except when levels of wind turbine sound were moderate (35-40 dB(A) Lden) and the road traffic sound level exceeded that level by at least 20 dB(A). Annoyance with both noises was intercorrelated but this correlation was probably due to the influence of individual factors. Furthermore, visibility and attitude towards wind turbines were significantly related to noise annoyance of modern wind turbines. The results can be used for the selection of suitable sites, possibly favouring already noise exposed areas if wind turbine sound levels are sufficiently low.

  20. The meaning of city noises: Investigating sound quality in Paris (France)

    Science.gov (United States)

    Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie

    2004-05-01

    The sound quality of Paris (France) was investigated by using field inquiries in actual environments (open questionnaires) and using recordings under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and of mathematical analyses of similarity judgments. Results show that auditory judgments mainly rely on source identification. The appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces has been demonstrated, in relation to pleasantness judgments: soundscapes with human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This calls for considering both quantitative and qualitative descriptions to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on inherent properties of the acoustic signal.

  1. Application of Carbon Nanotube Assemblies for Sound Generation and Heat Dissipation

    Science.gov (United States)

    Kozlov, Mikhail; Haines, Carter; Oh, Jiyoung; Lima, Marcio; Fang, Shaoli

    2011-03-01

    Nanotech approaches were explored for the efficient transformation of an electrical signal into sound, heat, cooling action, and mechanical strain. The studies are based on the aligned arrays of multi-walled carbon nanotubes (MWNT forests) that can be grown on various substrates using a conventional CVD technique. They form a three-dimensional conductive network that possesses uncommon electrical, thermal, acoustic and mechanical properties. When heated with an alternating current or a near-IR laser modulated in the 0.01–20 kHz range, the nanotube forests produce loud, audible sound. High generated sound pressure and broad frequency response (beyond 20 kHz) show that the forests act as efficient thermo-acoustic (TA) transducers. They can generate intense third and fourth TA harmonics that reveal peculiar interference-like patterns from ac-dc voltage scans. A strong dependence of the patterns on forest height can be used for characterization of carbon nanotube assemblies and for evaluation of properties of thermal interfaces. Because of good coupling with surrounding air, the forests provide excellent dissipation of heat produced by IC chips. Thermoacoustic converters based on forests can be used for thermo- and photo-acoustic sound generation, amplification and noise cancellation.

  2. Structure-borne sound structural vibrations and sound radiation at audio frequencies

    CERN Document Server

    Cremer, L; Petersson, Björn AT

    2005-01-01

    "Structure-Borne Sound" is a thorough introduction to structural vibrations with emphasis on audio frequencies and the associated radiation of sound. The book presents in-depth discussions of fundamental principles and basic problems, in order to enable the reader to understand and solve his own problems. It includes chapters dealing with measurement and generation of vibrations and sound, various types of structural wave motion, structural damping and its effects, impedances and vibration responses of the important types of structures, as well as with attenuation of vibrations, and sound radiation.

  3. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S

    2018-02-21

    Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. 
Listeners were presented with sounds drawn from

  4. 77 FR 36260 - Proposed Information Collection; Comment Request; Puget Sound Recreational Shellfish Harvesting...

    Science.gov (United States)

    2012-06-18

    ... Collection; Comment Request; Puget Sound Recreational Shellfish Harvesting Project AGENCY: National Oceanic..., as required by the Paperwork Reduction Act of 1995. DATES: Written comments must be submitted on or... for a new collection of information. The Puget Sound estuary provides one of the most valuable...

  5. Sub-Audible Speech Recognition Based upon Electromyographic Signals

    Science.gov (United States)

    Jorgensen, Charles C. (Inventor); Lee, Diana D. (Inventor); Agabon, Shane T. (Inventor)

    2012-01-01

    Method and system for processing and identifying a sub-audible signal formed by a source of sub-audible sounds. Sequences of samples of sub-audible sound patterns ("SASPs") for known words/phrases in a selected database are received for overlapping time intervals, and Signal Processing Transforms ("SPTs") are formed for each sample, as part of a matrix of entry values. The matrix is decomposed into contiguous, non-overlapping two-dimensional cells of entries, and neural net analysis is applied to estimate reference sets of weight coefficients that provide sums with optimal matches to reference sets of values. The reference sets of weight coefficients are used to determine a correspondence between a new (unknown) word/phrase and a word/phrase in the database.
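
    As a rough illustration of the pipeline described (overlapping time intervals, a transform per sample window, and decomposition of the resulting matrix into contiguous, non-overlapping two-dimensional cells), the sketch below uses an FFT magnitude as a stand-in for the patent's Signal Processing Transforms; the window, hop, and cell sizes are illustrative assumptions:

```python
import numpy as np

def transform_matrix(x, win=128, hop=64):
    """One transform row per overlapping window (FFT magnitude used here as a
    stand-in for the patent's Signal Processing Transforms)."""
    starts = range(0, len(x) - win + 1, hop)
    return np.array([np.abs(np.fft.rfft(x[s:s + win])) for s in starts])

def to_cells(matrix, cell_rows=4, cell_cols=5):
    """Decompose the matrix into contiguous, non-overlapping 2-D cells,
    trimming edges that do not fill a whole cell."""
    r = (matrix.shape[0] // cell_rows) * cell_rows
    c = (matrix.shape[1] // cell_cols) * cell_cols
    m = matrix[:r, :c]
    return (m.reshape(r // cell_rows, cell_rows, c // cell_cols, cell_cols)
             .swapaxes(1, 2))

x = np.sin(2 * np.pi * 0.05 * np.arange(1024))  # stand-in sub-audible pattern
M = transform_matrix(x)        # rows: overlapping windows, cols: frequency bins
cells = to_cells(M)            # cells[i, j] is one 4x5 block of entries from M
```

    Each cell would then feed the neural-net stage that estimates the reference sets of weight coefficients; `cells[i, j]` here corresponds to one such two-dimensional cell of matrix entries.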

  6. 3-D Sound for Virtual Reality and Multimedia

    Science.gov (United States)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  7. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus).

    Directory of Open Access Journals (Sweden)

    Mary Flaherty

    Full Text Available The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable-initial stop consonants and whether this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with "d" or "t" and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal.

  8. Concepts for evaluation of sound insulation of dwellings - from chaos to consensus?

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Rindel, Jens Holger

    2005-01-01

    Legal sound insulation requirements have existed for more than 50 years in some countries, and single-number quantities for evaluation of sound insulation have existed nearly as long. However, the concepts have changed considerably over time from simple arithmetic averaging of frequency bands... ...requirements and classification schemes revealed significant differences of concepts. The paper summarizes the history of concepts, the disadvantages of the present chaos and the benefits of consensus concerning concepts for airborne and impact sound insulation between dwellings and airborne sound insulation of facades... ...with a trend towards light-weight constructions are contradictory and challenging. This calls for exchange of data and experience, implying a need for harmonized concepts, including use of spectrum adaptation terms. The paper will provide input for future discussions in EAA TC-RBA WG4: "Sound insulation...

  9. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  10. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  11. The relationship between sound insulation and acoustic quality in dwellings

    DEFF Research Database (Denmark)

    Rindel, Jens Holger

    1998-01-01

    During the years there have been several large field investigations in different countries with the aim to find a relationship between sound insulation between dwellings and the subjective degree of annoyance. This paper presents an overview of the results, and the difficulties in comparing the different findings are discussed. It is tried to establish dose-response relationships between airborne sound insulation or impact sound pressure level according to ISO 717 and the percentage of people being annoyed by noise from neighbours. The slopes of the dose-response curves vary from one investigation to another; however, several of the results show a slope around 4 % per dB. The results may be used to evaluate the acoustic quality level of a certain set of sound insulation requirements, or they may be used as a basis for specifying the desired acoustic quality of future buildings.

  12. Generation and control of sound bullets with a nonlinear acoustic lens.

    Science.gov (United States)

    Spadoni, Alessandro; Daraio, Chiara

    2010-04-20

    Acoustic lenses are employed in a variety of applications, from biomedical imaging and surgery to defense systems and damage detection in materials. Focused acoustic signals, for example, enable ultrasonic transducers to image the interior of the human body. Currently however the performance of acoustic devices is limited by their linear operational envelope, which implies relatively inaccurate focusing and low focal power. Here we show a dramatic focusing effect and the generation of compact acoustic pulses (sound bullets) in solid and fluid media, with energies orders of magnitude greater than previously achievable. This focusing is made possible by a tunable, nonlinear acoustic lens, which consists of ordered arrays of granular chains. The amplitude, size, and location of the sound bullets can be controlled by varying the static precompression of the chains. Theory and numerical simulations demonstrate the focusing effect, and photoelasticity experiments corroborate it. Our nonlinear lens permits a qualitatively new way of generating high-energy acoustic pulses, which may improve imaging capabilities through increased accuracy and signal-to-noise ratios and may lead to more effective nonintrusive scalpels, for example, for cancer treatment.

  13. Sound quality measures for speech in noise through a commercial hearing aid implementing digital noise reduction.

    Science.gov (United States)

    Ricketts, Todd A; Hornsby, Benjamin W Y

    2005-05-01

    This brief report discusses the effect of digital noise reduction (DNR) processing on aided speech recognition and sound quality measures in 14 adults fitted with a commercial hearing aid. Measures of speech recognition and sound quality were obtained in two different speech-in-noise conditions (71 dBA speech, +6 dB SNR and 75 dBA speech, +1 dB SNR). The results revealed that the presence or absence of DNR processing did not impact speech recognition in noise (either positively or negatively). Paired comparisons of sound quality for the same speech-in-noise signals, however, revealed a strong preference for DNR processing. These data suggest that at least one implementation of DNR processing is capable of providing improved sound quality, for speech in noise, in the absence of improved speech recognition.

  14. Notch signal reception is required in vascular smooth muscle cells for ductus arteriosus closure

    Science.gov (United States)

    Krebs, Luke T.; Norton, Christine R.; Gridley, Thomas

    2017-01-01

    Summary The ductus arteriosus is an arterial vessel that shunts blood flow away from the lungs during fetal life, but normally occludes after birth to establish the adult circulation pattern. Failure of the ductus arteriosus to close after birth is termed patent ductus arteriosus, and is one of the most common congenital heart defects. Our previous work demonstrated that vascular smooth muscle cell expression of the Jag1 gene, which encodes a ligand for Notch family receptors, is essential for postnatal closure of the ductus arteriosus in mice. However, it was not known what cell population was responsible for receiving the Jag1-mediated signal. Here we show, using smooth muscle cell-specific deletion of the Rbpj gene, which encodes a transcription factor that mediates all canonical Notch signaling, that Notch signal reception in the vascular smooth muscle cell compartment is required for ductus arteriosus closure. These data indicate that homotypic vascular smooth muscle cell interactions are required for proper contractile smooth muscle cell differentiation and postnatal closure of the ductus arteriosus in mice. PMID:26742650

  15. Photoacoustic signal and noise analysis for Si thin plate: signal correction in frequency domain.

    Science.gov (United States)

    Markushev, D D; Rabasović, M D; Todorović, D M; Galović, S; Bialkowski, S E

    2015-03-01

    Methods for photoacoustic signal measurement, rectification, and analysis for 85 μm thin Si samples in the 20-20 000 Hz modulation frequency range are presented. Methods are also given for frequency-dependent amplitude and phase signal rectification in the presence of coherent and incoherent noise as well as distortion due to microphone characteristics. Signal correction is accomplished using inverse system response functions deduced by comparing real to ideal signals for a sample with well-known bulk parameters and dimensions. The system response is a piecewise construction, each component being due to a particular effect of the measurement system. Heat transfer and elastic effects are modeled using standard Rosencwaig-Gersho and elastic-bending theories. Thermal diffusion, thermoelastic, and plasmaelastic signal components are calculated and compared to measurements. The differences between theory and experiment are used to detect and correct signal distortion and to determine detector and sound-card characteristics. Corrected signal analysis is found to faithfully reflect known sample parameters.
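
    The inverse-response correction described above amounts to a division in the frequency domain. The sketch below illustrates the idea under simplified assumptions; the single-pole microphone roll-off is a hypothetical stand-in for the measured system response, and none of the names come from the paper:

```python
import numpy as np

def correct_response(measured, response):
    """Divide a measured frequency-domain signal by the instrument
    response to recover the ideal (corrected) signal. The response is
    assumed to have been deduced beforehand from a reference sample
    with well-known parameters."""
    eps = 1e-12  # guard against division by zero where the response vanishes
    return measured / (response + eps)

# Toy demonstration: a flat "true" spectrum distorted by a hypothetical
# low-frequency roll-off, then recovered by dividing out the response.
freqs = np.linspace(20, 20000, 1000)
true_signal = np.ones_like(freqs, dtype=complex)
response = 1.0 / (1.0 + 1j * 100.0 / freqs)   # assumed microphone roll-off
measured = true_signal * response
corrected = correct_response(measured, response)
print(np.allclose(corrected, true_signal, atol=1e-6))  # True
```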

  16. Computerised respiratory sounds can differentiate smokers and non-smokers.

    Science.gov (United States)

    Oliveira, Ana; Sen, Ipek; Kahya, Yasemin P; Afreixo, Vera; Marques, Alda

    2017-06-01

    Cigarette smoking is often associated with the development of several respiratory diseases; however, if diagnosed early, the changes in the lung tissue caused by smoking may be reversible. Computerised respiratory sounds have been shown to be sensitive to changes within the lung tissue before any other measure; however, it is unknown whether they can detect changes in the lungs of healthy smokers. This study investigated the differences between computerised respiratory sounds of healthy smokers and non-smokers. Healthy smokers and non-smokers were recruited from a university campus. Respiratory sounds were recorded simultaneously at 6 chest locations (right and left anterior, lateral and posterior) using air-coupled electret microphones. Airflow (1.0-1.5 l/s) was recorded with a pneumotachograph. Breathing phases were detected using airflow signals and respiratory sounds with validated algorithms. Forty-four participants were enrolled: 18 smokers (mean age 26.2, SD = 7 years; mean FEV1 % predicted 104.7, SD = 9) and 26 non-smokers (mean age 25.9, SD = 3.7 years; mean FEV1 % predicted 96.8, SD = 20.2). Smokers presented a significantly higher frequency at maximum sound intensity during inspiration (M = 117, SD = 16.2 Hz vs. M = 106.4, SD = 21.6 Hz; t(43) = -2.62, p = 0.0081, dz = 0.55), lower expiratory sound intensities (maximum intensity: M = 48.2, SD = 3.8 dB vs. M = 50.9, SD = 3.2 dB; t(43) = 2.68, p = 0.001, dz = -0.78; mean intensity: M = 31.2, SD = 3.6 dB vs. M = 33.7, SD = 3 dB; t(43) = 2.42, p = 0.001, dz = 0.75) and a higher number of inspiratory crackles (median [interquartile range] 2.2 [1.7-3.7] vs. 1.5 [1.2-2.2], p = 0.081, U = 110, r = -0.41) than non-smokers. Significant differences between computerised respiratory sounds of smokers and non-smokers have been found. Changes in respiratory sounds are often the earliest sign of disease. Thus, computerised respiratory sounds
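
    The "frequency at maximum sound intensity" feature reported above can be computed from a windowed spectrum. This is a minimal sketch with an assumed sampling rate, not the validated pipeline used in the study:

```python
import numpy as np

def freq_at_max_intensity(signal, fs):
    """Return the frequency (Hz) at which the magnitude spectrum of a
    Hann-windowed recording peaks, one of the features compared
    between smokers and non-smokers."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic check: a 117 Hz tone should peak at ~117 Hz.
fs = 4000  # assumed sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
tone = np.sin(2 * np.pi * 117 * t)
print(freq_at_max_intensity(tone, fs))  # ~117.0
```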

  17. A comparison between swallowing sounds and vibrations in patients with dysphagia

    Science.gov (United States)

    Movahedi, Faezeh; Kurosu, Atsuko; Coyle, James L.; Perera, Subashan

    2017-01-01

    Cervical auscultation refers to the observation and analysis of sounds or vibrations captured during swallowing using either a stethoscope or acoustic/vibratory detectors. Microphones and accelerometers have recently become two common sensors used in modern cervical auscultation methods. There are open questions about whether swallowing signals recorded by these two sensors provide unique or complementary information about swallowing function, or whether the information they present is interchangeable. The aim of this study is to present a broad comparison of swallowing signals recorded by a microphone and a tri-axial accelerometer from 72 patients (mean age 63.94 ± 12.58 years, 42 male, 30 female) who underwent videofluoroscopic examination. The participants swallowed one or more boluses of thickened liquids of different consistencies, including thin liquids, nectar-thick liquids, and pudding. A comfortable self-selected volume from a cup or a volume controlled by the examiner from a 5 ml spoon was given to the participants. A comprehensive set of features was extracted in the time, information-theoretic, and frequency domains from each of the 881 swallows presented in this study. The swallowing sounds exhibited significantly higher frequency content and kurtosis values than the swallowing vibrations. In addition, the Lempel-Ziv complexity was lower for swallowing sounds than for swallowing vibrations. To conclude, the information provided by microphones and accelerometers about swallowing function is unique, and these two transducers are not interchangeable. Consequently, the selection of transducer is a vital step in future studies. PMID:28495001
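
    One of the complexity features compared above can be sketched as follows. The Lempel-Ziv variant used here (an LZ78-style distinct-phrase count on a median-binarized signal) is an illustrative assumption, not necessarily the exact estimator used by the authors:

```python
import numpy as np

def lempel_ziv_complexity(bits):
    """Count distinct phrases while scanning left to right: at each
    position, take the shortest substring not yet seen as a phrase
    (an LZ78-style parsing). More complex signals yield more phrases."""
    s = "".join(map(str, bits))
    phrases, i = set(), 0
    while i < len(s):
        j = i + 1
        while s[i:j] in phrases and j <= len(s):
            j += 1
        phrases.add(s[i:j])
        i = j
    return len(phrases)

def binarize(x):
    """Median-threshold a signal into a 0/1 sequence."""
    return (np.asarray(x) > np.median(x)).astype(int)

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)                 # broadband, complex
tone = np.sin(np.linspace(0, 20 * np.pi, 2000))   # periodic, simple
print(lempel_ziv_complexity(binarize(noise)) >
      lempel_ziv_complexity(binarize(tone)))      # True
```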

  18. Efficient Geometric Sound Propagation Using Visibility Culling

    Science.gov (United States)

    Chandak, Anish

    2011-07-01

    Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. We can generate smooth, artifact-free output audio signals by applying
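
    The image-source method mentioned above can be illustrated in its simplest form: first-order specular reflections off the walls of an axis-aligned rectangular room. This toy sketch is only the basic construction; the thesis's FastV and AD-Frustum algorithms generalize it to complex scenes with visibility culling:

```python
def first_order_image_sources(source, room):
    """Mirror a point source across each wall of an axis-aligned
    rectangular room [0, Lx] x [0, Ly] x [0, Lz]. Each mirrored
    position is the virtual source of one first-order specular
    reflection path."""
    images = []
    for axis, length in enumerate(room):
        for wall in (0.0, length):
            img = list(source)
            img[axis] = 2 * wall - source[axis]  # reflect across the wall plane
            images.append(tuple(img))
    return images

room = (5.0, 4.0, 3.0)     # hypothetical room dimensions in metres
source = (1.0, 2.0, 1.5)
images = first_order_image_sources(source, room)
print(len(images))   # 6 walls -> 6 image sources
print(images[0])     # (-1.0, 2.0, 1.5): mirrored across the x=0 wall
```

Higher-order reflections follow by mirroring the image sources recursively; the thesis's contribution is pruning the exponentially growing set of images with conservative visibility tests.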

  19. Software development for the analysis of heartbeat sounds with LabVIEW in diagnosis of cardiovascular disease.

    Science.gov (United States)

    Topal, Taner; Polat, Hüseyin; Güler, Inan

    2008-10-01

    In this paper, a time-frequency spectral analysis software package (Heart Sound Analyzer) for the computer-aided analysis of cardiac sounds has been developed with LabVIEW. Its software modules reveal important information on cardiovascular disorders and can assist general physicians in reaching more accurate and reliable diagnoses at early stages. The Heart Sound Analyzer (HSA) software can compensate for the shortage of expert doctors and support them in rural as well as urban clinics and hospitals. HSA has two main blocks: data acquisition and preprocessing, and time-frequency spectral analysis. The heart sounds are first acquired using a modified stethoscope which has an electret microphone in it. Then, the signals are analysed using time-frequency/scale spectral analysis techniques such as the STFT, Wigner-Ville distribution and wavelet transforms. HSA modules have been tested with real heart sounds from 35 volunteers and proved to be quite efficient and robust while dealing with a large variety of pathological conditions.
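
    The STFT stage of such a tool can be sketched in a few lines. The window length, hop size, and synthetic "heart sound" below are illustrative assumptions, not HSA's actual LabVIEW implementation:

```python
import numpy as np

def stft_mag(x, fs, win=256, hop=128):
    """Magnitude short-time Fourier transform with a Hann window,
    the first of the time-frequency analyses listed in the abstract.
    Returns (frequencies, freq x time magnitude matrix)."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1)).T
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    return freqs, mags

# Two damped bursts standing in for the S1 and S2 heart sounds.
fs = 2000
t = np.arange(0, 1.0, 1.0 / fs)
x = (np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 0.10) ** 2) / 1e-3)
     + np.sin(2 * np.pi * 80 * t) * np.exp(-((t - 0.45) ** 2) / 1e-3))
freqs, mags = stft_mag(x, fs)
peak_freq = freqs[np.argmax(mags.max(axis=1))]
print(20 <= peak_freq <= 120)  # energy concentrated at low frequencies
```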

  20. Sound, music and gender in mobile games

    DEFF Research Database (Denmark)

    Machin, David; Van Leeuwen, T.

    2016-01-01

    resource, they can communicate very specific meanings and carry ideologies. In this paper, using multimodal critical discourse analysis, we analyse the sounds and music in two proto-games that are played on mobile devices: Genie Palace Divine and Dragon Island Race. While visually the two games are highly...... and impersonal and specific kinds of social relations which, we show, is highly gendered. It can also signal priorities, ideas and values, which in both cases, we show, relate to a world where there is simply no time to stop and think. © 2016, equinox publishing....

  1. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear, but our uses for light and sound go far beyond simply seeing a photo or hearing a song. Lasers, concentrated beams of light, are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  2. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How, then, is it possible to design visual objects that interact with the sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material reveal the form...... geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form....

  3. Effects of stratification and fluctuations on sound propagation in the deep ocean

    International Nuclear Information System (INIS)

    March, R.H.

    1979-01-01

    It is noted that even in a homogeneous ocean, the effects of non-thermal noise and sound absorption limit the maximum effective range of detection of acoustic signals from particle cascades to distances of 2 to 10 kilometers, depending on the surface conditions prevailing and the directional characteristics of the detector. In the present paper, the effects of stratification and fluctuations in the sound velocity profile in the deep ocean over distances of this order are examined. Attention is given to two effects of potential significance, refraction and scintillation. It is found that neither effect has any significant consequences at ranges of less than 10 km

  4. Developmental Changes in Locating Voice and Sound in Space

    Science.gov (United States)

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7 month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  5. [Study of biometric identification of heart sound base on Mel-Frequency cepstrum coefficient].

    Science.gov (United States)

    Chen, Wei; Zhao, Yihua; Lei, Sheng; Zhao, Zikai; Pan, Min

    2012-12-01

    Heart sound is a physiological signal with individual characteristics generated by the heart beat. To perform individual classification and recognition, we present in this paper a study using the wavelet transform for signal denoising, with the Mel-frequency cepstrum coefficients (MFCC) as the feature parameters, and propose reducing the feature dimensionality through principal component analysis (PCA). We have done a preliminary study to test the feasibility of a biometric identification method using heart sounds. The results showed that under the selected experimental conditions, the system could reach a 90% recognition rate. This study can provide a reference for further research.
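
    The PCA dimensionality-reduction step described above can be sketched with a plain SVD. The feature dimensions are illustrative (13 MFCC-like values per heart sound is an assumption, not the paper's exact configuration):

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project feature vectors (rows) onto their top principal
    components, as used to reduce MFCC feature dimensionality."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Toy data: 100 "heart sound" feature vectors of 13 MFCC-like values.
rng = np.random.default_rng(42)
feats = rng.standard_normal((100, 13))
reduced = pca_reduce(feats, 5)
print(reduced.shape)  # (100, 5)
```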

  6. Identification of Bearing Failure Using Signal Vibrations

    Science.gov (United States)

    Yani, Irsyadi; Resti, Yulia; Burlian, Firmansyah

    2018-04-01

    Vibration analysis can be used to identify damage to mechanical systems such as journal bearings. Failure can be identified by observing the vibration spectrum obtained by measuring the vibration signal occurring in a mechanical system. The bearing is one of the machine elements commonly used in mechanical systems. The main purpose of this research is to monitor the bearing condition and to identify bearing failure in a mechanical system by observing the resulting vibration. Data were collected by recording the sounds caused by the vibration of the mechanical system, and a database of bearing failures was then built from these vibration sound recordings. The next step was to group the bearing damage by type based on the database obtained. The results show that the success rate in identifying bearing damage is 98%.
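
    Identifying a damaged bearing from its vibration spectrum reduces to finding the dominant frequency components in the recording. The 160 Hz "defect tone" below is a hypothetical example, not a measured defect frequency:

```python
import numpy as np

def dominant_frequencies(signal, fs, n=3):
    """Return the n strongest frequency components of a vibration
    recording; pronounced peaks at a bearing's characteristic defect
    frequencies indicate damage."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    order = np.argsort(spectrum)[::-1][:n]
    return sorted(float(f) for f in freqs[order])

# A healthy 25 Hz shaft tone plus a hypothetical 160 Hz defect tone.
fs = 2048
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 160 * t)
print(dominant_frequencies(x, fs, n=2))  # [25.0, 160.0]
```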

  7. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  8. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  9. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  10. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  11. SoundScapes - Beyond Interaction... in search of the ultimate human-centred interface

    DEFF Research Database (Denmark)

    Brooks, Tony

    2006-01-01

    that can also benefit communication. To achieve this a new generation of intuitive natural interfaces will be required and SoundScapes (see below) is a step toward this goal to discover the ultimate interface for matching the human experience to technology. Emergent hypothesis that have developed...... as a result of the SoundScapes research will be discussed. Introduction to SoundScapes SoundScapes is a contemporary art concept that has become widely known as an interdisciplinary platform for knowledge exchange, innovative product creation of creative and scientific work that uses non-invasive sensor...... Resonance. The multimedia content is adaptable so that the environment is tailored for each participant according to a user profile. This full body movement or the smallest of gesture results in human data input to SoundScapes. The same technology that enables this empowerment is used for performance art...

  12. Suppression of grasshopper sound production by nitric oxide-releasing neurons of the central complex

    Science.gov (United States)

    Weinrich, Anja; Kunst, Michael; Wirmer, Andrea; Holstein, Gay R.

    2008-01-01

    The central complex of acridid grasshoppers integrates sensory information pertinent to reproduction-related acoustic communication. Activation of nitric oxide (NO)/cyclic GMP-signaling by injection of NO donors into the central complex of restrained Chorthippus biguttulus females suppresses muscarine-stimulated sound production. In contrast, sound production is released by aminoguanidine (AG)-mediated inhibition of nitric oxide synthase (NOS) in the central body, suggesting a basal release of NO that suppresses singing in this situation. Using anti-citrulline immunocytochemistry to detect recent NO production, subtypes of columnar neurons with somata located in the pars intercerebralis and tangential neurons with somata in the ventro-median protocerebrum were distinctly labeled. Their arborizations in the central body upper division overlap with expression patterns for NOS and with the site of injection where NO donors suppress sound production. Systemic application of AG increases the responsiveness of unrestrained females to male calling songs. Identical treatment with the NOS inhibitor that increased male song-stimulated sound production in females induced a marked reduction of citrulline accumulation in central complex columnar and tangential neurons. We conclude that behavioral situations that are unfavorable for sound production (like being restrained) activate NOS-expressing central body neurons to release NO and elevate the behavioral threshold for sound production in female grasshoppers. PMID:18574586

  13. Flights of fear: a mechanical wing whistle sounds the alarm in a flocking bird.

    Science.gov (United States)

    Hingee, Mae; Magrath, Robert D

    2009-12-07

    Animals often form groups to increase collective vigilance and allow early detection of predators, but this benefit of sociality relies on rapid transfer of information. Among birds, alarm calls are not present in all species, while other proposed mechanisms of information transfer are inefficient. We tested whether wing sounds can encode reliable information on danger. Individuals taking off in alarm fly more quickly or ascend more steeply, so may produce different sounds in alarmed than in routine flight, which then act as reliable cues of alarm, or honest 'index' signals in which a signal's meaning is associated with its method of production. We show that crested pigeons, Ocyphaps lophotes, which have modified flight feathers, produce distinct wing 'whistles' in alarmed flight, and that individuals take off in alarm only after playback of alarmed whistles. Furthermore, amplitude-manipulated playbacks showed that response depends on whistle structure, such as tempo, not simply amplitude. We believe this is the first demonstration that flight noise can send information about alarm, and suggest that take-off noise could provide a cue of alarm in many flocking species, with feather modification evolving specifically to signal alarm in some. Similar reliable cues or index signals could occur in other animals.

  14. Airborne sound insulation descriptors in the Nordic building regulations - Overview special rules and benefits of changing descriptors

    DEFF Research Database (Denmark)

    Helimäki, Heikki; Rasmussen, Birgit

    2010-01-01

    All Nordic countries have sound insulation requirements specified in the building regulations or in sound classification schemes, Class C, referred to in the regulations and published as national standards, which all originate from a common Nordic INSTA-B proposal from the 90’s, thus having a lot...... These national rules are not easy to find, unless all details of standards and other documents are known and studied carefully, and they cause problems since the building industry is not national anymore. This paper gives an overview of special national rules in the Nordic countries regarding airborne sound insulation requirements and is related to an equivalent paper about impact sound insulation requirements. The papers also describe the major benefits of reducing the number of special rules and of changing descriptors to those which best support protection of the residents and development of the building....

  15. Intelligent Systems Approaches to Product Sound Quality Analysis

    Science.gov (United States)

    Pietila, Glenn M.

    As a product market becomes more competitive, consumers become more discriminating in the way in which they differentiate between engineered products. The consumer often makes a purchasing decision based on the sound emitted from the product during operation by using the sound to judge quality or annoyance. Therefore, in recent years, many sound quality analysis tools have been developed to evaluate the consumer preference as it relates to a product sound and to quantify this preference based on objective measurements. This understanding can be used to direct a product design process in order to help differentiate the product from competitive products or to establish an impression on consumers regarding a product's quality or robustness. The sound quality process is typically a statistical tool that is used to model subjective preference, or merit score, based on objective measurements, or metrics. In this way, new product developments can be evaluated in an objective manner without the laborious process of gathering a sample population of consumers for subjective studies each time. The most common model used today is the Multiple Linear Regression (MLR), although recently non-linear Artificial Neural Network (ANN) approaches are gaining popularity. This dissertation will review publicly available published literature and present additional intelligent systems approaches that can be used to improve on the current sound quality process. The focus of this work is to address shortcomings in the current paired comparison approach to sound quality analysis. This research will propose a framework for an adaptive jury analysis approach as an alternative to the current Bradley-Terry model. The adaptive jury framework uses statistical hypothesis testing to focus on sound pairings that are most interesting and is expected to address some of the restrictions required by the Bradley-Terry model. It will also provide a more amicable framework for an intelligent systems approach
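
    The baseline MLR model mentioned above, a merit score regressed on objective metrics, can be sketched with ordinary least squares on synthetic jury data. The metric names and coefficients here are invented for illustration:

```python
import numpy as np

# Hypothetical jury data: merit scores modeled from two objective
# metrics (say, loudness and sharpness) via multiple linear regression.
rng = np.random.default_rng(3)
metrics = rng.uniform(0, 1, size=(30, 2))       # 30 sounds, 2 metrics
true_coefs = np.array([-4.0, -1.5])             # louder/sharper -> lower merit
merit = 8.0 + metrics @ true_coefs + rng.normal(0, 0.02, 30)

# Least-squares fit with an intercept column.
X = np.column_stack([np.ones(len(metrics)), metrics])
coefs, *_ = np.linalg.lstsq(X, merit, rcond=None)
print(np.allclose(coefs, [8.0, -4.0, -1.5], atol=0.1))  # True
```

With the fitted coefficients, a new product's merit score can be predicted from its measured metrics alone, without convening a jury each time.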

  16. Impact sound insulation improvement of wooden floors on concrete slabs

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Hoffmeyer, Dan; Hansen, Rói

    2014-01-01

    Improvement of impact sound insulation is one of the major challenges when renovating housing. In Denmark, building regulations for impact sound in new-build were strengthened 5 dB in 2008, implying a main requirement L’n,w ≤ 53 dB between dwellings. The same value should also be a goal when renovating housing. In Denmark, there are about 1 million dwellings in multi-storey housing. About half of the dwellings are built with timber floors, and the other half with wooden floors on concrete slabs, either in-situ cast or prefabricated hollow-core elements. In a project including mapping of sound insulation in the Danish housing stock and investigation of improvement possibilities, a pilot laboratory study of wooden floors on concrete was carried out. The laboratory study included impact sound improvement measurements of full-scale samples (10 m2) fulfilling the conditions in EN ISO 10140...

  17. Compressive Sensing Based Source Localization for Controlled Acoustic Signals Using Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    Wei Ke

    2017-01-01

    In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at a few points in the discrete spatial domain, our method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on the compressive sensing (CS) theory. In this method, a two-step discrete cosine transform- (DCT-) based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary for matching the changes of audio signals, and then the sparse solution could better represent location estimations. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate l0 norm minimization to enhance reconstruction performance for sparse signals in low signal-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation results and experimental results where substantial improvement for localization performance can be obtained in the noisy and reverberant conditions.
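
    The sparse-recovery step can be illustrated with plain orthogonal matching pursuit on a toy grid of candidate source locations. This is a stand-in for, and much simpler than, the block-sparse approximate-l0 algorithm proposed in the paper; an orthogonal dictionary is used here only to keep the demo deterministic:

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: estimate a k-sparse x from
    y = A @ x by repeatedly picking the dictionary column most
    correlated with the residual, then re-fitting by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# 50 candidate source locations on a discrete spatial grid, two of
# them active; 50 "microphone" measurements via an orthogonal dictionary.
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((50, 50)))
x_true = np.zeros(50)
x_true[[7, 33]] = [1.0, -0.8]
x_hat = omp(A, A @ x_true, k=2)
print(sorted(np.nonzero(x_hat)[0].tolist()))  # [7, 33]
```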

  18. Sound segregation via embedded repetition is robust to inattention.

    Science.gov (United States)

    Masutomi, Keiko; Barascud, Nicolas; Kashino, Makio; McDermott, Josh H; Chait, Maria

    2016-03-01

    The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process is dependent on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual task design in which stimuli to be segregated were presented along with stimuli for a "decoy" task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention. (c) 2016 APA, all rights reserved.

  19. Sound design for diesel passenger cars

    Energy Technology Data Exchange (ETDEWEB)

    Belluscio, Michele; Ruotolo, Romualdo [GM Powertrain Europe, Torino (Italy); Schoenherr, Christian; Schuster, Guenter [GM Europe, Ruesselsheim (Germany); Eisele, Georg; Genender, Peter; Wolff, Klaus; Van Keymeulen, Johan [FEV Motorentechnik GmbH, Aachen (Germany)

    2008-07-01

    With the growing market share of diesel passenger cars in Europe, it becomes more important to create a brand- and market-segment-specific vehicle sound. Besides the usually considered pleasantness-related topics such as diesel knocking and high noise excitation, it is also important to fulfil the requirements regarding a dynamic vehicle impression. This impression is mainly influenced by the load dependency of the engine-induced noise, which is reduced for diesel engines due to the missing throttle valve and the damping effect of the turbocharger and the diesel particulate filter. By means of a detailed noise transfer path analysis, the contributions with dynamic potential can be identified. Furthermore, the load dependency of a certain noise contribution can itself be strengthened, which allows for a dynamic sound character comparable to sporty gasoline vehicles. (orig.)

  20. Notch signaling activation in human embryonic stem cells is required for embryonic but not trophoblastic lineage commitment

    OpenAIRE

    Yu, Xiaobing; Zou, Jizhong; Ye, Zhaohui; Hammond, Holly; Chen, Guibin; Tokunaga, Akinori; Mali, Prashant; Li, Yue-Ming; Civin, Curt; Gaiano, Nicholas; Cheng, Linzhao

    2008-01-01

    The Notch signaling pathway plays important roles in cell fate determination during embryonic development and adult life. In this study, we focus on the role of Notch signaling in governing cell fate choices in human embryonic stem (hES) cells. Using genetic and pharmacological approaches, we achieved both blockade and conditional activation of Notch signaling in several hES cell lines. We report here that activation of Notch signaling is required for undifferentiated hES cells to form the pr...

  1. Signals, processes, and systems an interactive multimedia introduction to signal processing

    CERN Document Server

    Karrenberg, Ulrich

    2013-01-01

    This is a very new concept for learning Signal Processing, not only from the physically-based scientific fundamentals, but also from the didactic perspective, based on modern results of brain research. The textbook together with the DVD form a learning system that provides investigative studies and enables the reader to interactively visualize even complex processes. The unique didactic concept is built on visualizing signals and processes on the one hand, and on graphical programming of signal processing systems on the other. The concept has been designed especially for microelectronics, computer technology and communication. The book allows the reader to develop, modify, and optimize useful applications using DasyLab - a professional and globally supported software for metrology and control engineering. With the 3rd edition, the software is also suitable for 64-bit systems running on Windows 7. Real signals can be acquired, processed and played on the sound card of your computer. The book provides more than 200 pre-pr...

  2. Production of grooming-associated sounds by chimpanzees (Pan troglodytes) at Ngogo: variation, social learning, and possible functions.

    Science.gov (United States)

    Watts, David P

    2016-01-01

    Chimpanzees (Pan troglodytes) use some communicative signals flexibly and voluntarily, with use influenced by learning. These signals include some vocalizations and also sounds made using the lips, oral cavity, and/or teeth, but not the vocal tract, such as "attention-getting" sounds directed at humans by captive chimpanzees and lip smacking during social grooming. Chimpanzees at Ngogo, in Kibale National Park, Uganda, make four distinct sounds while grooming others. Here, I present data on two of these ("splutters" and "teeth chomps") and consider whether social learning contributes to variation in their production and whether they serve social functions. Higher congruence in the use of these two sounds between dyads of maternal relatives than dyads of non-relatives implies that social learning occurs and mostly involves vertical transmission, but the results are not conclusive and it is unclear which learning mechanisms may be involved. In grooming between adult males, tooth chomps and splutters were more likely in long than in short bouts; in bouts that were bidirectional rather than unidirectional; in grooming directed toward high-ranking males than toward low-ranking males; and in bouts between allies than in those between non-allies. Males were also more likely to make these sounds while they were grooming other males than while they were grooming females. These results are expected if the sounds promote social bonds and induce tolerance of proximity and of grooming by high-ranking males. However, the alternative hypothesis that the sounds are merely associated with motivation to groom, with no additional social function, cannot be ruled out. Limited data showing that bouts accompanied by teeth chomping or spluttering at their initiation were longer than bouts for which this was not the case point toward a social function, but more data are needed for a definitive test. Comparison to other research sites shows that the possible existence of grooming

  3. ``Hiss, clicks and pops'' - The enigmatic sounds of meteors

    Science.gov (United States)

    Finnegan, J. A.

    2015-04-01

    The improbability of sounds heard simultaneously with meteors has allowed the phenomenon to remain on the margins of scientific interest and research. This is unjustified, since these audibly perceived electric-field effects indicate complex, inconsistent and still unresolved electric-magnetic coupling and charge dynamics, interacting between the meteor, the ionosphere and mesosphere, the stratosphere, the troposphere and the surface of the earth. This paper reviews meteor acoustic effects, presents illustrative reports and hypotheses, and includes a summary of similar and additional phenomena observed during the 2013 February 15 asteroid fragment disintegration above the Russian district of Chelyabinsk. An augmenting theory involving near-ground, non-uniform electric-field production of ozone, as a stimulated geophysical phenomenon to explain some hissing 'meteor sounds', is suggested in section 2.2. Unlike previous theories, electric-magnetic field fluctuation rates are not required to occur in the audio frequency range for this process to acoustically emit hissing and intermittent impulsive sounds, removing the requirements of direct conversion, passive human transduction or excited, localised acoustic 'emitters'. Links to the Armagh Observatory all-sky meteor cameras, electrophonic meteor research and full construction plans for an extremely low frequency (ELF) receiver are also included.

  4. Remembering that big things sound big: Sound symbolism and associative memory.

    Science.gov (United States)

    Preziosi, Melissa A; Coane, Jennifer H

    2017-01-01

    According to sound symbolism theory, individual sounds or clusters of sounds can convey meaning. To examine the role of sound symbolic effects on processing and memory for nonwords, we developed a novel set of 100 nonwords to convey largeness (nonwords containing plosive consonants and back vowels) and smallness (nonwords containing fricative consonants and front vowels). In Experiments 1A and 1B, participants rated the size of the 100 nonwords and provided definitions to them as if they were products. Nonwords composed of fricative/front vowels were rated as smaller than those composed of plosive/back vowels. In Experiment 2, participants studied sound symbolic congruent and incongruent nonword and participant-generated definition pairings. Definitions paired with nonwords that matched the size and participant-generated meanings were recalled better than those that did not match. When the participant-generated definitions were re-paired with other nonwords, this mnemonic advantage was reduced, although still reliable. In a final free association study, the possibility that plosive/back vowel and fricative/front vowel nonwords elicit sound symbolic size effects due to mediation from word neighbors was ruled out. Together, these results suggest that definitions that are sound symbolically congruent with a nonword are more memorable than incongruent definition-nonword pairings. This work has implications for the creation of brand names and how to create brand names that not only convey desired product characteristics, but also are memorable for consumers.

  5. An Anthropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department of Arts and Cultural Studies, University of Copenhagen, for a talk about the anthropology of sound, sound studies, musical canons and ideology.

  6. An estimation method for echo signal energy of pipe inner surface longitudinal crack detection by 2-D energy coefficients integration

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shiyuan, E-mail: redaple@bit.edu.cn; Sun, Haoyu, E-mail: redaple@bit.edu.cn; Xu, Chunguang, E-mail: redaple@bit.edu.cn; Cao, Xiandong, E-mail: redaple@bit.edu.cn; Cui, Liming, E-mail: redaple@bit.edu.cn; Xiao, Dingguo, E-mail: redaple@bit.edu.cn [School of Mechanical Engineering, Beijing Institute of Technology, Beijing, China NO.5 Zhongguancun South Street, Haidian District, Beijing 100081 (China)

    2015-03-31

    The echo signal energy is directly affected by the eccentricity or angle of the incident sound beam when detecting longitudinal cracks on the inner surface of thick-walled pipes. A method for analyzing the relationship between the echo signal energy and the incident eccentricity is put forward. It can be used to estimate the echo signal energy when testing inner-wall longitudinal cracks of a pipe with the water-immersion method, using shear waves mode-converted from compression waves, by performing a two-dimensional integration of the "energy coefficient" in both the circumferential and axial directions. The calculation model is established for the cylindrical sound beam case, in which the refraction and reflection energy coefficients of the different rays within the sound beam are treated as different. The echo signal energy is calculated for a particular cylindrical sound beam testing two different pipes: a beam with a diameter of 0.5 inch (12.7 mm) testing a φ279.4 mm pipe and a φ79.4 mm one. As a comparison, the results of both the two-dimensional integration and a one-dimensional (circumferential-direction) integration are listed, and only the former agrees well with the experimental results. The estimation method proves to be valid and shows that the usual practice of simplifying the sound beam as a single ray for estimating the echo signal energy and choosing the optimal incident eccentricity is not appropriate.
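
    The mechanics of the two-dimensional energy-coefficient integration can be illustrated with a minimal numpy sketch. The Gaussian coefficient model, the beam extent and the grid below are purely illustrative placeholders, not the paper's actual refraction/reflection model; the point is only integrating over both the circumferential and axial directions rather than along a single central ray:

```python
import numpy as np

def energy_coefficient(circ, axial, circ_width=1.0, axial_width=1.0):
    # Hypothetical stand-in for the incidence-dependent refraction/reflection
    # energy coefficient of each ray in the beam (NOT the paper's model).
    return np.exp(-(circ / circ_width) ** 2 - (axial / axial_width) ** 2)

# Offsets of individual rays across the cylindrical beam (illustrative grid).
circ = np.linspace(-2.0, 2.0, 201)
axial = np.linspace(-2.0, 2.0, 201)
dc, da = circ[1] - circ[0], axial[1] - axial[0]
C, A = np.meshgrid(circ, axial)

# Two-dimensional integration over both directions (the paper's approach) ...
e2d = energy_coefficient(C, A).sum() * dc * da

# ... versus a one-dimensional, circumferential-only integration.
e1d = energy_coefficient(circ, 0.0).sum() * dc

print(f"2-D estimate: {e2d:.4f}, 1-D estimate: {e1d:.4f}")
```

    For this separable toy coefficient the 2-D result is simply the product of the two 1-D integrals; with a realistic, non-separable coefficient the two estimates diverge, which is the situation where only the 2-D integration matched the experiments.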

  7. Gefinex 400S (Sampo) EM-soundings at Olkiluoto 2008

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2008-09-01

    In the beginning of June 2008, the Geological Survey of Finland (GTK) carried out electromagnetic frequency soundings with Gefinex 400S equipment (Sampo) in the vicinity of ONKALO at the Olkiluoto site investigation area. The sounding sites were first measured and marked in 2004, and the soundings have been repeated yearly in the same season since then. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electrical conductivity of the ground at the ONKALO and repository area. The measurements form two 1400 m long broadside profiles with a mutual distance of 200 m and a station separation of 200 m. The profiles have been measured using coil separations of 200, 500, and 800 m. Because of the strong electromagnetic noise, not all of the planned sites (48) could be measured; in 2008 the measurements were performed at the sites that were successful in 2007 (43 soundings). The numerous power lines and cables in the area generate local disturbances on the sounding curves, but the signal-to-noise ratio, also with long coil separations, and the repeatability of the results are reasonably good. However, the sites without strong surficial 3D effects are the most suitable for monitoring purposes. A comparison of the results of the 2004 to 2008 surveys shows differences on some ARD (apparent resistivity-depth) curves. These mainly result from modified man-made structures; the effects of changes in groundwater conditions are evidently slight. (orig.)

  8. An intelligent artificial throat with sound-sensing ability based on laser induced graphene

    Science.gov (United States)

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-02-01

    Traditional sound sources and sound detectors for the human hearing range are usually independent, discrete devices. To minimize device size and allow integration with wearable electronics, there is an urgent requirement to realize the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist the disabled, because simple throat vibrations such as a hum, cough or scream, with different intensity or frequency, from a mute person can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantages of one-step fabrication, high efficiency, excellent flexibility and low cost, and it will open up practical applications in voice control, wearable electronics and many other areas.

  9. Sound from charged particles in liquids

    International Nuclear Information System (INIS)

    Askar'yan, G.A.

    1980-01-01

    Two applications of the sound that appears when charged particles pass through liquid are considered: in biology, and for charged-particle registration. The application of this sound in radiobiology is determined by the contribution of its hypersound component (approximately 10⁹ Hz) to the radiobiological effect of ionizing radiation on microorganisms and cells. The large amplitudes and pressure gradients in a hypersound wave have a pronounced destructive, breaking effect on various micro-objects (cells, bacteria, viruses). An essential peculiarity of these processes is the possibility of control, by choosing conditions that change hypersound generation, propagation and effect. This may not only allow control over radiation effects, but may also explain and extend the analogy between the effects of ionizing radiation and ultrasound on bio-objects. The second direction is the acoustic registration of passing ionizing particles. It is based on the possibility of guaranteed signal reception from a shower with an energy of 10¹⁵-10¹⁶ eV in water at distances of hundreds of meters. Use of the acoustic technique for neutrino registration in the DUMAND project permits a detecting volume of water with a mass of 10⁹ t and higher

  10. Use of the Kalman filter in signal processing to reduce beam requirements for alpha-particle diagnostics

    International Nuclear Information System (INIS)

    Cooper, W.S.

    1986-01-01

    Several techniques proposed for diagnosing the velocity distribution of fast alpha-particles in a burning plasma require the injection of a beam of fast neutral atoms as probes. The author discusses how improving signal detection techniques is a high leverage factor in reducing the cost of the diagnostic beam. Optimal estimation theory provides a computational algorithm, the Kalman filter, that can optimally estimate the amplitude of a signal with arbitrary (but known) time dependence in the presence of noise. In one example presented, based on a square-wave signal and assumed noise levels, the Kalman filter achieves an enhancement of signal detection efficiency of about a factor of 10 (as compared with the straightforward observation of the signal superimposed on noise) with an observation time of 100 signal periods
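
    The Kalman-filter idea described above can be sketched in its simplest form: optimally estimating the constant amplitude of a known square-wave probe signal buried in noise. This is a hedged toy illustration (template shape, noise level, sample counts and initial covariance are invented for the example), not the paper's diagnostic model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known square-wave template: 100 "signal periods" of 20 samples each.
period, n_periods = 20, 100
template = np.tile(np.r_[np.ones(period // 2), -np.ones(period // 2)], n_periods)

true_amplitude = 0.5
noise_std = 2.0
y = true_amplitude * template + rng.normal(0.0, noise_std, template.size)

# Scalar Kalman filter for a constant state x = signal amplitude,
# with measurement model y_k = template_k * x + v_k, v_k ~ N(0, noise_std^2).
x_hat, p = 0.0, 10.0          # initial estimate and its variance
r = noise_std ** 2            # measurement-noise variance
for h, yk in zip(template, y):
    # No process noise: the amplitude is modelled as constant.
    k = p * h / (h * h * p + r)           # Kalman gain
    x_hat = x_hat + k * (yk - h * x_hat)  # measurement update
    p = (1.0 - k * h) * p                 # posterior variance

print(f"estimated amplitude: {x_hat:.3f} (true {true_amplitude})")
```

    The posterior variance shrinks roughly as r/N over N samples, illustrating how coherent use of the known waveform suppresses noise as more signal periods are observed, compared with direct observation of the noisy signal.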

  11. Airborne and impact sound transmission in super-light structures

    DEFF Research Database (Denmark)

    Christensen, Jacob Ellehauge; Hertz, Kristian Dahl; Brunskog, Jonas

    2011-01-01

    -aggregate concrete. A super-light deck element is developed. It is intended to be lighter than traditional deck structures without compromising the acoustic performance. It is primarily the airborne sound insulation, which is of interest as the requirements for the impact sound insulation to a higher degree can...... be fulfilled by external means such as floorings. The acoustical performance of the slab element is enhanced by several factors. Load carrying internal arches stiffens the element. This causes a decrease in the modal density, which is further improved by the element being lighter. These parameters also...

  12. Comparison of snoring sounds between natural and drug-induced sleep recorded using a smartphone.

    Science.gov (United States)

    Koo, Soo Kweon; Kwon, Soon Bok; Moon, Ji Seung; Lee, Sang Hoon; Lee, Ho Byung; Lee, Sang Jun

    2018-08-01

    Snoring is an important clinical feature of obstructive sleep apnea (OSA), and recent studies suggest that the acoustic quality of snoring sounds is markedly different in drug-induced sleep compared with natural sleep. However, considering differences in sound recording methods and analysis parameters, further studies are required. This study explored whether acoustic analysis of drug-induced sleep is useful as a screening test that reflects the characteristics of natural sleep in snoring patients. The snoring sounds of 30 male subjects (mean age = 41.8 years) were recorded using a smartphone during natural and induced sleep, with the site of vibration noted during drug-induced sleep endoscopy (DISE); then, we compared the sound intensity (dB), formant frequencies, and spectrograms of the snoring sounds. Regarding the intensity of snoring sounds, there were minor differences within the retrolingual-level obstruction group, but there was no significant difference between natural and induced sleep at either obstruction site. There was no significant difference in the F1 and F2 formant frequencies of snoring sounds between natural sleep and induced sleep at either obstruction site. Compared with natural sleep, induced sleep was slightly more irregular, with a stronger intensity on the spectrogram, but the spectrograms showed the same pattern at both obstruction sites. Although further studies are required, the spectrograms and formant frequencies of the snoring sounds of induced sleep did not differ significantly from those of natural sleep, and may be used as a screening test that reflects the characteristics of natural sleep according to the obstruction site. Copyright © 2017 Elsevier B.V. All rights reserved.
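
    The kinds of measures compared in the study, overall sound level and spectrogram content, can be sketched with plain numpy. The synthetic test tone and the frame sizes below are invented for illustration and are unrelated to the clinical recordings:

```python
import numpy as np

def spectrogram(x, fs, win_len=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))      # (n_frames, n_bins)
    freqs = np.fft.rfftfreq(win_len, 1.0 / fs)      # bin centre frequencies
    return freqs, spec

fs = 8000
t = np.arange(fs) / fs
# Synthetic "snore-like" test tone: 120 Hz fundamental plus one harmonic.
x = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 360 * t)

level_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)))  # RMS level re. full scale
freqs, spec = spectrogram(x, fs)
peak_bin = spec.mean(axis=0).argmax()
print(f"RMS level: {level_db:.1f} dBFS, dominant bin ≈ {freqs[peak_bin]:.0f} Hz")
```

    With a 256-sample window at 8 kHz the frequency resolution is 31.25 Hz per bin, which is ample for tracking the low-frequency energy typical of snoring.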

  13. Creating and Exploring Huge Parameter Spaces: Interactive Evolution as a Tool for Sound Generation

    DEFF Research Database (Denmark)

    Dahlstedt, Palle

    2001-01-01

    In this paper, a program is presented that applies interactive evolution to sound generation, i.e., preferred individuals are repeatedly selected from a population of genetically bred sound objects, created with various synthesis and pattern generation algorithms. This simplifies aural exploration...... applications. It is also shown how this technique can be used to simplify sound design in standard hardware synthesizers, a task normally avoided by most musicians, due to the required amount of technical understanding....

  14. A comparison of sound quality judgments for monaural and binaural hearing aid processed stimuli.

    Science.gov (United States)

    Balfour, P B; Hawkins, D B

    1992-10-01

    Fifteen adults with bilaterally symmetrical mild and/or moderate sensorineural hearing loss completed a paired-comparison task designed to elicit sound quality preference judgments for monaural/binaural hearing aid processed signals. Three stimuli (speech-in-quiet, speech-in-noise, and music) were recorded separately in three listening environments (audiometric test booth, living room, and a music/lecture hall) through hearing aids placed on a Knowles Electronics Manikin for Acoustics Research. Judgments were made on eight separate sound quality dimensions (brightness, clarity, fullness, loudness, nearness, overall impression, smoothness, and spaciousness) for each of the three stimuli in three listening environments. Results revealed a distinct binaural preference for all eight sound quality dimensions independent of listening environment. Binaural preferences were strongest for overall impression, fullness, and spaciousness. Stimulus type effect was significant only for fullness and spaciousness, where binaural preferences were strongest for speech-in-quiet. After binaural preference data were obtained, subjects ranked each sound quality dimension with respect to its importance for binaural listening relative to monaural. Clarity was ranked highest in importance and brightness was ranked least important. The key to demonstration of improved binaural hearing aid sound quality may be the use of a paired-comparison format.

  15. Ihh signaling is directly required for the osteoblast lineage in the endochondral skeleton.

    Science.gov (United States)

    Long, Fanxin; Chung, Ung-il; Ohba, Shinsuke; McMahon, Jill; Kronenberg, Henry M; McMahon, Andrew P

    2004-03-01

    Indian hedgehog (Ihh) is indispensable for development of the osteoblast lineage in the endochondral skeleton. In order to determine whether Ihh is directly required for osteoblast differentiation, we have genetically manipulated smoothened (Smo), which encodes a transmembrane protein that is essential for transducing all Hedgehog (Hh) signals. Removal of Smo from perichondrial cells by the Cre-LoxP approach prevents formation of a normal bone collar and also abolishes development of the primary spongiosa. Analysis of chimeric embryos composed of wild-type and Smo(n/n) cells indicates that Smo(n/n) cells fail to contribute to osteoblasts in either the bone collar or the primary spongiosa but generate ectopic chondrocytes. In order to assess whether Ihh is sufficient to induce bone formation in vivo, we have analyzed the bone collar in the long bones of embryos in which Ihh was artificially expressed in all chondrocytes by the UAS-GAL4 bigenic system. Although ectopic Ihh does not induce overt ossification along the entire cartilage anlage, it promotes progression of the bone collar toward the epiphysis, suggesting a synergistic effect between ectopic Ihh and endogenous factors such as the bone morphogenetic proteins (BMPs). In keeping with this model, Hh signaling is further found to be required in BMP-induced osteogenesis in cultures of a limb-bud cell line. Taken together, these results demonstrate that Ihh signaling is directly required for the osteoblast lineage in the developing long bones and that Ihh functions in conjunction with other factors such as BMPs to induce osteoblast differentiation. We suggest that Ihh acts in vivo on a potential progenitor cell to promote osteoblast and prevent chondrocyte differentiation.

  16. An integrative time-varying frequency detection and channel sounding method for dynamic plasma sheath

    Science.gov (United States)

    Shi, Lei; Yao, Bo; Zhao, Lei; Liu, Xiaotong; Yang, Min; Liu, Yanming

    2018-01-01

    The plasma sheath surrounding a hypersonic vehicle is a dynamic and time-varying medium, and it is almost impossible to calculate its time-varying physical parameters directly. In-flight detection of the degree of time variation is important for understanding the dynamic nature of the physical parameters and their effect on re-entry communication. In this paper, a constant-envelope zero-autocorrelation (CAZAC) sequence based time-varying frequency detection and channel sounding method is proposed to detect the time-varying property of the plasma sheath electron density and the wireless channel characteristics. The proposed method utilizes the CAZAC sequence, which has excellent autocorrelation and spreading-gain characteristics, to realize dynamic time-variation detection and channel sounding at low signal-to-noise ratio in the plasma sheath environment. Theoretical simulation under a typical time-varying radio channel shows that the proposed method is capable of detecting time-variation frequencies up to 200 kHz and can trace the channel amplitude and phase well in the time domain at -10 dB. Experimental results obtained in an RF modulation discharge plasma device verified the time-variation detection ability in a practical dynamic plasma sheath. Meanwhile, nonlinear effects of the dynamic plasma sheath on the communication signal were observed through the channel sounding results.
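
    The constant-envelope and zero-autocorrelation properties that such a sounding method relies on can be checked directly for a Zadoff-Chu sequence, the best-known CAZAC family. The length and root below are arbitrary illustrative choices (the root must be coprime to the length); the abstract does not specify which CAZAC construction the authors used:

```python
import numpy as np

def zadoff_chu(root: int, length: int) -> np.ndarray:
    """Odd-length Zadoff-Chu sequence, a standard CAZAC construction."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

N = 63
zc = zadoff_chu(root=25, length=N)   # gcd(25, 63) = 1

# Constant envelope: every sample has unit magnitude.
envelope_ripple = np.abs(zc).max() - np.abs(zc).min()

# Zero circular autocorrelation at every non-zero lag.
acf = np.array([np.vdot(zc, np.roll(zc, lag)) for lag in range(N)]) / N
sidelobe = np.abs(acf[1:]).max()

print(f"envelope ripple: {envelope_ripple:.2e}, max sidelobe: {sidelobe:.2e}")
```

    The constant envelope keeps the transmitted power flat (important for spreading gain at low SNR), while the impulse-like autocorrelation lets the receiver resolve the channel response by correlating against the known sequence.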

  17. Reproduction-Related Sound Production of Grasshoppers Regulated by Internal State and Actual Sensory Environment

    Science.gov (United States)

    Heinrich, Ralf; Kunst, Michael; Wirmer, Andrea

    2012-01-01

    The interplay of neural and hormonal mechanisms activated by entero- and extero-receptors biases the selection of actions by decision making neuronal circuits. The reproductive behavior of acoustically communicating grasshoppers, which is regulated by short-term neural and longer-term hormonal mechanisms, has frequently been used to study the cellular and physiological processes that select particular actions from the species-specific repertoire of behaviors. Various grasshoppers communicate with species- and situation-specific songs in order to attract and court mating partners, to signal reproductive readiness, or to fend off competitors. Selection and coordination of type, intensity, and timing of sound signals is mediated by the central complex, a highly structured brain neuropil known to integrate multimodal pre-processed sensory information by a large number of chemical messengers. In addition, reproductive activity including sound production critically depends on maturation, previous mating experience, and oviposition cycles. In this regard, juvenile hormone released from the corpora allata has been identified as a decisive hormonal signal necessary to establish reproductive motivation in grasshopper females. Both regulatory systems, the central complex mediating short-term regulation and the corpora allata mediating longer-term regulation of reproduction-related sound production mutually influence each other’s activity in order to generate a coherent state of excitation that promotes or suppresses reproductive behavior in respective appropriate or inappropriate situations. This review summarizes our current knowledge about extrinsic and intrinsic factors that influence grasshopper reproductive motivation, their representation in the nervous system and their integrative processing that mediates the initiation or suppression of reproductive behaviors. PMID:22737107

  18. Reproduction-related sound production of grasshoppers regulated by internal state and actual sensory environment

    Directory of Open Access Journals (Sweden)

    Ralf eHeinrich

    2012-06-01

    Full Text Available The interplay of neural and hormonal mechanisms activated by entero- and exteroreceptors biases the selection of actions by decision-making neuronal circuits. The reproductive behaviour of acoustically communicating grasshoppers, which is regulated by short-term neural and longer-term hormonal mechanisms, has frequently been used to study the cellular and physiological processes that select particular actions from the species-specific repertoire of behaviours. Various grasshoppers communicate with species- and situation-specific songs in order to attract and court mating partners, to signal reproductive readiness or to fend off competitors. Selection and coordination of the type, intensity and timing of sound signals is mediated by the central complex, a highly structured brain neuropil known to integrate multimodal pre-processed sensory information by a large number of chemical messengers. In addition, reproductive activity including sound production critically depends on maturation, previous mating experience and oviposition cycles. In this regard, juvenile hormone released from the corpora allata has been identified as a decisive hormonal signal necessary to establish reproductive motivation in grasshopper females. Both regulatory systems, the central complex mediating short-term regulation and the corpora allata mediating longer-term regulation of reproduction-related sound production, mutually influence each other's activity in order to generate a coherent state of excitation that promotes or suppresses reproductive behaviour in respective appropriate or inappropriate situations. This review summarizes our current knowledge about extrinsic and intrinsic factors that influence grasshopper reproductive motivation, their representation in the nervous system and their integrative processing that mediates the initiation or suppression of reproductive behaviours.

  19. Drosophila Nociceptive Sensitization Requires BMP Signaling via the Canonical SMAD Pathway.

    Science.gov (United States)

    Follansbee, Taylor L; Gjelsvik, Kayla J; Brann, Courtney L; McParland, Aidan L; Longhurst, Colin A; Galko, Michael J; Ganter, Geoffrey K

    2017-08-30

    Nociceptive sensitization is a common feature in chronic pain, but its basic cellular mechanisms are only partially understood. The present study used the Drosophila melanogaster model system and a candidate gene approach to identify novel components required for modulation of an injury-induced nociceptive sensitization pathway presumably downstream of Hedgehog. This study demonstrates that RNAi silencing of a member of the Bone Morphogenetic Protein (BMP) signaling pathway, Decapentaplegic (Dpp), specifically in the Class IV multidendritic nociceptive neuron, significantly attenuated ultraviolet injury-induced sensitization. Furthermore, overexpression of Dpp in Class IV neurons was sufficient to induce thermal hypersensitivity in the absence of injury. The requirement of various BMP receptors and members of the SMAD signal transduction pathway in nociceptive sensitization was also demonstrated. The effects of BMP signaling were shown to be largely specific to the sensitization pathway and not associated with changes in nociception in the absence of injury or with changes in dendritic morphology. Thus, the results demonstrate that Dpp and its pathway play a crucial and novel role in nociceptive sensitization. Because the BMP family is so strongly conserved between vertebrates and invertebrates, it seems likely that the components analyzed in this study represent potential therapeutic targets for the treatment of chronic pain in humans. SIGNIFICANCE STATEMENT This report provides a genetic analysis of primary nociceptive neuron mechanisms that promote sensitization in response to injury. Drosophila melanogaster larvae whose primary nociceptive neurons were reduced in levels of specific components of the BMP signaling pathway, were injured and then tested for nocifensive responses to a normally subnoxious stimulus. Results suggest that nociceptive neurons use the BMP2/4 ligand, along with identified receptors and intracellular transducers to transition to a

  20. Refining Lane-Based Traffic Signal Settings to Satisfy Spatial Lane Length Requirements

    Directory of Open Access Journals (Sweden)

    Yanping Liu

    2017-01-01

    Full Text Available In conventional lane-based signal optimization models, lane markings guiding road users in making turns are optimized together with the traffic signal settings in a unified framework to maximize the overall intersection capacity or minimize the total delay. The spatial queue requirements of road lanes should be considered to avoid overdesign of the green durations: the point queue system adopted in the conventional lane-based framework can cause queue overflow in practice. Based on the optimization results from the original lane-based designs, a refinement is proposed to enhance the lane-based settings and ensure that the spatial holding limits of the approaching traffic lanes are not exceeded. A solution heuristic is developed to modify the green start times, green durations, and cycle length by considering the vehicle queuing patterns and physical holding capacities along the approaching traffic lanes. To show the effectiveness of this traffic signal refinement, a case study of one of the busiest and most complicated intersections in Hong Kong is given for demonstration. A site survey was conducted to collect the existing traffic demand patterns and the existing traffic signal settings in peak periods. Results show that the proposed refinement method is effective in ensuring that all vehicle queue lengths satisfy the spatial lane capacity limits, including short lanes, for daily operation.
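
    The spatial-storage check motivating the refinement reduces to simple arithmetic: the worst-case queue accumulated over one red interval must fit within the physical length of the lane. The numbers and the 7 m per-vehicle spacing below are illustrative assumptions, not values from the Hong Kong survey:

```python
def max_queue_m(arrival_veh_per_h: float, red_s: float,
                spacing_m: float = 7.0) -> float:
    """Worst-case standing queue (in metres) built up over one red interval,
    assuming uniform arrivals and one vehicle per `spacing_m` of lane."""
    return arrival_veh_per_h / 3600.0 * red_s * spacing_m

def red_satisfies_storage(arrival_veh_per_h: float, red_s: float,
                          lane_storage_m: float, spacing_m: float = 7.0) -> bool:
    """True if the queue fits inside the lane's physical holding capacity."""
    return max_queue_m(arrival_veh_per_h, red_s, spacing_m) <= lane_storage_m

lane_storage = 70.0   # e.g. a 70 m short lane holds about 10 queued vehicles
arrivals = 400.0      # veh/h assigned to this lane

for red in (40.0, 100.0):
    q = max_queue_m(arrivals, red)
    ok = red_satisfies_storage(arrivals, red, lane_storage)
    print(f"red = {red:.0f} s -> queue ≈ {q:.0f} m, fits: {ok}")
```

    A point-queue model skips this check entirely (it assumes the queue occupies no space), which is exactly why a feasible-looking timing plan can overflow a short lane in practice; the refinement shortens reds or the cycle until the check passes on every approach lane.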

  1. A requirement for FGF signalling in the formation of primitive streak-like intermediates from primitive ectoderm in culture.

    Directory of Open Access Journals (Sweden)

    Zhiqiang Zheng

    Full Text Available BACKGROUND: Embryonic stem (ES) cells hold considerable promise as a source of cells with therapeutic potential, including cells that can be used for drug screening and in cell replacement therapies. Differentiation of ES cells into the somatic lineages is a regulated process; before the promise of these cells can be realised, robust and rational methods for directing differentiation into normal, functional and safe cells need to be developed. Previous in vivo studies have implicated fibroblast growth factor (FGF) signalling in lineage specification from pluripotent cells. Although FGF signalling has been suggested as essential for specification of mesoderm and endoderm in vivo and in culture, the exact role of this pathway remains unclear. METHODOLOGY/PRINCIPAL FINDINGS: Using a culture model based on early primitive ectoderm-like (EPL) cells we have investigated the role of FGF signalling in the specification of mesoderm. We were unable to demonstrate any mesoderm inductive capability associated with FGF1, 4 or 8 signalling, even when the factors were present at high concentrations, nor any enhancement in mesoderm formation induced by exogenous BMP4. Furthermore, there was no evidence of alteration of mesoderm sub-type formed with addition of FGF1, 4 or 8. Inhibition of endogenous FGF signalling, however, prevented mesoderm and favoured neural differentiation, suggesting FGF signalling was required but not sufficient for the differentiation of primitive ectoderm into primitive streak-like intermediates. The maintenance of ES cell/early epiblast pluripotent marker expression was also observed in cultures when FGF signalling was inhibited. CONCLUSIONS/SIGNIFICANCE: FGF signalling has been shown to be required for the differentiation of primitive ectoderm to neurectoderm. This, coupled with our observations, suggests FGF signalling is required for differentiation of the primitive ectoderm into the germ lineages at gastrulation.

  2. Diversity of fish sound types in the Pearl River Estuary, China

    Directory of Open Access Journals (Sweden)

    Zhi-Tao Wang

    2017-10-01

Full Text Available Background Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Methods Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. Results We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse-train structure. The pulses were characterized by an approximately 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were mutually exclusive, suggesting that they might be produced by different species. According to the literature, the two-section signal types 1 + 1 and 1 + N10 might belong to the big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger’s croaker (J. belangerii). Discussion Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. Additionally, prey and predator
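The pulse-train measurement described above, the inter-pulse peak interval (IPPI), can be sketched in a few lines once pulse peak times have been detected. The helper below is a hypothetical illustration of that step, not the authors' customized analysis routine:

```python
import numpy as np

def median_ippi_ms(peak_times_s):
    """Median inter-pulse peak interval (IPPI) in milliseconds,
    given the times (in seconds) of successive pulse peaks in a call."""
    intervals_ms = np.diff(np.sort(np.asarray(peak_times_s, dtype=float))) * 1000.0
    return float(np.median(intervals_ms))
```

For example, pulse peaks at 0, 10, 19 and 29 ms give intervals of 10, 9 and 10 ms and a median IPPI of 10 ms, matching the 9-10 ms range reported for most call types.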

  3. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects ... from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  4. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound

  5. Tyrosine phosphorylation and proteolytic cleavage of Notch are required for non-canonical Notch/Abl signaling in Drosophila axon guidance.

    Science.gov (United States)

    Kannan, Ramakrishnan; Cox, Eric; Wang, Lei; Kuzina, Irina; Gu, Qun; Giniger, Edward

    2018-01-17

Notch signaling is required for the development and physiology of nearly every tissue in metazoans. Much of Notch signaling is mediated by transcriptional regulation of downstream target genes, but Notch controls axon patterning in Drosophila by local modulation of Abl tyrosine kinase signaling, via direct interactions with the Abl co-factors Disabled and Trio. Here, we show that Notch-Abl axonal signaling requires both of the proteolytic cleavage events that initiate canonical Notch signaling. We further show that some Notch protein is tyrosine phosphorylated in Drosophila, that this form of the protein is selectively associated with Disabled and Trio, and that relevant tyrosines are essential for Notch-dependent axon patterning but not for canonical Notch-dependent regulation of cell fate. Based on these data, we propose a model for the molecular mechanism by which Notch controls Abl signaling in Drosophila axons. © 2018. Published by The Company of Biologists Ltd.

  6. What ears do for bats: a comparative study of pinna sound pressure transformation in chiroptera.

    Science.gov (United States)

    Obrist, M K; Fenton, M B; Eger, J L; Schlegel, P A

    1993-07-01

    Using a moveable loudspeaker and an implanted microphone, we studied the sound pressure transformation of the external ears of 47 species of bats from 13 families. We compared pinna gain, directionality of hearing and interaural intensity differences (IID) in echolocating and non-echolocating bats, in species using different echolocation strategies and in species that depend upon prey-generated sounds to locate their targets. In the Pteropodidae, two echolocating species had slightly higher directionality than a non-echolocating species. The ears of phyllostomid and vespertilionid species showed moderate directionality. In the Mormoopidae, the ear directionality of Pteronotus parnellii clearly matched the dominant spectral component of its echolocation calls, unlike the situation in three other species. Species in the Emballonuridae, Molossidae, Rhinopomatidae and two vespertilionids that use narrow-band search-phase echolocation calls showed increasingly sharp tuning of the pinna to the main frequency of their signals. Similar tuning was most evident in Hipposideridae and Rhinolophidae, species specialized for flutter detection via Doppler-shifted echoes of high-duty-cycle narrow-band signals. The large pinnae of bats that use prey-generated sounds to find their targets supply high sound pressure gain at lower frequencies. Increasing domination of a narrow spectral band in echolocation is reflected in the passive acoustic properties of the external ears (sharper directionality). The importance of IIDs for lateralization and horizontal localization is discussed by comparing the behavioural directional performance of bats with their bioacoustical features.
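The interaural intensity difference (IID) compared across species above is, in decibel terms, simply the level ratio between the sound pressures at the two ears. A minimal sketch of that quantity (a hypothetical helper, not taken from the paper):

```python
import math

def iid_db(p_left, p_right):
    """Interaural intensity difference in dB from the sound pressures
    measured at the left and right ear (positive = louder at the left)."""
    return 20.0 * math.log10(p_left / p_right)
```

Doubling the pressure at one ear, for instance, yields an IID of about 6 dB, the kind of cue discussed for lateralization and horizontal localization.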

  7. Performance analysis of an IMU-augmented GNSS tracking system on board the MAIUS-1 sounding rocket

    Science.gov (United States)

    Braun, Benjamin; Grillenberger, Andreas; Markgraf, Markus

    2018-05-01

Satellite navigation receivers are adequate tracking sensors for range safety of both orbital launch vehicles and suborbital sounding rockets. Owing to its high accuracy and low system complexity, satellite navigation is seen as a well-suited supplement to, or replacement for, conventional tracking systems like radar. With the well-known shortcomings of satellite navigation in mind, such as deliberate or unintentional interference, it is proposed to augment the satellite navigation receiver with an inertial measurement unit (IMU) to enhance the continuity and availability of localization. The augmented receiver is thus able to output at least an inertial position solution in case of signal outages. In a previous study, it was shown by means of simulation, using the example of Ariane 5, that the performance of a low-grade microelectromechanical IMU is sufficient to bridge expected outages of some tens of seconds while still meeting the range safety requirements in effect. In this publication, these theoretical findings are substantiated by real flight data recorded on MAIUS-1, a sounding rocket launched from Esrange, Sweden, in early 2017. The analysis reveals that the chosen representative of a microelectromechanical IMU is suitable for bridging outages of up to thirty seconds.
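Bridging a GNSS outage with an IMU amounts to dead reckoning: integrating measured accelerations forward from the last valid navigation solution. The sketch below is deliberately simplified (plain Euler integration in one frame, with none of the attitude propagation or bias modelling a real strapdown navigator needs):

```python
import numpy as np

def bridge_outage(pos0, vel0, accels, dt):
    """Propagate position and velocity through a GNSS outage by
    integrating IMU acceleration samples (simple Euler integration)."""
    pos = np.asarray(pos0, dtype=float).copy()
    vel = np.asarray(vel0, dtype=float).copy()
    for a in np.asarray(accels, dtype=float):
        vel += a * dt   # velocity update from measured acceleration
        pos += vel * dt  # position update from velocity
    return pos, vel
```

The position error of such a scheme grows quadratically with outage duration, which is why a low-grade MEMS IMU can only bridge outages of a few tens of seconds, as the flight data analysis confirms.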

  8. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

Full Text Available The relationship between the meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages, we find commonalities among sound shapes for words referring to the same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  9. Vehicle engine sound design based on an active noise control system

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, M. [Siemens VDO Automotive, Auburn Hills, MI (United States)

    2002-07-01

A study has been carried out to identify the types of vehicle engine sounds that drivers prefer while driving at different locations and under different driving conditions. An active noise control system controlled the sound at the air intake orifice of a vehicle engine's first sixteen orders and half orders. The active noise control system was used to change the engine sound to quiet, harmonic, high harmonic, spectral shaped and growl. Videos were made of the roads traversed, along with binaural recordings of the vehicle interior sounds and recordings of the vibrations of the vehicle floor pan. Jury tapes were made up for day driving, night driving and daytime driving in the rain for each of the sites. Jurors used paired comparisons to evaluate the vehicle interior sounds while sitting in a vehicle simulator developed by Siemens VDO that replicated the videos of the roads traversed, the binaural recordings of the vehicle interior sounds and the vibrations of the floor pan and seat. (orig.) [Translated from German] As part of a study, types of engine sounds were identified that drivers perceive as pleasant under various driving conditions. An active noise control system at the intake air inlet, near the air filter, modified the engine sound up to the 16.5th engine order by attenuating, amplifying and filtering the signal frequencies. During the drives, video recordings of the roads traversed, stereo recordings of the vehicle interior sounds and recordings of the vibration amplitudes of the vehicle floor were made, for day and night drives and for daytime drives in the rain. For the evaluation of the recorded sounds by test subjects, a laboratory vehicle simulator with a driver's seat, screen, loudspeakers and mechanical excitation of the floor panel was built to reproduce the recorded signals as faithfully as possible. (orig.)
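An engine "order" is a multiple of the crankshaft rotation frequency, so the frequencies targeted by such an order-based control system follow directly from engine speed. A hypothetical helper illustrating the relationship (not the Siemens VDO implementation):

```python
def engine_order_frequency(rpm, order):
    """Frequency in Hz of a given engine order at a given engine speed.
    Order 1 is once per crankshaft revolution; half orders are allowed."""
    return rpm / 60.0 * order
```

At 3000 rpm, for example, the 2nd order lies at 100 Hz, and the 16.5th order mentioned in the German abstract lies at 825 Hz.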

  10. Development of engine sound quality for passenger car with 6 cylinder engine; 6 kito engine joyosha no onshoku kazari

    Energy Technology Data Exchange (ETDEWEB)

    Iwasaki, Y; Miyamoto, K; Yamamoto, K [Toyota Motor Corp., Aichi (Japan)

    1997-10-01

In recent years, interior noise has been required not only to be reduced in sound pressure level but also to be improved in sound quality, especially during acceleration. This paper describes the development of the engine sound quality for the new model `ARISTO (GS300)` with an in-line 6-cylinder gasoline engine. We used a sound simulator to evaluate the engine sound quality during acceleration and decided on a target sound. To attain that sound, a lightweight piston and connecting rod were adopted and the intake system was improved. 7 refs., 13 figs.

  11. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

Full Text Available In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  12. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

    The question whether there is a natural connection between sound and meaning or if they are related only by convention has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming more and more clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441 For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.

  13. Design of virtual three-dimensional instruments for sound control

    Science.gov (United States)

    Mulder, Axel Gezienus Elith

An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real-time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors, an SGI Onyx and by extending a real-time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co-articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel. More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object
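The core of such a virtual instrument is a mapping from tracked gesture features to synthesis parameters. The toy sketch below illustrates the idea for the rubber-balloon case; the parameter names and ranges are invented for illustration and are not those of the Max/FTS extension described:

```python
def map_gesture(x, y, radius):
    """Hypothetical mapping from hand position (x, y in 0..1) and virtual
    balloon radius (0..1) to pan, pitch and loudness synthesis parameters."""
    pan = 2.0 * x - 1.0                     # -1 (left) .. +1 (right)
    pitch_hz = 110.0 * (2.0 ** (3.0 * y))   # three octaves above 110 Hz
    loudness = max(0.0, min(1.0, radius))   # clamp to the valid range
    return pan, pitch_hz, loudness
```

A mapping like this makes the design trade-off in the abstract concrete: the constraints live in the mapping function, not in the physics of a real instrument body.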

  14. Experimental implementation of a low-frequency global sound equalization method based on free field propagation

    DEFF Research Database (Denmark)

    Santillan, Arturo Orozco; Pedersen, Christian Sejer; Lydolf, Morten

    2007-01-01

An experimental implementation of a global sound equalization method in a rectangular room using active control is described in this paper. The main purpose of the work has been to provide experimental evidence that sound can be equalized in a continuous three-dimensional region, the listening zone, which occupies a considerable part of the complete volume of the room. The equalization method, based on the simulation of a progressive plane wave, was implemented in a room with inner dimensions of 2.70 m x 2.74 m x 2.40 m. With this method, the sound was reproduced by a matrix of 4 x 5 loudspeakers in one of the walls. After traveling through the room, the sound wave was absorbed on the opposite wall, which had a similar arrangement of loudspeakers, by means of active control. A set of 40 digital FIR filters was used to modify the original input signal before it was fed to the loudspeakers, one...
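Each of the 40 digital FIR filters mentioned performs a convolution of the input signal with its coefficient set before the result is fed to a loudspeaker. A minimal sketch of that operation (illustrative, not the experimental filters themselves):

```python
import numpy as np

def apply_fir(signal, coefficients):
    """Filter a signal with an FIR filter given by its impulse-response
    coefficients (direct-form convolution, truncated to the input length)."""
    return np.convolve(signal, coefficients)[: len(signal)]
```

For example, a two-tap moving average with coefficients [0.5, 0.5] smooths the input; in the experiment each loudspeaker channel would use its own, much longer, measured coefficient set.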

  15. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and more varied forms of modulation: modulation was much shorter lived, and its form varied between different acoustic stimuli, including for different male calls, suggesting that modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  16. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  17. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking, with several examples shown. Various aspects of musical sound spectra such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis are covered: "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment". Methods for timbre morphing are given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.
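Of the spectral measures listed, the spectral centroid is the simplest: the amplitude-weighted mean frequency of a magnitude spectrum. A minimal sketch:

```python
import numpy as np

def spectral_centroid(freqs_hz, magnitudes):
    """Amplitude-weighted mean frequency of a magnitude spectrum, in Hz."""
    f = np.asarray(freqs_hz, dtype=float)
    m = np.asarray(magnitudes, dtype=float)
    return float(np.sum(f * m) / np.sum(m))
```

Equal energy at 100 Hz and 200 Hz, for instance, gives a centroid of 150 Hz; weighting the lower bin more heavily pulls the centroid down, which is why the centroid tracks perceived brightness.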

  18. Subjective preference evaluation of sound fields by performing singers

    Science.gov (United States)

    Noson, Dennis

    2003-08-01

A model of the auditory process is proposed for performing singers, which incorporates the added signal from bone conduction, as well as the psychological distance of the performer from the acoustic sound field of the stage, into subjective preference. The explanatory power of previous scientific studies of vocal stage acoustics has been limited by the lack of an underlying theory of performer preference. Ando's theory, using the autocorrelation function (ACF) for parametrizing temporal factors, was applied to the interpretation of singer sound field preference determined by the paired comparison method. Melisma-style singing (no lyrics) was shown to increase the preferred delay time of reflections from a mean of 14 ms with lyrics to 23 ms without. Thesis advisor: Yoichi Ando. Copies of this thesis are available from the author by inquiry at BRC Acoustics, 1741 First Avenue South, Seattle, WA 98134 USA. E-mail address: dnoson@brcacoustics.com

  19. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to discuss analytically spatial aspects of recorded sound William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  20. Impact sound insulation descriptors in the Nordic building regulations – Overview special rules and benefits of changing descriptors

    DEFF Research Database (Denmark)

    Hagberg, Klas; Rasmussen, Birgit

    2010-01-01

All Nordic countries have sound insulation requirements specified in the building regulations or in sound classification schemes (Class C, referred to in the regulations and published as national standards), which all originate from a common Nordic INSTA-B proposal from the 1990s and thus have a lot in common. These national rules are nevertheless not easy to find, unless all details of standards and other documents are known and studied carefully, and they cause problems since the building industry is not national anymore. This paper gives an overview of special national rules in the Nordic countries regarding impact sound insulation requirements and is related to an equivalent paper about airborne sound insulation requirements. The papers also describe the major benefits of reducing the number of special rules and of changing descriptors to those which best support protection of the residents and development of the building...

  1. Environmental quality of Long Island Sound: Assessment and management issues

    International Nuclear Information System (INIS)

    Wolfe, D.A.; Farrow, D.R.G.; Robertson, A.; Monahan, R.; Stacey, P.E.

    1991-01-01

    Estimated pollutant loadings to Long Island Sound (LIS) are presented and discussed in the context of current information on population trends and land-use characteristics within the drainage basin of the sound. For the conventional pollutants (BOD, N, and P) and for most of the metals examined, the fluxes to LIS from wastewater treatment plants approach or exceed the fluxes from riverine sources. Urban runoff is a significant source for only a few contaminants, such as lead and petroleum hydrocarbons. Atmospheric flux estimates made for other areas are extrapolated to LIS, and this source appears to be significant for lead, zinc, and polynuclear aromatic hydrocarbons, and chlorinated pesticides. Continued population growth is projected through 2010, both in the urban centers of the western sound and in the coastal counties surrounding the central and eastern portions of LIS. This growth will place increased pollution pressure on the sound and increased demands on already scarce coastal and estuarine land-use categories. Close interaction between environmental planners, managers, and scientists is required to identify effective control strategies for reducing existing pollutant stress to the sound and for minimizing the effects of future development

  2. Gefinex 400S (SAMPO) EM-soundings at Olkiluoto 2009

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.; Korhonen, K.

    2009-09-01

In early June 2009, the Geological Survey of Finland (GTK) carried out electromagnetic (EM) frequency soundings with the Gefinex 400S equipment (Sampo) in the vicinity of ONKALO at the Olkiluoto site investigation area. The EM-monitoring sounding program started in 2004 and has since been repeated yearly in the same season. The aim of the study is to monitor variations in groundwater properties down to 500 m depth through changes in the electrical conductivity of the earth at ONKALO and the repository area. The original measurement grid was based on two 1400 m long broadside profiles with 200 m mutual distance and 200 m station separation. The receiver and transmitter sites are marked with stakes, and the profiles were measured using 200, 500, and 800 m coil separations. The measurement program was revised in 2007 and again in 2009. Now 15 noisy soundings were removed from the program and 3 new points were selected from the area east of ONKALO. The new receiver/transmitter sites, called ABC points, were marked with stakes, and the points were measured using transmitter-receiver separations of 200, 400 and 800 meters. In 2009 the new EM-Sampo monitoring program included 28+9 soundings. The numerous power lines and cables in the area generate local disturbances on the sounding curves, but the S/N (signal-to-noise) ratio and the repeatability of the results are reasonably good even with long coil separations. However, most suitable for monitoring purposes are the sites without strong shallow 3D effects. Comparison of the new results with the earlier 2004-2008 surveys shows differences on some ARD (apparent resistivity-depth) curves. These are mainly the result of modified shallow structures. The changes in groundwater conditions based on the monitoring results seem insignificant. (orig.)

  3. Linking Cognitive and Social Aspects of Sound Change Using Agent-Based Modeling.

    Science.gov (United States)

    Harrington, Jonathan; Kleber, Felicitas; Reubold, Ulrich; Schiel, Florian; Stevens, Mary

    2018-03-26

    The paper defines the core components of an interactive-phonetic (IP) sound change model. The starting point for the IP-model is that a phonological category is often skewed phonetically in a certain direction by the production and perception of speech. A prediction of the model is that sound change is likely to come about as a result of perceiving phonetic variants in the direction of the skew and at the probabilistic edge of the listener's phonological category. The results of agent-based computational simulations applied to the sound change in progress, /u/-fronting in Standard Southern British, were consistent with this hypothesis. The model was extended to sound changes involving splits and mergers by using the interaction between the agents to drive the phonological reclassification of perceived speech signals. The simulations showed no evidence of any acoustic change when this extended model was applied to Australian English data in which /s/ has been shown to retract due to coarticulation in /str/ clusters. Some agents nevertheless varied in their phonological categorizations during interaction between /str/ and /ʃtr/: This vacillation may represent the potential for sound change to occur. The general conclusion is that many types of sound change are the outcome of how phonetic distributions are oriented with respect to each other, their association to phonological classes, and how these types of information vary between speakers that happen to interact with each other. Copyright © 2018 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
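The IP-model's core loop, agents exchanging phonetically skewed tokens and updating their category representations, can be caricatured in a few lines. This toy simulation (all parameter values are invented for illustration and greatly simplify the agent-based model described) shows how a small production skew accumulates into a category shift across interactions:

```python
import random

def simulate(n_agents=5, rounds=2000, skew=0.1, noise=0.05, rate=0.05, seed=1):
    """Each round a random speaker produces a token skewed in one phonetic
    direction; a random listener shifts its category mean toward the token."""
    rng = random.Random(seed)
    means = [0.0] * n_agents  # each agent's category mean (arbitrary phonetic units)
    for _ in range(rounds):
        speaker, listener = rng.sample(range(n_agents), 2)
        token = means[speaker] + skew + rng.gauss(0.0, noise)
        means[listener] += rate * (token - means[listener])
    return means
```

Starting from identical categories, the population mean drifts in the direction of the skew, mimicking a sound change in progress such as the /u/-fronting case modelled in the paper.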

  4. Differences in chewing sounds of dry-crisp snacks by multivariate data analysis

    Science.gov (United States)

    De Belie, N.; Sivertsvik, M.; De Baerdemaeker, J.

    2003-09-01

Chewing sounds of different types of dry-crisp snacks (two types of potato chips, prawn crackers, cornflakes and low-calorie snacks from extruded starch) were analysed to assess differences in sound emission patterns. The emitted sounds were recorded by a microphone placed over the ear canal. The first bite and the first subsequent chew were selected from the time signal, and a fast Fourier transformation provided the power spectra. Different multivariate analysis techniques were used for classification of the snack groups, including principal component analysis (PCA) and unfold partial least-squares (PLS) algorithms, as well as multi-way techniques such as three-way PLS, three-way PCA (Tucker3), and parallel factor analysis (PARAFAC) on the first bite and subsequent chew. The models were evaluated by calculating the classification errors and the root mean square error of prediction (RMSEP) for independent validation sets. The logarithm of the power spectra obtained from the chewing sounds could be used successfully to distinguish the different snack groups. When different chewers were used, recalibration of the models was necessary. Multi-way models distinguished between chewing sounds of different snack groups better than PCA on bite or chew separately and than unfold PLS. Of all the three-way models applied, N-PLS with three components showed the best classification capabilities, resulting in classification errors of 14-18%. Most of the incorrect classifications were due to one type of potato chips that had a very irregular shape, resulting in a wide variation in the emitted sounds.
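The front end of this pipeline, log power spectra of the bite/chew segments followed by a principal component projection, can be sketched with NumPy alone. This is a schematic stand-in for the multivariate models used, not the authors' analysis:

```python
import numpy as np

def log_power_spectrum(segment):
    """Log10 power spectrum of one chewing-sound segment."""
    power = np.abs(np.fft.rfft(segment)) ** 2
    return np.log10(power + 1e-12)  # small offset avoids log(0)

def pca_scores(spectra, n_components=2):
    """Project a (samples x features) matrix of spectra onto its first
    principal components via SVD of the mean-centred data."""
    X = np.asarray(spectra, dtype=float)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

Classification of snack groups would then operate on the score coordinates; the multi-way models in the study additionally exploit the bite-versus-chew structure that this flattened sketch ignores.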

  5. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology of game audio studies and identifies significant aesthetic differences between film sound and sound in video game projects. The article attempts to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds, and offers a range of research methods that consider video game scoring as a contemporary creative practice.

  6. Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.

    Science.gov (United States)

    Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang

    2007-01-01

    Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.

  7. GLABROUS INFLORESCENCE STEMS (GIS) is required for trichome branching through gibberellic acid signaling in Arabidopsis.

    Science.gov (United States)

    An, Lijun; Zhou, Zhongjing; Su, Sha; Yan, An; Gan, Yinbo

    2012-02-01

    Cell differentiation generally corresponds to the cell cycle, typically forming a non-dividing cell with a unique differentiated morphology, and the Arabidopsis trichome is an excellent model system for studying all aspects of cell differentiation. Although gibberellic acid is reported to be involved in trichome branching in Arabidopsis, the mechanism of this signaling is unclear. Here, we demonstrated that GLABROUS INFLORESCENCE STEMS (GIS) is required for the control of trichome branching through gibberellic acid signaling. The phenotypes of a loss-of-function gis mutant and an overexpressor showed that GIS acts as a repressor to control trichome branching. Our results also show that GIS is not required for cell endoreduplication, and our molecular and genetic analyses show that GIS functions downstream of the key regulator of trichome branching, STICHEL (STI), to control trichome branching through an endoreduplication-independent pathway. Furthermore, our results suggest that GIS controls trichome branching in Arabidopsis through two different pathways, acting either upstream or downstream of the negative regulator of gibberellic acid signaling, SPINDLY (SPY).

  8. Pattern theory the stochastic analysis of real-world signals

    CERN Document Server

    Mumford, David

    2010-01-01

    Pattern theory is a distinctive approach to the analysis of all forms of real-world signals. At its core is the design of a large variety of probabilistic models whose samples reproduce the look and feel of the real signals, their patterns, and their variability. Bayesian statistical inference then allows you to apply these models in the analysis of new signals. This book treats the mathematical tools, the models themselves, and the computational algorithms for applying statistics to analyze six representative classes of signals of increasing complexity. The book covers patterns in text, sound

  9. Modulation of EEG Theta Band Signal Complexity by Music Therapy

    Science.gov (United States)

    Bhattacharya, Joydeep; Lee, Eun-Jeong

    The primary goal of this study was to investigate the impact of monochord (MC) sounds, a type of archaic sound used in music therapy, on the neural complexity of EEG signals obtained from patients undergoing chemotherapy. The secondary goal was to compare the EEG signal complexity values for monochords with those for progressive muscle relaxation (PMR), an alternative therapy for relaxation. Forty cancer patients were randomly allocated to one of the two relaxation groups, MC and PMR, over a period of six months; continuous EEG signals were recorded during the first and last sessions. EEG signals were analyzed by applying signal mode complexity, a measure of complexity of neuronal oscillations. Across sessions, both groups showed a modulation of complexity of the beta-2 band (20-29 Hz) at midfrontal regions, but only the MC group showed a modulation of complexity of the theta band (3.5-7.5 Hz) at posterior regions. The neuronal complexity patterns thus showed different changes in frequency-band-specific EEG complexity following the two different interventions. Moreover, the different neural responses to listening to monochords and to PMR were observed after regular relaxation interventions over a short time span.

  10. Cycling and sounds : the impact of the use of electronic devices on cycling safety.

    NARCIS (Netherlands)

    Stelling-Konczak, A.; Hagenzieker, M.P. & Wee, B. van

    2015-01-01

    The role of auditory perception of traffic sounds has often been stressed, especially for vulnerable road users such as cyclists or (visually impaired) pedestrians. This often in relation to two growing trends feared to negatively affect the use of auditory signals by road users: popularity of

  11. Cycling and sounds : the impact of the use of electronic devices on cycling safety.

    NARCIS (Netherlands)

    Stelling-Konczak, A.; Hagenzieker, M. & Wee, B. van

    2013-01-01

    The role of auditory perception of traffic sounds has often been stressed, especially for vulnerable road users such as cyclists or (visually impaired) pedestrians. This often in relation to two growing trends feared to negatively affect the use of auditory signals by road users: popularity of

  12. Cycling and sounds : The impact of the use of electronic devices on cycling safety

    NARCIS (Netherlands)

    Stelling-Konczak, A.; Hagenzieker, M.P.; Van Wee, G.P.

    2013-01-01

    The role of auditory perception of traffic sounds has often been stressed, especially for vulnerable road users such as cyclists or (visually impaired) pedestrians. This often in relation to two growing trends feared to negatively affect the use of auditory signals by road users: popularity of

  13. Beneath sci-fi sound: primer, science fiction sound design, and American independent cinema

    OpenAIRE

    Johnston, Nessa

    2012-01-01

    Primer is a very low budget science-fiction film that deals with the subject of time travel; however, it looks and sounds distinctly different from other films associated with the genre. While Hollywood blockbuster sci-fi relies on “sound spectacle” as a key attraction, Primer in contrast sounds “lo-fi” and screen-centred, mixed to two-channel stereo rather than the now industry-standard 5.1 surround sound. Although this is partly a consequence of the economics of its production, the...

  14. UAV-based Radar Sounding of Antarctic Ice

    Science.gov (United States)

    Leuschen, Carl; Yan, Jie-Bang; Mahmood, Ali; Rodriguez-Morales, Fernando; Hale, Rick; Camps-Raga, Bruno; Metz, Lynsey; Wang, Zongbo; Paden, John; Bowman, Alec; Keshmiri, Shahriar; Gogineni, Sivaprasad

    2014-05-01

    We developed a compact radar for use on a small UAV to conduct measurements over the ice sheets in Greenland and Antarctica. It operates at center frequencies of 14 and 35 MHz with bandwidths of 1 MHz and 4 MHz, respectively. The radar weighs about 2 kg and is housed in a box with dimensions of 20.3 cm x 15.2 cm x 13.2 cm. It transmits a signal power of 100 W at a pulse repetition frequency of 10 kHz and requires an average power of about 20 W. The antennas for operating the radar are integrated into the wings and airframe of a small UAV with a wingspan of 5.3 m. We selected the frequencies of 14 and 35 MHz based on previous successful soundings of temperate ice in Alaska with a 12.5 MHz impulse radar [Arcone, 2002] and of temperate glaciers in Patagonia with a 30 MHz monocycle radar [Blindow et al., 2012]. We developed the radar-equipped UAV to perform surveys over a 2-D grid, which allows us to synthesize a large two-dimensional aperture and obtain fine resolution in both the along- and cross-track directions. Low-frequency, high-sensitivity radars with 2-D aperture synthesis capability are needed to overcome the surface and volume scatter that masks weak echoes from the ice-bed interface of fast-flowing glaciers. We collected data with the radar-equipped UAV on sub-glacial ice near Lake Whillans at both 14 and 35 MHz. We acquired data to evaluate the concept of 2-D aperture synthesis and demonstrated the first successful sounding of ice with a radar on a UAV. We are planning to build multiple radar-equipped UAVs for collecting fine-resolution data near the grounding lines of fast-flowing glaciers. In this presentation we will provide a brief overview of the radar and UAV, as well as present results obtained at both 14 and 35 MHz. Arcone, S. 2002. Airborne-radar stratigraphy and electrical structure of temperate firn: Bagley Ice Field, Alaska, U.S.A. Journal of Glaciology, 48, 317-334. Blindow, N., C. Salat, and G. Casassa. 2012. Airborne GPR sounding of

  15. Brain responses to sound intensity changes dissociate depressed participants and healthy controls.

    Science.gov (United States)

    Ruohonen, Elisa M; Astikainen, Piia

    2017-07-01

    Depression is associated with bias in emotional information processing, but less is known about the processing of neutral sensory stimuli. Of particular interest is the processing of sound intensity, which is suggested to indicate central serotonergic function. We tested whether event-related brain potentials (ERPs) to occasional changes in sound intensity can dissociate first-episode depressed, recurrent depressed and healthy control participants. The first-episode depressed group showed larger N1 amplitude to deviant sounds compared to the recurrent depression group and control participants. In addition, both depression groups, but not the control group, showed larger N1 amplitude to deviant than to standard sounds. Whether these manifestations of sensory over-excitability in depression are directly related to serotonergic neurotransmission requires further research. The method, based on ERPs to sound intensity change, is a fast and low-cost way to objectively measure brain activation and holds promise as a future diagnostic tool. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System.

    Science.gov (United States)

    Cazettes, Fanny; Fischer, Brian J; Peña, Jose L

    2016-02-17

    Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics. In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior. Copyright © 2016 the authors 0270-6474/16/362101-10$15.00/0.

  17. Assessment of the health effects of low-frequency sounds and infra-sounds from wind farms. ANSES Opinion. Collective expertise report

    International Nuclear Information System (INIS)

    Lepoutre, Philippe; Avan, Paul; Cheveigne, Alain de; Ecotiere, David; Evrard, Anne-Sophie; Hours, Martine; Lelong, Joel; Moati, Frederique; Michaud, David; Toppila, Esko; Beugnet, Laurent; Bounouh, Alexandre; Feltin, Nicolas; Campo, Pierre; Dore, Jean-Francois; Ducimetiere, Pierre; Douki, Thierry; Flahaut, Emmanuel; Gaffet, Eric; Lafaye, Murielle; Martinsons, Christophe; Mouneyrac, Catherine; Ndagijimana, Fabien; Soyez, Alain; Yardin, Catherine; Cadene, Anthony; Merckel, Olivier; Niaudet, Aurelie; Cadene, Anthony; Saddoki, Sophia; Debuire, Brigitte; Genet, Roger

    2017-03-01

    a health effect has not been documented. In this context, ANSES recommends: Concerning studies and research: - verifying whether or not there is a possible mechanism modulating the perception of audible sound at intensities of infra-sound similar to those measured from local residents; - studying the effects of the amplitude modulation of the acoustic signal on the noise-related disturbance felt; - studying the assumption that cochlea-vestibular effects may be responsible for pathophysiological effects; - undertaking a survey of residents living near wind farms enabling the identification of an objective signature of a physiological effect. Concerning information for local residents and the monitoring of noise levels: - enhancing information for local residents during the construction of wind farms and participation in public inquiries undertaken in rural areas; - systematically measuring the noise emissions of wind turbines before and after they are brought into service; - setting up, especially in the event of controversy, continuous noise measurement systems around wind farms (based on experience at airports, for example). Lastly, the Agency reiterates that the current regulations state that the distance between a wind turbine and the first home should be evaluated on a case-by-case basis, taking the conditions of wind farms into account. This distance, of at least 500 metres, may be increased further to the results of an impact assessment, in order to comply with the limit values for noise exposure. Current knowledge of the potential health effects of exposure to infra-sounds and low-frequency noise provides no justification for changing the current limit values or for extending the spectrum of noise currently taken into consideration

  18. An alternative respiratory sounds classification system utilizing artificial neural networks

    Directory of Open Access Journals (Sweden)

    Rami J Oweis

    2015-04-01

    Full Text Available Background: Computerized lung sound analysis involves recording lung sound via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as the non-linearity and non-stationarity caused by air turbulence. Automatic analysis is necessary to avoid dependence on expert skills. Methods: This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification was performed comparatively using both the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. Results: The ANN outperformed the ANFIS system, with accuracy, specificity, and sensitivity of 98.6%, 100%, and 97.8%, respectively. These parameters compare favorably with many recent approaches. Conclusions: The proposed method is an efficient, fast tool for the intended purpose, as manifested in its performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, utilizing the autocorrelation function for feature extraction in such applications enhances performance and avoids undesired computational complexity compared to other techniques.
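
    The autocorrelation feature-extraction stage this record describes can be illustrated as follows. This is a minimal sketch with synthetic sounds standing in for the recorded respiratory signals; the classifier stage (ANN/ANFIS in the paper) is omitted.

```python
import numpy as np

def autocorr_features(signal, n_lags=64):
    """Normalized autocorrelation at the first n_lags lags,
    used here as a compact feature vector for a sound."""
    x = signal - signal.mean()
    full = np.correlate(x, x, mode="full")
    ac = full[full.size // 2:]      # keep non-negative lags
    return ac[:n_lags] / ac[0]      # normalize by lag-0 energy

rng = np.random.default_rng(1)
t = np.arange(2048) / 4000.0

# Two synthetic "respiratory sound" classes with different structure:
# a periodic wheeze-like tone and a broadband crackle-like noise.
wheeze = np.sin(2 * np.pi * 400 * t) + 0.3 * rng.standard_normal(t.size)
crackle = rng.standard_normal(t.size)

f_wheeze = autocorr_features(wheeze)
f_crackle = autocorr_features(crackle)

# A periodic sound keeps high autocorrelation at its period
# (10 samples here); broadband noise decays almost immediately.
print(f_wheeze[10] > f_crackle[10])
```

    The resulting feature vectors would then be fed to a classifier in place of raw spectra, which is the computational saving the authors point to.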

  19. Filamin and phospholipase C-ε are required for calcium signaling in the Caenorhabditis elegans spermatheca.

    Directory of Open Access Journals (Sweden)

    Ismar Kovacevic

    2013-05-01

    Full Text Available The Caenorhabditis elegans spermatheca is a myoepithelial tube that stores sperm and undergoes cycles of stretching and constriction as oocytes enter, are fertilized, and exit into the uterus. FLN-1/filamin, a stretch-sensitive structural and signaling scaffold, and PLC-1/phospholipase C-ε, an enzyme that generates the second messenger IP3, are required for embryos to exit normally after fertilization. Using GCaMP, a genetically encoded calcium indicator, we show that entry of an oocyte into the spermatheca initiates a distinctive series of IP3-dependent calcium oscillations that propagate across the tissue via gap junctions and lead to constriction of the spermatheca. PLC-1 is required for the calcium release mechanism triggered by oocyte entry, and FLN-1 is required for timely initiation of the calcium oscillations. INX-12, a gap junction subunit, coordinates propagation of the calcium transients across the spermatheca. Gain-of-function mutations in ITR-1/IP3R, an IP3-dependent calcium channel, and loss-of-function mutations in LFE-2, a negative regulator of IP3 signaling, increase calcium release and suppress the exit defect in filamin-deficient animals. We further demonstrate that a regulatory cassette consisting of MEL-11/myosin phosphatase and NMY-1/non-muscle myosin is required for coordinated contraction of the spermatheca. In summary, this study answers long-standing questions concerning calcium signaling dynamics in the C. elegans spermatheca and suggests FLN-1 is needed in response to oocyte entry to trigger calcium release and coordinated contraction of the spermathecal tissue.

  20. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity. […] is needed, and a European COST Action TU0901, "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality […]

  1. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  2. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

    This article is an analysis of two sound art performances that took place June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors of this article suggest exploring the events by approaching them as ‘situations’ (Doherty 2009). With this approach it becomes possible to engage […] and combine theories from several fields. Aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces: visual, auditory, performative, social, spatial and durational dimensions become […]

  3. Improving auscultatory proficiency using computer simulated heart sounds

    Directory of Open Access Journals (Sweden)

    Hanan Salah EL-Deen Mohamed EL-Halawany

    2016-09-01

    Full Text Available This study aimed to examine the effects of 'Heart Sounds', a web-based program, on improving fifth-year medical students' auscultation skills in a medical school in Egypt. This program was designed for medical students to master cardiac auscultation skills in addition to their usual clinical medical courses. Pre- and post-tests were performed to assess students' improvement in auscultation skill. Upon completing the training, students were required to complete a questionnaire reflecting on the learning experience they developed through the 'Heart Sounds' program. Results from pre- and post-tests revealed a significant improvement in students' auscultation skills. In examining male and female students' pre- and post-test results, we found that both male and female students had achieved a remarkable improvement in their auscultation skills. Students also stated clearly that the learning experience they had with the 'Heart Sounds' program was different from any other traditional way of teaching. They stressed that the program had significantly improved their auscultation skills and enhanced their self-confidence in their ability to practice those skills. It is also recommended that the 'Heart Sounds' learning experience be extended by assessing students' practical improvement in real-life situations.

  4. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  5. Determining the speed of sound in the air by sound wave interference

    Science.gov (United States)

    Silva, Abel A.

    2017-07-01

    Mechanical waves propagate through material media. Sound is an example of a mechanical wave. In fluids like air, sound waves propagate through successive longitudinal perturbations of compression and decompression. Audible sound frequencies for human ears range from 20 to 20 000 Hz. In this study, the speed of sound v in the air is determined by identifying the maxima of interference from two synchronous waves at frequency f. The values of v were corrected to 0 °C. An experimental average value of v̄_exp = 336 ± 4 m s⁻¹ was found, 1.5% larger than the reference value. The standard deviation of 4 m s⁻¹ (1.2% of v̄_exp) is a value improved by use of the central limit theorem. The proposed procedure for determining the speed of sound in the air is intended as an academic activity for physics classes in scientific and technological college courses.
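
    The arithmetic behind the procedure can be sketched in a few lines: moving a detector along the line between two in-phase sources at frequency f, adjacent intensity maxima are half a wavelength apart, so v = f·λ. The positions below are hypothetical illustrative measurements, not the paper's data.

```python
# Speed of sound from two-source interference maxima (v = f * lambda).
f = 4000.0  # source frequency in Hz

# Hypothetical positions (m) of successive interference maxima
# measured along the line between the two sources.
maxima = [0.0000, 0.0429, 0.0858, 0.1286, 0.1715]

spacings = [b - a for a, b in zip(maxima, maxima[1:])]
wavelength = 2.0 * sum(spacings) / len(spacings)  # maxima are lambda/2 apart

v = f * wavelength
print(round(v, 1))  # 343.0, close to the accepted value at room temperature
```

    In the classroom version, repeating this over many trials and averaging is what allows the central limit theorem to tighten the uncertainty the abstract mentions.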

  6. Fluid Sounds

    DEFF Research Database (Denmark)

    Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970s, developed into various established research and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field, embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation […]

  7. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.
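
    A rough sketch of the kind of spectral degradation the record describes: a noise vocoder splits a signal into a few frequency bands, extracts each band's envelope, and uses it to modulate band-limited noise. This is a crude FFT-based illustration, not the clinical four-channel vocoder used in the study.

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=4000.0):
    """Crude noise vocoder: log-spaced bands, envelope extraction by
    rectification and smoothing, then envelope-modulated noise."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    win = int(0.01 * fs)  # ~10 ms smoothing window for the envelope
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        # Band-pass the signal via FFT masking.
        spec = np.fft.rfft(signal)
        spec[~band_mask] = 0.0
        band = np.fft.irfft(spec, n=signal.size)
        # Envelope: rectify and smooth.
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        # Modulate band-limited noise with the envelope.
        nspec = np.fft.rfft(rng.standard_normal(signal.size))
        nspec[~band_mask] = 0.0
        out += env * np.fft.irfft(nspec, n=signal.size)
    return out

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
degraded = noise_vocode(tone, fs)
print(degraded.shape == tone.shape)
```

    The output preserves the temporal envelope per band but discards spectral fine structure, which is what makes such stimuli hard to identify without training.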

  8. Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration

    OpenAIRE

    Park, Saebyul; Ban, Seonghoon; Hong, Dae Ryong; Yeo, Woon Seung

    2013-01-01

    SSN (Sound Surfing Network) is a performance system that provides a new musical experience by incorporating mobile phone-based spatial sound control into collaborative music performance. SSN enables both the performer and the audience to manipulate the spatial distribution of sound using the smartphones of the audience as a distributed speaker system. Proposing a new perspective on the social aspect of music appreciation, SSN will provide a new possibility for mobile music performances in the context of in […]

  9. Sound Exposure of Symphony Orchestra Musicians

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Pedersen, Ellen Raben; Juhl, Peter Møller

    2011-01-01

    Background: Assessment of sound exposure by noise dosimetry can be challenging, especially when measuring the exposure of classical orchestra musicians, where sound originates from many different instruments. A new measurement method of bilateral sound exposure of classical musicians was developed and used to characterize sound exposure of the left and right ear simultaneously in two different symphony orchestras. Objectives: To measure binaural sound exposure of professional classical musicians and to identify possible exposure risk factors for specific musicians. Methods: Sound exposure was measured […] dBA, and their left ear was exposed 4.6 dB more than the right ear. Percussionists were exposed to high sound peaks >115 dBC, but less continuous sound exposure was observed in this group. Musicians were exposed up to an LAeq8h of 92 dB, and a majority of musicians were exposed to sound levels exceeding […]

  10. Analysis of the HVAC system's sound quality using the design of experiments

    International Nuclear Information System (INIS)

    Park, Sang Gil; Sim, Hyun Jin; Yoon, Ji Hyun; Jeong, Jae Eun; Choi, Byoung Jae; Oh, Jae Eung

    2009-01-01

Human hearing is very sensitive to sound, so a subjective index of sound quality is required. Each sound evaluation situation is characterized by Sound Quality (SQ) metrics. When substituting the level of only one frequency band, the tendency of the substitution across the whole frequency band cannot be seen during SQ evaluation. In this study, the Design of Experiments (DOE) is used to analyze noise from an automotive Heating, Ventilating, and Air Conditioning (HVAC) system. The frequency domain is divided into 12 equal bands, and the level of each band is increased or decreased based on the 'loud' and 'sharp' sound attributes of the SQ analysis. By using DOE, the number of tests is reduced effectively, and the main result is the main effect at each band: whether a change in a band (an increase or decrease in sound pressure) or no change has the greatest effect on the identifiable characteristics of SQ. This enables the objective frequency band to be selected. Through the results obtained, the sensitivity to physical level changes in an arbitrary frequency domain can be determined
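The main-effects idea behind the DOE approach can be sketched as follows. This is an illustrative toy with 3 frequency bands instead of the paper's 12, and the SQ response scores are invented for the example, not taken from the study:

```python
from itertools import product

def main_effects(design, responses):
    """Main effect per factor from a two-level (+/-1) design matrix:
    mean response at the +1 level minus mean response at the -1 level."""
    n_factors = len(design[0])
    effects = []
    for j in range(n_factors):
        hi = [r for row, r in zip(design, responses) if row[j] == +1]
        lo = [r for row, r in zip(design, responses) if row[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Full factorial design for 3 bands: each band's sound pressure is either
# decreased (-1) or increased (+1), giving 2**3 = 8 experimental runs.
design = list(product([-1, +1], repeat=3))

# Hypothetical 'loudness'-like scores: band 2 dominates, band 0 matters a
# little, band 1 hardly at all.
responses = [2.0 * row[2] + 0.5 * row[0] + 0.1 * row[1] for row in design]

effects = main_effects(design, responses)
print(effects)  # band 2 shows the largest main effect
```

The band with the largest main effect is the one whose level change most influences the SQ metric, which is how the objective frequency band would be selected.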

  11. 78 FR 66940 - Regulatory Requirements for Hearing Aid Devices and Personal Sound Amplification Products; Draft...

    Science.gov (United States)

    2013-11-07

    ... for such products. These inconsistent interpretations of the definitions may inadvertently result in... amplification products (PSAPs), as well as the regulatory controls that apply to each. This draft guidance is... of clarity regarding how the Agency defines a hearing aid versus a personal sound amplification...

  12. Phoneme Compression: processing of the speech signal and effects on speech intelligibility in hearing-Impaired listeners

    NARCIS (Netherlands)

    A. Goedegebure (Andre)

    2005-01-01

Hearing-aid users often continue to have problems with poor speech understanding in difficult acoustic conditions. Another commonly reported problem is that certain sounds become too loud whereas other sounds are still not audible. Dynamic range compression is a signal processing

  13. A Relational Database Model and Tools for Environmental Sound Recognition

    Directory of Open Access Journals (Sweden)

    Yuksel Arslan

    2017-12-01

Full Text Available Environmental sound recognition (ESR) has become a hot topic in recent years. ESR is mainly based on machine learning (ML), and ML algorithms first require a training database. This database must comprise the sounds to be recognized and other related sounds. An ESR system needs the database during training, testing, and in the production stage. In this paper, we present the design and pilot establishment of a database that will assist all researchers who want to establish an ESR system. This database employs the relational database model, which has not been used for this task before. We explain the design and implementation details of the database and the data collection and loading process. We also describe the tools and the graphical user interfaces developed for a desktop application and for the Web.
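A relational layout of the kind described can be sketched with Python's built-in sqlite3 module. The table and column names below are hypothetical illustrations, not the authors' actual schema:

```python
import sqlite3

# Minimal two-table schema: a class vocabulary and the recordings that
# reference it, with a train/test split for ML use.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE sound_class (
    class_id INTEGER PRIMARY KEY,
    name     TEXT NOT NULL UNIQUE          -- e.g. 'siren', 'dog bark'
);
CREATE TABLE recording (
    rec_id      INTEGER PRIMARY KEY,
    class_id    INTEGER NOT NULL REFERENCES sound_class(class_id),
    file_path   TEXT NOT NULL,
    sample_rate INTEGER NOT NULL,
    split       TEXT CHECK (split IN ('train', 'test'))
);
""")
cur.execute("INSERT INTO sound_class (name) VALUES ('siren'), ('dog bark')")
cur.executemany(
    "INSERT INTO recording (class_id, file_path, sample_rate, split) "
    "VALUES (?, ?, ?, ?)",
    [(1, "siren_001.wav", 44100, "train"),
     (1, "siren_002.wav", 44100, "test"),
     (2, "bark_001.wav", 22050, "train")],
)

# Typical training-time query: how many training recordings per class?
rows = cur.execute("""
    SELECT c.name, COUNT(*) FROM recording r
    JOIN sound_class c ON c.class_id = r.class_id
    WHERE r.split = 'train'
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # [('dog bark', 1), ('siren', 1)]
```

The normalization (classes in their own table, referenced by foreign key) is what distinguishes the relational model from the flat file lists commonly used for ESR datasets.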

  14. Detecting regular sound changes in linguistics as events of concerted evolution.

    Science.gov (United States)

    Hruschka, Daniel J; Branford, Simon; Smith, Eric D; Wilkins, Jon; Meade, Andrew; Pagel, Mark; Bhattacharya, Tanmoy

    2015-01-05

    Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants.

    Directory of Open Access Journals (Sweden)

    Yi Zheng

Full Text Available Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

  16. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    Science.gov (United States)

    Wolf, Gail Marie

    2016-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…

  17. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using both physical and digital models at three distinct scales. The results suggest hyperboloid geometry, while difficult to fabricate, facilitates sound scattering.

  18. On the relevance of source effects in geomagnetic pulsations for induction soundings

    Science.gov (United States)

    Neska, Anne; Tadeusz Reda, Jan; Leszek Neska, Mariusz; Petrovich Sumaruk, Yuri

    2018-03-01

    This study is an attempt to close a gap between recent research on geomagnetic pulsations and their usage as source signals in electromagnetic induction soundings (i.e., magnetotellurics, geomagnetic depth sounding, and magnetovariational sounding). The plane-wave assumption as a precondition for the proper performance of these methods is partly violated by the local nature of field line resonances which cause a considerable portion of pulsations at mid latitudes. It is demonstrated that and explained why in spite of this, the application of remote reference stations in quasi-global distances for the suppression of local correlated-noise effects in induction arrows is possible in the geomagnetic pulsation range. The important role of upstream waves and of the magnetic equatorial region for such applications is emphasized. Furthermore, the principal difference between application of reference stations for local transfer functions (which result in sounding curves and induction arrows) and for inter-station transfer functions is considered. The preconditions for the latter are much stricter than for the former. Hence a failure to estimate an inter-station transfer function to be interpreted in terms of electromagnetic induction, e.g., because of field line resonances, does not necessarily prohibit use of the station pair for a remote reference estimation of the impedance tensor.
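The remote-reference idea discussed in this abstract can be sketched in a simplified scalar form (in full magnetotellurics the impedance is a 2x2 tensor). The synthetic data, noise levels, and single-channel transfer function below are illustrative assumptions, not the study's actual processing:

```python
import random

random.seed(1)

def cgauss():
    """Zero-mean complex Gaussian sample."""
    return complex(random.gauss(0, 1), random.gauss(0, 1))

# True transfer function Z links the local electric field E to the
# magnetic field B. Local B is contaminated by correlated noise n that is
# absent at the distant remote-reference channel R.
Z_true = complex(2.0, -1.0)
N = 20000
E, B, R = [], [], []
for _ in range(N):
    s = cgauss()                  # natural source field (plane wave)
    n = cgauss()                  # local noise on B
    B.append(s + n)
    E.append(Z_true * s)          # E responds only to the source field
    R.append(s + 0.3 * cgauss())  # remote sees the source, not the noise

def cross(x, y):
    """Averaged cross-spectrum <x y*>."""
    return sum(a * b.conjugate() for a, b in zip(x, y)) / len(x)

Z_local = cross(E, B) / cross(B, B)   # biased downward by local noise
Z_remote = cross(E, R) / cross(B, R)  # local noise averages out
print(abs(Z_local - Z_true), abs(Z_remote - Z_true))
```

Because the remote channel is uncorrelated with the local noise, the cross-spectra with R cancel the noise bias, which is why remote stations at quasi-global distances can suppress local correlated-noise effects.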

  20. Sound field simulation and acoustic animation in urban squares

    Science.gov (United States)

    Kang, Jian; Meng, Yan

    2005-04-01

Urban squares are important components of cities, and the acoustic environment is important for their usability. While models and formulae for predicting the sound field in urban squares are important for their soundscape design and improvement, acoustic animation tools would be of great importance for designers as well as for the public participation process, given that below a certain sound level, soundscape evaluation depends mainly on the type of sounds rather than their loudness. This paper first briefly introduces acoustic simulation models developed for urban squares, as well as empirical formulae derived from a series of simulations. It then presents an acoustic animation tool currently being developed. In urban squares there are multiple dynamic sound sources, so computation time becomes a main concern. Nevertheless, the requirements for acoustic animation in urban squares are relatively low compared to auditoria. As a result, it is important to simplify the simulation process and algorithms. Based on a series of subjective tests in a virtual reality environment with various simulation parameters, a fast simulation method with acceptable accuracy has been explored. [Work supported by the European Commission.]

  1. Complex coevolution of wing, tail, and vocal sounds of courting male bee hummingbirds.

    Science.gov (United States)

    Clark, Christopher J; McGuire, Jimmy A; Bonaccorso, Elisa; Berv, Jacob S; Prum, Richard O

    2018-03-01

    Phenotypic characters with a complex physical basis may have a correspondingly complex evolutionary history. Males in the "bee" hummingbird clade court females with sound from tail-feathers, which flutter during display dives. On a phylogeny of 35 species, flutter sound frequency evolves as a gradual, continuous character on most branches. But on at least six internal branches fall two types of major, saltational changes: mode of flutter changes, or the feather that is the sound source changes, causing frequency to jump from one discrete value to another. In addition to their tail "instruments," males also court females with sound from their syrinx and wing feathers, and may transfer or switch instruments over evolutionary time. In support of this, we found a negative phylogenetic correlation between presence of wing trills and singing. We hypothesize this transference occurs because wing trills and vocal songs serve similar functions and are thus redundant. There are also three independent origins of self-convergence of multiple signals, in which the same species produces both a vocal (sung) frequency sweep, and a highly similar nonvocal sound. Moreover, production of vocal, learned song has been lost repeatedly. Male bee hummingbirds court females with a diverse, coevolving array of acoustic traits. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.

  2. 77 FR 37318 - Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort...

    Science.gov (United States)

    2012-06-21

    ...-AA00 Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort... Coast Guard will enforce a Safety Zone for the Sound of Independence event in the Santa Rosa Sound, Fort... during the Sound of Independence. During the enforcement period, entry into, transiting or anchoring in...

  3. Wheeze sound analysis using computer-based techniques: a systematic review.

    Science.gov (United States)

    Ghulam Nabi, Fizza; Sundaraj, Kenneth; Chee Kiang, Lam; Palaniappan, Rajkumar; Sundaraj, Sebastian

    2017-10-31

Wheezes are high-pitched continuous respiratory acoustic sounds which are produced as a result of airway obstruction. Computer-based analyses of wheeze signals have been extensively used for parametric analysis, spectral analysis, identification of airway obstruction, feature extraction and disease or pathology classification. While this area is currently an active field of research, the available literature has not yet been reviewed. This systematic review identified articles describing wheeze analyses using computer-based techniques on the SCOPUS, IEEE Xplore, ACM, PubMed, Springer, and Elsevier electronic databases. After a set of selection criteria was applied, 41 articles were selected for detailed analysis. The findings reveal that 1) computerized wheeze analysis can be used for the identification of disease severity level or pathology, 2) further research is required to achieve acceptable rates of identification on the degree of airway obstruction with normal breathing, and 3) analysis using combinations of features and on subgroups of the respiratory cycle has provided a pathway to classify various diseases or pathologies that stem from airway obstruction.

  4. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. The Delta sound (click) is generated using the Adobe Audition software, with a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened to through the headphones, using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
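The binaural rendering step described above (convolving a sound with a left/right transfer-function pair) can be sketched with toy data. The impulse responses below encode only an interaural time difference (ITD) and level difference (ILD); measured HRTFs, as used in the study, would replace them:

```python
def convolve(x, h):
    """Direct discrete convolution of two sample sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

fs = 44100                      # sampling frequency (Hz), as in the study
click = [1.0] + [0.0] * 9       # Delta-like sound

# Source to the right: the left-ear signal arrives ~0.6 ms later, quieter.
itd_samples = int(0.0006 * fs)  # 26 samples at 44.1 kHz
h_right = [1.0]
h_left = [0.0] * itd_samples + [0.5]

left = convolve(click, h_left)
right = convolve(click, h_right)

onset = lambda y: next(i for i, v in enumerate(y) if abs(v) > 1e-9)
print(onset(left) - onset(right))  # ITD in samples: 26
print(max(right) / max(left))      # ILD as an amplitude ratio: 2.0
```

Played over headphones, these two channels would be perceived as a click lateralized to the right, which is the cue structure the localization experiment relies on.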

  5. Dual-frequency radio soundings of planetary ionospheres avoid misinterpretations of ionospheric features

    Science.gov (United States)

    Paetzold, M.; Andert, T.; Bird, M. K.; Häusler, B.; Hinson, D. P.; Peter, K.; Tellmann, S.

    2017-12-01

Planetary ionospheres are usually sounded at a single frequency, e.g. S-band or X-band, or at dual frequencies, e.g. simultaneous S-band and X-band. The differential Doppler computed from the received dual-frequency sounding has the advantage that any residual motion of the spacecraft body is compensated. The electron density profile is derived from the propagation of the two radio signals through the ionospheric plasma. Vibrational motion of small amplitude by the spacecraft body may still be contained in single-frequency residuals and may be translated into spurious electron densities. Examples from Mars Express and Venus Express shall be presented, as well as cases from other missions where wave-like structures in the upper ionosphere may be a misinterpretation.
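The cancellation that motivates dual-frequency sounding can be sketched numerically. Assume the frequency residual on a carrier f has the standard form r(f) = a*f + b/f, where a*f is the nondispersive part (classical Doppler, including spacecraft motion) and b/f the dispersive plasma part; the coefficient values below are purely illustrative:

```python
f_s = 2.3e9          # S-band carrier (Hz)
f_x = 8.4e9          # X-band carrier (Hz)
a = 1e-9             # motion term (includes spacecraft vibration)
b = 5.0e9            # plasma term, proportional to the TEC rate of change

def residual(f):
    """Frequency residual: nondispersive motion term + dispersive plasma term."""
    return a * f + b / f

# Differential Doppler: scale the X-band residual by f_s/f_x and subtract.
# The a*f terms are then identical and cancel, whatever the motion is.
diff = residual(f_s) - (f_s / f_x) * residual(f_x)

# Only the plasma term remains: b * (1/f_s - f_s/f_x**2).
plasma_only = b * (1.0 / f_s - f_s / f_x**2)
print(diff, plasma_only)
```

This is why the differential observable is insensitive to spacecraft body motion, while a single-frequency residual mixes vibration and plasma contributions.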

  6. Puget Sound area electric reliability plan. Draft environmental impact statement

    Energy Technology Data Exchange (ETDEWEB)

    1991-09-01

The Puget Sound Area Electric Reliability Plan Draft Environmental Impact Statement (DEIS) identifies the alternatives for solving a power system problem in the Puget Sound area. This Plan is undertaken by Bonneville Power Administration (BPA), Puget Sound Power & Light, Seattle City Light, Snohomish Public Utility District No. 1 (PUD), and Tacoma Public Utilities. The Plan consists of potential actions in Puget Sound and other areas in the State of Washington. A specific need exists in the Puget Sound area for balance between east-west transmission capacity and the increasing demand to import power generated east of the Cascades. At certain times of the year, there is more demand for power than the electric system can supply in the Puget Sound area. This high demand, called peak demand, occurs during the winter months when unusually cold weather increases electricity use for heating. The existing power system can supply enough power if no emergencies occur. However, during emergencies, the system will not operate properly. As demand grows, the system becomes more strained. To meet demand, the rate of growth of demand must be reduced or the ability to serve the demand must be increased, or both. The plan to balance Puget Sound's power demand and supply has these purposes: The plan should define a set of actions that would accommodate ten years of load growth (1994--2003). Federal and State environmental quality requirements should be met. The plan should be consistent with the plans of the Northwest Power Planning Council. The plan should serve as a consensus guideline for coordinated utility action. The plan should be flexible to accommodate uncertainties and differing utility needs. The plan should balance environmental impacts and economic costs. The plan should provide electric system reliability consistent with customer expectations. 29 figs., 24 tabs.

  7. Slow-wave metamaterial open panels for efficient reduction of low-frequency sound transmission

    Science.gov (United States)

    Yang, Jieun; Lee, Joong Seok; Lee, Hyeong Rae; Kang, Yeon June; Kim, Yoon Young

    2018-02-01

    Sound transmission reduction is typically governed by the mass law, requiring thicker panels to handle lower frequencies. When open holes must be inserted in panels for heat transfer, ventilation, or other purposes, the efficient reduction of sound transmission through holey panels becomes difficult, especially in the low-frequency ranges. Here, we propose slow-wave metamaterial open panels that can dramatically lower the working frequencies of sound transmission loss. Global resonances originating from slow waves realized by multiply inserted, elaborately designed subwavelength rigid partitions between two thin holey plates contribute to sound transmission reductions at lower frequencies. Owing to the dispersive characteristics of the present metamaterial panels, local resonances that trap sound in the partitions also occur at higher frequencies, exhibiting negative effective bulk moduli and zero effective velocities. As a result, low-frequency broadened sound transmission reduction is realized efficiently in the present metamaterial panels. The theoretical model of the proposed metamaterial open panels is derived using an effective medium approach and verified by numerical and experimental investigations.
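The mass law the abstract opens with can be made concrete. A common field-incidence approximation is TL ≈ 20·log10(m·f) − 47 dB, with surface density m in kg/m² and frequency f in Hz; the panel values below are illustrative, not from the paper:

```python
import math

def mass_law_tl(m, f):
    """Approximate field-incidence mass-law transmission loss in dB.

    m: panel surface density in kg/m^2, f: frequency in Hz.
    """
    return 20.0 * math.log10(m * f) - 47.0

tl_low = mass_law_tl(10.0, 100.0)    # 10 kg/m^2 panel at 100 Hz
tl_high = mass_law_tl(10.0, 1000.0)  # same panel, ten times the frequency
print(tl_low, tl_high)  # 13.0 dB vs 33.0 dB
```

The 20 dB gained per decade of frequency illustrates the problem the metamaterial panels address: at low frequencies a conventional panel provides little transmission loss unless it is made much heavier.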

  8. Texture-dependent effects of pseudo-chewing sound on perceived food texture and evoked feelings in response to nursing care foods.

    Science.gov (United States)

    Endo, Hiroshi; Ino, Shuichi; Fujisaki, Waka

    2017-09-01

    Because chewing sounds influence perceived food textures, unpleasant textures of texture-modified diets might be improved by chewing sound modulation. Additionally, since inhomogeneous food properties increase perceived sensory intensity, the effects of chewing sound modulation might depend on inhomogeneity. This study examined the influences of texture inhomogeneity on the effects of chewing sound modulation. Three kinds of nursing care foods in two food process types (minced-/puréed-like foods for inhomogeneous/homogeneous texture respectively) were used as sample foods. A pseudo-chewing sound presentation system, using electromyogram signals, was used to modulate chewing sounds. Thirty healthy elderly participants participated in the experiment. In two conditions with and without the pseudo-chewing sound, participants rated the taste, texture, and evoked feelings in response to sample foods. The results showed that inhomogeneity strongly influenced the perception of food texture. Regarding the effects of the pseudo-chewing sound, taste was less influenced, the perceived food texture tended to change in the minced-like foods, and evoked feelings changed in both food process types. Though there were some food-dependent differences in the effects of the pseudo-chewing sound, the presentation of the pseudo-chewing sounds was more effective in foods with an inhomogeneous texture. In addition, it was shown that the pseudo-chewing sound might have positively influenced feelings. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg, and Lund, together with European examples of best practice.

  10. Sounds of silence: How to animate virtual worlds with sound

    Science.gov (United States)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  11. Intracellular Zn(2+) signaling in the dentate gyrus is required for object recognition memory.

    Science.gov (United States)

    Takeda, Atsushi; Tamano, Haruna; Ogawa, Taisuke; Takada, Shunsuke; Nakamura, Masatoshi; Fujii, Hiroaki; Ando, Masaki

    2014-11-01

The role of perforant pathway-dentate granule cell synapses in cognitive behavior was examined focusing on synaptic Zn(2+) signaling in the dentate gyrus. Object recognition memory was transiently impaired when extracellular Zn(2+) levels were decreased by injection of clioquinol and N,N,N',N'-tetrakis(2-pyridylmethyl)ethylenediamine. To pursue the effect of the loss and/or blockade of Zn(2+) signaling in dentate granule cells, ZnAF-2DA (100 pmol, 0.1 mM/1 µl), an intracellular Zn(2+) chelator, was locally injected into the dentate molecular layer of rats. ZnAF-2DA injection, which was estimated to chelate intracellular Zn(2+) signaling only in the dentate gyrus, affected object recognition memory 1 h after training without affecting intracellular Ca(2+) signaling in the dentate molecular layer. In vivo dentate gyrus long-term potentiation (LTP) was affected under the local perfusion of the recording region (the dentate granule cell layer) with 0.1 mM ZnAF-2DA, but not with 1-10 mM CaEDTA, an extracellular Zn(2+) chelator, suggesting that the blockade of intracellular Zn(2+) signaling in dentate granule cells affects dentate gyrus LTP. The present study demonstrates that intracellular Zn(2+) signaling in the dentate gyrus is required for object recognition memory, probably via dentate gyrus LTP expression. Copyright © 2014 Wiley Periodicals, Inc.

  12. Application of Clustering Techniques for Lung Sounds to Improve Interpretability and Detection of Crackles

    Directory of Open Access Journals (Sweden)

    Germán D. Sosa

    2015-01-01

Full Text Available Due to the subjectivity currently involved in the pulmonary auscultation process and in its diagnostic use for evaluating the condition of the respiratory airways, this work evaluates the performance of clustering algorithms such as k-means and DBSCAN in a computational analysis of lung sounds, aiming to produce a visualization of such sounds that highlights the presence of crackles and the energy associated with them. To achieve that goal, wavelet analysis techniques were used instead of traditional frequency analysis, given the similarity between the typical waveform of a crackle and the wavelet sym4. Once the lung sound signal with isolated crackles is obtained, the clustering process groups crackles into regions of high density and provides a visualization that might be useful for diagnosis by an expert. Evaluation suggests that k-means groups crackles more effectively than DBSCAN in terms of the generated clusters.
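The grouping step can be sketched as a toy one-dimensional k-means over crackle time positions. The crackle times below are invented for illustration, and the detection step itself (sym4 wavelet filtering) is outside the scope of this sketch:

```python
import random

random.seed(0)

def kmeans_1d(points, k, iters=50):
    """Plain k-means on scalar values: assign each point to the nearest
    center, then move each center to the mean of its cluster."""
    centers = sorted(random.sample(points, k))
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[j].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Detected crackle times (seconds), bunched around two regions of the
# respiratory cycle.
crackles = [0.48, 0.50, 0.53, 0.55, 1.95, 2.00, 2.02, 2.06]
centers, clusters = kmeans_1d(crackles, k=2)
print(sorted(centers))  # roughly [0.515, 2.0075]
```

Each cluster center marks a high-density crackle region, which is the kind of summary an expert could inspect instead of the raw waveform.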

  13. How Pleasant Sounds Promote and Annoying Sounds Impede Health: A Cognitive Approach

    Directory of Open Access Journals (Sweden)

    Tjeerd C. Andringa

    2013-04-01

    Full Text Available This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of the perceiver can be understood in terms of core affect and motivation. This conceptual basis allows the formulation of a detailed cognitive model describing how sonic content, related to indicators of safety and danger, either allows full freedom over mind-states or forces the activation of a vigilance function with associated arousal. The model leads to a number of detailed predictions that can be used to provide existing soundscape approaches with a solid cognitive science foundation that may lead to novel approaches to soundscape design. These will take into account that louder sounds typically contribute to distal situational awareness while subtle environmental sounds provide proximal situational awareness. The role of safety indicators, mediated by proximal situational awareness and subtle sounds, should become more important in future soundscape research.

  14. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    Science.gov (United States)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.

  15. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...

  16. Assessing and optimizing infra-sound networks to monitor volcanic eruptions

    International Nuclear Information System (INIS)

    Tailpied, Dorianne

    2016-01-01

    Understanding infra-sound signals is essential to monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty, and also to demonstrate the potential of the global infra-sound monitoring network for civil and scientific applications. The main objective of this thesis is to develop a robust tool to estimate and optimize the performance of any infra-sound network in monitoring explosive sources such as volcanic eruptions. Unlike previous studies, the method developed here has the advantage of considering realistic atmospheric specifications along the propagation path, the source frequency, and the noise levels at the stations. It allows prediction of the attenuation and of the minimum detectable source amplitude. By simulating the performance of any infra-sound network, it is then possible to define the optimal configuration of the network for monitoring a specific region during a given period. When a station is carefully added to the existing network, performance can be improved by a factor of 2. However, it is not always possible to extend the network, so a good knowledge of detection capabilities at large distances is essential. To provide a more realistic picture of the performance, we integrate the longitudinal variability of the atmosphere along the infra-sound propagation path into our simulations. This thesis also contributes a confidence index that takes into account the uncertainties related to the propagation and atmospheric models; at high frequencies, the error can reach 40 dB. Volcanic eruptions are natural, powerful and valuable calibration sources of infra-sound, detected worldwide. In this study, the well-instrumented volcanoes Yasur, in Vanuatu, and Etna, in Italy, offer a unique opportunity to validate our attenuation model. In particular, accurate comparisons between near-field recordings and far-field detections of these volcanoes have helped to highlight the potential of our simulation tool for remote volcano monitoring. Such work could significantly help to prevent
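    The core calculation, combining a propagation-attenuation estimate with station noise to obtain the minimum detectable source amplitude over a network, can be sketched as follows. The attenuation law and its coefficients here are placeholder assumptions, not the thesis's frequency-dependent atmospheric model:

    ```python
    import math

    def attenuation_db(range_km, freq_hz, alpha=20.0, beta=0.5):
        """Illustrative attenuation law: geometric spreading plus a
        frequency-dependent absorption term (coefficients are assumed)."""
        spreading = alpha * math.log10(max(range_km, 1e-3))
        absorption = beta * freq_hz * range_km / 1000.0
        return spreading + absorption

    def min_detectable_amplitude_db(stations, source, freq_hz, snr_db=6.0):
        """Smallest source level (dB) detectable by at least one station:
        station noise + required SNR + propagation loss, minimised over
        stations given as (x_km, y_km, noise_db) tuples."""
        best = float("inf")
        for (x, y, noise_db) in stations:
            r = math.hypot(x - source[0], y - source[1])
            best = min(best, noise_db + snr_db + attenuation_db(r, freq_hz))
        return best
    ```

    Adding a candidate station and recomputing this quantity over a grid of source positions is one way to pose the network-optimization problem the thesis describes.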

  17. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.
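    The construction of musical scales reviewed in the book rests on fixed frequency ratios; in twelve-tone equal temperament each semitone step multiplies the frequency by 2^(1/12), so an octave (12 semitones) doubles it. A minimal sketch of that relation:

    ```python
    def equal_tempered_freq(semitones_from_a4, a4=440.0):
        """Frequency of a note n semitones above (or below) A4 in
        twelve-tone equal temperament: f = 440 * 2**(n/12)."""
        return a4 * 2 ** (semitones_from_a4 / 12)

    # one octave of the chromatic scale starting at A4 (440 Hz)
    chromatic = [equal_tempered_freq(n) for n in range(13)]
    ```

    For example, twelve semitones up gives 880 Hz, exactly one octave above A4.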

  18. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is experimentally studied using the time-difference method. It is found that the sound velocity increases with increasing bubble diameter, asymptotically approaching the value in air when the diameter exceeds 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam, in which the attenuation of a sound wave due to scattering by the bubble walls is described equivalently as the effect of an additional path length. This simplification reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that increasing the frequency markedly reduces the sound velocity, whereas the velocity does not depend strongly on the solution concentration.
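    The time-difference method mentioned above infers velocity from the arrival delay between two sensors a known distance apart. A minimal sketch using cross-correlation to estimate that delay; the sensor spacing and sampling rate below are illustrative, not the values used in the experiment:

    ```python
    import numpy as np

    def sound_velocity_time_difference(sig_a, sig_b, sr, spacing_m):
        """Estimate sound velocity from the arrival-time difference between
        two sensors spacing_m apart, with sig_b recorded downstream of
        sig_a. The delay is located at the cross-correlation peak."""
        corr = np.correlate(sig_b, sig_a, mode="full")
        lag = np.argmax(corr) - (len(sig_a) - 1)  # samples sig_b lags sig_a
        if lag <= 0:
            raise ValueError("signal must arrive at sensor A first")
        return spacing_m * sr / lag
    ```

    With a sharp excitation pulse, the correlation peak gives the delay to within one sample, so a high sampling rate (or interpolation around the peak) is needed for precise velocities.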

  19. Calibration aspects of binaural sound reproduction over insert earphones

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Markovic, Milos; Olesen, Søren Krarup

    2012-01-01

    Earphones are nowadays widely adopted for the reproduction of audio material on mobile multimedia and communication platforms, e.g. smartphones. Reproduction of high-quality spatial sound on such platforms can dramatically improve their applicability, and since two channels are always available in earphone-based reproduction, binaural reproduction can be applied directly. This paper is concerned with the theoretical and practical aspects relevant to the correct reproduction of binaural signals over insert earphones. To this purpose, a theoretical model originally developed to explain the acoustic...
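    Binaural reproduction over the two earphone channels amounts to filtering a source signal with a left/right head-related impulse response (HRIR) pair for the desired direction. A minimal sketch, assuming the HRIRs are already available (e.g. measured or taken from a public database); calibration of the earphone transfer function, the paper's actual subject, is not modelled here:

    ```python
    import numpy as np

    def binaural_render(mono, hrir_left, hrir_right):
        """Render a mono signal for earphone playback by convolving it
        with a left/right HRIR pair; returns a (2, N) stereo array."""
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        return np.stack([left, right], axis=0)
    ```

    The interaural time and level differences encoded in the HRIR pair are what give the listener the impression of a direction.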

  20. Sound localization and speech identification in the frontal median plane with a hear-through headset

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Møller, Anders Kalsgaard; Christensen, Flemming

    2014-01-01

    signals can be superimposed via earphone reproduction. An important aspect of the hear-through headset is its transparency, i.e. how close to real life the electronically amplified sounds are perceived to be. Here we report experiments conducted to evaluate the auditory transparency of a hear-through headset prototype by comparing human performance in natural, hear-through, and fully occluded conditions for two spatial tasks: frontal vertical-plane sound localization and speech-on-speech spatial release from masking. Results showed that localization performance was impaired by the hear-through headset relative to the natural condition, though not as much as in the fully occluded condition. Localization was affected the least when the sound source was in front of the listeners. In contrast to the vertical localization results, results from the speech task suggest that normal speech-on-speech spatial release from...