WorldWideScience

Sample records for sound signal tests

  1. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    When lung sound (LS) signals are recorded from the chest wall of a subject, heart sound (HS) signals always interfere with them. This obscures the features of the lung sounds and can confound the assessment of any pathological state of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference in the desired lung sound signals. The mixed signal is split into several components; those containing larger proportions of interfering signals, such as heart sounds and environmental noise, are filtered out. Experiments were conducted on simulated and on real recorded mixtures of heart and lung sounds. The proposed method is found to be superior in terms of time-domain, frequency-domain, and time-frequency-domain representations, and also in a listening test performed by a pulmonologist.
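
    As a rough illustration of the filtering idea described in this abstract, the sketch below decomposes a mixed recording with EMD and discards the components dominated by low-frequency (heart-sound-band) energy. It assumes the third-party PyEMD package and an illustrative selection rule; the authors' actual criterion for identifying the interfering components is not given in the abstract.

```python
# Sketch of EMD-based interference reduction, in the spirit of the abstract above.
# Assumes the PyEMD package (pip install EMD-signal); the IMF selection rule here
# (discard IMFs whose energy is concentrated below ~150 Hz, where heart sounds live)
# is illustrative, not the authors' published criterion.
import numpy as np
from PyEMD import EMD

def reduce_heart_sound(mixed, fs, cutoff_hz=150.0):
    """Decompose a mixed lung+heart recording and drop low-frequency-dominated IMFs."""
    imfs = EMD().emd(mixed)                      # rows are the extracted IMFs
    kept = []
    for imf in imfs:
        spectrum = np.abs(np.fft.rfft(imf)) ** 2
        freqs = np.fft.rfftfreq(imf.size, d=1.0 / fs)
        low = spectrum[freqs < cutoff_hz].sum()
        if low < 0.5 * spectrum.sum():           # keep IMFs not dominated by the HS band
            kept.append(imf)
    return np.sum(kept, axis=0) if kept else np.zeros_like(mixed)

if __name__ == "__main__":
    fs = 4000
    t = np.arange(0, 2.0, 1.0 / fs)
    lung = 0.3 * np.random.randn(t.size)                              # broadband stand-in for LS
    heart = np.sin(2 * np.pi * 60 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)  # crude HS bursts
    cleaned = reduce_heart_sound(lung + heart, fs)
    print(cleaned.shape)
```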

  2. Analysis of acoustic sound signal for ONB measurement

    International Nuclear Information System (INIS)

    Park, S. J.; Kim, H. I.; Han, K. Y.; Chai, H. T.; Park, C.

    2003-01-01

    The onset of nucleate boiling (ONB) was measured in a test fuel bundle composed of several fuel element simulators (FES) by analysing the acoustic sound signals. To measure ONB, a hydrophone, a pre-amplifier, and a data acquisition system to acquire and process the acoustic signal were prepared. The acoustic signal generated in the coolant is converted into a current signal by the hydrophone. When the signal is analysed in the frequency domain, each sound signal can be identified according to the origin of its sound source. As the power is increased beyond a certain level, nucleate boiling starts. The frequent formation and collapse of void bubbles produce a sound signal, and by measuring this sound signal one can pinpoint the ONB. Since the signal characteristics are identical for different mass flow rates, this method is applicable for ascertaining ONB.
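
    A minimal sketch of the frequency-domain check described above, using SciPy's Welch estimator to watch for a rise in band power when bubble formation and collapse begin. The band limits and detection threshold are placeholders, not values from the study.

```python
# Illustrative frequency-domain check for a boiling signature in a hydrophone record.
# The band (assumed here to be 5-20 kHz) and the threshold factor are placeholders;
# the abstract does not give the actual frequencies used to identify ONB.
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f_lo, f_hi):
    f, pxx = welch(x, fs=fs, nperseg=4096)
    mask = (f >= f_lo) & (f <= f_hi)
    return pxx[mask].sum() * (f[1] - f[0])       # integrated power in the band

def looks_like_boiling(x, fs, baseline_power, factor=3.0):
    """Flag ONB when bubble-band power rises well above the single-phase baseline."""
    return band_power(x, fs, 5e3, 20e3) > factor * baseline_power

if __name__ == "__main__":
    fs = 100_000
    rng = np.random.default_rng(0)
    quiet = rng.normal(scale=0.1, size=fs)            # single-phase flow noise
    boiling = quiet + rng.normal(scale=0.5, size=fs)  # extra broadband bubble noise
    base = band_power(quiet, fs, 5e3, 20e3)
    print(looks_like_boiling(boiling, fs, base))
```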

  3. 33 CFR 67.20-10 - Sound signal.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signal. 67.20-10 Section 67... AIDS TO NAVIGATION ON ARTIFICIAL ISLANDS AND FIXED STRUCTURES Class “A” Requirements § 67.20-10 Sound signal. (a) The owner of a Class “A” structure shall: (1) Install a sound signal that has a rated range...

  4. Design, development and test of the gearbox condition monitoring system using sound signal processing

    Directory of Open Access Journals (Sweden)

    M Zamani

    2016-09-01

    format and MATLAB R2014a software was used for data processing. Data processing: Signal processing in the frequency domain is used to reveal the defects. Fast Fourier Transform: The Fast Fourier Transform (FFT) is of great importance for applications in electronic equipment, especially analyzers. Here, the number of samples is chosen as a power of two (2^N), which reduces the computation significantly. Determination of the gearwheel defect type using frequency spectrum analysis: Faults were generated synthetically in the gearwheels. Each fault type was introduced in a separate gearwheel so that the defects could be investigated more precisely, and one gearwheel was kept as a control. In addition, the sound of all gearwheels in sound condition was recorded. Results and Discussion: Comparison of the processed acoustic signals from the gearbox gearwheels in healthy and faulty conditions revealed the gear-mesh frequency, its harmonics and the changes caused by the defects. The defect detection tests showed that, at speeds of 1496, 1050 and 749 rpm, the investigated defects are recognizable by comparing the frequency spectra of the signals obtained in healthy and faulty conditions, with reference to the gear-mesh frequency, its harmonics and the sideband spectrum. The spectra at a pinion speed of 1496 rpm showed the single-tooth-fracture defect at gear-mesh frequencies of 489, 350 and 249 Hz, respectively, which became apparent as an amplitude increase at the mentioned frequencies. A worn-tooth defect in a gearwheel was clearly identifiable as sidebands equally spaced around the gear-mesh frequency in the spectra at pinion speeds of 1496 and 1050 rpm, but became harder to detect at lower speeds. Overall, the investigation of the frequency spectrum of the acoustic signal from the gearwheels demonstrates the capability of this method
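
    The sideband-based diagnosis described above can be illustrated with a short spectrum check: compute a 2^N-point FFT of the recorded sound and read the amplitudes at the expected gear-mesh frequency and its shaft-speed sidebands. The tooth count below is hypothetical; only the 1496 rpm speed is taken from the abstract.

```python
# Sketch of the frequency-spectrum check described above: an FFT of length 2**N and
# the expected gear-mesh frequency with +/- shaft-speed sidebands (a worn or broken
# tooth shows up as sidebands around the mesh frequency). Tooth count is hypothetical.
import numpy as np

def amplitude_spectrum(x, fs, n_fft=2**15):
    win = np.hanning(min(len(x), n_fft))
    spectrum = np.fft.rfft(x[:len(win)] * win, n=n_fft)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    return freqs, np.abs(spectrum) / len(win)

def mesh_and_sidebands(rpm, n_teeth, n_sidebands=3):
    shaft_hz = rpm / 60.0
    mesh_hz = shaft_hz * n_teeth
    offsets = np.arange(-n_sidebands, n_sidebands + 1)
    return mesh_hz, mesh_hz + offsets * shaft_hz

if __name__ == "__main__":
    fs, rpm, teeth = 44100, 1496, 20          # hypothetical pinion with 20 teeth
    t = np.arange(0, 1.0, 1.0 / fs)
    mesh, bands = mesh_and_sidebands(rpm, teeth)
    # synthetic "faulty" sound: mesh tone amplitude-modulated at the shaft frequency
    x = (1 + 0.5 * np.sin(2 * np.pi * rpm / 60 * t)) * np.sin(2 * np.pi * mesh * t)
    freqs, amp = amplitude_spectrum(x, fs)
    for f0 in bands:
        print(f"{f0:7.1f} Hz -> amplitude {amp[np.argmin(np.abs(freqs - f0))]:.3f}")
```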

  5. Fish protection at water intakes using a new signal development process and sound system

    International Nuclear Information System (INIS)

    Loeffelman, P.H.; Klinect, D.A.; Van Hassel, J.H.

    1991-01-01

    American Electric Power Company, Inc., is exploring the feasibility of using a patented signal development process and sound system to guide aquatic animals with underwater sound. Sounds from animals such as chinook salmon, steelhead trout, striped bass, freshwater drum, largemouth bass, and gizzard shad can be used to synthesize a new signal that stimulates the animal in the most sensitive portion of its hearing range. AEP's field tests demonstrate that adult chinook salmon, steelhead trout and warmwater fish, as well as steelhead trout and chinook salmon smolts, can be repelled with a properly tuned system. The signal development process and sound system are designed to be transportable and to use animals at the site to incorporate site-specific factors known to affect underwater sound, e.g., bottom shape and type, water current, and temperature. This paper reports that, because the overall goal of this research was to determine the feasibility of using sound to divert fish, it was essential that the approach use a signal development process which could be customized to the animals and site conditions at any hydropower plant site

  6. Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA

    Science.gov (United States)

    Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng

    2011-12-01

    The impact of a high-pressure water jet on targets of different materials produces different mixtures of reflected sound. In order to reconstruct the distribution of the reflected sound signals along the linear detection line accurately and to separate the environmental noise effectively, the mixed sound signals acquired by a linear microphone array were processed by ICA. The basic principle of ICA and the FastICA algorithm are described in detail. A simulation experiment was designed: the environmental noise was simulated using band-limited white noise and the reflected sound signal was simulated using a pulse signal. The attenuation of the reflected sound signal over different transmission distances was simulated by weighting the sound signal with different coefficients. The mixed signals acquired by the linear microphone array were synthesized from the above simulated signals and were whitened and separated by ICA. The final results verified that separation of the environmental noise and reconstruction of the sound distribution along the detection line can be realized effectively.
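
    A compact sketch of the whitening-plus-ICA separation step, here using scikit-learn's FastICA on a synthetic two-microphone mixture of a pulse-like reflection and broadband noise. The library choice, mixing matrix and signals are assumptions for illustration; the paper describes its own FastICA implementation.

```python
# Minimal sketch of separating environment noise from a pulse-like reflection using
# FastICA, analogous to the whitening + ICA step described above. scikit-learn's
# FastICA is used here for brevity; the mixing matrix and signals are synthetic.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n = 5000
t = np.linspace(0, 1, n)

pulse = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 400 * t)  # reflected pulse
noise = rng.normal(scale=0.5, size=n)                                    # stand-in for environment noise
sources = np.c_[pulse, noise]

mixing = np.array([[1.0, 0.6],
                   [0.7, 1.0]])          # two microphones, two weighted mixtures
observed = sources @ mixing.T

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)  # columns are the estimated source signals
print(recovered.shape)                   # (5000, 2)
```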

  7. Design and Calibration Tests of an Active Sound Intensity Probe

    Directory of Open Access Journals (Sweden)

    Thomas Kletschkowski

    2008-01-01

    The paper presents an active sound intensity probe that can be used for sound source localization in standing wave fields. The probe consists of a sound hard tube that is terminated by a loudspeaker and an integrated pair of microphones. The microphones are used to decompose the standing wave field inside the tube into its incident and reflected part. The latter is cancelled by an adaptive controller that calculates proper driving signals for the loudspeaker. If the open end of the actively controlled tube is placed close to a vibrating surface, the radiated sound intensity can be determined by measuring the cross spectral density between the two microphones. A one-dimensional free field can be realized effectively, as first experiments performed on a simplified test bed have shown. Further tests proved that a prototype of the novel sound intensity probe can be calibrated.
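
    The cross-spectral intensity estimate mentioned above is commonly written, for the two-microphone (p-p) method, as I(f) = -Im{G12(f)} / (2*pi*f*rho*dr). The sketch below applies that textbook formula with SciPy; the microphone spacing, air density and test signal are assumptions, not the probe's actual parameters.

```python
# Sketch of the two-microphone (p-p) intensity estimate mentioned above:
# I(f) = -Im{G12(f)} / (2*pi*f * rho * dr), with G12 the cross-spectral density
# between the microphone pair. Spacing, density and the test signal are assumptions.
import numpy as np
from scipy.signal import csd

def pp_intensity(p1, p2, fs, spacing=0.012, rho=1.21):
    f, g12 = csd(p1, p2, fs=fs, nperseg=4096)
    f, g12 = f[1:], g12[1:]                      # drop DC to avoid division by zero
    return f, -np.imag(g12) / (2 * np.pi * f * rho * spacing)

if __name__ == "__main__":
    fs, dr, c = 48000, 0.012, 343.0
    t = np.arange(0, 1.0, 1.0 / fs)
    # plane wave travelling from mic 1 towards mic 2: mic 2 lags by dr/c seconds
    p1 = np.sin(2 * np.pi * 500 * t)
    p2 = np.sin(2 * np.pi * 500 * (t - dr / c))
    f, intensity = pp_intensity(p1, p2, fs, spacing=dr)
    print(intensity[np.argmin(np.abs(f - 500))])  # positive -> propagation from mic 1 to mic 2
```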

  8. Sound card based digital correlation detection of weak photoelectrical signals

    International Nuclear Information System (INIS)

    Tang Guanghui; Wang Jiangcheng

    2005-01-01

    A simple and low-cost digital correlation method is proposed to investigate weak photoelectrical signals, using a high-speed photodiode as the detector, directly connected to a programmably triggered sound-card analogue-to-digital converter and a personal computer. Two test experiments, autocorrelation detection of weak flickering signals from a computer monitor against a background of noisy outdoor stray light, and cross-correlation measurement of the surface velocity of a moving tape, are performed, showing that the results are reliable and the method is easy to implement
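
    The cross-correlation measurement described above can be sketched as follows: two detector channels observe the same moving surface a known distance apart, and the lag of the correlation peak gives the transit time and hence the velocity. The spacing, sampling rate and synthetic signals are assumptions for illustration.

```python
# Sketch of the cross-correlation idea above: two photodetector channels looking at
# the same moving surface a known distance apart; the lag of the correlation peak
# gives the transit time and hence the surface velocity. Spacing and rates are assumed.
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 44100           # sound-card sampling rate
spacing = 0.005      # metres between the two detection spots (assumption)
true_speed = 0.5     # m/s, used only to build the synthetic signals

rng = np.random.default_rng(0)
pattern = rng.normal(size=fs)                  # random reflectance pattern on the tape
delay = int(round(spacing / true_speed * fs))  # samples between the two channels
ch1 = pattern[delay:]
ch2 = pattern[:-delay] + 0.2 * rng.normal(size=len(pattern) - delay)

xc = correlate(ch2, ch1, mode="full")
lags = correlation_lags(len(ch2), len(ch1), mode="full")
lag = lags[np.argmax(xc)]
print("estimated speed:", spacing / (lag / fs), "m/s")
```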

  9. Digital servo control of random sound test excitation [in reverberant acoustic chamber]

    Science.gov (United States)

    Nakich, R. B. (Inventor)

    1974-01-01

    A digital servocontrol system for random noise excitation of a test object in a reverberant acoustic chamber employs a plurality of sensors spaced in the sound field to produce signals in separate channels which are decorrelated and averaged. The average signal is divided into a plurality of adjacent frequency bands cyclically sampled by a time division multiplex system, converted into digital form, and compared to a predetermined spectrum value stored in digital form. The results of the comparisons are used to control a time-shared up-down counter to develop gain control signals for the respective frequency bands in the spectrum of random sound energy picked up by the microphones.

  10. Sounds of Modified Flight Feathers Reliably Signal Danger in a Pigeon.

    Science.gov (United States)

    Murray, Trevor G; Zeil, Jochen; Magrath, Robert D

    2017-11-20

    In his book on sexual selection, Darwin [1] devoted equal space to non-vocal and vocal communication in birds. Since then, vocal communication has become a model for studies of neurobiology, learning, communication, evolution, and conservation [2, 3]. In contrast, non-vocal "instrumental music," as Darwin called it, has only recently become subject to sustained inquiry [4, 5]. In particular, outstanding work reveals how feathers, often highly modified, produce distinctive sounds [6-9], and suggests that these sounds have evolved at least 70 times, in many orders [10]. It remains to be shown, however, that such sounds are signals used in communication. Here we show that crested pigeons (Ocyphaps lophotes) signal alarm with specially modified wing feathers. We used video and feather-removal experiments to demonstrate that the highly modified 8th primary wing feather (P8) produces a distinct note during each downstroke. The sound changes with wingbeat frequency, so that birds fleeing danger produce wing sounds with a higher tempo. Critically, a playback experiment revealed that only if P8 is present does the sound of escape flight signal danger. Our results therefore indicate, nearly 150 years after Darwin's book, that modified feathers can be used for non-vocal communication, and they reveal an intrinsically reliable alarm signal. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. A homology sound-based algorithm for speech signal interference

    Science.gov (United States)

    Jiang, Yi-jiao; Chen, Hou-jin; Li, Ju-peng; Zhang, Zhan-song

    2015-12-01

    Aiming at secure analog speech communication, a homology sound-based algorithm for speech signal interference is proposed in this paper. We first split the speech signal into phonetic fragments by a short-term energy method and establish an interference noise cache library from the phonetic fragments. We then implement the homology sound interference by mixing randomly selected interfering fragments with the original speech in real time. The computer simulation results indicated that the interference produced by this algorithm has the advantages of real-time operation, randomness, and high correlation with the original signal, compared with traditional noise interference methods such as white-noise interference. After further study, the proposed algorithm may be readily used in secure speech communication.
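
    A small sketch of the first stage described above: splitting a speech signal into fragments by short-term energy, so that fragments could later be cached and re-mixed as interference. The frame length and threshold are assumptions, not the paper's parameters.

```python
# Sketch of the first step described above: split a speech signal into fragments
# using short-term energy, so the fragments can later be cached and re-mixed as
# "homology" interference. Frame size and threshold factor are assumptions.
import numpy as np

def split_fragments(x, frame_len=400, hop=200, rel_threshold=0.1):
    """Return high-energy fragments of x, found by thresholding short-term energy."""
    energy = np.array([np.sum(x[i:i + frame_len] ** 2)
                       for i in range(0, len(x) - frame_len, hop)])
    active = energy > rel_threshold * energy.max()
    fragments, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i * hop
        elif not on and start is not None:
            fragments.append(x[start:i * hop])
            start = None
    if start is not None:
        fragments.append(x[start:])
    return fragments

if __name__ == "__main__":
    fs = 8000
    t = np.arange(0, 1.0, 1.0 / fs)
    speech_like = np.sin(2 * np.pi * 200 * t) * (np.sin(2 * np.pi * 2 * t) > 0)
    print(len(split_fragments(speech_like)), "phonetic fragments")
```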

  12. A Signal Processing Module for the Analysis of Heart Sounds and Heart Murmurs

    International Nuclear Information System (INIS)

    Javed, Faizan; Venkatachalam, P A; H, Ahmad Fadzil M

    2006-01-01

    In this paper a Signal Processing Module (SPM) for the computer-aided analysis of heart sounds has been developed. The module reveals important information on cardiovascular disorders and can assist general physicians in reaching a more accurate and reliable diagnosis at an early stage. It can help compensate for the shortage of expert doctors in rural as well as urban clinics and hospitals. The module has five main blocks: Data Acquisition and Pre-processing, Segmentation, Feature Extraction, Murmur Detection and Murmur Classification. The heart sounds are first acquired using an electronic stethoscope, which can transfer the signals wirelessly to a nearby workstation. The signals are then segmented into individual cycles as well as individual components using spectral analysis of the heart sound, without using any reference signal such as the ECG. Features are then extracted from the individual components using the spectrogram and used as input to an MLP (Multilayer Perceptron) neural network trained to detect the presence of heart murmurs. Once a murmur is detected, it is classified into one of seven classes depending on its timing within the cardiac cycle, using the smoothed pseudo Wigner-Ville distribution. The module has been tested with real heart sounds from 40 patients and has proved to be quite efficient and robust in dealing with a large variety of pathological conditions

  14. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    Science.gov (United States)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of the vehicle can serve as data support to reveal the noise generation mechanism, analyze acoustic fatigue, and guide measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of the flow on sound wave propagation is suppressed by the shear flow correction, yielding corrected acoustic propagation time delays and paths. These corrected time delays and paths, together with the microphone array signals, are then submitted to the AP-TR, which reconstructs more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way to reconstruct the sound source signal in 3D space in an environment with airflow, instead of the numerical TR. Experiments on the reconstruction of the source signals of a pair of loudspeakers were conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between the AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.

  15. Applications of Hilbert Spectral Analysis for Speech and Sound Signals

    Science.gov (United States)

    Huang, Norden E.

    2003-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed, and its natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same number of zero-crossings and extrema, and having symmetric envelopes defined by the local maxima and minima, respectively. An IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and therefore highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identification of embedded structures. This invention can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound signal enhancement and filtering. Additionally, the acoustic signals from machinery are essentially the way the machines talk to us: the acoustic signals from machines, whether sound through air or vibration on the machine itself, can tell us the operating condition of the machines. Thus, we can use the acoustic signal to diagnose problems in machines.
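
    A minimal sketch of the Hilbert step described above: for a single IMF (or any nearly mono-component signal), the analytic signal gives an instantaneous frequency as a function of time. The chirp used for the demonstration is synthetic.

```python
# Sketch of the Hilbert step described above: given one IMF (or any mono-component
# signal), the analytic signal yields an instantaneous frequency as a function of time.
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(imf, fs):
    analytic = hilbert(imf)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2 * np.pi)     # Hz, one sample shorter than the input

if __name__ == "__main__":
    fs = 8000
    t = np.arange(0, 1.0, 1.0 / fs)
    chirp = np.sin(2 * np.pi * (100 * t + 150 * t ** 2))   # 100 Hz sweeping up to ~400 Hz
    f_inst = instantaneous_frequency(chirp, fs)
    print(f_inst[100], f_inst[-100])   # near the start and end of the sweep
```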

  16. Effect of sound on gap-junction-based intercellular signaling: Calcium waves under acoustic irradiation.

    Science.gov (United States)

    Deymier, P A; Swinteck, N; Runge, K; Deymier-Black, A; Hoying, J B

    2015-01-01

    We present a previously unrecognized effect of sound waves on gap-junction-based intercellular signaling such as in biological tissues composed of endothelial cells. We suggest that sound irradiation may, through temporal and spatial modulation of cell-to-cell conductance, create intercellular calcium waves with unidirectional signal propagation associated with nonconventional topologies. Nonreciprocity in calcium wave propagation induced by sound wave irradiation is demonstrated in the case of a linear and a nonlinear reaction-diffusion model. This demonstration should be applicable to other types of gap-junction-based intercellular signals, and it is thought that it should be of help in interpreting a broad range of biological phenomena associated with the beneficial therapeutic effects of sound irradiation and possibly the harmful effects of sound waves on health.

  17. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals) and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter...... is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many...... of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...

  18. Examination of optimum test conditions for a 3-point bending and cutting test to evaluate sound emission of wafer during deformation

    Directory of Open Access Journals (Sweden)

    Erdem Carsanba

    2018-04-01

    The purpose of this study was to investigate the optimum test conditions for acoustical-mechanical measurement of wafers analysed by an Acoustic Envelope Detector attached to a Texture Analyser. Force-displacement and acoustic signals were recorded simultaneously using two different methods (3-point bending and cutting test). To study the acoustical-mechanical behaviour of the wafers, the parameters "maximum sound pressure", "total count peaks" and "mean sound value" were used, and the optimum microphone position and test speed were examined. With the microphone at a 45° angle and 1 cm distance, and at a low test speed of 0.5 mm/s, wafers of different quality could be distinguished best. The angle of the microphone did not have a significant effect on the acoustic results, and the number of peaks of the force and acoustic signals decreased with increasing distance and test speed.

  19. Automated signal quality assessment of mobile phone-recorded heart sound signals.

    Science.gov (United States)

    Springer, David B; Brennan, Thomas; Ntusi, Ntobeko; Abdelrahman, Hassan Y; Zühlke, Liesl J; Mayosi, Bongani M; Tarassenko, Lionel; Clifford, Gari D

    Mobile phones, due to their audio processing capabilities, have the potential to facilitate the diagnosis of heart disease through automated auscultation. However, such a platform is likely to be used by non-experts, and hence, it is essential that such a device is able to automatically differentiate poor quality from diagnostically useful recordings since non-experts are more likely to make poor-quality recordings. This paper investigates the automated signal quality assessment of heart sound recordings performed using both mobile phone-based and commercial medical-grade electronic stethoscopes. The recordings, each 60 s long, were taken from 151 random adult individuals with varying diagnoses referred to a cardiac clinic and were professionally annotated by five experts. A mean voting procedure was used to compute a final quality label for each recording. Nine signal quality indices were defined and calculated for each recording. A logistic regression model for classifying binary quality was then trained and tested. The inter-rater agreement level for the stethoscope and mobile phone recordings was measured using Conger's kappa for multiclass sets and found to be 0.24 and 0.54, respectively. One-third of all the mobile phone-recorded phonocardiogram (PCG) signals were found to be of sufficient quality for analysis. The classifier was able to distinguish good- and poor-quality mobile phone recordings with 82.2% accuracy, and those made with the electronic stethoscope with an accuracy of 86.5%. We conclude that our classification approach provides a mechanism for substantially improving auscultation recordings by non-experts. This work is the first systematic evaluation of a PCG signal quality classification algorithm (using a separate test dataset) and assessment of the quality of PCG recordings captured by non-experts, using both a medical-grade digital stethoscope and a mobile phone.
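
    A toy sketch of the quality-classification pipeline described above: a few simple signal quality indices are computed per recording and a logistic regression separates good from poor recordings. The three indices and the synthetic data are illustrative; they are not the paper's nine indices or its dataset.

```python
# Sketch of the quality-classification idea described above: compute simple signal
# quality indices (SQIs) per PCG recording and train a logistic regression to separate
# good from poor recordings. SQIs and data here are illustrative stand-ins.
import numpy as np
from scipy.stats import kurtosis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def sqi_features(pcg, fs):
    spectrum = np.abs(np.fft.rfft(pcg)) ** 2
    freqs = np.fft.rfftfreq(pcg.size, 1.0 / fs)
    in_band = spectrum[(freqs > 25) & (freqs < 400)].sum() / spectrum.sum()
    spikiness = np.percentile(np.abs(pcg), 99) / (np.std(pcg) + 1e-12)
    return [kurtosis(pcg), in_band, spikiness]

rng = np.random.default_rng(0)
fs, n_rec = 2000, 200
X, y = [], []
for _ in range(n_rec):
    good = rng.random() > 0.5
    t = np.arange(0, 3.0, 1.0 / fs)
    s1s2 = np.sin(2 * np.pi * 80 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)  # crude PCG
    noise = rng.normal(scale=0.2 if good else 1.5, size=t.size)
    X.append(sqi_features(s1s2 + noise, fs))
    y.append(int(good))

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```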

  20. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade an increased attention...... has been paid to, for instance, a category such as ‘sound art’ together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term as musical sound – a recurring example being ‘noise’....

  1. Root phonotropism: Early signalling events following sound perception in Arabidopsis roots.

    Science.gov (United States)

    Rodrigo-Moreno, Ana; Bazihizina, Nadia; Azzarello, Elisa; Masi, Elisa; Tran, Daniel; Bouteau, François; Baluska, Frantisek; Mancuso, Stefano

    2017-11-01

    Sound is a fundamental form of energy, and it has been suggested that plants can make use of acoustic cues to obtain information about their environment and fine-tune their growth and development. Despite an increasing body of evidence indicating that sound can influence plant growth and physiology, many questions concerning the effects of sound waves on plant growth and the underlying signalling mechanisms remain open. Here we show that in Arabidopsis thaliana, exposure to sound waves (200 Hz) for 2 weeks induced positive phonotropism in roots, which grew towards the sound source. We found that sound waves very quickly (within minutes) triggered an increase in cytosolic Ca2+, possibly mediated by an influx through the plasma membrane and a release from internal stores. Sound waves likewise elicited rapid reactive oxygen species (ROS) production and K+ efflux. Taken together, these results suggest that changes in ion fluxes (Ca2+ and K+) and an increase in superoxide production are involved in sound perception in plants, as previously established in animals. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Evaluating signal-to-noise ratios, loudness, and related measures as indicators of airborne sound insulation.

    Science.gov (United States)

    Park, H K; Bradley, J S

    2009-09-01

    Subjective ratings of the audibility, annoyance, and loudness of music and speech sounds transmitted through 20 different simulated walls were used to identify better single-number ratings of airborne sound insulation. The first part of this research considered standard measures such as the sound transmission class and the weighted sound reduction index (R(w)), and variations of these measures [H. K. Park and J. S. Bradley, J. Acoust. Soc. Am. 126, 208-219 (2009)]. This paper considers a number of other measures, including signal-to-noise ratios related to the intelligibility of speech and measures related to the loudness of sounds. An exploration of the importance of the included frequencies showed that the optimum ranges of included frequencies were different for speech and music sounds. Measures related to speech intelligibility were useful indicators of responses to speech sounds but were not as successful for music sounds. A-weighted level differences, signal-to-noise ratios and an A-weighted sound transmission loss measure were good predictors of responses when the included frequencies were optimized for each type of sound. The addition of new spectrum adaptation terms to R(w) values was found to be the most practical approach for achieving more accurate predictions of subjective ratings of transmitted speech and music sounds.

  3. 33 CFR 81.20 - Lights and sound signal appliances.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Lights and sound signal appliances. 81.20 Section 81.20 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY... appliances. Each vessel under the 72 COLREGS, except the vessels of the Navy, is exempt from the requirements...

  4. Sound algorithms

    OpenAIRE

    De Götzen, Amalia; Mion, Luca; Tache, Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  5. Stochastic Signal Processing for Sound Environment System with Decibel Evaluation and Energy Observation

    Directory of Open Access Journals (Sweden)

    Akira Ikuta

    2014-01-01

    In a real sound environment system, a specific signal shows various types of probability distribution, and the observation data are usually contaminated by external noise (e.g., background noise) of a non-Gaussian distribution type. Furthermore, various nonlinear correlations potentially exist in addition to the linear correlation between the input and output time series. Consequently, the input-output relationship of the real system often cannot be represented by a simple model using only linear correlation and lower-order statistics. In this study, complex sound environment systems that are difficult to analyze by the usual structural methods are considered. By introducing an estimation method for the system parameters that reflects the correlation information of the conditional probability distribution under the existence of external noise, a method for predicting the output response probability of sound environment systems is theoretically proposed in a form suitable for the additive property of the energy variable and for evaluation on the decibel scale. The effectiveness of the proposed stochastic signal processing method is experimentally confirmed by applying it to data observed in sound environment systems.

  6. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

    Previous works in the literature about one-tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases…, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings, together with microphone positions and source directivity cues, allows information about source position and bearing to be obtained. Moreover, sound sources have been included into sensor systems together… The estimated orientations, one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor…

  7. A new signal development process and sound system for diverting fish from water intakes

    International Nuclear Information System (INIS)

    Klinet, D.A.; Loeffelman, P.H.; van Hassel, J.H.

    1992-01-01

    This paper reports that American Electric Power Service Corporation has explored the feasibility of using a patented signal development process and underwater sound system to divert fish away from water intake areas. The effect of water intakes on fish is being closely scrutinized as hydropower projects are re-licensed. The overall goal of this four-year research project was to develop an underwater guidance system which is biologically effective, reliable and cost-effective compared to other proposed methods of diversion, such as physical screens. Because different fish species have various listening ranges, it was essential to the success of this experiment that the sound system have a great amount of flexibility. Assuming a fish's sounds are heard by the same kind of fish, it was necessary to develop a procedure and acquire instrumentation to properly analyze the sounds that the target fish species create to communicate and any artificial signals being generated for diversion

  8. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with a soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as quadratic input, and that even though classification gets marginally better, not much is gained by increasing the window size beyond 1 s.
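
    The harmonic product spectrum step mentioned above can be sketched briefly: the magnitude spectrum is decimated by integer factors and multiplied, so that the fundamental is reinforced while unrelated peaks are not. The parameters below are assumptions, not the paper's settings.

```python
# Sketch of the pitch extraction step described above: the harmonic product spectrum
# downsamples the magnitude spectrum by integer factors and multiplies the copies,
# reinforcing the fundamental frequency. Parameters are assumptions.
import numpy as np

def hps_pitch(x, fs, n_harmonics=4, n_fft=8192):
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x)), n=n_fft))
    hps = spectrum.copy()
    for k in range(2, n_harmonics + 1):
        decimated = spectrum[::k]
        hps[:len(decimated)] *= decimated
    lo = int(50 * n_fft / fs)                     # ignore the lowest bins (DC leakage)
    peak = lo + np.argmax(hps[lo:len(spectrum) // n_harmonics])
    return peak * fs / n_fft

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 0.5, 1.0 / fs)
    tone = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 5))  # 220 Hz + harmonics
    print(round(hps_pitch(tone, fs), 1), "Hz")
```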

  9. Cuffless and Continuous Blood Pressure Estimation from the Heart Sound Signals

    Directory of Open Access Journals (Sweden)

    Rong-Chao Peng

    2015-09-01

    Cardiovascular diseases such as hypertension are among the top causes of death, and early detection of cardiovascular disease is of great importance. However, traditional medical devices are often bulky and expensive, and unsuitable for home healthcare. In this paper, we propose an easy and inexpensive technique to estimate continuous blood pressure from heart sound signals acquired by the microphone of a smartphone. A cold-pressor experiment was performed on 32 healthy subjects, with a smartphone to acquire heart sound signals and a commercial device to measure continuous blood pressure. The Fourier spectrum of the second heart sound and the blood pressure were regressed using a support vector machine, and the accuracy of the regression was evaluated using 10-fold cross-validation. Statistical analysis showed that the mean correlation coefficients between the values predicted by the regression model and those measured by the commercial device were 0.707, 0.712, and 0.748 for systolic, diastolic, and mean blood pressure, respectively, and that the mean errors were less than 5 mmHg, with standard deviations less than 8 mmHg. These results suggest that this technique has potential for cuffless and continuous blood pressure monitoring and promising applications in home healthcare services.
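
    A minimal sketch of the regression step described above: spectral features of the second heart sound are mapped to blood pressure with a support vector machine and scored by 10-fold cross-validation. The data are synthetic, and the feature extraction and kernel settings are assumptions, not the paper's exact setup.

```python
# Sketch of the regression step described above: features from the second heart
# sound's Fourier spectrum are mapped to blood pressure with an SVM and evaluated
# with 10-fold cross-validation. Data are synthetic; settings are assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)
n_subjects, n_bins = 32, 64

# pretend spectra of the second heart sound (S2), one row per subject
spectra = rng.random((n_subjects, n_bins))
# pretend systolic pressures loosely related to two spectral bins plus noise
sbp = 110 + 30 * spectra[:, 5] - 20 * spectra[:, 20] + rng.normal(scale=4, size=n_subjects)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, spectra, sbp, cv=cv, scoring="neg_mean_absolute_error")
print("mean absolute error per fold (mmHg):", (-scores).round(1))
```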

  10. Cognitive Bias for Learning Speech Sounds From a Continuous Signal Space Seems Nonlinguistic

    Directory of Open Access Journals (Sweden)

    Sabine van der Ham

    2015-10-01

    When learning language, humans have a tendency to produce more extreme distributions of speech sounds than those observed most frequently: in rapid, casual speech, vowel sounds are centralized, yet cross-linguistically, peripheral vowels occur almost universally. We investigate whether adults' generalization behavior reveals selective pressure for communication when they learn skewed distributions of speech-like sounds from a continuous signal space. The domain-specific hypothesis predicts that the emergence of sound categories is driven by a cognitive bias to make these categories maximally distinct, resulting in more skewed distributions in participants' reproductions. However, our participants showed more centered distributions, which goes against this hypothesis, indicating that there are no strong innate linguistic biases affecting the learning of these speech-like sounds. The centralization behavior can be explained by a lack of communicative pressure to maintain categories.

  11. Assessing signal-driven mechanism in neonates: brain responses to temporally and spectrally different sounds

    Directory of Open Access Journals (Sweden)

    Yasuyo Minagawa-Kawai

    2011-06-01

    Past studies have found that in adults, the acoustic properties of sound signals (such as fast vs. slow temporal features) differentially activate the left and right hemispheres, and some have hypothesized that left-lateralization for speech processing may follow from left-lateralization for rapidly changing signals. Here, we tested whether newborns' brains show evidence of signal-specific lateralized responses, using near-infrared spectroscopy (NIRS) and auditory stimuli that elicit lateralized responses in adults, composed of segments that vary in duration and spectral diversity. We found significantly greater bilateral responses of oxygenated hemoglobin (oxy-Hb) in the temporal areas for stimuli with a minimum segment duration of 21 ms than for stimuli with a minimum segment duration of 667 ms. However, we found no evidence for hemispheric asymmetries dependent on the stimulus characteristics. We hypothesize that acoustic-based functional brain asymmetries may develop throughout early infancy, and discuss their possible relationship with brain asymmetries for language.

  12. Vibrotactile Identification of Signal-Processed Sounds from Environmental Events Presented by a Portable Vibrator: A Laboratory Study

    Directory of Open Access Journals (Sweden)

    Parivash Ranjbar

    2008-09-01

    Objectives: To evaluate different signal-processing algorithms for tactile identification of environmental sounds in a monitoring aid for the deafblind. The subjects were two men and three women, sensorineurally deaf or profoundly hearing impaired, with experience of vibratory experiments, aged 22-36 years. Methods: A closed set of 45 representative environmental sounds were processed using two transposing (TRHA, TR1/3) and three modulating algorithms (AM, AMFM, AMMC) and presented as tactile stimuli using a portable vibrator in three experiments. The algorithms TRHA, TR1/3, AMFM and AMMC had two alternatives (with and without adaptation to vibratory thresholds). In Exp. 1, the sounds were preprocessed and fed directly to the vibrator. In Exp. 2 and 3, the sounds were presented in an acoustic test room, without or with background noise (SNR = +5 dB), and processed in real time. Results: In Exp. 1, algorithms AMFM and AMFM(A) consistently had the lowest identification scores and were thus excluded from Exp. 2 and 3. TRHA, AM, AMMC, and AMMC(A) showed comparable identification scores (30%-42%) and the addition of noise did not deteriorate the performance. Discussion: Algorithms TRHA, AM, AMMC, and AMMC(A) showed good performance in all three experiments and were robust in noise; they can therefore be used in further testing in real environments.

  13. Vibrotactile Detection, Identification and Directional Perception of Signal-Processed Sounds from Environmental Events: A Pilot Field Evaluation in Five Cases

    Directory of Open Access Journals (Sweden)

    Parivash Ranjbar

    2008-09-01

    Objectives: To conduct field tests of a vibrotactile aid for deaf/deafblind persons for detection, identification and directional perception of environmental sounds. Methods: Five deaf individuals (3F/2M, 22–36 years) tested the aid separately in a home environment (kitchen) and in a traffic environment. Their eyes were blindfolded; they wore a headband and held a vibrator for sound identification. Three microphones were mounted in the headband, along with two vibrators for signalling the direction of the sound source. The sounds originated from events typical of the home environment and of traffic. The subjects were inexperienced (events unknown) and experienced (events known). They identified the events in the home and traffic environments, but perceived sound source direction only in traffic. Results: The detection scores were higher than 98% both in the home and in the traffic environment. In the home environment, identification scores varied between 25%-58% when the subjects were inexperienced and between 33%-83% when they were experienced. In traffic, identification scores varied between 20%-40% when the subjects were inexperienced and between 22%-56% when they were experienced. The directional perception scores varied between 30%-60% when inexperienced and between 61%-83% when experienced. Discussion: The vibratory aid consistently improved all participants' detection, identification and directional perception ability.

  14. Monitoring of aquifer pump tests with Magnetic Resonance Sounding (MRS): a synthetic case study

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Auken, E.; Bauer-Gottwein, Peter

    2011-01-01

    Magnetic Resonance Sounding (MRS) can provide valuable data to constrain and calibrate groundwater flow and transport models. With this non-invasive geophysical technique, measurements of water content and hydraulic conductivity can be obtained. We developed a hydrogeophysical forward method, which...... calculates the MRS-signal generated by an aquifer pump test. A synthetic MRS-dataset was subsequently used to determine the hydrogeological parameters in an inverse parameter estimation approach. This was done for a virtual pump test with a partially and a fully penetrating well. With the MRS data we were...

  15. Time-frequency peak filtering for random noise attenuation of magnetic resonance sounding signal

    Science.gov (United States)

    Lin, Tingting; Zhang, Yang; Yi, Xiaofeng; Fan, Tiehu; Wan, Ling

    2018-05-01

    When measuring in a geomagnetic field, the method of magnetic resonance sounding (MRS) is often limited because of the notably low signal-to-noise ratio (SNR). Most current studies focus on discarding spiky noise and power-line harmonic noise cancellation. However, the effects of random noise should not be underestimated. The common method for random noise attenuation is stacking, but collecting multiple recordings merely to suppress random noise is time-consuming. Moreover, stacking is insufficient to suppress high-level random noise. Here, we propose the use of time-frequency peak filtering for random noise attenuation, which is performed after the traditional de-spiking and power-line harmonic removal method. By encoding the noisy signal with frequency modulation and estimating the instantaneous frequency using the peak of the time-frequency representation of the encoded signal, the desired MRS signal can be acquired from only one stack. The performance of the proposed method is tested on synthetic envelope signals and field data from different surveys. Good estimations of the signal parameters are obtained at different SNRs. Moreover, an attempt to use the proposed method to handle a single recording provides better results compared to 16 stacks. Our results suggest that the number of stacks can be appropriately reduced to shorten the measurement time and improve the measurement efficiency.

  16. Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter

    Science.gov (United States)

    Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.

    2017-04-01

    The development of science and technology in the field of healthcare increasingly provides convenience in diagnosing respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and then stored in an audio file. These breath sounds are analyzed by health practitioners to diagnose the symptoms of disease or illness. However, the breath sounds are not free from interfering signals. Therefore, a noise filter or signal interference reduction system is required so that the breath sound component containing the information signal can be clarified. In this study, we designed a wavelet-transform-based filter. The filter designed in this study uses a Daubechies wavelet with four wavelet transform coefficients. Based on tests with ten types of breath sound data, the largest SNR, 74.3685 dB, was obtained for the bronchial breath sounds.
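
    A short sketch of the wavelet filter described above, using the PyWavelets package: a multi-level Daubechies decomposition, soft-thresholding of the detail coefficients, and reconstruction. The abstract's "four wavelet transform coefficients" is interpreted here as 'db4'; that reading, and the universal-threshold rule, are assumptions.

```python
# Sketch of the wavelet denoising filter described above: multi-level Daubechies
# decomposition, soft-thresholding of the detail coefficients, reconstruction.
# Assumes the PyWavelets package; 'db4' and the universal threshold are assumptions.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from the finest scale
    thr = sigma * np.sqrt(2 * np.log(len(x)))                # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

if __name__ == "__main__":
    fs = 4000
    t = np.arange(0, 2.0, 1.0 / fs)
    breath = np.sin(2 * np.pi * 150 * t) * np.exp(-((t % 1.0) - 0.3) ** 2 / 0.02)
    noisy = breath + 0.3 * np.random.randn(t.size)
    clean = wavelet_denoise(noisy)
    print("residual noise std:", np.std(clean - breath).round(3))
```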

  17. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  18. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    properties can be modified by sound absorption, refraction, and interference from multi paths caused by reflections.The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms to influence...... communication sounds for airborne acoustics and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals....

  19. Digital servo control of random sound fields

    Science.gov (United States)

    Nakich, R. B.

    1973-01-01

    It is necessary to place a number of sensors at different positions in the sound field to determine the actual sound intensities to which the test object is subjected. It is then possible to determine whether the specification is being met adequately or exceeded. Since the excitation is of a random nature, the signals are essentially coherent and it is impossible to obtain a true average.

  20. Subjective Evaluation of Audiovisual Signals

    Directory of Open Access Journals (Sweden)

    F. Fikejz

    2010-01-01

    This paper deals with the subjective evaluation of audiovisual signals, with emphasis on the interaction between acoustic and visual quality. The subjective test is realized by a simple rating method. The audiovisual signal used in this test is a combination of images compressed with the JPEG codec and sound samples compressed with MPEG-1 Layer III. The images and sounds have various contents. This simulates a real situation in which the subject listens to compressed music and watches compressed pictures without access to the original, i.e. uncompressed, signals.

  1. Contralateral routing of signals disrupts monaural level and spectral cues to sound localisation on the horizontal plane.

    Science.gov (United States)

    Pedley, Adam J; Kitterick, Pádraig T

    2017-09-01

    Contra-lateral routing of signals (CROS) devices re-route sound between the deaf and hearing ears of unilaterally-deaf individuals. This rerouting would be expected to disrupt access to monaural level cues that can support monaural localisation in the horizontal plane. However, such a detrimental effect has not been confirmed by clinical studies of CROS use. The present study aimed to exercise strict experimental control over the availability of monaural cues to localisation in the horizontal plane and the fitting of the CROS device to assess whether signal routing can impair the ability to locate sources of sound and, if so, whether CROS selectively disrupts monaural level or spectral cues to horizontal location, or both. Unilateral deafness and CROS device use were simulated in twelve normal hearing participants. Monaural recordings of broadband white noise presented from three spatial locations (-60°, 0°, and +60°) were made in the ear canal of a model listener using a probe microphone with and without a CROS device. The recordings were presented to participants via an insert earphone placed in their right ear. The recordings were processed to disrupt either monaural level or spectral cues to horizontal sound location by roving presentation level or the energy across adjacent frequency bands, respectively. Localisation ability was assessed using a three-alternative forced-choice spatial discrimination task. Participants localised above chance levels in all conditions. Spatial discrimination accuracy was poorer when participants only had access to monaural spectral cues compared to when monaural level cues were available. CROS use impaired localisation significantly regardless of whether level or spectral cues were available. For both cues, signal re-routing had a detrimental effect on the ability to localise sounds originating from the side of the deaf ear (-60°). CROS use also impaired the ability to use level cues to localise sounds originating from

  2. Plant acoustics: in the search of a sound mechanism for sound signaling in plants.

    Science.gov (United States)

    Mishra, Ratnesh Chandra; Ghosh, Ritesh; Bae, Hanhong

    2016-08-01

    Being sessile, plants continuously deal with their dynamic and complex surroundings, identifying important cues and reacting with appropriate responses. Consequently, the sensitivity of plants has evolved to perceive a myriad of external stimuli, which ultimately ensures their successful survival. Research over past centuries has established that plants respond to environmental factors such as light, temperature, moisture, and mechanical perturbations (e.g. wind, rain, touch, etc.) by suitably modulating their growth and development. However, sound vibrations (SVs) as a stimulus have only started receiving attention relatively recently. SVs have been shown to increase the yields of several crops and strengthen plant immunity against pathogens. These vibrations can also prime the plants so as to make them more tolerant to impending drought. Plants can recognize the chewing sounds of insect larvae and the buzz of a pollinating bee, and respond accordingly. It is thus plausible that SVs may serve as a long-range stimulus that evokes ecologically relevant signaling mechanisms in plants. Studies have suggested that SVs increase the transcription of certain genes, soluble protein content, and support enhanced growth and development in plants. At the cellular level, SVs can change the secondary structure of plasma membrane proteins, affect microfilament rearrangements, produce Ca(2+) signatures, cause increases in protein kinases, protective enzymes, peroxidases, antioxidant enzymes, amylase, H(+)-ATPase / K(+) channel activities, and enhance levels of polyamines, soluble sugars and auxin. In this paper, we propose a signaling model to account for the molecular episodes that SVs induce within the cell, and in so doing we uncover a number of interesting questions that need to be addressed by future research in plant acoustics. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions

  3. Correlation between hypoosmotic swelling test and breeding soundness evaluation of adult Nelore bulls

    Directory of Open Access Journals (Sweden)

    Tamires Miranda Neto

    2011-07-01

    This study aimed at evaluating the relationship between physical and morphological semen features and the hypoosmotic swelling (HOS) test in the raw semen of adult Nelore bulls classified as sound or unsound for breeding. Two hundred and six Nelore bulls aged 3-10 years were subjected to breeding soundness examination. After the physical and morphological semen examination, the HOS test was performed. Following the breeding soundness examination, 94.2% of the bulls were classified as sound for breeding. There was no difference between the average scrotal circumference of bulls classified as sound and unsound for breeding (P>0.05), but there was a difference in all physical and morphological semen aspects between bulls classified as sound and unsound for breeding (P>0.05). There was no difference in the mean percentage of spermatozoa reactive to the HOS test between sound (38.4±17.9) and unsound animals (39.5±16.4; P>0.05), and no Pearson correlation between the HOS test and the other variables. According to these results, the HOS test cannot be used alone to predict the reproductive potential of adult Nelore bulls.

  4. Non-destructive testing of full-length bonded rock bolts based on HHT signal analysis

    Science.gov (United States)

    Shi, Z. M.; Liu, L.; Peng, M.; Liu, C. C.; Tao, F. J.; Liu, C. S.

    2018-04-01

    Full-length bonded rock bolts are commonly used in mining, tunneling and slope engineering because of their simple design and resistance to corrosion. However, the length of a rock bolt and grouting quality do not often meet the required design standards in practice because of the concealment and complexity of bolt construction. Non-destructive testing is preferred when testing a rock bolt's quality because of the convenience, low cost and wide detection range. In this paper, a signal analysis method for the non-destructive sound wave testing of full-length bonded rock bolts is presented, which is based on the Hilbert-Huang transform (HHT). First, we introduce the HHT analysis method to calculate the bolt length and identify defect locations based on sound wave reflection test signals, which includes decomposing the test signal via empirical mode decomposition (EMD), selecting the intrinsic mode functions (IMF) using the Pearson Correlation Index (PCI) and calculating the instantaneous phase and frequency via the Hilbert transform (HT). Second, six model tests are conducted using different grouting defects and bolt protruding lengths to verify the effectiveness of the HHT analysis method. Lastly, the influence of the bolt protruding length on the test signal, identification of multiple reflections from defects, bolt end and protruding end, and mode mixing from EMD are discussed. The HHT analysis method can identify the bolt length and grouting defect locations from signals that contain noise at multiple reflected interfaces. The reflection from the long protruding end creates an irregular test signal with many frequency peaks on the spectrum. The reflections from defects barely change the original signal because they are low energy, which cannot be adequately resolved using existing methods. The HHT analysis method can identify reflections from the long protruding end of the bolt and multiple reflections from grouting defects based on mutations in the instantaneous

  5. Equivalent threshold sound pressure levels for acoustic test signals of short duration

    DEFF Research Database (Denmark)

    Poulsen, Torben; Daugaard, Carsten

    1998-01-01

    . The measurements were performed with two types of headphones, Telephonics TDH-39 and Sennheiser HDA-200. The sound pressure levels were measured in an IEC 318 ear simulator with Type 1 adapter (a flat plate) and a conical ring. The audiometric methods used in the experiments were the ascending method (ISO 8253...

  6. Sound lateralization test in adolescent blind individuals.

    Science.gov (United States)

    Yabe, Takao; Kaga, Kimitaka

    2005-06-21

    Blind individuals need to compensate for the lack of visual information with other sensory inputs. In particular, auditory inputs are crucial to such individuals. To investigate whether blind individuals localize sound in space better than sighted individuals, we tested the auditory ability of adolescent blind individuals using a sound lateralization method. The interaural time difference discrimination thresholds of blind individuals were statistically significantly shorter than those of blind individuals with residual vision and of controls. These findings suggest that blind individuals have better auditory spatial ability than individuals with visual cues; therefore, some perceptual compensation occurred in the former.

  7. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    Heart sound is one of the most important physiological signals. However, the process of acquiring a heart sound signal can be disturbed by many external factors. The recorded heart sound is a weak electrical signal, so even weak external noise may lead to misjudgment of the pathological and physiological information it carries and thus to misdiagnosis. Removing the noise mixed with the heart sound is therefore essential. In this paper, a systematic study of heart sound denoising based on MATLAB is presented. The noisy heart sound signals are first transformed into the wavelet domain and decomposed at multiple levels using the wavelet transform. Soft thresholding is then applied to the detail coefficients to suppress the noise, which significantly improves the denoising result. The denoised signals are reconstructed stepwise from the processed detail coefficients. Lastly, 50 Hz power-line interference and 35 Hz electromechanical interference are removed using a notch filter.
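
    The abstract describes the denoising chain in prose; the following Python sketch (the paper itself works in MATLAB) mirrors it with multi-level wavelet decomposition, soft thresholding of the detail coefficients, reconstruction, and notch filtering at 50 Hz and 35 Hz. The wavelet, decomposition level, threshold rule and notch Q are assumptions.

    ```python
    import numpy as np
    import pywt                                    # assumed: PyWavelets
    from scipy.signal import iirnotch, filtfilt

    def denoise_heart_sound(x, fs, wavelet="db6", level=5):
        coeffs = pywt.wavedec(x, wavelet, level=level)            # approximation + detail coefficients
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from finest details
        thr = sigma * np.sqrt(2.0 * np.log(len(x)))               # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        y = pywt.waverec(coeffs, wavelet)[: len(x)]               # stepwise reconstruction
        for f0 in (50.0, 35.0):                                   # mains and mechanical interference
            b, a = iirnotch(f0, Q=30.0, fs=fs)
            y = filtfilt(b, a, y)
        return y
    ```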

  8. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    Sounds in the natural environment form an important class of biologically relevant nonstationary signals. We propose a dynamic spectral measure to characterize the spectral dynamics of such non-stationary sound signals and classify them based on rate of change of spectral dynamics. We categorize sounds with slowly ...

  9. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common for a physical system to resonate at a particular frequency that depends on physical parameters which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to utilize standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal was used to probe turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.
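
    As a purely software illustration of the demodulation performed by the lock-in amplifier in this system (the real instrument is hardware, and the frequency-tracking loop is omitted), the sketch below mixes the received record with in-phase and quadrature references at the drive frequency and low-pass filters the products to recover the second sound amplitude and phase.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def lockin_demodulate(x, fs, f_drive, bandwidth_hz=5.0):
        """Recover amplitude and phase of the component of x at the drive frequency."""
        t = np.arange(len(x)) / fs
        i = x * np.cos(2 * np.pi * f_drive * t)           # in-phase mixing
        q = -x * np.sin(2 * np.pi * f_drive * t)          # quadrature mixing
        b, a = butter(4, bandwidth_hz / (fs / 2.0))       # low-pass keeps the slow envelope
        I, Q = filtfilt(b, a, i), filtfilt(b, a, q)
        return 2.0 * np.hypot(I, Q), np.arctan2(Q, I)     # amplitude, phase relative to the drive
    ```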

  10. Soundness confirmation through cold test of the system equipment of HTTR

    International Nuclear Information System (INIS)

    Ono, Masato; Shinohara, Masanori; Iigaki, Kazuhiko; Tochio, Daisuke; Nakagawa, Shigeaki; Shimazaki, Yosuke

    2014-01-01

    The HTTR was built at the Oarai Research and Development Center of the Japan Atomic Energy Agency to establish and upgrade the technology infrastructure for high-temperature gas-cooled reactors. It currently performs safety demonstration tests in order to demonstrate the safety inherent in high-temperature gas-cooled reactors. After the Great East Japan Earthquake, a confirmation test was conducted to survey the soundness of the facilities and equipment, and it confirmed that the soundness of the equipment was maintained. In the two years after that confirmation test, however, it had not been confirmed whether the functions of the dynamic equipment and the soundness of items such as the airtightness of pipes and vessels were still maintained, given possible damage or deterioration caused by aftershocks during those two years or by aging. To confirm the soundness of these facilities, operation under cold conditions was conducted, and the obtained plant data were compared with the confirmation test data to evaluate the presence of any abnormality. In addition, in order to check through the cold test for damage due to aftershocks and degradation due to aging, the confirmation test data were used as the reference, and the plant data at machine start-up and during normal operation were evaluated for abnormalities. (A.O.)

  11. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  12. Identification of Mobile Phone and Analysis of Original Version of Videos through a Delay Time Analysis of Sound Signals from Mobile Phone Videos.

    Science.gov (United States)

    Hwang, Min Gu; Har, Dong Hwan

    2017-11-01

    This study designs a method of identifying the camera model used to record videos that are distributed through mobile phones and of determining whether a mobile phone video is the original version, for use as legal evidence. For this analysis, an experiment was conducted to find the unique characteristics of each mobile phone. The videos recorded by mobile phones were analyzed to establish the delay time of their sound signals, and the differences between the delay times of the sound signals of different mobile phones were traced by classifying their characteristics. Furthermore, the sound input signals of mobile phone videos used as legal evidence were analyzed to ascertain whether they have the unique characteristics of the original version. The objective of this study was to find a method for validating mobile phone videos as legal evidence through differences in the delay times of their sound input signals. © 2017 American Academy of Forensic Sciences.
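
    The study's actual delay-measurement procedure is not reproduced here; as a generic illustration, the sketch below estimates the offset between a reference sound and the audio track extracted from a video by cross-correlation, which is a common way to quantify such delays. The function and variable names are hypothetical.

    ```python
    import numpy as np
    from scipy.signal import correlate

    def estimate_delay(reference, recorded, fs):
        """Return the delay (s) of `recorded` relative to `reference`, sampled at fs."""
        xcorr = correlate(recorded, reference, mode="full")      # full cross-correlation
        lag = int(np.argmax(np.abs(xcorr))) - (len(reference) - 1)  # lag of the correlation peak
        return lag / fs                                           # positive = recorded lags reference
    ```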

  13. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can

  14. The influence of ski helmets on sound perception and sound localisation on the ski slope

    Directory of Open Access Journals (Sweden)

    Lana Ružić

    2015-04-01

    Full Text Available Objectives: The aim of the study was to investigate whether a ski helmet interferes with sound localization and the time of sound perception in the frontal plane. Material and Methods: Twenty-three participants (age 30.7±10.2) were tested on the slope in 2 conditions, with and without a ski helmet, using 6 different spatially distributed sound stimuli per condition. Each subject had to react as soon as possible when hearing a sound and to indicate the correct side of sound arrival. Results: The results showed a significant difference in the ability to localize the specific ski sounds; 72.5±15.6% of correct answers without a helmet vs. 61.3±16.2% with a helmet (p < 0.01). However, the performance on this test did not depend on whether the participants were used to wearing a helmet (p = 0.89). In identifying the point at which the sound was first perceived, the results were also in favor of the subjects not wearing a helmet. The subjects reported hearing the ski sound cues at 73.4±5.56 m without a helmet vs. 60.29±6.34 m with a helmet (p < 0.001). In that case the results did depend on previous helmet use (p < 0.05), meaning that regular usage of helmets might help to diminish the attenuation of sound identification that occurs because of the helmet. Conclusions: Ski helmets might limit the ability of a skier to localize the direction of sounds of danger and might interfere with the moment at which the sound is first heard.

  15. Fluctuations of radio occultation signals in sounding the Earth's atmosphere

    Directory of Open Access Journals (Sweden)

    V. Kan

    2018-02-01

    Full Text Available We discuss the relationships that link the observed fluctuation spectra of the amplitude and phase of signals used for the radio occultation sounding of the Earth's atmosphere, with the spectra of atmospheric inhomogeneities. Our analysis employs the approximation of the phase screen and of weak fluctuations. We make our estimates for the following characteristic inhomogeneity types: (1) the isotropic Kolmogorov turbulence and (2) the anisotropic saturated internal gravity waves. We obtain the expressions for the variances of the amplitude and phase fluctuations of radio occultation signals as well as their estimates for the typical parameters of inhomogeneity models. From the GPS/MET observations, we evaluate the spectra of the amplitude and phase fluctuations in the altitude interval from 4 to 25 km in the middle and polar latitudes. As indicated by theoretical and experimental estimates, the main contribution into the radio signal fluctuations comes from the internal gravity waves. The influence of the Kolmogorov turbulence is negligible. We derive simple relationships that link the parameters of internal gravity waves and the statistical characteristics of the radio signal fluctuations. These results may serve as the basis for the global monitoring of the wave activity in the stratosphere and upper troposphere.

  16. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    Science.gov (United States)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds which they make. Thus developing a technique to localize sound sources amidst loud noise will support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique which searches for unperceivable sound in loud noise environments. Two speakers simultaneously played a noise of a generator and a voice decreased by 20 dB (= 1/100 of power) from the generator noise at an outdoor space where cicadas were making noise. The sound signal was received by a horizontally set linear microphone array 1.05 m in length and consisting of 15 microphones. The direction and the distance of the voice were computed and the sound of the voice was extracted and played back as an audible sound by array signal processing.
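
    The record outlines the array processing only in prose. Below is a minimal delay-and-sum direction-of-arrival sketch for a 15-microphone linear array, one standard technique for this kind of search; the microphone spacing (1.05 m over 14 gaps, about 0.075 m), the scan grid and the frequency-domain steering are assumptions, and the authors' full algorithm (which also estimates distance and extracts the voice) is not reproduced.

    ```python
    import numpy as np

    def delay_and_sum_doa(frames, fs, d=0.075, c=343.0):
        """frames: (M, N) array of simultaneously sampled microphone signals."""
        M, N = frames.shape
        spectra = np.fft.rfft(frames, axis=1)
        freqs = np.fft.rfftfreq(N, 1.0 / fs)
        angles = np.linspace(-90.0, 90.0, 181)                  # candidate arrival angles (degrees)
        powers = []
        for theta in np.deg2rad(angles):
            delays = np.arange(M) * d * np.sin(theta) / c       # per-microphone propagation delays
            steering = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])  # compensate delays
            beam = np.sum(spectra * steering, axis=0)           # align and sum (beamforming)
            powers.append(np.sum(np.abs(beam) ** 2))
        return angles[int(np.argmax(powers))]                   # angle with maximum beam power
    ```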

  17. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Throughout several audio and audio-video tests we have compared both Foley and real sounds originated by an identical action. The main purpose was to evaluate if sound effects...

  18. A framework for automatic heart sound analysis without segmentation

    Directory of Open Access Journals (Sweden)

    Tungpimolrut Kanokvate

    2011-02-01

    Full Text Available Abstract Background A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method An equal number of cardiac cycles was extracted from heart sounds with different heart rates using information from the envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion The proposed method showed promising results and high noise robustness for a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients and then further evaluating the method based on this new training set.
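
    A hedged sketch of the cycle-length step described in the abstract: compute the envelope of the heart sound, autocorrelate it, and take the most prominent peak inside a physiologically plausible lag range as the cardiac cycle length. The envelope choice (Hilbert magnitude) and the 40-200 bpm search range are assumptions, not the paper's exact settings.

    ```python
    import numpy as np
    from scipy.signal import hilbert, find_peaks

    def cardiac_cycle_length(pcg, fs):
        """Estimate the cardiac cycle length (s) from the autocorrelation of the envelope."""
        env = np.abs(hilbert(pcg))                                 # amplitude envelope
        env = env - np.mean(env)
        ac = np.correlate(env, env, mode="full")[len(env) - 1:]    # one-sided autocorrelation
        ac = ac / ac[0]
        lo, hi = int(fs * 60 / 200), int(fs * 60 / 40)             # lags for 200 bpm .. 40 bpm
        peaks, _ = find_peaks(ac[lo:hi])
        if len(peaks) == 0:
            return None
        best = peaks[np.argmax(ac[lo:hi][peaks])]                  # most prominent periodicity peak
        return (lo + best) / fs
    ```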

  19. Sound preference test in animal models of addicts and phobias.

    Science.gov (United States)

    Soga, Ryo; Shiramatsu, Tomoyo I; Kanzaki, Ryohei; Takahashi, Hirokazu

    2016-08-01

    Biased or excessively strong preference for a particular object is often problematic, resulting in addiction or phobia. In animal models, alternative forced-choice tasks have been routinely used, but such preference tests are far from the daily situations that addicts or phobics face. In the present study, we developed a behavioral assay to evaluate the preference for sounds in rodents. In the assay, several sounds were presented according to the position of freely moving rats, and sound preference was quantified based on their behavior. A particular tone was paired with microstimulation of the ventral tegmental area (VTA), which plays a central role in reward processing, to increase sound preference. The behaviors of the rats were logged during the classical conditioning over six days. Consequently, some behavioral indices suggested that the rats searched for the conditioned sound. Thus, our data demonstrate that quantitative evaluation of preference in this behavioral assay is feasible.

  20. Can joint sound assess soft and hard endpoints of the Lachman test?: A preliminary study.

    Science.gov (United States)

    Hattori, Koji; Ogawa, Munehiro; Tanaka, Kazunori; Matsuya, Ayako; Uematsu, Kota; Tanaka, Yasuhito

    2016-05-12

    The Lachman test is considered to be a reliable physical examination for anterior cruciate ligament (ACL) injury. Patients with a damaged ACL demonstrate a soft endpoint feeling. However, examiners judge the soft and hard endpoints subjectively. The purpose of our study was to confirm objective performance of the Lachman test using joint auscultation. Human and porcine knee joints were examined. The knee joint sound during the Lachman test (Lachman sound) was analyzed by fast Fourier transformation. As quantitative indices of the Lachman sound, the peak sound, i.e., the maximum relative amplitude (acoustic pressure), and its frequency were used. The mean Lachman peak sound for healthy volunteer knees was 86.9 ± 12.9 Hz in frequency and -40 ± 2.5 dB in acoustic pressure. The mean Lachman peak sound for intact porcine knees was 84.1 ± 9.4 Hz and -40.5 ± 1.7 dB. Porcine knees with ACL deficiency had a soft endpoint feeling during the Lachman test. The Lachman peak sounds of porcine knees with ACL deficiency were dispersed into four distinct groups, with center frequencies of around 40, 160, 450, and 1600 Hz. The Lachman peak sound was capable of assessing the soft and hard endpoints of the Lachman test objectively.
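
    For illustration (assumed, not taken from the paper), the two indices used above, the frequency of the spectral peak and its level in dB, can be read off a windowed FFT as in the sketch below; the dB reference (a full-scale sine) is an assumption.

    ```python
    import numpy as np

    def lachman_peak(sound, fs):
        """Return (peak frequency in Hz, peak level in dB re a full-scale sine)."""
        window = np.hanning(len(sound))
        spectrum = 2.0 * np.abs(np.fft.rfft(sound * window)) / np.sum(window)  # amplitude spectrum
        freqs = np.fft.rfftfreq(len(sound), 1.0 / fs)
        k = int(np.argmax(spectrum))                                # spectral peak bin
        return freqs[k], 20.0 * np.log10(spectrum[k] + 1e-12)
    ```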

  1. Multidimensional Approach to the Development of a Mandarin Chinese-Oriented Sound Test

    Science.gov (United States)

    Hung, Yu-Chen; Lin, Chun-Yi; Tsai, Li-Chiun; Lee, Ya-Jung

    2016-01-01

    Purpose: Because the Ling six-sound test is based on American English phonemes, it can yield unreliable results when administered to non-English speakers. In this study, we aimed to improve specifically the diagnostic palette for Mandarin Chinese users by developing an adapted version of the Ling six-sound test. Method: To determine the set of…

  2. Neuromimetic Sound Representation for Percept Detection and Manipulation

    Directory of Open Access Journals (Sweden)

    Chi Taishih

    2005-01-01

    Full Text Available The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal sampled at . Work on bringing the algorithms into the real-time processing domain is ongoing.

  3. Sound response of superheated drop bubble detectors to neutrons

    International Nuclear Information System (INIS)

    Gao Size; Chen Zhe; Liu Chao; Ni Bangfa; Zhang Guiying; Zhao Changfa; Xiao Caijin; Liu Cunxiong; Nie Peng; Guan Yongjing

    2012-01-01

    The sound response of the bubble detectors to neutrons was described using a 252Cf neutron source. The sound signals were filtered by a sound card and a PC. The short-time signal energy, FFT spectrum, power spectrum, and decay time constant were obtained to determine the authenticity of the sound signals from bubbles. (authors)

  4. Neural processing of auditory signals and modular neural control for sound tropism of walking machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Pasemann, Frank; Fischer, Joern

    2005-01-01

    and a neural preprocessing system together with a modular neural controller are used to generate a sound tropism of a four-legged walking machine. The neural preprocessing network is acting as a low-pass filter and it is followed by a network which discerns between signals coming from the left or the right....... The parameters of these networks are optimized by an evolutionary algorithm. In addition, a simple modular neural controller then generates the desired different walking patterns such that the machine walks straight, then turns towards a switched-on sound source, and then stops near to it....

  5. Review of sound card photogates

    International Nuclear Information System (INIS)

    Gingl, Zoltan; Mingesz, Robert; Mellar, Janos; Makra, Peter

    2011-01-01

    Photogates are probably the most commonly used electronic instruments to aid experiments in the field of mechanics. Although they are offered by many manufacturers, they can be too expensive to be widely used in all classrooms, in multiple experiments or even at home experimentation. Today all computers have a sound card - an interface for analogue signals. It is possible to make very simple yet highly accurate photogates for cents, while much more sophisticated solutions are also available at a still very low cost. In our paper we show several experimentally tested ways of implementing sound card photogates in detail, and we also provide full-featured, free, open-source photogate software as a much more efficient experimentation tool than the usually used sound recording programs. Further information is provided on a dedicated web page, www.noise.physx.u-szeged.hu/edudev.

  6. Psychometric characteristics of single-word tests of children's speech sound production.

    Science.gov (United States)

    Flipsen, Peter; Ogiela, Diane A

    2015-04-01

    Our understanding of test construction has improved since the now-classic review by McCauley and Swisher (1984). The current review article examines the psychometric characteristics of current single-word tests of speech sound production in an attempt to determine whether our tests have improved since then. It also provides a resource that clinicians may use to help them make test selection decisions for their particular client populations. Ten tests published since 1990 were reviewed to determine whether they met the 10 criteria set out by McCauley and Swisher (1984), as well as 7 additional criteria. All of the tests reviewed met at least 3 of McCauley and Swisher's (1984) original criteria, and 9 of 10 tests met at least 5 of them. Most of the tests met some of the additional criteria as well. The state of the art for single-word tests of speech sound production in children appears to have improved in the last 30 years. There remains, however, room for improvement.

  7. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: First, the filtered responses should generate...... an acoustic separation between the control regions. Secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...

  8. Fetus Sound Stimulation: Cilia Memristor Effect of Signal Transduction

    Directory of Open Access Journals (Sweden)

    Svetlana Jankovic-Raznatovic

    2014-01-01

    Full Text Available Background. This experimental study evaluates fetal middle cerebral artery (MCA) circulation after a defined prenatal acoustical stimulation (PAS) and the role of cilia in hearing and memory, and could explain signal transduction and memory according to the optical-acoustical properties of cilia. Methods. PAS was performed twice on 119 no-risk term pregnancies. We analyzed fetal MCA circulation before, after the first and after the second PAS. Results. Analysis of the basic Pulsatility index (PIB) before PAS and the reactive Pulsatility index after the first PAS (PIR 1) shows a highly statistically significant difference, representing a strong influence on brain circulation. Analysis of PIB and the reactive Pulsatility index after the second PAS (PIR 2) shows no statistical difference. Cilia as nanoscale structures possess a magnetic flux linkage that depends on the amount of charge that has passed between the two-terminal variable resistors of the cilia. Microtubule resistance, as a function of the current through and voltage across the structure, leads to the appearance of cilia memory with the "memristor" property. Conclusion. Acoustical and optical cilia properties play a crucial role in hearing and memory processes. We suggest that fetuses get used to sound, developing a kind of memory pattern, considering acoustical and electromagnetic waves and involving cilia and microtubules, and we try to explain signal transduction.

  9. Segmentation of heart sound recordings by a duration-dependent hidden Markov model

    International Nuclear Information System (INIS)

    Schmidt, S E; Graff, C; Toft, E; Struijk, J J; Holst-Hansen, C

    2010-01-01

    Digital stethoscopes offer new opportunities for computerized analysis of heart sounds. Segmentation of heart sound recordings into periods related to the first and second heart sound (S1 and S2) is fundamental in the analysis process. However, segmentation of heart sounds recorded with handheld stethoscopes in clinical environments is often complicated by background noise. A duration-dependent hidden Markov model (DHMM) is proposed for robust segmentation of heart sounds. The DHMM identifies the most likely sequence of physiological heart sounds, based on duration of the events, the amplitude of the signal envelope and a predefined model structure. The DHMM model was developed and tested with heart sounds recorded bedside with a commercially available handheld stethoscope from a population of patients referred for coronary arterioangiography. The DHMM identified 890 S1 and S2 sounds out of 901 which corresponds to 98.8% (CI: 97.8–99.3%) sensitivity in 73 test patients and 13 misplaced sounds out of 903 identified sounds which corresponds to 98.6% (CI: 97.6–99.1%) positive predictivity. These results indicate that the DHMM is an appropriate model of the heart cycle and suitable for segmentation of clinically recorded heart sounds

  10. Arctic Ocean Model Intercomparison Using Sound Speed

    Science.gov (United States)

    Dukhovskoy, D. S.; Johnson, M. A.

    2002-05-01

    The monthly and annual means from three Arctic ocean - sea ice climate model simulations are compared for the period 1979-1997. Sound speed is used to integrate model outputs of temperature and salinity along a section between Barrow and Franz Josef Land. A statistical approach is used to test for differences among the three models for two basic data subsets. We integrated and then analyzed an upper layer between 2 m - 50 m, and also a deep layer from 500 m to the bottom. The deep layer is characterized by low time-variability. No high-frequency signals appear in the deep layer having been filtered out in the upper layer. There is no seasonal signal in the deep layer and the monthly means insignificantly oscillate about the long-period mean. For the deep ocean the long-period mean can be considered quasi-constant, at least within the 19 year period of our analysis. Thus we assumed that the deep ocean would be the best choice for comparing the means of the model outputs. The upper (mixed) layer was chosen to contrast the deep layer dynamics. There are distinct seasonal and interannual signals in the sound speed time series in this layer. The mixed layer is a major link in the ocean - air interaction mechanism. Thus, different mean states of the upper layer in the models might cause different responses in other components of the Arctic climate system. The upper layer also strongly reflects any differences in atmosphere forcing. To compare data from the three models we have used a one-way t-test for the population mean, the Wilcoxon one-sample signed-rank test (when the requirement of normality of tested data is violated), and one-way ANOVA method and F-test to verify our hypothesis that the model outputs have the same mean sound speed. The different statistical approaches have shown that all models have different mean characteristics of the deep and upper layers of the Arctic Ocean.

  11. Noise detection during heart sound recording using periodicity signatures

    International Nuclear Information System (INIS)

    Kumar, D; Carvalho, P; Paiva, R P; Henriques, J; Antunes, M

    2011-01-01

    Heart sound is a valuable biosignal for diagnosis of a large set of cardiac diseases. Ambient and physiological noise interference is one of the most usual and highly probable incidents during heart sound acquisition. It tends to change the morphological characteristics of heart sound that may carry important information for heart disease diagnosis. In this paper, we propose a new method applicable in real time to detect ambient and internal body noises manifested in heart sound during acquisition. The algorithm is developed on the basis of the periodic nature of heart sounds and physiologically inspired criteria. A small segment of uncontaminated heart sound exhibiting periodicity in time as well as in the time-frequency domain is first detected and applied as a reference signal in discriminating noise from the sound. The proposed technique has been tested with a database of heart sounds collected from 71 subjects with several types of heart disease inducing several noises during recording. The achieved average sensitivity and specificity are 95.88% and 97.56%, respectively
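
    As a rough illustration of the idea described above (and not the authors' algorithm), the sketch below compares each segment of a recording against a clean, periodic reference segment and flags segments whose normalized correlation falls below a threshold; the segment length, threshold value and normalization are assumptions.

    ```python
    import numpy as np

    def flag_noisy_segments(pcg, fs, reference, seg_len_s=1.0, threshold=0.5):
        """reference: a clean periodic stretch shorter than one segment; threshold is assumed."""
        n = int(seg_len_s * fs)
        ref = reference - np.mean(reference)
        flags = []
        for start in range(0, len(pcg) - n + 1, n):
            seg = pcg[start:start + n] - np.mean(pcg[start:start + n])
            xcorr = np.correlate(seg, ref, mode="valid")          # slide reference over the segment
            norm = np.std(seg) * np.std(ref) * len(ref) + 1e-12
            score = np.max(np.abs(xcorr)) / norm                  # approximate normalized correlation
            flags.append(score < threshold)                       # True = likely noise-contaminated
        return flags
    ```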

  12. From acoustic descriptors to evoked quality of car door sounds.

    Science.gov (United States)

    Bezat, Marie-Céline; Kronland-Martinet, Richard; Roussarie, Vincent; Ystad, Sølvi

    2014-07-01

    This article describes the first part of a study aiming at adapting the mechanical car door construction to the drivers' expectancies in terms of perceived quality of cars deduced from car door sounds. A perceptual cartography of car door sounds is obtained from various listening tests aiming at revealing both ecological and analytical properties linked to evoked car quality. In the first test naive listeners performed absolute evaluations of five ecological properties (i.e., solidity, quality, weight, closure energy, and success of closure). Then experts in the area of automobile doors categorized the sounds according to organic constituents (lock, joints, door panel), in particular whether or not the lock mechanism could be perceived. Further, a sensory panel of naive listeners identified sensory descriptors such as classical descriptors or onomatopoeia that characterize the sounds, hereby providing an analytic description of the sounds. Finally, acoustic descriptors were calculated after decomposition of the signal into a lock and a closure component by the Empirical Mode Decomposition (EMD) method. A statistical relationship between the acoustic descriptors and the perceptual evaluations of the car door sounds could then be obtained through linear regression analysis.

  13. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Engineering Test Report: METSAT A1 Signal Processor (P/N: 1331670-2, S/N: F04)

    Science.gov (United States)

    Lund, D.

    1998-01-01

    This report presents a description of the tests performed, and the test data, for the A1 METSAT Signal Processor Assembly PN: 1331679-2, S/N F04. The assembly was tested in accordance with AE-26754, "METSAT Signal Processor Scan Drive Test and Integration Procedure." The objective is to demonstrate functionality of the signal processor prior to instrument integration.

  14. Second sound scattering in superfluid helium

    International Nuclear Information System (INIS)

    Rosgen, T.

    1985-01-01

    Focusing cavities are used to study the scattering of second sound in liquid helium II. The special geometries reduce wall interference effects and allow measurements in very small test volumes. In a first experiment, a double elliptical cavity is used to focus a second sound wave onto a small wire target. A thin film bolometer measures the side scattered wave component. The agreement with a theoretical estimate is reasonable, although some problems arise from the small measurement volume and associated alignment requirements. A second cavity is based on confocal parabolas, thus enabling the use of large planar sensors. A cylindrical heater again produces a focused second sound wave. Three sensors monitor the transmitted wave component as well as the side scatter in two different directions. The side looking sensors have very high sensitivities due to their large size and resistance. Specially developed cryogenic amplifiers are used to match them to the signal cables. In one case, a second auxiliary heater is used to set up a strong counterflow in the focal region. The second sound wave then scatters from the induced fluid disturbances

  15. Acoustic analysis of trill sounds.

    Science.gov (United States)

    Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri

    2012-04-01

    In this paper, the acoustic-phonetic characteristics of steady apical trills--trill sounds produced by the periodic vibration of the apex of the tongue--are studied. Signal processing methods, namely, zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect the effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are glottal epochs, strength of impulses at the glottal epochs, and instantaneous fundamental frequency of the glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of vocal tract system during the production of trill sounds. Qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help spotting trills in continuous speech are discussed.

  16. [Mechanism of the constant representation of the position of a sound signal source by the cricket cercal system neurons].

    Science.gov (United States)

    Rozhkova, G I; Polishcuk, N A

    1976-01-01

    Previously it has been shown that some abdominal giant neurones of the cricket have constant preferred directions of sound stimulation in relation not to the cerci (the organs bearing the sound receptors) but to the insect body (fig. 1) [1]. Now it is found that the independence of the directional sensitivity of the giant neurones from the cerci position disappears after cutting all structures connecting the cerci to the body (except the cercal nerves) (fig. 2). Therefore the constancy of the directional sensitivity of the giant neurones is provided by proprioceptive signals about the cerci position.

  17. Heart Sound Localization and Reduction in Tracheal Sounds by Gabor Time-Frequency Masking

    OpenAIRE

    SAATCI, Esra; Akan, Aydın

    2018-01-01

    Background and aim: Respiratory sounds, i.e. tracheal and lung sounds, have been of great interest due to their diagnostic values as well as the potential of their use in the estimation of the respiratory dynamics (mainly airflow). Thus the aim of the study is to present a new method to filter the heart sound interference from the tracheal sounds. Materials and methods: Tracheal sounds and airflow signals were collected by using an accelerometer from 10 healthy subjects. Tracheal sounds were then pr...

  18. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic sounds, and the presence or absence of an abnormality is judged from the magnitude of the synchronized components. The synchronized-component sampling means can remove resonance sounds and other acoustic sounds that are not generated synchronously with the rotation, based on the knowledge that the acoustic components generated in a normal state are a sort of resonance sound and are not precisely synchronized with the rotation speed. Abnormal sounds of a rotating body, on the other hand, are often driven by forces accompanying the rotation, so such abnormal sounds can be detected by extracting only the rotation-synchronized components. Since the components of the normal acoustic sounds are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing is avoided and, as a result, the abnormal sound detection sensitivity is improved. Further, since the device discriminates the occurrence of abnormal sounds from the actually detected sounds, other frequency components which are forecast but not actually generated are not removed, which further improves the detection sensitivity. (N.H.)

  19. Visualization of Broadband Sound Sources

    OpenAIRE

    Sukhanov Dmitry; Erzakova Nadezhda

    2016-01-01

    In this paper a method for imaging wideband audio sources is proposed, based on 2D microphone array measurements in which the sound field is captured simultaneously at all microphones. The designed microphone array consists of 160 microphones and digitizes the signals at a rate of 7200 Hz. The measured signals are processed using a special algorithm that makes it possible to obtain a flat image of wideband sound sources. It is shown experimentally that the visualization is not dependent on the...

  20. Sparse representation of Gravitational Sound

    Science.gov (United States)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

    Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local measure of sparsity is shown to render meaningful information about the variation of a signal along time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to divulge Gravitational Sound in the form of a ring tone.

  1. The Sound Quality of Cochlear Implants: Studies With Single-sided Deaf Patients.

    Science.gov (United States)

    Dorman, Michael F; Natale, Sarah Cook; Butts, Austin M; Zeitler, Daniel M; Carlson, Matthew L

    2017-09-01

    The goal of the present study was to assess the sound quality of a cochlear implant for single-sided deaf (SSD) patients fit with a cochlear implant (CI). One of the fundamental, unanswered questions in CI research is "what does an implant sound like?" Conventional CI patients must use the memory of a clean signal, often decades old, to judge the sound quality of their CIs. In contrast, SSD-CI patients can rate the similarity of a clean signal presented to the CI ear and candidate, CI-like signals presented to the ear with normal hearing. For Experiment 1 four types of stimuli were created for presentation to the normal hearing ear: noise vocoded signals, sine vocoded signals, frequency shifted, sine vocoded signals and band-pass filtered, natural speech signals. Listeners rated the similarity of these signals to unmodified signals sent to the CI on a scale of 0 to 10 with 10 being a complete match to the CI signal. For Experiment 2 multitrack signal mixing was used to create natural speech signals that varied along multiple dimensions. In Experiment 1 for eight adult SSD-CI listeners, the best median similarity rating to the sound of the CI for noise vocoded signals was 1.9; for sine vocoded signals 2.9; for frequency upshifted signals, 1.9; and for band pass filtered signals, 5.5. In Experiment 2 for three young listeners, combinations of band pass filtering and spectral smearing led to ratings of 10. The sound quality of noise and sine vocoders does not generally correspond to the sound quality of cochlear implants fit to SSD patients. Our preliminary conclusion is that natural speech signals that have been muffled to one degree or another by band pass filtering and/or spectral smearing provide a close, but incomplete, match to CI sound quality for some patients.

  2. Applying cybernetic technology to diagnose human pulmonary sounds.

    Science.gov (United States)

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) are greater than 120 Hz and the human ear is not sensitive to low frequencies, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, and the PS signals were decomposed into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vectors of a neural network. We proposed a 2-stage classifier combined with a back-propagation (BP) neural network and learning vector quantization (LVQ) neural network, which improves classification accuracy by using a haploid neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To expand traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waves, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.

  3. Constraints on decay of environmental sound memory in adult rats.

    Science.gov (United States)

    Sakai, Masashi

    2006-11-27

    When adult rats are pretreated with a 48-h-long 'repetitive nonreinforced sound exposure', performance in two-sound discriminative operant conditioning transiently improves. We have already proven that this 'sound exposure-enhanced discrimination' is dependent upon enhancement of the perceptual capacity of the auditory cortex. This study investigated the principles governing the decay of sound exposure-enhanced discrimination. Sound exposure-enhanced discrimination disappeared within approximately 72 h if animals were deprived of environmental sounds after sound exposure, and this period shortened to less than approximately 60 h if they were exposed to environmental sounds in the animal room. Sound deprivation itself exerted no clear effects. These findings suggest that the memory of a passively exposed, behaviorally irrelevant sound signal does not merely fade over its intrinsic lifetime but is also degraded by other incoming signals.

  4. Monitoring of surface chemical and underground nuclear explosions with help of ionospheric radio-sounding above test site

    International Nuclear Information System (INIS)

    Krasnov, V.M.; Drobzheva, Ya.V.

    2000-01-01

    We describe the basic principles, advantages and disadvantages of the ionospheric method for monitoring surface chemical and underground nuclear explosions. The ionosphere acts as 'an apparatus' for infrasound measurements immediately above the test site, and remote radio sounding of the ionosphere retrieves this information, in effect carrying out an inspection of the test site. The main disadvantage of the ionospheric method is the need to sound the ionosphere with radio waves. (author)

  5. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine eSignoret

    2011-07-01

    Full Text Available While it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  6. An open access database for the evaluation of heart sound algorithms.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.

  7. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

    Science.gov (United States)

    Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert

    2005-12-01

    A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.

  8. Effects of sounds of locomotion on speech perception

    Directory of Open Access Journals (Sweden)

    Matz Larsson

    2015-01-01

    Full Text Available Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in pace or walking out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal ("just follow conversation" or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.

  9. Testing Cosmology with Cosmic Sound Waves

    CERN Document Server

    Corasaniti, Pier Stefano

    2008-01-01

    WMAP observations have accurately determined the position of the first two peaks and dips in the CMB temperature power spectrum. These encode information on the ratio of the distance to the last scattering surface to the sound horizon at decoupling. However pre-recombination processes can contaminate this distance information. In order to assess the amplitude of these effects we use the WMAP data and evaluate the relative differences of the CMB peaks and dips multipoles. We find that the position of the first peak is largely displaced with respect to the expected position of the sound horizon scale at decoupling. In contrast the relative spacings of the higher extrema are statistically consistent with those expected from perfect harmonic oscillations. This provides evidence for a scale dependent phase shift of the CMB oscillations which is caused by gravitational driving forces affecting the propagation of sound waves before recombination. By accounting for these effects we have performed a MCMC likelihoo...

  10. Device for precision measurement of speed of sound in a gas

    Science.gov (United States)

    Kelner, Eric; Minachi, Ali; Owen, Thomas E.; Burzynski, Jr., Marion; Petullo, Steven P.

    2004-11-30

    A sensor for measuring the speed of sound in a gas. The sensor has a helical coil, through which the gas flows before entering an inner chamber. Flow through the coil brings the gas into thermal equilibrium with the test chamber body. After the gas enters the chamber, a transducer produces an ultrasonic pulse, which is reflected from each of two faces of a target. The time difference between the two reflected signals is used to determine the speed of sound in the gas.
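
    A worked example of the measurement principle stated above: the ultrasonic pulse reflects from the two faces of the target, and the time difference between the two echoes corresponds to one round trip across the known face separation. The numbers below are illustrative, not instrument data.

    ```python
    def speed_of_sound(face_separation_m, echo_time_difference_s):
        # The extra path between the two reflections is twice the face separation.
        return 2.0 * face_separation_m / echo_time_difference_s

    # Example: a 25 mm face separation and a 112 microsecond echo spacing
    c = speed_of_sound(0.025, 112e-6)   # ~446 m/s, roughly the value for methane at room temperature
    ```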

  11. Alternative Paths to Hearing (A Conjecture. Photonic and Tactile Hearing Systems Displaying the Frequency Spectrum of Sound

    Directory of Open Access Journals (Sweden)

    E. H. Hara

    2006-01-01

    Full Text Available In this article, the hearing process is considered from a system engineering perspective. For those with total hearing loss, a cochlear implant is the only direct remedy. It first acts as a spectrum analyser and then electronically stimulates the neurons in the cochlea with a number of electrodes. Each electrode carries information on the separate frequency bands (i.e., spectrum of the original sound signal. The neurons then relay the signals in a parallel manner to the section of the brain where sound signals are processed. Photonic and tactile hearing systems displaying the spectrum of sound are proposed as alternative paths to the section of the brain that processes sound. In view of the plasticity of the brain, which can rewire itself, the following conjectures are offered. After a certain period of training, a person without the ability to hear should be able to decipher the patterns of photonic or tactile displays of the sound spectrum and learn to ‘hear’. This is very similar to the case of a blind person learning to ‘read’ by recognizing the patterns created by the series of bumps as their fingers scan the Braille writing. The conjectures are yet to be tested. Designs of photonic and tactile systems displaying the sound spectrum are outlined.

  12. Using K-Nearest Neighbor Classification to Diagnose Abnormal Lung Sounds

    Directory of Open Access Journals (Sweden)

    Chin-Hsing Chen

    2015-06-01

    Full Text Available A reported 30% of people worldwide have abnormal lung sounds, including crackles, rhonchi, and wheezes. To date, the traditional stethoscope remains the most popular tool used by physicians to diagnose such abnormal lung sounds; however, many problems arise with the use of a stethoscope, including the effects of environmental noise, the inability to record and store lung sounds for follow-up or tracking, and the physician’s subjective diagnostic experience. This study has developed a digital stethoscope to help physicians overcome these problems when diagnosing abnormal lung sounds. In this digital system, mel-frequency cepstral coefficients (MFCCs) were used to extract the features of lung sounds, and then the K-means algorithm was used for feature clustering, to reduce the amount of data for computation. Finally, the K-nearest neighbor method was used to classify the lung sounds. The proposed system can also be used for home care: if the percentage of abnormal lung sound frames is > 30% of the whole test signal, the system can automatically warn the user to visit a physician for diagnosis. We also used bend sensors together with an amplification circuit, Bluetooth, and a microcontroller to implement a respiration detector. The respiratory signal extracted by the bend sensors can be transmitted to the computer via Bluetooth to calculate the respiratory cycle, for real-time assessment. If an abnormal status is detected, the device will warn the user automatically. Experimental results indicated that the error in respiratory cycles between measured and actual values was only 6.8%, illustrating the potential of our detector for home care applications.
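
    A hedged sketch of the processing chain described (MFCC features, K-means reduction, K-nearest-neighbor classification); the feature sizes, cluster and neighbor counts, and the use of librosa for MFCC extraction are assumptions, not the authors' implementation.

    ```python
    import numpy as np
    import librosa                                      # assumed available for MFCC extraction
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    def mfcc_codebook(y, sr, n_mfcc=13, n_clusters=8):
        """Fixed-length descriptor: K-means centers of the frame-wise MFCCs."""
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T      # frames x coefficients
        centers = KMeans(n_clusters=n_clusters, n_init=10).fit(mfcc).cluster_centers_
        centers = centers[np.argsort(centers[:, 0])]                  # sort for a consistent ordering
        return centers.flatten()

    def classify_lung_sound(train_clips, train_labels, test_clip, sr):
        X = np.array([mfcc_codebook(y, sr) for y in train_clips])
        knn = KNeighborsClassifier(n_neighbors=5).fit(X, train_labels)  # K-nearest neighbor
        return knn.predict([mfcc_codebook(test_clip, sr)])[0]
    ```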

  13. Sounds scary? Lack of habituation following the presentation of novel sounds.

    Directory of Open Access Journals (Sweden)

    Tine A Biedenweg

    Full Text Available BACKGROUND: Animals typically show less habituation to biologically meaningful sounds than to novel signals. We might therefore expect that acoustic deterrents should be based on natural sounds. METHODOLOGY: We investigated responses by western grey kangaroos (Macropus fuliginosus) towards playback of natural sounds (alarm foot stomps and Australian raven (Corvus coronoides) calls) and artificial sounds (faux snake hiss and bull whip crack). We then increased rate of presentation to examine whether animals would habituate. Finally, we varied frequency of playback to investigate optimal rates of delivery. PRINCIPAL FINDINGS: Nine behaviors clustered into five Principal Components. PC factors 1 and 2 (animals alert or looking, or hopping and moving out of area) accounted for 36% of variance. PC factor 3 (eating cessation, taking flight, movement out of area) accounted for 13% of variance. Factors 4 and 5 (relaxing, grooming and walking; 12 and 11% of variation, respectively) discontinued upon playback. The whip crack was most evocative; eating was reduced from 75% of time spent prior to playback to 6% following playback (post alarm stomp: 32%, raven call: 49%, hiss: 75%). Additionally, 24% of individuals took flight and moved out of area (50 m radius) in response to the whip crack (foot stomp: 0%, raven call: 8% and 4%, hiss: 6%). Increasing rate of presentation (12x/min × 2 min) caused 71% of animals to move out of the area. CONCLUSIONS/SIGNIFICANCE: The bull whip crack, an artificial sound, was as effective as the alarm stomp at eliciting aversive behaviors. Kangaroos did not fully habituate despite hearing the signal up to 20x/min. Highest rates of playback did not elicit the greatest responses, suggesting that 'more is not always better'. Ultimately, by utilizing both artificial and biological sounds, predictability may be masked or offset, so that habituation is delayed and more effective deterrents may be produced.

  14. Categorization of common sounds by cochlear implanted and normal hearing adults.

    Science.gov (United States)

    Collett, E; Marx, M; Gaillard, P; Roby, B; Fraysse, B; Deguine, O; Barone, P

    2016-05-01

    Auditory categorization involves grouping of acoustic events along one or more shared perceptual dimensions which can relate to both semantic and physical attributes. This process involves both high-level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI) device. The goal of this study was twofold: (I) to compare the categorization strategies of CI users and normal-hearing listeners (NHL); (II) to investigate whether any characteristics of the raw acoustic signal could explain the results. 16 experienced CI users and 20 NHL were tested using a Free-Sorting Task of 16 common sounds divided into 3 predefined categories of environmental, musical and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) show that CI users followed a similar categorization strategy to that of NHL and were able to discriminate between the three different types of sounds. However, results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted the average pitch salience and average autocorrelation peak as being important for the perception and categorization of the sounds. The results therefore show that on a broad level of categorization CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however, the perception of individual sounds remains challenging. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Time-domain electromagnetic soundings at the Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Frischknecht, F.C.; Raab, P.V.

    1984-01-01

    Structural discontinuities and variations in the resistivity of near-surface rocks often seriously distort dc resistivity and frequency-domain electromagnetic (FDEM) depth sounding curves. Reliable interpretation of such curves using one-dimensional (1-D) models is difficult or impossible. Short-offset time-domain electromagnetic (TDEM) sounding methods offer a number of advantages over other common geoelectrical sounding methods when working in laterally heterogeneous areas. In order to test the TDEM method in a geologically complex region, measurements were made on the east flank of Yucca Mountain at the Nevada Test Site (NTS). Coincident, offset coincident, single, and central loop configurations with square transmitting loops, either 305 or 152 m on a side, were used. Measured transient voltages were transformed into apparent resistivity values and then inverted in terms of 1-D models. Good fits to all of the offset coincident and single loop data were obtained using three-layer models. In most of the area, two well-defined interfaces were mapped, one which corresponds closely to a contact between stratigraphic units at a depth of about 400 m and another which corresponds to a transition from relatively unaltered to altered volcanic rocks at a depth of about 1000 m. In comparison with the results of a dipole-dipole resistivity survey, the results of the TDEM survey emphasize changes in the geoelectrical section with depth. Nonetheless, discontinuities in the layering mapped with the TDEM method delineated major faults or fault zones along the survey traverse. 5 refs., 10 figs., 1 tab

  16. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
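
    Illustrative sketch (Python): the record describes a spiking-neuron model; as a far simpler stand-in for timing-based localization, this sketch estimates the interaural time difference by cross-correlating two ear signals, with an assumed sampling rate and delay.

      import numpy as np

      def estimate_itd(left, right, fs):
          # Lag of the cross-correlation peak; a negative value means the
          # right-ear signal lags the left-ear signal.
          corr = np.correlate(left, right, mode="full")
          lag = np.argmax(corr) - (len(right) - 1)
          return lag / fs

      fs = 44100
      rng = np.random.default_rng(1)
      source = rng.normal(size=4410)                     # 0.1 s of broadband noise
      delay = 20                                         # assumed ITD of 20 samples
      left = source
      right = np.concatenate([np.zeros(delay), source[:-delay]])
      print(estimate_itd(left, right, fs))               # about -4.5e-4 s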

  17. Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation

    Directory of Open Access Journals (Sweden)

    Enrique A Lopez-Poveda

    2013-07-01

    Full Text Available Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects.

  18. Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation

    Science.gov (United States)

    Lopez-Poveda, Enrique A.; Barrios, Pablo

    2013-01-01

    Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176
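
    Illustrative sketch (Python): a rough rendering of the stochastic-undersampling idea in the two records above; each simulated 'afferent' keeps waveform samples with a probability that grows with instantaneous amplitude, and aggregating more afferents gives a cleaner representation. The test signal, probabilities and afferent counts are arbitrary choices, not the authors' vocoder.

      import numpy as np

      rng = np.random.default_rng(0)
      fs = 16000
      t = np.arange(fs) / fs
      wave = np.sin(2 * np.pi * 440 * t)          # stand-in for one band's signal

      def afferent(wave, gain=0.3):
          # Keep-probability grows with instantaneous (half-wave) amplitude.
          p = np.clip(gain * np.maximum(wave, 0.0), 0.0, 1.0)
          return np.where(rng.random(wave.size) < p, wave, 0.0)

      def aggregate(wave, n_afferents):
          return np.mean([afferent(wave) for _ in range(n_afferents)], axis=0)

      target = np.maximum(wave, 0.0)              # half-wave-rectified reference
      for n in (1, 10, 100):                      # fewer afferents = more "deafferented"
          r = np.corrcoef(aggregate(wave, n), target)[0, 1]
          print(n, round(r, 3))                   # correlation rises with n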

  19. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  20. Wing, tail, and vocal contributions to the complex acoustic signals of courting Calliope hummingbirds

    Directory of Open Access Journals (Sweden)

    Christopher James CLARK

    2011-04-01

    Full Text Available Multi-component signals contain multiple signal parts expressed in the same physical modality. One way to identify individual components is if they are produced by different physical mechanisms. Here, I studied the mechanisms generating acoustic signals in the courtship displays of the Calliope hummingbird Stellula calliope. Display dives consisted of three synchronized sound elements, a high-frequency tone (hft, a low frequency tone (lft, and atonal sound pulses (asp, which were then followed by a frequency-modulated fall. Manipulating any of the rectrices (tail-feathers of wild males impaired production of the lft and asp but not the hft or fall, which are apparently vocal. I tested the sound production capabilities of the rectrices in a wind tunnel. Single rectrices could generate the lft but not the asp, whereas multiple rectrices tested together produced sounds similar to the asp when they fluttered and collided with their neighbors percussively, representing a previously unknown mechanism of sound production. During the shuttle display, a trill is generated by the wings during pulses in which the wingbeat frequency is elevated to 95 Hz, 40% higher than the typical hovering wingbeat frequency. The Calliope hummingbird courtship displays include sounds produced by three independent mechanisms, and thus include a minimum of three acoustic signal components. These acoustic mechanisms have different constraints and thus potentially contain different messages. Producing multiple acoustic signals via multiple mechanisms may be a way to escape the constraints present in any single mechanism [Current Zoology 57 (2: 187–196, 2011].

  1. 40 CFR 205.54-1 - Low speed sound emission test procedures.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Low speed sound emission test procedures. 205.54-1 Section 205.54-1 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) NOISE ABATEMENT PROGRAMS TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS Medium and Heavy Trucks § 205...

  2. Car audio using DSP for active sound control. DSP ni yoru active seigyo wo mochiita audio

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, K.; Asano, S.; Furukawa, N. (Mitsubishi Motor Corp., Tokyo (Japan))

    1993-06-01

    In the automobile cabin, there are some unique problems which spoil the quality of sound reproduction from audio equipment, such as the narrow space and/or the background noise. Audio signal processing using a DSP (digital signal processor) enables a solution to these problems. A car audio system with high amenity has been successfully produced by active sound control using a DSP. The DSP consists of an adder, coefficient multiplier, delay unit, and connections. The actual DSP processing provides functions such as sound field correction, response to and processing of noise during driving, surround reproduction, and graphic equalizer processing. High effectiveness of the method was confirmed through an actual driving evaluation test. The present paper describes the actual method of sound control technology using a DSP. In particular, the dynamic processing of noise during driving is discussed in detail. 1 ref., 12 figs., 1 tab.

  3. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.

  4. Test signal generation for analog circuits

    Directory of Open Access Journals (Sweden)

    B. Burdiek

    2003-01-01

    Full Text Available In this paper a new test signal generation approach for general analog circuits, based on the variational calculus and modern control theory methods, is presented. The computed transient test signals, also called test stimuli, are optimal with respect to the detection of a given fault set by means of a predefined merit functional representing a fault detection criterion. The test signal generation problem of finding optimal test stimuli detecting all faults from the fault set is formulated as an optimal control problem. The solution of the optimal control problem representing the test stimuli is computed using an optimization procedure. The optimization procedure is based on the necessary conditions for optimality, such as Pontryagin's maximum principle and the adjoint circuit equations.

  5. Effect of echolocation behavior-related constant frequency-frequency modulation sound on the frequency tuning of inferior collicular neurons in Hipposideros armiger.

    Science.gov (United States)

    Tang, Jia; Fu, Zi-Ying; Wei, Chen-Xue; Chen, Qi-Cai

    2015-08-01

    In constant frequency-frequency modulation (CF-FM) bats, the CF-FM echolocation signals include both CF and FM components, yet the role of such complex acoustic signals in frequency resolution by bats remains unknown. Using CF and CF-FM echolocation signals as acoustic stimuli, the responses of inferior collicular (IC) neurons of Hipposideros armiger were obtained by extracellular recordings. We tested the effect of preceding CF or CF-FM sounds on the shape of the frequency tuning curves (FTCs) of IC neurons. Results showed that both CF-FM and CF sounds reduced the number of IC neurons with FTCs having a tailed lower-frequency side. However, more IC neurons experienced such conversion after adding CF-FM sound compared with CF sound. We also found that the Q20 value of the FTC of IC neurons experienced the largest increase with the addition of CF-FM sound. Moreover, only CF-FM sound could cause an increase in the slope of the neurons' FTCs, and such increase occurred mainly at the lower-frequency edge. These results suggested that CF-FM sound could increase the accuracy of frequency analysis of echoes and cut off low-frequency elements from the habitat of bats more than CF sound.

  6. Bottom-up driven involuntary auditory evoked field change: constant sound sequencing amplifies but does not sharpen neural activity.

    Science.gov (United States)

    Okamoto, Hidehiko; Stracke, Henning; Lagemann, Lothar; Pantev, Christo

    2010-01-01

    The capability of involuntarily tracking certain sound signals during the simultaneous presence of noise is essential in human daily life. Previous studies have demonstrated that top-down auditory focused attention can enhance excitatory and inhibitory neural activity, resulting in sharpening of frequency tuning of auditory neurons. In the present study, we investigated bottom-up driven involuntary neural processing of sound signals in noisy environments by means of magnetoencephalography. We contrasted two sound signal sequencing conditions: "constant sequencing" versus "random sequencing." Based on a pool of 16 different frequencies, either identical (constant sequencing) or pseudorandomly chosen (random sequencing) test frequencies were presented blockwise together with band-eliminated noises to nonattending subjects. The results demonstrated that the auditory evoked fields elicited in the constant sequencing condition were significantly enhanced compared with the random sequencing condition. However, the enhancement was not significantly different between different band-eliminated noise conditions. Thus the present study confirms that by constant sound signal sequencing under nonattentive listening the neural activity in human auditory cortex can be enhanced, but not sharpened. Our results indicate that bottom-up driven involuntary neural processing may mainly amplify excitatory neural networks, but may not effectively enhance inhibitory neural circuits.

  7. Locating and classification of structure-borne sound occurrence using wavelet transformation

    International Nuclear Information System (INIS)

    Winterstein, Martin; Thurnreiter, Martina

    2011-01-01

    For the surveillance of nuclear facilities with respect to detached or loose parts within the pressure boundary, structure-borne sound detector systems are used. The impact of a loose part on the wall causes an energy transfer to the wall that is measured as a so-called singular sound event. The run-time differences of the sound signals allow a rough localization of the loose part. The authors performed a finite-element-based simulation of structure-borne sound measurements using real geometries. New knowledge on sound wave propagation, signal analysis and processing, neural networks or hidden Markov models was considered. Using the wavelet transformation it is possible to improve the localization of structure-borne sound events.
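
    Illustrative sketch (Python, using PyWavelets): the record gives no implementation details; this sketch only shows how a continuous wavelet transform can highlight the arrival time of a burst-like impact in a noisy record, the quantity whose differences between sensors enable the rough localization mentioned above. Sampling rate, wavelet and scales are assumed.

      import numpy as np
      import pywt

      fs = 50000
      n = int(0.02 * fs)                                  # 20 ms record
      t = np.arange(n) / fs
      rng = np.random.default_rng(0)
      x = 0.2 * rng.normal(size=n)                        # background noise
      start = int(0.008 * fs)                             # impact burst at 8 ms
      x[start:start + 50] += np.hanning(50) * np.sin(2 * np.pi * 5000 * t[:50])

      scales = np.arange(2, 32)
      coeffs, _ = pywt.cwt(x, scales, "morl", sampling_period=1 / fs)
      energy = np.sum(np.abs(coeffs) ** 2, axis=0)        # wavelet energy per sample
      print(t[int(np.argmax(energy))])                    # close to the 8 ms impact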

  8. Visualization of Broadband Sound Sources

    Directory of Open Access Journals (Sweden)

    Sukhanov Dmitry

    2016-01-01

    Full Text Available In this paper, a method for imaging wideband audio sources is proposed, based on measurements of the sound field made simultaneously at all microphones of a 2D microphone array. The designed microphone array consists of 160 microphones and digitizes the signals at a frequency of 7200 Hz. The measured signals are processed using a special algorithm that makes it possible to obtain a flat image of wideband sound sources. It is shown experimentally that the visualization does not depend on the waveform but is determined by the bandwidth. The developed system allows sources to be visualized with a resolution of up to 10 cm.
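
    Illustrative sketch (Python): the record's special algorithm is not described in detail; shown instead is generic delay-and-sum focusing on a small synthetic 1-D array, to illustrate how simultaneous array measurements can be focused onto candidate source positions. Only the 7200 Hz rate is taken from the record; the geometry is assumed.

      import numpy as np

      C = 343.0                                   # speed of sound in air, m/s
      FS = 7200                                   # sampling rate quoted in the record

      def focused_energy(signals, mic_x, focus_x, focus_z):
          # Delay-and-sum: undo each microphone's propagation delay from the
          # focus point and sum; a true source position adds up coherently.
          n = signals.shape[1]
          out = np.zeros(n)
          for sig, x in zip(signals, mic_x):
              shift = int(round(np.hypot(focus_x - x, focus_z) / C * FS))
              out[: n - shift] += sig[shift:]
          return float(np.sum(out ** 2))

      # Synthetic scene: 8 microphones on a line, one broadband source at (0.3 m, 1.0 m).
      rng = np.random.default_rng(0)
      mic_x = np.linspace(-0.5, 0.5, 8)
      src = rng.normal(size=FS)
      signals = np.zeros((8, FS))
      for i, x in enumerate(mic_x):
          d = int(round(np.hypot(0.3 - x, 1.0) / C * FS))
          signals[i, d:] = src[: FS - d]

      # Focusing at the true position collects far more energy than focusing elsewhere.
      print(focused_energy(signals, mic_x, 0.3, 1.0) > focused_energy(signals, mic_x, -0.3, 1.0))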

  9. An estimation method for echo signal energy of pipe inner surface longitudinal crack detection by 2-D energy coefficients integration

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shiyuan; Sun, Haoyu; Xu, Chunguang; Cao, Xiandong; Cui, Liming; Xiao, Dingguo [School of Mechanical Engineering, Beijing Institute of Technology, No. 5 Zhongguancun South Street, Haidian District, Beijing 100081 (China)]

    2015-03-31

    The echo signal energy in the detection of inner-surface longitudinal cracks of thick-walled pipes is directly affected by the eccentricity or angle of the incident sound beam. A method for analyzing the relationship between echo signal energy and the value of incident eccentricity is put forward. It can be used to estimate the echo signal energy when testing inner-wall longitudinal cracks of a pipe by the water-immersion method, using shear waves obtained by mode conversion of compression waves, by performing a two-dimensional integration of the “energy coefficient” in both the circumferential and axial directions. The calculation model is established for the case of a cylindrical sound beam, in which the refraction and reflection energy coefficients of different rays within the whole sound beam are considered to be different. The echo signal energy is calculated for a particular cylindrical sound beam, with a diameter of 0.5 inch (12.7 mm), testing two different pipes: a φ279.4 mm pipe and a φ79.4 mm one. As a comparison, the results of both the two-dimensional integration and a one-dimensional (circumferential direction) integration are listed, and only the former agrees well with the experimental results. The estimation method proves to be valid and shows that the usual practice of simplifying the sound beam to a single ray when estimating echo signal energy and choosing the optimal incident eccentricity is not appropriate.

  10. How male sound pressure level influences phonotaxis in virgin female Jamaican field crickets (Gryllus assimilis)

    Directory of Open Access Journals (Sweden)

    Karen Pacheco

    2014-06-01

    Full Text Available Understanding female mate preference is important for determining the strength and direction of sexual trait evolution. The sound pressure level (SPL) acoustic signalers use is often an important predictor of mating success because higher sound pressure levels are detectable at greater distances. If females are more attracted to signals produced at higher sound pressure levels, then the potential fitness impacts of signalling at higher sound pressure levels should be elevated beyond what would be expected from detection distance alone. Here we manipulated the sound pressure level of cricket mate attraction signals to determine how female phonotaxis was influenced. We examined female phonotaxis using two common experimental methods: spherical treadmills and open arenas. Both methods showed similar results, with females exhibiting greatest phonotaxis towards loud sound pressure levels relative to the standard signal (69 vs. 60 dB SPL) but showing reduced phonotaxis towards very loud sound pressure level signals relative to the standard (77 vs. 60 dB SPL). Reduced female phonotaxis towards supernormal stimuli may signify an acoustic startle response, an absence of other required sensory cues, or perceived increases in predation risk.

  11. Anticipated Effectiveness of Active Noise Control in Propeller Aircraft Interiors as Determined by Sound Quality Tests

    Science.gov (United States)

    Powell, Clemans A.; Sullivan, Brenda M.

    2004-01-01

    Two experiments were conducted, using sound quality engineering practices, to determine the subjective effectiveness of hypothetical active noise control systems in a range of propeller aircraft. The two tests differed by the type of judgments made by the subjects: pair comparisons in the first test and numerical category scaling in the second. Although the results of the two tests were in general agreement that the hypothetical active control measures improved the interior noise environments, the pair comparison method appears to be more sensitive to subtle changes in the characteristics of the sounds which are related to passenger preference.

  12. Second Sound for Heat Source Localization

    CERN Document Server

    Vennekate, Hannes; Uhrmacher, Michael; Quadt, Arnulf; Grosse-Knetter, Joern

    2011-01-01

    Defects on the surface of superconducting cavities can limit their accelerating gradient by localized heating. This results in a phase transition to the normal conducting state, a quench. A new application, involving Oscillating Superleak Transducers (OST) to locate such quench-inducing heat spots on the surface of the cavities, has been developed by D. Hartill et al. at Cornell University in 2008. The OSTs enable the detection of heat transfer via second sound in superfluid helium. This thesis presents new results on the analysis of their signal. Its behavior has been studied under different circumstances at setups at the University of Göttingen and at CERN. New approaches for automated signal processing have been developed. Furthermore, a first test setup for a single-cell Superconducting Proton Linac (SPL) cavity has been prepared. Recommendations for better signal retrieval during its operation are presented.

  13. Are occlusal characteristics, headache, parafunctional habits and clicking sounds associated with the signs and symptoms of temporomandibular disorder in adolescents?

    Science.gov (United States)

    Lauriti, Leandro; Motta, Lara Jansiski; Silva, Paula Fernanda da Costa; Leal de Godoy, Camila Haddad; Alfaya, Thays Almeida; Fernandes, Kristianne Porta Santos; Mesquita-Ferrari, Raquel Agnelli; Bussadori, Sandra Kalil

    2013-10-01

    [Purpose] To assess the association between occlusal characteristics, headache, parafunctional habits and clicking sounds and the signs/symptoms of TMD in adolescents. [Subjects] Adolescents between 14 and 18 years of age. [Methods] The participants were evaluated using the Helkimo Index and a clinical examination to track clicking sounds, parafunctional habits and other signs/symptoms of temporomandibular disorder (TMD). Subjects were classified according to the presence or absence of headache, type of occlusion, facial pattern and type of bite. In the statistical analysis we used the chi-square test and Fisher's exact test, with a significance level of 5%. [Results] The sample was made up of 81 adolescents with a mean age of 15.64 years; 51.9% were male. The prevalence of signs/symptoms of TMD was 74.1%, predominantly affecting females. Signs/symptoms of TMD were significantly associated with clicking sounds, headache and nail biting. No associations were found between signs/symptoms of TMD and Angle classification, type of bite or facial pattern. [Conclusion] Headache is one of the symptoms most closely associated with TMD. Clicking sounds were found in the majority of cases. Therefore, the sum of two or more factors may be necessary for the onset and perpetuation of TMD.

  14. Detection test of wireless network signal strength and GPS positioning signal in underground pipeline

    Science.gov (United States)

    Li, Li; Zhang, Yunwei; Chen, Ling

    2018-03-01

    In order to solve the problem of selecting a positioning technology for an inspection robot in an underground pipeline environment, tests of wireless network signal strength and GPS positioning signals were carried out in an actual underground pipeline environment. Firstly, the strength variation of the 3G wireless network signal and Wi-Fi wireless signal provided by China Telecom and China Unicom ground base stations is tested, and the attenuation law of these wireless signals along the pipeline is analyzed quantitatively and described. Then, the reception of the GPS satellite signal in the pipeline is tested, and the attenuation of the GPS satellite signal in the underground pipeline is analyzed. The test results may serve as a reference for other related research that needs to consider positioning in pipelines.

  15. Sound field control for a low-frequency test facility

    DEFF Research Database (Denmark)

    Pedersen, Christian Sejer; Møller, Henrik

    2013-01-01

    The two largest problems in controlling the reproduction of low-frequency sound for psychoacoustic experiments are the effect of the room due to standing waves and the relatively large sound pressure levels needed. Anechoic rooms are limited downward in frequency and distortion may be a problem even at moderate levels, while pressure-field playback can give higher sound pressures but is limited upwards in frequency. A new solution that addresses both problems has been implemented in the laboratory of Acoustics, Aalborg University. The solution uses one wall with 20 loudspeakers to generate a plane wave that is actively absorbed when it reaches the 20 loudspeakers on the opposing wall. This gives a homogeneous sound field in the majority of the room with a flat frequency response in the frequency range 2-300 Hz. The lowest frequencies are limited to sound pressure levels in the order of 95 dB. If larger levels...

  16. Non-contact test of coating by means of laser-induced ultrasonic excitation and holographic sound representation

    International Nuclear Information System (INIS)

    Crostack, H.A.; Pohl, K.Y.; Radtke, U.

    1991-01-01

    In order to circumvent the problems of coupling sound in and out, which occur in conventional ultrasonic testing, a completely non-contact test process was developed. The ultrasonic surface wave required for the test is generated without contact by absorption of laser beams. The recording of the ultrasound also occurs by a non-contact holographic interferometry technique, which permits a large-scale representation of the sound. Using the example of MCrAlY and ZrO2 layers, the suitability of the process for testing thermally sprayed coatings on metal substrates is demonstrated. The possibilities and limits of the process for the detection and description of delamination and cracks are shown. (orig.)

  17. Hearing Tests on Mobile Devices: Evaluation of the Reference Sound Level by Means of Biological Calibration.

    Science.gov (United States)

    Masalski, Marcin; Kipiński, Lech; Grysiński, Tomasz; Kręcicki, Tomasz

    2016-05-30

    Hearing tests carried out in home setting by means of mobile devices require previous calibration of the reference sound level. Mobile devices with bundled headphones create a possibility of applying the predefined level for a particular model as an alternative to calibrating each device separately. The objective of this study was to determine the reference sound level for sets composed of a mobile device and bundled headphones. Reference sound levels for Android-based mobile devices were determined using an open access mobile phone app by means of biological calibration, that is, in relation to the normal-hearing threshold. The examinations were conducted in 2 groups: an uncontrolled and a controlled one. In the uncontrolled group, the fully automated self-measurements were carried out in home conditions by 18- to 35-year-old subjects, without prior hearing problems, recruited online. Calibration was conducted as a preliminary step in preparation for further examination. In the controlled group, audiologist-assisted examinations were performed in a sound booth, on normal-hearing subjects verified through pure-tone audiometry, recruited offline from among the workers and patients of the clinic. In both the groups, the reference sound levels were determined on a subject's mobile device using the Bekesy audiometry. The reference sound levels were compared between the groups. Intramodel and intermodel analyses were carried out as well. In the uncontrolled group, 8988 calibrations were conducted on 8620 different devices representing 2040 models. In the controlled group, 158 calibrations (test and retest) were conducted on 79 devices representing 50 models. Result analysis was performed for 10 most frequently used models in both the groups. The difference in reference sound levels between uncontrolled and controlled groups was 1.50 dB (SD 4.42). The mean SD of the reference sound level determined for devices within the same model was 4.03 dB (95% CI 3

  18. Exploring the perceived harshness of cello sounds by morphing and synthesis techniques.

    Science.gov (United States)

    Rozé, Jocelyn; Aramaki, Mitsuko; Kronland-Martinet, Richard; Ystad, Sølvi

    2017-03-01

    Cello bowing requires a very fine control of the musicians' gestures to ensure the quality of the perceived sound. When the interaction between the bow hair and the string is optimal, the sound is perceived as broad and round. On the other hand, when the gestural control becomes more approximate, the sound quality deteriorates and often becomes harsh, shrill, and quavering. In this study, such a timbre degradation, often described by French cellists as harshness (décharnement), is investigated from both signal and perceptual perspectives. Harsh sounds were obtained from experienced cellists subjected to a postural constraint. A signal approach based on Gabor masks enabled us to capture the main dissimilarities between round and harsh sounds. Two complementary methods perceptually validated these signal features: First, a predictive regression model of the perceived harshness was built from sound continua obtained by a morphing technique. Next, the signal structures identified by the model were validated within a perceptual timbre space, obtained by multidimensional scaling analysis on pairs of synthesized stimuli controlled in harshness. The results revealed that the perceived harshness was due to a combination between a more chaotic harmonic behavior, a formantic emergence, and a weaker attack slope.

  19. Acoustic cardiac signals analysis: a Kalman filter–based approach

    Directory of Open Access Journals (Sweden)

    Salleh SH

    2012-06-01

    Full Text Available Sheik Hussain Salleh,1 Hadrina Sheik Hussain,2 Tan Tian Swee,2 Chee-Ming Ting,2 Alias Mohd Noor,2 Surasak Pipatsart,3 Jalil Ali,4 Preecha P Yupapin31Department of Biomedical Instrumentation and Signal Processing, Universiti Teknologi Malaysia, Skudai, Malaysia; 2Centre for Biomedical Engineering Transportation Research Alliance, Universiti Teknologi Malaysia, Johor Bahru, Malaysia; 3Nanoscale Science and Engineering Research Alliance, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand; 4Institute of Advanced Photonics Science, Universiti Teknologi Malaysia, Johor Bahru, MalaysiaAbstract: Auscultation of the heart is accompanied by both electrical activity and sound. Heart auscultation provides clues to diagnose many cardiac abnormalities. Unfortunately, detection of relevant symptoms and diagnosis based on heart sound through a stethoscope is difficult. The reason GPs find this difficult is that the heart sounds are of short duration and separated from one another by less than 30 ms. In addition, the cost of false positives constitutes wasted time and emotional anxiety for both patient and GP. Many heart diseases cause changes in heart sound, waveform, and additional murmurs before other signs and symptoms appear. Heart-sound auscultation is the primary test conducted by GPs. These sounds are generated primarily by turbulent flow of blood in the heart. Analysis of heart sounds requires a quiet environment with minimum ambient noise. In order to address such issues, the technique of denoising and estimating the biomedical heart signal is proposed in this investigation. Normally, the performance of the filter naturally depends on prior information related to the statistical properties of the signal and the background noise. This paper proposes Kalman filtering for denoising statistical heart sound. The cycles of heart sounds are certain to follow first-order Gauss–Markov process. These cycles are observed with additional noise
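
    Illustrative sketch (Python): a scalar Kalman filter for a signal modeled as a first-order Gauss-Markov process observed in noise, in the spirit of the record; the model coefficient and noise variances are arbitrary illustrative values, not parameters from the paper.

      import numpy as np

      def kalman_denoise(z, a=0.95, q=0.01, r=0.5):
          # Scalar Kalman filter for the state model x[k] = a*x[k-1] + w[k],
          # observed as z[k] = x[k] + v[k]; q and r are the noise variances.
          x_est, p = 0.0, 1.0
          out = np.empty_like(z)
          for k, zk in enumerate(z):
              x_pred, p_pred = a * x_est, a * a * p + q        # predict
              gain = p_pred / (p_pred + r)                     # Kalman gain
              x_est = x_pred + gain * (zk - x_pred)            # update
              p = (1.0 - gain) * p_pred
              out[k] = x_est
          return out

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 1000)
      clean = np.sin(2 * np.pi * 2 * t)                        # slowly varying test signal
      noisy = clean + rng.normal(scale=0.7, size=t.size)
      denoised = kalman_denoise(noisy)
      print(np.std(noisy - clean), np.std(denoised - clean))   # error shrinks after filtering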

  20. 21 CFR 882.1430 - Electroencephalograph test signal generator.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Electroencephalograph test signal generator. 882.1430 Section 882.1430 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... Electroencephalograph test signal generator. (a) Identification. An electroencephalograph test signal generator is a...

  1. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using Takens' method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.

  2. Swallowing sound detection using hidden markov modeling of recurrence plot features

    Energy Technology Data Exchange (ETDEWEB)

    Aboofazeli, Mohammad [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: umaboofa@cc.umanitoba.ca; Moussavi, Zahra [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: mousavi@ee.umanitoba.ca

    2009-01-30

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using Takens' method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.
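
    Illustrative sketch (Python): the delay embedding (Takens' method) and recurrence plot on which the two records above build their features; embedding dimension, delay and threshold are arbitrary, and the hidden Markov modeling stage is not shown.

      import numpy as np

      def delay_embed(x, dim=3, tau=5):
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

      def recurrence_plot(x, dim=3, tau=5, eps=0.3):
          emb = delay_embed(x, dim, tau)
          dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
          return (dist < eps).astype(int)          # 1 where the trajectory revisits itself

      # Toy signal: an oscillatory burst (a crude stand-in for a swallowing sound)
      # embedded in low-level noise.
      rng = np.random.default_rng(0)
      x = 0.1 * rng.normal(size=400)
      x[150:250] += np.sin(2 * np.pi * 0.05 * np.arange(100))
      rp = recurrence_plot(x)
      print(rp.mean())                             # recurrence rate, one simple RP feature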

  3. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2008-01-01

    In the present study, a novel multichannel loudspeaker-based virtual auditory environment (VAE) is introduced. The VAE aims at providing a versatile research environment for investigating the auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The environment is based on the ODEON room acoustic simulation software to render the acoustical scene. ODEON outputs are processed using a combination of different-order Ambisonic techniques to calculate multichannel room impulse responses (mRIR). Auralization is then obtained by convolution of the mRIRs with the source signals. During the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid “artifacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29-loudspeaker setup using both objective and subjective measurement techniques.
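
    Illustrative sketch (Python, using SciPy): the auralization step as convolution of a room impulse response with a dry source signal; the impulse response here is synthetic decaying noise, standing in for the ODEON/Ambisonics-derived multichannel responses, which are not reproduced.

      import numpy as np
      from scipy.signal import fftconvolve

      fs = 44100
      rng = np.random.default_rng(0)

      # Synthetic room impulse response: exponentially decaying noise tail
      # (roughly 0.5 s of reverberation), standing in for a simulated RIR.
      t_rir = np.arange(int(0.5 * fs)) / fs
      rir = rng.normal(size=t_rir.size) * np.exp(-6.9 * t_rir / 0.5)
      rir[0] = 1.0                                         # direct sound

      dry = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # 1 s dry source signal
      wet = fftconvolve(dry, rir)                          # auralized (reverberant) signal
      print(dry.shape, wet.shape)                          # wet is longer by the RIR tail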

  4. Harmonic Frequency Lowering: Effects on the Perception of Music Detail and Sound Quality.

    Science.gov (United States)

    Kirchberger, Martin; Russo, Frank A

    2016-02-01

    A novel algorithm for frequency lowering in music was developed and experimentally tested in hearing-impaired listeners. Harmonic frequency lowering (HFL) combines frequency transposition and frequency compression to preserve the harmonic content of music stimuli. Listeners were asked to make judgments regarding detail and sound quality in music stimuli. Stimuli were presented under different signal processing conditions: original, low-pass filtered, HFL, and nonlinear frequency compressed. Results showed that participants reported perceiving the most detail in the HFL condition. In addition, there was no difference in sound quality across conditions. © The Author(s) 2016.

  5. Ultrasound sounding in air by fast-moving receiver

    Science.gov (United States)

    Sukhanov, D.; Erzakova, N.

    2018-05-01

    A method of ultrasound imaging in air with a fast-moving receiver is considered, for the case when the speed of movement of the receiver cannot be neglected with respect to the speed of sound. In this case, the Doppler effect is significant, making matched filtering of the backscattered signal difficult. The proposed method does not use a continuous repetitive noise-sounding signal. A generalized approach applies spatial matched filtering in the time domain to recover the ultrasonic tomographic images.

  6. Rainforests as concert halls for birds: Are reverberations improving sound transmission of long song elements?

    DEFF Research Database (Denmark)

    Nemeth, Erwin; Dabelsteen, Torben; Pedersen, Simon Boel

    2006-01-01

    In forests reverberations have probably detrimental and beneficial effects on avian communication. They constrain signal discrimination by masking fast repetitive sounds and they improve signal detection by elongating sounds. This ambivalence of reflections for animal signals in forests is similar to the influence of reverberations on speech or music in indoor sound transmission. Since comparisons of sound fields of forests and concert halls have demonstrated that reflections can contribute in both environments a considerable part to the energy of a received sound, it is here assumed that reverberations also benefit transmission in forests, in that longer sounds are less attenuated. The results indicate that higher sound pressure level is caused by superimposing reflections. It is suggested that this beneficial effect of reverberations explains interspecific birdsong differences in element length. Transmission paths with stronger reverberations...

  7. Selecting participants for listening tests of multi-channel reproduced sound

    DEFF Research Database (Denmark)

    Wickelmaier, Florian Maria; Choisel, Sylvain

    2005-01-01

    A selection procedure was devised in order to select listeners for experiments in which their main task will be to judge multichannel reproduced sound. Ninety-one participants filled in a web-based questionnaire. Seventy-eight of them took part in an assessment of their hearing thresholds, their spatial hearing, and their verbal production abilities. The listeners displayed large individual differences in their performance. Forty subjects were selected based on the test results. The self-assessed listening habits and experience in the web-questionnaire could not predict the results of the selection procedure. Further, the hearing thresholds did not correlate with the spatial-hearing test. This leads to the conclusion that task-specific performance tests might be the preferable means of selecting a listening panel.

  8. Selecting participants for listening tests of multi-channel reproduced sound

    DEFF Research Database (Denmark)

    Wickelmaier, Florian; Choisel, Sylvain

    2005-01-01

    A selection procedure was devised in order to select listeners for experiments in which their main task will be to judge multi-channel reproduced sound. 91 participants filled in a web-based questionnaire. 78 of them took part in an assessment of their hearing thresholds, their spatial hearing, and their verbal production abilities. The listeners displayed large individual differences in their performance. 40 subjects were selected based on the test results. The self-assessed listening habits and experience in the web questionnaire could not predict the results of the selection procedure. Further, the hearing thresholds did not correlate with the spatial-hearing test. This leads to the conclusion that task-specific performance tests might be the preferable means of selecting a listening panel.

  9. Testing second order cyclostationarity in the squared envelope spectrum of non-white vibration signals

    Science.gov (United States)

    Borghesani, P.; Pennacchi, P.; Ricci, R.; Chatterton, S.

    2013-10-01

    Cyclostationary models for the diagnostic signals measured on faulty rotating machineries have proved to be successful in many laboratory tests and industrial applications. The squared envelope spectrum has been pointed out as the most efficient indicator for the assessment of second order cyclostationary symptoms of damages, which are typical, for instance, of rolling element bearing faults. In an attempt to foster the spread of rotating machinery diagnostics, the current trend in the field is to reach higher levels of automation of the condition monitoring systems. For this purpose, statistical tests for the presence of cyclostationarity have been proposed during the last years. The statistical thresholds proposed in the past for the identification of cyclostationary components have been obtained under the hypothesis of having a white noise signal when the component is healthy. This need, coupled with the non-white nature of the real signals implies the necessity of pre-whitening or filtering the signal in optimal narrow-bands, increasing the complexity of the algorithm and the risk of losing diagnostic information or introducing biases on the result. In this paper, the authors introduce an original analytical derivation of the statistical tests for cyclostationarity in the squared envelope spectrum, dropping the hypothesis of white noise from the beginning. The effect of first order and second order cyclostationary components on the distribution of the squared envelope spectrum will be quantified and the effectiveness of the newly proposed threshold verified, providing a sound theoretical basis and a practical starting point for efficient automated diagnostics of machine components such as rolling element bearings. The analytical results will be verified by means of numerical simulations and by using experimental vibration data of rolling element bearings.
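
    Illustrative sketch (Python, using SciPy): a squared envelope spectrum computed via the Hilbert transform for a simulated amplitude-modulated 'bearing' signal; carrier, fault frequency and noise level are assumptions, and the statistical thresholds discussed in the record are not evaluated.

      import numpy as np
      from scipy.signal import hilbert

      fs = 20000
      t = np.arange(fs) / fs                              # 1 s of signal
      fault_hz, carrier_hz = 87.0, 3000.0                 # assumed fault/resonance frequencies
      rng = np.random.default_rng(0)
      x = (1 + 0.8 * np.cos(2 * np.pi * fault_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)
      x += 0.5 * rng.normal(size=t.size)                  # additive broadband noise

      env_sq = np.abs(hilbert(x)) ** 2                    # squared envelope
      spec = np.abs(np.fft.rfft(env_sq - env_sq.mean()))
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      band = freqs < 500                                  # inspect the low-frequency band
      print(freqs[band][np.argmax(spec[band])])           # peaks at ~87 Hz, the fault frequency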

  10. Determination of the mechanical thermostat electrical contacts switching quality with sound and vibration analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rejc, Jure; Munih, Marko [University of Ljubljana, Ljubljana (Slovenia)

    2017-05-15

    A mechanical thermostat is a device that switches heating or cooling appliances on or off based on temperature. For this kind of use, electronic or mechanical switching concepts are applied. During the production of electrical contacts, several irregularities can occur, leading to improper switching events of the thermostat electrical contacts. This paper presents a non-obstructive method based on the fact that when the switching event occurs it can be heard and felt by human senses. We performed several laboratory tests with two different methods. The first method includes analysis of the thermostat switch sound signal during the switching event. The second method is based on sampling of the accelerometer signal during the switching event. The results show that the sound analysis approach has great potential. The approach enables an accurate determination of the switching event even if the sampled signal also carries the switching event of a neighbouring thermostat.

  11. Toward Inverse Control of Physics-Based Sound Synthesis

    Science.gov (United States)

    Pfalz, A.; Berdahl, E.

    2017-05-01

    Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.
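
    Illustrative sketch (Python, using PyTorch): an LSTM trained to recover the gesture signal that a toy 'synthesizer' (here just a one-pole filter, not a physics-based model) turned into sound; all sizes and hyperparameters are arbitrary.

      import torch
      import torch.nn as nn

      def toy_synth(gesture, a=0.9):
          # Stand-in "synthesizer": a one-pole filter driven by the gesture signal.
          sound = torch.zeros_like(gesture)
          for k in range(1, gesture.shape[1]):
              sound[:, k] = a * sound[:, k - 1] + gesture[:, k]
          return sound

      class InverseLSTM(nn.Module):
          def __init__(self, hidden=32):
              super().__init__()
              self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, sound):
              h, _ = self.lstm(sound.unsqueeze(-1))
              return self.head(h).squeeze(-1)              # predicted gesture

      torch.manual_seed(0)
      model = InverseLSTM()
      opt = torch.optim.Adam(model.parameters(), lr=1e-2)
      for step in range(200):
          gesture = torch.rand(16, 64)                     # random training gestures
          loss = nn.functional.mse_loss(model(toy_synth(gesture)), gesture)
          opt.zero_grad()
          loss.backward()
          opt.step()
      print(float(loss))                                   # typically well below the untrained loss of about 0.33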

  12. Sound Cross-synthesis and Morphing Using Dictionary-based Methods

    DEFF Research Database (Denmark)

    Collins, Nick; Sturm, Bob L.

    2011-01-01

    Dictionary-based methods (DBMs) provide rich possibilities for new sound transformations; as the analysis dual to granular synthesis, audio signals are decomposed into `atoms', allowing interesting manipulations. We present various approaches to audio signal cross-synthesis and cross-analysis via atomic decomposition using scale-time-frequency dictionaries. DBMs naturally provide high-level descriptions of a signal and its content, which can allow for greater control over what is modified and how. Through these models, we can make one signal decomposition influence that of another to create cross-synthesized sounds. We present several examples of these techniques both theoretically and practically, and present on-going and further work.

  13. Optical Reading and Playing of Sound Signals from Vinyl Records

    OpenAIRE

    Hensman, Arnold; Casey, Kevin

    2007-01-01

    While advanced digital music systems such as compact disk players and MP3 have become the standard in sound reproduction technology, critics claim that conversion to digital often results in a loss of sound quality and richness. For this reason, vinyl records remain the medium of choice for many audiophiles involved in specialist areas. The waveform cut into a vinyl record is an exact replica of the analogue version from the original source. However, while some perceive this media as reproduc...

  14. Developing a reference of normal lung sounds in healthy Peruvian children.

    Science.gov (United States)

    Ellington, Laura E; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H; Tielsch, James M; Chavez, Miguel A; Marin-Concha, Julio; Figueroa, Dante; West, James; Checkley, William

    2014-10-01

    Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings, where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope. 151 (81%) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected and features extracted from spectral and temporal signal representations contributed to profiling of lung sounds. Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively; and, 47% were boys. We identified ten distinct spectral and spectro-temporal signal parameters and most demonstrated linear relationships with age, height, and weight, while no differences with genders were noted. Older children had a faster decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Lung sound extracted features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments.

  15. Different Types of Sounds and Their Relationship With the Electrocardiographic Signals and the Cardiovascular System – Review

    Directory of Open Access Journals (Sweden)

    Ennio H. Idrobo-Ávila

    2018-05-01

    Full Text Available Background: For some time now, the effects of sound, noise, and music on the human body have been studied. However, despite research done through time, it is still not completely clear what influence, interaction, and effects sounds have on the human body. That is why it is necessary to conduct new research on this topic. Thus, in this paper, a systematic review is undertaken in order to integrate research related to several types of sound, both pleasant and unpleasant, specifically noise and music. In addition, it includes as much research as possible to give stakeholders a more general vision about relevant elements regarding methodologies, study subjects, stimuli, analysis, and experimental designs in general. This study has been conducted in order to make a genuine contribution to this area and perhaps to raise the quality of future research about sound and its effects on ECG signals. Methods: This review was carried out by independent researchers, through three search equations, in four different databases, including: engineering, medicine, and psychology. Inclusion and exclusion criteria were applied and studies published between 1999 and 2017 were considered. The selected documents were read and analyzed independently by each group of researchers and subsequently conclusions were established between all of them. Results: Despite the differences between the outcomes of selected studies, some common factors were found among them. Thus, in noise studies where both BP and HR increased or tended to increase, it was noted that HRV (HF and LF/HF) changes with both sound and noise stimuli, whereas GSR changes with sound and musical stimuli. Furthermore, LF also showed changes with exposure to noise. Conclusion: In many cases, samples displayed a limitation in experimental design, and in diverse studies, there was a lack of a control group. There was a lot of variability in the presented stimuli providing a wide overview of the effects they could

  16. Sound source measurement by using a passive sound insulation and a statistical approach

    Science.gov (United States)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

    This paper describes a measurement technique developed by the authors that allows carrying out acoustic measurements inside noisy environments while reducing background noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves the sound insulation provided by the passive sound insulation system alone at low frequencies. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method and the measurement error related to its application are reported as well.

  17. Designing, Modeling, Constructing, and Testing a Flat Panel Speaker and Sound Diffuser for a Simulator

    Science.gov (United States)

    Dillon, Christina

    2013-01-01

    The goal of this project was to design, model, build, and test a flat panel speaker and frame for a spherical dome structure being made into a simulator. The simulator will be a test bed for evaluating an immersive environment for human interfaces. This project focused on the loud speakers and a sound diffuser for the dome. The rest of the team worked on an Ambisonics 3D sound system, video projection system, and multi-direction treadmill to create the most realistic scene possible. The main programs utilized in this project, were Pro-E and COMSOL. Pro-E was used for creating detailed figures for the fabrication of a frame that held a flat panel loud speaker. The loud speaker was made from a thin sheet of Plexiglas and 4 acoustic exciters. COMSOL, a multiphysics finite analysis simulator, was used to model and evaluate all stages of the loud speaker, frame, and sound diffuser. Acoustical testing measurements were utilized to create polar plots from the working prototype which were then compared to the COMSOL simulations to select the optimal design for the dome. The final goal of the project was to install the flat panel loud speaker design in addition to a sound diffuser on to the wall of the dome. After running tests in COMSOL on various speaker configurations, including a warped Plexiglas version, the optimal speaker design included a flat piece of Plexiglas with a rounded frame to match the curvature of the dome. Eight of these loud speakers will be mounted into an inch and a half of high performance acoustic insulation, or Thinsulate, that will cover the inside of the dome. The following technical paper discusses these projects and explains the engineering processes used, knowledge gained, and the projected future goals of this project

  18. A study on the sound quality evaluation model of mechanical air-cleaners

    DEFF Research Database (Denmark)

    Ih, Jeong-Guon; Jang, Su-Won; Jeong, Cheol-Ho

    2009-01-01

    In operating the air-cleaner for a long time, people in a quiet enclosed space expect low sound at low operational levels for a routine cleaning of air. However, in the condition of high operational levels of the cleaner, a powerful yet nonannoying sound is desired, which is connected to a feeling...... of an immediate cleaning of pollutants. In this context, it is important to evaluate and design the air-cleaner noise to satisfy such contradictory expectations from the customers. In this study, a model for evaluating the sound quality of air-cleaners of mechanical type was developed based on objective...... and subjective analyses. Sound signals from various aircleaners were recorded and they were edited by increasing or decreasing the loudness at three wide specific-loudness bands: 20-400 Hz (0-3.8 barks), 400-1250 Hz (3.8-10 barks), and 1.25- 12.5 kHz bands (10-22.8 barks). Subjective tests using the edited...

  19. Dementias show differential physiological responses to salient sounds.

    Science.gov (United States)

    Fletcher, Phillip D; Nicholas, Jennifer M; Shakespeare, Timothy J; Downey, Laura E; Golden, Hannah L; Agustus, Jennifer L; Clark, Camilla N; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching ("looming") or less salient withdrawing sounds. Pupil dilatation responses and behavioral rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n = 10; behavioral variant frontotemporal dementia, n = 16, progressive nonfluent aphasia, n = 12; amnestic Alzheimer's disease, n = 10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioral response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer's disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases.

  20. A noise reduction technique based on nonlinear kernel function for heart sound analysis.

    Science.gov (United States)

    Mondal, Ashok; Saxena, Ishan; Tang, Hong; Banerjee, Poulami

    2017-02-13

    The main difficulty encountered in the interpretation of cardiac sound is interference of noise. The contaminating noise obscures the relevant information that is useful for recognition of heart diseases. The unwanted signals are produced mainly by the lungs and the surrounding environment. In this paper, a novel heart sound de-noising technique has been introduced based on a combined framework of the wavelet packet transform (WPT) and singular value decomposition (SVD). The most informative node of the wavelet tree is selected on the criterion of a mutual information measurement. Next, the coefficients corresponding to the selected node are processed by the SVD technique to suppress the noisy component of the heart sound signal. To justify the efficacy of the proposed technique, several experiments have been conducted with a heart sound dataset, including normal and pathological cases at different signal-to-noise ratios. The significance of the method is validated by statistical analysis of the results. The biological information preserved in the de-noised heart sound (HS) signal is evaluated by a k-means clustering algorithm and Fit Factor calculation. The overall results show that the proposed method is superior to the baseline methods.
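
    A minimal Python sketch of the WPT-plus-SVD idea described above (not the authors' implementation): the terminal node is chosen here by energy as a stand-in for the paper's mutual-information criterion, and the wavelet, decomposition level, embedding length, and SVD rank are illustrative assumptions.

        # Sketch of WPT node selection followed by SVD-based suppression of noise in
        # the selected coefficient sequence (trajectory-matrix SVD with rank truncation).
        import numpy as np
        import pywt

        def svd_denoise_1d(coeffs, embed=32, rank=3):
            """Truncated SVD of the Hankel (trajectory) matrix of a 1-D sequence,
            followed by anti-diagonal averaging back to a sequence."""
            n = len(coeffs)
            rows = n - embed + 1
            H = np.array([coeffs[i:i + embed] for i in range(rows)])
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            out = np.zeros(n)
            counts = np.zeros(n)
            for i in range(rows):                      # anti-diagonal averaging
                out[i:i + embed] += H_low[i]
                counts[i:i + embed] += 1
            return out / counts

        def denoise_heart_sound(x, wavelet="db6", level=4):
            wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
            nodes = wp.get_level(level, order="natural")
            # Stand-in selection rule: keep the most energetic terminal node.
            target = max(nodes, key=lambda nd: np.sum(nd.data ** 2))
            wp[target.path] = svd_denoise_1d(target.data)
            return wp.reconstruct(update=True)[: len(x)]

        fs = 2000
        rng = np.random.default_rng(6)
        t = np.arange(0, 2.0, 1 / fs)
        clean = np.sin(2 * np.pi * 40 * t) * np.exp(-((t % 0.8) / 0.05) ** 2)  # toy S1-like bursts
        noisy = clean + 0.3 * rng.standard_normal(len(t))
        print(denoise_heart_sound(noisy)[:5])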

  1. Earth Observing System (EOS)/ Advanced Microwave Sounding Unit-A (AMSU-A): Special Test Equipment. Software Requirements

    Science.gov (United States)

    Schwantje, Robert

    1995-01-01

    This document defines the functional, performance, and interface requirements for the Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) Special Test Equipment (STE) software used in the test and integration of the instruments.

  2. Developing a Reference of Normal Lung Sounds in Healthy Peruvian Children

    Science.gov (United States)

    Ellington, Laura E.; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H.; Tielsch, James M.; Chavez, Miguel A.; Marin-Concha, Julio; Figueroa, Dante; West, James

    2018-01-01

    Purpose Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings, where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. Methods 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope. 151 (81 %) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected and features extracted from spectral and temporal signal representations contributed to profiling of lung sounds. Results Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively; and, 47 % were boys. We identified ten distinct spectral and spectro-temporal signal parameters and most demonstrated linear relationships with age, height, and weight, while no differences with genders were noted. Older children had a faster decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Conclusions Lung sound extracted features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments. PMID:24943262

  3. Two models of the sound-signal frequency dependence on the animal body size as exemplified by the ground squirrels of Eurasia (mammalia, rodentia).

    Science.gov (United States)

    Nikol'skii, A A

    2017-11-01

    Dependence of the sound-signal frequency on the animal body length was studied in 14 ground squirrel species (genus Spermophilus) of Eurasia. Regression analysis of the total sample yielded a low determination coefficient (R² = 26%), because the total sample proved to be heterogeneous in terms of signal frequency within the dimension classes of animals. When the total sample was divided into two groups according to signal frequency, two statistically significant models (regression equations) were obtained in which signal frequency depended on the body size at high determination coefficients (R² = 73 and 94% versus 26% for the total sample). Thus, the problem of correlation between animal body size and the frequency of their vocal signals does not have a unique solution.
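
    The two-model result can be illustrated with a short Python sketch that pools the sample, then splits it into two frequency groups and fits a separate linear regression to each; the data below are synthetic placeholders, not the paper's measurements.

        # Illustrative pooled-vs-split regression of call frequency on body length.
        import numpy as np

        rng = np.random.default_rng(1)
        body_len = rng.uniform(180, 320, 14)                     # mm, synthetic
        group = body_len > 250                                   # pretend two frequency groups
        freq = np.where(group, 9.0, 5.5) - 0.01 * body_len + rng.normal(0, 0.2, 14)  # kHz

        def fit_r2(x, y):
            slope, intercept = np.polyfit(x, y, 1)
            pred = slope * x + intercept
            r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
            return slope, intercept, r2

        print("pooled R^2 = %.2f" % fit_r2(body_len, freq)[2])
        for g in (False, True):
            print("group %s R^2 = %.2f" % (g, fit_r2(body_len[group == g], freq[group == g])[2]))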

  4. Analysis of adventitious lung sounds originating from pulmonary tuberculosis.

    Science.gov (United States)

    Becker, K W; Scheffer, C; Blanckenberg, M M; Diacon, A H

    2013-01-01

    Tuberculosis is a common and potentially deadly infectious disease, usually affecting the respiratory system and causing the sound properties of symptomatic infected lungs to differ from non-infected lungs. Auscultation is often ruled out as a reliable diagnostic technique for TB due to the random distribution of the infection and the varying severity of damage to the lungs. However, advancements in signal processing techniques for respiratory sounds can improve the potential of auscultation far beyond the capabilities of the conventional mechanical stethoscope. Though computer-based signal analysis of respiratory sounds has produced a significant body of research, there have not been any recent investigations into the computer-aided analysis of lung sounds associated with pulmonary Tuberculosis (TB), despite the severity of the disease in many countries. In this paper, respiratory sounds were recorded from 14 locations around the posterior and anterior chest walls of healthy volunteers and patients infected with pulmonary TB. The most significant signal features in both the time and frequency domains associated with the presence of TB, were identified by using the statistical overlap factor (SOF). These features were then employed to train a neural network to automatically classify the auscultation recordings into their respective healthy or TB-origin categories. The neural network yielded a diagnostic accuracy of 73%, but it is believed that automated filtering of the noise in the clinics, more training samples and perhaps other signal processing methods can improve the results of future studies. This work demonstrates the potential of computer-aided auscultation as an aid for the diagnosis and treatment of TB.
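
    A hedged Python sketch of the feature-ranking and neural-network stage is given below; the statistical overlap factor itself is not reproduced (a Fisher-style separability ratio stands in for it), and the features and labels are synthetic.

        # Sketch: rank candidate features by class separability, keep the best ones,
        # and train a small neural network, loosely following the pipeline above.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        X = rng.normal(size=(120, 20))            # 120 recordings x 20 candidate features
        y = rng.integers(0, 2, 120)               # 0 = healthy, 1 = TB (synthetic labels)
        X[y == 1, :5] += 1.0                      # make the first 5 features informative

        def separability(feature, labels):        # stand-in for the SOF ranking
            a, b = feature[labels == 0], feature[labels == 1]
            return abs(a.mean() - b.mean()) / np.sqrt(a.var() + b.var() + 1e-12)

        scores = np.array([separability(X[:, j], y) for j in range(X.shape[1])])
        keep = np.argsort(scores)[::-1][:5]       # keep the most separable features

        Xtr, Xte, ytr, yte = train_test_split(X[:, keep], y, test_size=0.3, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xtr, ytr)
        print("held-out accuracy: %.2f" % clf.score(Xte, yte))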

  5. [Study of biometric identification of heart sound base on Mel-Frequency cepstrum coefficient].

    Science.gov (United States)

    Chen, Wei; Zhao, Yihua; Lei, Sheng; Zhao, Zikai; Pan, Min

    2012-12-01

    Heart sound is a physiological parameter with individual characteristics generated by the heartbeat. For individual classification and recognition, in this paper we present our study of using the wavelet transform for signal denoising, with Mel-frequency cepstrum coefficients (MFCC) as the feature parameters, and propose reducing the dimensionality through principal component analysis (PCA). We have carried out a preliminary study to test the feasibility of a biometric identification method using heart sound. The results showed that, under the selected experimental conditions, the system could reach a 90% recognition rate. This study can provide a reference for further research.
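
    The processing chain (wavelet denoising, MFCC features, PCA reduction) might look roughly like the following Python sketch; the signal is synthetic and all parameters are illustrative rather than those of the study.

        # Sketch: wavelet soft-threshold denoising, then MFCC features, then PCA.
        import numpy as np
        import pywt
        import librosa
        from sklearn.decomposition import PCA

        fs = 4000
        rng = np.random.default_rng(3)
        t = np.arange(0, 3.0, 1 / fs)
        heart = np.sin(2 * np.pi * 30 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)  # toy beats
        x = heart + 0.2 * rng.standard_normal(len(t))

        # Wavelet denoising by soft-thresholding the detail coefficients.
        coeffs = pywt.wavedec(x, "db4", level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(x)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        x_dn = pywt.waverec(coeffs, "db4")[: len(x)]

        # MFCC feature matrix (frames x coefficients), then PCA to a few components.
        mfcc = librosa.feature.mfcc(y=x_dn.astype(np.float32), sr=fs, n_mfcc=13).T
        feat = PCA(n_components=4).fit_transform(mfcc)
        print("reduced feature matrix:", feat.shape)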

  6. Acoustic signal analysis in the creeping discharge

    International Nuclear Information System (INIS)

    Nakamiya, T; Sonoda, Y; Tsuda, R; Ebihara, K; Ikegami, T

    2008-01-01

    We have previously succeeded in measuring the acoustic signal due to the dielectric barrier discharge and discriminating its dominant frequency components. The dominant frequency components of the acoustic signal produced by the dielectric barrier discharge appear above 20 kHz. Recently, surface discharge control technology has attracted attention for practical applications such as ozonizers, NOx reactors, light sources, and displays. Fundamental experiments were carried out to examine the creeping discharge using the acoustic signal. When a high voltage (6 kV, f = 10 kHz) is applied to the electrode, a discharge current flows and sound is generated. The current and voltage waveforms of the creeping discharge and the sound signal detected by a condenser microphone are stored in a digital memory scope. In this scheme, the Continuous Wavelet Transform (CWT) is applied to discriminate the sound of the micro discharge, and the dominant frequency components are studied. CWT results for the sound signal show a wideband frequency spectrum up to 100 kHz. In addition, the energy distributions of the acoustic signal are examined by CWT
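
    The CWT step can be illustrated with the short Python sketch below; the 400 kS/s test signal, Morlet wavelet, and scale range are assumptions chosen only so that components up to about 100 kHz are visible.

        # Continuous wavelet analysis of a synthetic discharge-like microphone signal.
        import numpy as np
        import pywt

        fs = 400_000                                   # assumed sampling rate (400 kS/s)
        t = np.arange(0, 0.01, 1 / fs)
        sig = np.sin(2 * np.pi * 10_000 * t)           # 10 kHz drive-related tone
        sig += 0.5 * np.sin(2 * np.pi * 80_000 * t) * (t > 0.005)   # late high-frequency burst

        scales = np.geomspace(2, 200, 100)
        coefs, freqs = pywt.cwt(sig, scales, "morl", sampling_period=1 / fs)
        energy = np.abs(coefs) ** 2

        # Frequency bins carrying most energy (a crude stand-in for the paper's
        # "dominant frequency components").
        dominant = freqs[np.argsort(energy.sum(axis=1))[::-1][:3]]
        print("dominant CWT frequencies (Hz):", np.round(dominant))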

  7. Velocity of sound in, and adiabatic compressibility of, Molten LiF-NaF, LiF-KF, NaF-KF mixtures

    International Nuclear Information System (INIS)

    Minchenko, V.I.; Konovalov, Y.V.; Smirnov, M.V.

    1986-01-01

    The authors measured the velocity of sound as a function of temperature at a frequency of 1.5 MHz in LiF-NaF, NaF-KF, LiF-KF melts over the entire range of their compositions. The measurements were made by comparison of the phases of a reference pulse signal and a signal reflected from the bottom of the crucible. The specified temperatures were maintained constant within plus or minus 1 degree. The sound conductor consisted of a cylindrical rod of sintered beryllium oxide, which does not interact with test melts. The study shows that the velocity of sound decreases linearly with increase of the temperature. The values of the constants of the empirical equations are presented in a table, with indication of the temperature range. The dependence of the velocity of sound on composition of the melts is shown, where isotherms for 1250 K are given as an example. Variation of the composition by 1-2 mole % leads to increase or decrease of the velocity of sound by 5-10 m

  8. Vocal Noise Cancellation From Respiratory Sounds

    National Research Council Canada - National Science Library

    Moussavi, Zahra

    2001-01-01

    Although background noise cancellation for speech or electrocardiographic recordings is well established, when the background noise contains vocal noises and the main signal is a breath sound...

  9. The effect of frequency-specific sound signals on the germination of maize seeds.

    Science.gov (United States)

    Vicient, Carlos M

    2017-07-25

    The effects of sound treatments on the germination of maize seeds were determined. White noise and bass sounds (300 Hz) had a positive effect on the germination rate. Only 3 h treatment produced an increase of about 8%, and 5 h increased germination in about 10%. Fast-green staining shows that at least part of the effects of sound are due to a physical alteration in the integrity of the pericarp, increasing the porosity of the pericarp and facilitating oxygen availability and water and oxygen uptake. Accordingly, by removing the pericarp from the seeds the positive effect of the sound on the germination disappeared.

  10. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed" about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...

  11. Direct Signal-to-Noise Quality Comparison between an Electronic and Conventional Stethoscope aboard the International Space Station

    Science.gov (United States)

    Marshburn, Thomas; Cole, Richard; Ebert, Doug; Bauer, Pete

    2014-01-01

    Introduction: Evaluation of heart, lung, and bowel sounds is routinely performed with the use of a stethoscope to help detect a broad range of medical conditions. Stethoscope-acquired information is even more valuable in resource-limited environments such as the International Space Station (ISS) where additional testing is not available. The high ambient noise level aboard the ISS poses a specific challenge to auscultation by stethoscope. An electronic stethoscope's ambient noise-reduction, greater sound amplification, recording capabilities, and sound visualization software may be an advantage over a conventional stethoscope in this environment. Methods: A single operator rated signal-to-noise quality from a conventional stethoscope (Littmann 2218BE) and an electronic stethoscope (Littmann 3200). Borborygmi, pulmonic, and cardiac sound quality was ranked with both stethoscopes. Signal-to-noise rankings were performed on a 1 to 10 subjective scale with 1 being inaudible, 6 the expected quality in an emergency department, 8 the expected quality in a clinic, and 10 the clearest possible quality. Testing took place in the Japanese Pressurized Module (JPM), Unity (Node 2), Destiny (US Lab), Tranquility (Node 3), and the Cupola of the International Space Station. All examinations were conducted at a single point in time. Results: The electronic stethoscope's performance ranked higher than the conventional stethoscope for each body sound in all modules tested. The electronic stethoscope's sound quality was rated between 7 and 10 in all modules tested. In comparison, the traditional stethoscope's sound quality was rated between 4 and 7. The signal to noise ratio of borborygmi showed the biggest difference between stethoscopes. In the modules tested, the auscultation of borborygmi was rated between 5 and 7 by the conventional stethoscope and consistently 10 by the electronic stethoscope. Discussion: This stethoscope comparison was limited to a single operator. However, we

  12. Neuroanatomic organization of sound memory in humans.

    Science.gov (United States)

    Kraut, Michael A; Pitcock, Jeffery A; Calhoun, Vince; Li, Juan; Freeman, Thomas; Hart, John

    2006-11-01

    The neural interface between sensory perception and memory is a central issue in neuroscience, particularly initial memory organization following perceptual analyses. We used functional magnetic resonance imaging to identify anatomic regions extracting initial auditory semantic memory information related to environmental sounds. Two distinct anatomic foci were detected in the right superior temporal gyrus when subjects identified sounds representing either animals or threatening items. Threatening animal stimuli elicited signal changes in both foci, suggesting a distributed neural representation. Our results demonstrate both category- and feature-specific responses to nonverbal sounds in early stages of extracting semantic memory information from these sounds. This organization allows for these category-feature detection nodes to extract early, semantic memory information for efficient processing of transient sound stimuli. Neural regions selective for threatening sounds are similar to those of nonhuman primates, demonstrating semantic memory organization for basic biological/survival primitives are present across species.

  13. Binaural Processing of Multiple Sound Sources

    Science.gov (United States)

    2016-08-18

    AFRL-AFOSR-VA-TR-2016-0298, "Binaural Processing of Multiple Sound Sources", William Yost, Arizona State University, Tempe, AZ. Final performance report covering 15 Jul 2012 to 14 Jul 2016 (report form fields omitted). The report states that the three topics cited above are entirely within the scope of the AFOSR grant. Subject terms: binaural hearing, sound localization, interaural signal

  14. Portable system for auscultation and lung sound analysis.

    Science.gov (United States)

    Nabiev, Rustam; Glazova, Anna; Olyinik, Valery; Makarenkova, Anastasiia; Makarenkov, Anatolii; Rakhimov, Abdulvosid; Felländer-Tsai, Li

    2014-01-01

    A portable system for auscultation and lung sound analysis has been developed, including an original electronic stethoscope coupled with mobile devices and special algorithms for the automated analysis of pulmonary sound signals. It is planned that the developed system will be used for monitoring the health status of patients with various pulmonary diseases.

  15. Dementias show differential physiological responses to salient sounds

    Directory of Open Access Journals (Sweden)

    Phillip David Fletcher

    2015-03-01

    Full Text Available Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching (‘looming’) or less salient withdrawing sounds. Pupil dilatation responses and behavioural rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n=10; behavioural variant frontotemporal dementia, n=16, progressive non-fluent aphasia, n=12; amnestic Alzheimer’s disease, n=10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioural response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer’s disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases.

  16. Dementias show differential physiological responses to salient sounds

    Science.gov (United States)

    Fletcher, Phillip D.; Nicholas, Jennifer M.; Shakespeare, Timothy J.; Downey, Laura E.; Golden, Hannah L.; Agustus, Jennifer L.; Clark, Camilla N.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.

    2015-01-01

    Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching (“looming”) or less salient withdrawing sounds. Pupil dilatation responses and behavioral rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n = 10; behavioral variant frontotemporal dementia, n = 16, progressive nonfluent aphasia, n = 12; amnestic Alzheimer's disease, n = 10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioral response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer's disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases. PMID:25859194

  17. Perception of environmental sounds by experienced cochlear implant patients

    Science.gov (United States)

    Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan

    2011-01-01

    Objectives Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well being. Perception of environmental sounds as acoustically and semantically complex stimuli, may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Design Seventeen experienced postlingually-deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception, and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern and temporal order for tones tests) and a backward digit recall test. Results The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants and r = 0.48 for vowels. HINT and

  18. HESTIA Commodities Exchange Pallet and Sounding Rocket Test Stand

    Science.gov (United States)

    Chaparro, Javier

    2013-01-01

    During my Spring 2016 internship, my two major contributions were the design of the Commodities Exchange Pallet and the design of a test stand for a 100 pounds-thrust sounding rocket. The Commodities Exchange Pallet is a prototype developed for the Human Exploration Spacecraft Testbed for Integration and Advancement (HESTIA) program. Under the HESTIA initiative the Commodities Exchange Pallet was developed as a method for demonstrating multi-system integration thru the transportation of In-Situ Resource Utilization produced oxygen and water to a human habitat. Ultimately, this prototype's performance will allow for future evaluation of integration, which may lead to the development of a flight capable pallet for future deep-space exploration missions. For HESTIA, my main task was to design the Commodities Exchange Pallet system to be used for completing an integration demonstration. Under the guidance of my mentor, I designed, both, the structural frame and fluid delivery system for the commodities pallet. The fluid delivery system includes a liquid-oxygen to gaseous-oxygen system, a water delivery system, and a carbon-dioxide compressors system. The structural frame is designed to meet safety and transportation requirements, as well as the ability to interface with the ER division's Portable Utility Pallet. The commodities pallet structure also includes independent instrumentation oxygen/water panels for operation and system monitoring. My major accomplishments for the commodities exchange pallet were the completion of the fluid delivery systems and the structural frame designs. In addition, parts selection was completed in order to expedite construction of the prototype, scheduled to begin in May of 2016. Once the commodities pallet is assembled and tested it is expected to complete a fully integrated transfer demonstration with the ISRU unit and the Environmental Control and Life Support System test chamber in September of 2016. In addition to the development of

  19. Active structural acoustic control for reduction of radiated sound from structure

    International Nuclear Information System (INIS)

    Hong, Jin Seok; Oh, Jae Eung

    2001-01-01

    Active control of sound radiation from a rectangular plate vibrating under a steady-state harmonic point-force disturbance is experimentally studied. Structural excitation is achieved by two piezoceramic actuators mounted on the panel. Two accelerometers are implemented as error sensors. Radiated sound signals estimated using vibro-acoustic path transfer functions are used as error signals. The vibro-acoustic path transfer function represents the system between the accelerometers and the microphones. The approach is based on a multi-channel filtered-x LMS algorithm. The results show that attenuations of the sound level of 11 dB and 10 dB are achieved
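
    To make the control law concrete, the Python sketch below runs a single-channel filtered-x LMS loop (the experiment itself is multi-channel with two actuators and two error sensors); the secondary-path filter and the tonal disturbance are synthetic stand-ins.

        # Single-channel filtered-x LMS: adapt an FIR controller so that the actuator
        # output, passed through an (assumed known) secondary path, cancels a tonal
        # disturbance at the error sensor.
        import numpy as np

        fs, n = 2000, 8000
        d_tone = np.sin(2 * np.pi * 120 * np.arange(n) / fs)      # disturbance at error sensor
        x_ref = np.sin(2 * np.pi * 120 * np.arange(n) / fs + 0.3) # reference, phase-shifted

        sec_path = np.array([0.0, 0.6, 0.3, 0.1])   # assumed secondary-path impulse response
        sec_hat = sec_path.copy()                    # assume a perfect secondary-path model
        L, mu = 16, 0.01
        w = np.zeros(L)
        xbuf = np.zeros(L)
        ybuf = np.zeros(len(sec_path))
        fxbuf = np.zeros(L)
        err = np.zeros(n)

        for k in range(n):
            xbuf = np.roll(xbuf, 1)
            xbuf[0] = x_ref[k]
            y = w @ xbuf                                   # control signal to the actuator
            ybuf = np.roll(ybuf, 1)
            ybuf[0] = y
            e = d_tone[k] + sec_path @ ybuf                # error = disturbance + actuator via path
            fx = sec_hat @ xbuf[: len(sec_hat)]            # filtered reference sample
            fxbuf = np.roll(fxbuf, 1)
            fxbuf[0] = fx
            w -= mu * e * fxbuf                            # LMS update
            err[k] = e

        print("mean |error|, first vs last second: %.3f -> %.3f"
              % (np.mean(np.abs(err[:fs])), np.mean(np.abs(err[-fs:]))))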

  20. Development of an Amplifier for Electronic Stethoscope System and Heart Sound Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. J.; Kang, D. K. [Chongju University, Chongju (Korea)

    2001-05-01

    The conventional stethoscope cannot store its stethoscopic sounds. Therefore a doctor diagnoses a patient from the instantaneous stethoscopic sounds at that time and cannot recall the state of the patient's stethoscopic sounds at the next examination. This prevents accurate and objective diagnosis. If an electronic stethoscope that can store the stethoscopic sound is developed, auscultation will be greatly improved. This study describes an amplifier for an electronic stethoscope system that can extract the heart sounds of a fetus as well as an adult and allows us to hear and record the sounds. Using the developed stethoscopic amplifier, clean heart sounds of fetus and adult can be heard in noisy environments, such as a consultation room of a university hospital or a university laboratory. Surprisingly, the heart sound of a 22-week fetus was heard through the developed electronic stethoscope. Pitch detection experiments using the detected heart sounds showed that the signal exhibits distinct periodicity. It can be expected that the developed electronic stethoscope can substitute for conventional stethoscopes and, if a proper analysis method for the stethoscopic signal is developed, a good electronic stethoscope system can be produced. (author). 17 refs., 6 figs.

  1. The sound field of a rotating dipole in a plug flow.

    Science.gov (United States)

    Wang, Zhao-Huan; Belyaev, Ivan V; Zhang, Xiao-Zheng; Bi, Chuan-Xing; Faranosov, Georgy A; Dowell, Earl H

    2018-04-01

    An analytical far field solution for a rotating point dipole source in a plug flow is derived. The shear layer of the jet is modelled as an infinitely thin cylindrical vortex sheet and the far field integral is calculated by the stationary phase method. Four numerical tests are performed to validate the derived solution as well as to assess the effects of sound refraction from the shear layer. First, the calculated results using the derived formulations are compared with the known solution for a rotating dipole in a uniform flow to validate the present model in this fundamental test case. After that, the effects of sound refraction for different rotating dipole sources in the plug flow are assessed. Then the refraction effects on different frequency components of the signal at the observer position, as well as the effects of the motion of the source and of the type of source are considered. Finally, the effect of different sound speeds and densities outside and inside the plug flow is investigated. The solution obtained may be of particular interest for propeller and rotor noise measurements in open jet anechoic wind tunnels.

  2. Auditory Warnings, Signal-Referent Relations, and Natural Indicators: Re-Thinking Theory and Application

    Science.gov (United States)

    Petocz, Agnes; Keller, Peter E.; Stevens, Catherine J.

    2008-01-01

    In auditory warning design the idea of the strength of the association between sound and referent has been pivotal. Research has proceeded via constructing classification systems of signal-referent associations and then testing predictions about ease of learning of different levels of signal-referent relation strength across and within different…

  3. A basic study on universal design of auditory signals in automobiles.

    Science.gov (United States)

    Yamauchi, Katsuya; Choi, Jong-dae; Maiguma, Ryo; Takada, Masayuki; Iwamiya, Shin-ichiro

    2004-11-01

    In this paper, the impressions of various kinds of auditory signals currently used in automobiles and a comprehensive evaluation were measured by a semantic differential method. The desirable acoustic characteristics were examined for each type of auditory signal. Sharp sounds with dominant high-frequency components were not suitable for auditory signals in automobiles. This trend is convenient for the aged, whose auditory sensitivity in the high-frequency region is lower. When intermittent sounds were used, a longer OFF time was suitable. Generally, "dull (not sharp)" and "calm" sounds were appropriate for auditory signals. Furthermore, the comparison between the frequency spectrum of interior noise in automobiles and that of suitable sounds for the various auditory signals indicates that the suitable sounds are not easily masked. Providing suitable auditory signals for the various purposes is a good solution from the viewpoint of universal design.

  4. Synthesis of vibroarthrographic signals in knee osteoarthritis diagnosis training.

    Science.gov (United States)

    Shieh, Chin-Shiuh; Tseng, Chin-Dar; Chang, Li-Yun; Lin, Wei-Chun; Wu, Li-Fu; Wang, Hung-Yu; Chao, Pei-Ju; Chiu, Chien-Liang; Lee, Tsair-Fwu

    2016-07-19

    Vibroarthrographic (VAG) signals are used as useful indicators of knee osteoarthritis (OA) status. The objective was to build a template database of knee crepitus sounds. Interns can practice with the template database to shorten the training time for the diagnosis of OA. A knee sound signal was obtained using an innovative stethoscope device with a goniometer. Each knee sound signal was recorded with a Kellgren-Lawrence (KL) grade. The sound signal was segmented according to the goniometer data. The signal was Fourier transformed on the correlated frequency segment. An inverse Fourier transform was performed to obtain the time-domain signal. A Haar wavelet transform was then applied. The median and the mean of the wavelet coefficients were chosen to inverse-transform the synthesized signal in each KL category. The quality of the synthesized signal was assessed by a clinician. The sample signals were evaluated using the two algorithms (median and mean). The accuracy rate of the median coefficient algorithm (93%) was better than that of the mean coefficient algorithm (88%) in cross-validation by a clinician using the synthesized VAG. The artificial signal we synthesized has the potential to build a learning system for medical students, interns, and paramedical personnel for the diagnosis of OA. Therefore, our method provides a feasible way to evaluate crepitus sounds that may assist in the diagnosis of knee OA.
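
    The template-synthesis step might be sketched in Python as below: several signals of one KL grade are Haar-decomposed, the element-wise median of the coefficients is taken (the paper also tests the mean), and the result is inverse-transformed; the signals here are synthetic.

        # Haar-wavelet median-coefficient template synthesis for one (hypothetical) KL grade.
        import numpy as np
        import pywt

        rng = np.random.default_rng(4)
        fs, dur = 2000, 1.0
        t = np.arange(0, dur, 1 / fs)

        # A handful of toy "crepitus" segments from the same group.
        signals = [np.sin(2 * np.pi * 90 * t) * np.exp(-5 * t) + 0.3 * rng.standard_normal(len(t))
                   for _ in range(8)]

        level = 5
        decomps = [pywt.wavedec(s, "haar", level=level) for s in signals]

        # Median of each coefficient array across the group (the paper also tests the mean).
        median_coeffs = [np.median(np.stack([d[i] for d in decomps]), axis=0)
                         for i in range(level + 1)]
        template = pywt.waverec(median_coeffs, "haar")[: len(t)]
        print("synthesized template length:", template.shape[0], "samples")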

  5. Signal Processing Implementation and Comparison of Automotive Spatial Sound Rendering Strategies

    Directory of Open Access Journals (Sweden)

    Bai, Mingsian R.

    2009-01-01

    Full Text Available Design and implementation strategies of spatial sound rendering are investigated in this paper for automotive scenarios. Six design methods are implemented for various rendering modes with different numbers of passengers. Specifically, the downmixing algorithms aimed at balancing the front and back reproductions are developed for the 5.1-channel input. The other five algorithms, based on inverse filtering, are implemented in two approaches. The first approach utilizes binaural Head-Related Transfer Functions (HRTFs) measured in the car interior, whereas the second approach, named the point-receiver model, targets a point receiver positioned at the center of the passenger's head. The proposed processing algorithms were compared via objective and subjective experiments under various listening conditions. Test data were processed by the multivariate analysis of variance (MANOVA) method and the least significant difference (Fisher's LSD) method as a post hoc test to justify the statistical significance of the experimental data. The results indicate that inverse filtering algorithms are preferred for the single-passenger mode. For the multipassenger mode, however, downmixing algorithms generally outperformed the other processing techniques.

  6. A system for heart sounds classification.

    Directory of Open Access Journals (Sweden)

    Grzegorz Redlarski

    Full Text Available The future of quick and efficient disease diagnosis lies in the development of reliable non-invasive methods. As for cardiac diseases - one of the major causes of death around the globe - the concept of an electronic stethoscope equipped with an automatic heart tone identification system appears to be the best solution. Thanks to the advancement in technology, the quality of phonocardiography signals is no longer an issue. However, appropriate algorithms for auto-diagnosis systems of heart diseases that could be capable of distinguishing most known pathological states have not yet been developed. The main issue is the non-stationary character of phonocardiography signals as well as the wide range of distinguishable pathological heart sounds. In this paper a new heart sound classification technique, which might find use in medical diagnostic systems, is presented. It is shown that by combining Linear Predictive Coding coefficients, used for feature extraction, with a classifier built upon combining a Support Vector Machine and the Modified Cuckoo Search algorithm, an improvement in the performance of the diagnostic system, in terms of accuracy, complexity and range of distinguishable heart sounds, can be made. The developed system achieved accuracy above 93% for all considered cases, including simultaneous identification of twelve different heart sound classes. The respective system is compared with four different major classification methods, proving its reliability.
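
    A simplified Python sketch of the LPC-plus-SVM part of this pipeline follows; the Modified Cuckoo Search used for classifier tuning is replaced by fixed SVM hyper-parameters, and the two-class heart-sound data are synthetic.

        # LPC coefficients as features, SVM as classifier, on toy two-class data.
        import numpy as np
        import librosa
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        fs, n_per_class, order = 2000, 30, 10

        def toy_heart_sound(f0):
            t = np.arange(0, 1.0, 1 / fs)
            return (np.sin(2 * np.pi * f0 * t) * np.exp(-6 * t)
                    + 0.1 * rng.standard_normal(len(t))).astype(np.float32)

        # Two synthetic "classes" (e.g., normal vs. murmur-like) just to exercise the code.
        sounds = [toy_heart_sound(40) for _ in range(n_per_class)] + \
                 [toy_heart_sound(90) for _ in range(n_per_class)]
        labels = np.array([0] * n_per_class + [1] * n_per_class)

        # LPC coefficients as the feature vector for each recording (drop the leading 1.0).
        X = np.array([librosa.lpc(s, order=order)[1:] for s in sounds])

        clf = SVC(kernel="rbf", C=10.0, gamma="scale")
        print("5-fold CV accuracy: %.2f" % cross_val_score(clf, X, labels, cv=5).mean())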

  7. A quick test of the WEP enabled by a sounding rocket

    Energy Technology Data Exchange (ETDEWEB)

    Reasenberg, Robert D; Patla, Biju R; Phillips, James D; Popescu, Eugeniu E; Rocco, Emanuele; Thapa, Rajesh [Smithsonian Astrophysical Observatory, Harvard-Smithsonian Center for Astrophysics, 60 Garden St, Cambridge, MA 02138 (United States); Lorenzini, Enrico C, E-mail: reasenberg@cfa.harvard.edu [Faculty of Engineering, Universita di Padova, Padova I-35122 (Italy)

    2011-05-07

    We describe SR-POEM, a Galilean test of the weak equivalence principle (WEP), which is to be conducted during the free fall portion of a sounding rocket flight. This test of a single pair of substances is aimed at a measurement uncertainty of σ(η) < 10^(-16) after averaging the results of eight separate drops, each of 40 s duration. The WEP measurement is made with a set of four laser gauges that are expected to achieve 0.1 pm Hz^(-1/2). We address the two sources of systematic error that are currently of greatest concern: magnetic force and electrostatic (patch effect) force on the test mass assemblies. The discovery of a violation (η ≠ 0) would have profound implications for physics, astrophysics and cosmology.

  8. Verification of the helioseismology travel-time measurement technique and the inversion procedure for sound speed using artificial data

    Energy Technology Data Exchange (ETDEWEB)

    Parchevsky, K. V.; Zhao, J.; Hartlep, T.; Kosovichev, A. G., E-mail: akosovichev@solar.stanford.edu [Stanford University, HEPL, Stanford, CA 94305 (United States)

    2014-04-10

    We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.

  9. Verification of the helioseismology travel-time measurement technique and the inversion procedure for sound speed using artificial data

    International Nuclear Information System (INIS)

    Parchevsky, K. V.; Zhao, J.; Hartlep, T.; Kosovichev, A. G.

    2014-01-01

    We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.

  10. Software development for the analysis of heartbeat sounds with LabVIEW in diagnosis of cardiovascular disease.

    Science.gov (United States)

    Topal, Taner; Polat, Hüseyin; Güler, Inan

    2008-10-01

    In this paper, time-frequency spectral analysis software (Heart Sound Analyzer) for the computer-aided analysis of cardiac sounds has been developed with LabVIEW. The software modules reveal important information on cardiovascular disorders and can also assist general physicians in reaching more accurate and reliable diagnoses at early stages. The Heart Sound Analyzer (HSA) software can compensate for the shortage of expert doctors and support them in rural as well as urban clinics and hospitals. HSA has two main blocks: data acquisition and preprocessing, and time-frequency spectral analysis. The heart sounds are first acquired using a modified stethoscope which has an electret microphone in it. Then, the signals are analysed using time-frequency/scale spectral analysis techniques such as the STFT, the Wigner-Ville distribution, and wavelet transforms. The HSA modules have been tested with real heart sounds from 35 volunteers and proved to be quite efficient and robust in dealing with a large variety of pathological conditions.
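
    As a minimal stand-in for the tool's time-frequency display (implemented in LabVIEW), the Python sketch below computes an STFT spectrogram of a synthetic heart-sound-like signal; the Wigner-Ville and wavelet views are not reproduced.

        # STFT spectrogram of a toy heart-sound-like signal.
        import numpy as np
        from scipy.signal import stft

        fs = 4000
        rng = np.random.default_rng(7)
        t = np.arange(0, 2.0, 1 / fs)
        s1 = np.sin(2 * np.pi * 50 * t) * (np.mod(t, 0.8) < 0.05)          # toy S1 bursts
        s2 = np.sin(2 * np.pi * 120 * t) * (np.mod(t - 0.3, 0.8) < 0.04)   # toy S2 bursts
        x = s1 + s2 + 0.05 * rng.standard_normal(len(t))

        f, tt, Z = stft(x, fs=fs, nperseg=256, noverlap=192)
        power_db = 20 * np.log10(np.abs(Z) + 1e-12)
        print("spectrogram shape (freq bins x frames):", power_db.shape)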

  11. Moth hearing and sound communication

    DEFF Research Database (Denmark)

    Nakano, Ryo; Takanashi, Takuma; Surlykke, Annemarie

    2015-01-01

    Active echolocation enables bats to orient and hunt the night sky for insects. As a counter-measure against the severe predation pressure many nocturnal insects have evolved ears sensitive to ultrasonic bat calls. In moths bat-detection was the principal purpose of hearing, as evidenced by comparable hearing physiology with best sensitivity in the bat echolocation range, 20–60 kHz, across moths in spite of diverse ear morphology. Some eared moths subsequently developed sound-producing organs to warn/startle/jam attacking bats and/or to communicate intraspecifically with sound. Not only the sounds for interaction with bats, but also mating signals are within the frequency range where bats echolocate, indicating that sound communication developed after hearing by “sensory exploitation”. Recent findings on moth sound communication reveal that close-range (~ a few cm) communication with low...

  12. Sound generating flames of a gas turbine burner observed by laser-induced fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Hubschmid, W; Inauen, A.; Bombach, R.; Kreutner, W.; Schenker, S.; Zajadatz, M. [Alstom (Switzerland); Motz, C. [Alstom (Switzerland); Haffner, K. [Alstom (Switzerland); Paschereit, C.O. [Alstom (Switzerland)

    2002-03-01

    We performed 2-D OH LIF measurements to investigate the sound emission of a gas turbine combustor. The measured LIF signal was averaged over pulses at constant phase of the dominant acoustic oscillation. A periodic variation in intensity and position of the signal is observed and it is related to the measured sound intensity. (author)

  13. Method for measuring violin sound radiation based on bowed glissandi and its application to sound synthesis.

    Science.gov (United States)

    Perez Carrillo, Alfonso; Bonada, Jordi; Patynen, Jukka; Valimaki, Vesa

    2011-08-01

    This work presents a method for measuring and computing violin-body directional frequency responses, which are used for violin sound synthesis. The approach is based on a frame-weighted deconvolution of excitation and response signals. The excitation, consisting of bowed glissandi, is measured with piezoelectric transducers built into the bridge. Radiation responses are recorded in an anechoic chamber with multiple microphones placed at different angles around the violin. The proposed deconvolution algorithm computes impulse responses that, when convolved with any source signal (captured with the same transducer), produce a highly realistic violin sound very similar to that of a microphone recording. The use of motion sensors allows for tracking violin movements. Combining this information with the directional responses and using a dynamic convolution algorithm, helps to improve the listening experience by incorporating the violinist motion effect in stereo.
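
    A much-simplified, single-frame version of the deconvolution idea is sketched below in Python: the radiated response is divided by the excitation spectrum with Tikhonov-style regularization (the paper's frame weighting and multi-microphone directivity are not reproduced); all signals and the regularization constant are synthetic.

        # Regularized frequency-domain deconvolution of a response by an excitation.
        import numpy as np
        from numpy.fft import rfft, irfft

        rng = np.random.default_rng(8)
        fs, n = 44100, 1 << 15
        excitation = rng.standard_normal(n)                 # stands in for the bridge-sensor glissando
        true_ir = np.zeros(256)
        true_ir[[0, 40, 90]] = [1.0, 0.5, 0.25]             # toy body/room impulse response
        response = np.convolve(excitation, true_ir)[:n] + 0.01 * rng.standard_normal(n)

        E, R = rfft(excitation, n), rfft(response, n)
        eps = 1e-3 * np.max(np.abs(E)) ** 2                 # Tikhonov-style regularization
        H = R * np.conj(E) / (np.abs(E) ** 2 + eps)         # regularized spectral division
        impulse_response = irfft(H, n)[:256]

        print("recovered taps at 0/40/90:", np.round(impulse_response[[0, 40, 90]], 2))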

  14. Semi-continuous ultrasonic sounding and changes of ultrasonic signal characteristics as a sensitive tool for the evaluation of ongoing microstructural changes of experimental mortar bars tested for their ASR potential

    Czech Academy of Sciences Publication Activity Database

    Lokajíček, Tomáš; Kuchařová, A.; Petružálek, Matěj; Šachlová, Š.; Svitek, Tomáš; Přikryl, R.

    2016-01-01

    Vol. 71, September (2016), pp. 40-50, ISSN 0041-624X. R&D Projects: GA ČR(CZ) GAP104/12/0915. Institutional support: RVO:67985831. Keywords: alkali-silica reaction * accelerated test * thermal heating * mortar bar * ultrasonic sounding. Subject RIV: DC - Seismology, Volcanology, Earth Structure. Impact factor: 2.327, year: 2016

  15. Sound localization with head movement: implications for 3-d audio displays.

    Directory of Open Access Journals (Sweden)

    Ken Ian McAnally

    2014-08-01

    Full Text Available Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants’ heads had rotated through windows ranging in width of 2°, 4°, 8°, 16°, 32°, or 64° of azimuth. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: The utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth may be required to ensure that spatial information is conveyed with high accuracy.

  16. A note on measurement of sound pressure with intensity probes

    DEFF Research Database (Denmark)

    Juhl, Peter; Jacobsen, Finn

    2004-01-01

    The effect of scattering and diffraction on measurement of sound pressure with "two-microphone" sound intensity probes is examined using an axisymmetric boundary element model of the probe. Whereas it has been shown a few years ago that the sound intensity estimated with a two-microphone probe is reliable up to 10 kHz when using 0.5 in. microphones in the usual face-to-face arrangement separated by a 12 mm spacer, the sound pressure measured with the same instrument will typically be underestimated at high frequencies. It is shown in this paper that the estimate of the sound pressure can be improved under a variety of realistic sound field conditions by applying a different weighting of the two pressure signals from the probe. The improved intensity probe can measure the sound pressure more accurately at high frequencies than an ordinary sound intensity probe or an ordinary sound level meter...

  17. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" A new area of research was thus born from this collaboration, highlighting the value of these interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with existing techniques to assess whether there was any improvement to detection capability for the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation more difficult via satellite, there have been examples of local monitoring networks and telemetry being destroyed early in the eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  18. Physiological phenotyping of dementias using emotional sounds.

    Science.gov (United States)

    Fletcher, Phillip D; Nicholas, Jennifer M; Shakespeare, Timothy J; Downey, Laura E; Golden, Hannah L; Agustus, Jennifer L; Clark, Camilla N; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-06-01

    Emotional behavioral disturbances are hallmarks of many dementias, but their pathophysiology is poorly understood. Here we addressed this issue using the paradigm of emotionally salient sounds. Pupil responses and affective valence ratings for nonverbal sounds of varying emotional salience were assessed in patients with behavioral variant frontotemporal dementia (bvFTD) (n = 14), semantic dementia (SD) (n = 10), progressive nonfluent aphasia (PNFA) (n = 12), and Alzheimer's disease (AD) (n = 10) versus healthy age-matched individuals (n = 26). Referenced to healthy individuals, overall autonomic reactivity to sound was normal in AD but reduced in the other syndromes. Patients with bvFTD, SD, and AD showed altered coupling between pupillary and affective behavioral responses to emotionally salient sounds. Emotional sounds are a useful model system for analyzing how dementias affect the processing of salient environmental signals, with implications for defining pathophysiological mechanisms and novel biomarker development.

  19. Measuring the speed of sound in air using smartphone applications

    Science.gov (United States)

    Yavuz, A.

    2015-05-01

    This study presents a revised version of an old experiment available in many textbooks for measuring the speed of sound in air. A signal-generator application in a smartphone is used to produce the desired sound frequency. Nodes of sound waves in a glass pipe, of which one end is immersed in water, are more easily detected, so results can be obtained more quickly than from traditional acoustic experiments using tuning forks.
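
    A worked example of the underlying calculation, assuming the usual resonance-tube analysis in which successive resonances of the air column are half a wavelength apart (so the pipe's end correction cancels); the numbers are illustrative, not taken from the paper.

```python
def speed_of_sound(frequency_hz, l1_m, l2_m):
    """Speed of sound from two successive resonance lengths of the air column.

    Successive resonances of a pipe closed at the water surface are half a
    wavelength apart, so wavelength = 2 * (L2 - L1) and v = f * wavelength.
    """
    return frequency_hz * 2.0 * (l2_m - l1_m)

# Illustrative numbers: a 500 Hz tone with resonances at 16.5 cm and 50.7 cm
print(speed_of_sound(500.0, 0.165, 0.507))   # ~342 m/s
```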

  20. Quantifying sound quality in loudspeaker reproduction

    NARCIS (Netherlands)

    Beerends, John G.; van Nieuwenhuizen, Kevin; van den Broek, E.L.

    2016-01-01

    We present PREQUEL: Perceptual Reproduction Quality Evaluation for Loudspeakers. Instead of quantifying the loudspeaker system itself, PREQUEL quantifies the overall loudspeakers' perceived sound quality by assessing their acoustic output using a set of music signals. This approach introduces a

  1. Non-Wovens as Sound Reducers

    Science.gov (United States)

    Belakova, D.; Seile, A.; Kukle, S.; Plamus, T.

    2018-04-01

    Within the present study, the effect of the surface density, thickness and number of fibre web layers of hemp (40 wt%)/polylactide (60 wt%) non-wovens on the sound absorption coefficient and the sound transmission loss in the frequency range from 50 to 5000 Hz is analysed. The sound insulation properties of the experimental samples have been determined, compared to those of materials in practical use, and the possible uses of the material have been defined. Non-woven materials are ideally suited for use in acoustic insulation products because the arrangement of fibres produces a porous material structure, which leads to a greater interaction between sound waves and the fibre structure. Of all the tested samples (A, B and D), the surface density of non-woven variant B exceeded that of sample A by a factor of 1.22 and that of sample D by a factor of 1.15. By placing non-wovens one above the other in two layers, it is possible to increase the absorption coefficient of the material, which, depending on the frequency, corresponds to the C, D and E sound absorption classes. Sample A demonstrates the best sound absorption of the three samples in the frequency range from 250 to 2000 Hz. In the test frequency range from 50 to 5000 Hz, the sound transmission loss varies from 0.76 (sample D at 63 Hz) to 3.90 (sample B at 5000 Hz).

  2. CAMAC based Test Signal Generator using Re-configurable device

    International Nuclear Information System (INIS)

    Sharma, Atish; Raval, Tushar; Srivastava, Amit K; Reddy, D Chenna

    2010-01-01

    There are many different types of signal generators, with different purposes and applications (and at varying levels of expense). In general, no single device is suitable for all possible applications, so a signal generator is selected according to the requirements. For the SST-1 Data Acquisition System requirements, we have developed a CAMAC based Test Signal Generator module using a re-configurable device (CPLD). This module is based on the CAMAC interface but can be used for testing both CAMAC and PXI Data Acquisition Systems in the SST-1 tokamak. It can also be used for other similar applications. Unlike traditional signal generators, which are embedded hardware, it is a flexible hardware unit, programmable through a Graphical User Interface (GUI) developed in the LabVIEW application development tool. The main aim of this work is to develop a signal generator for testing our data acquisition interface for a large number of channels simultaneously. The module front panel has various connectors, such as LEMO and D-type connectors, for signal interfacing. The module can be operated either in continuous signal generation mode or in triggered mode, depending on the application. This can be done either by a front panel switch or through CAMAC software commands (for remote operation). Similarly, module reset and trigger generation can be performed either through a front panel push-button switch or through software CAMAC commands. The module can accept an external TTL-level trigger and clock through LEMO connectors. The module can also generate trigger and clock signals, which can be delivered to other devices through LEMO connectors. The module generates two types of signals: analog and digital (TTL level). The analog output (single channel) is generated from a digital-to-analog converter through the CPLD for various waveforms such as sine, square and triangular, variable in amplitude as well as in frequency. The module is quite useful for testing up to 32 channels
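
    The waveform types mentioned (sine, square and triangular, variable in amplitude and frequency) can be modelled in software as follows. This is an illustrative NumPy/SciPy sketch of the signals themselves, not the CPLD implementation; the sampling rate and function name are assumptions.

```python
import numpy as np
from scipy import signal

def test_waveform(shape, frequency, amplitude, duration, fs=100_000):
    """Generate a test waveform of the given shape ('sine', 'square', 'triangle')."""
    t = np.arange(int(duration * fs)) / fs
    if shape == 'sine':
        y = np.sin(2 * np.pi * frequency * t)
    elif shape == 'square':
        y = signal.square(2 * np.pi * frequency * t)
    elif shape == 'triangle':
        y = signal.sawtooth(2 * np.pi * frequency * t, width=0.5)
    else:
        raise ValueError(f"unknown shape: {shape}")
    return t, amplitude * y

# Example: a 1 kHz triangular wave of amplitude 2 (arbitrary units) lasting 10 ms
t, y = test_waveform('triangle', 1e3, 2.0, 0.01)
```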

  3. Flights of fear: a mechanical wing whistle sounds the alarm in a flocking bird.

    Science.gov (United States)

    Hingee, Mae; Magrath, Robert D

    2009-12-07

    Animals often form groups to increase collective vigilance and allow early detection of predators, but this benefit of sociality relies on rapid transfer of information. Among birds, alarm calls are not present in all species, while other proposed mechanisms of information transfer are inefficient. We tested whether wing sounds can encode reliable information on danger. Individuals taking off in alarm fly more quickly or ascend more steeply, so may produce different sounds in alarmed than in routine flight, which then act as reliable cues of alarm, or honest 'index' signals in which a signal's meaning is associated with its method of production. We show that crested pigeons, Ocyphaps lophotes, which have modified flight feathers, produce distinct wing 'whistles' in alarmed flight, and that individuals take off in alarm only after playback of alarmed whistles. Furthermore, amplitude-manipulated playbacks showed that response depends on whistle structure, such as tempo, not simply amplitude. We believe this is the first demonstration that flight noise can send information about alarm, and suggest that take-off noise could provide a cue of alarm in many flocking species, with feather modification evolving specifically to signal alarm in some. Similar reliable cues or index signals could occur in other animals.

  4. A comparison of sound quality judgments for monaural and binaural hearing aid processed stimuli.

    Science.gov (United States)

    Balfour, P B; Hawkins, D B

    1992-10-01

    Fifteen adults with bilaterally symmetrical mild and/or moderate sensorineural hearing loss completed a paired-comparison task designed to elicit sound quality preference judgments for monaural/binaural hearing aid processed signals. Three stimuli (speech-in-quiet, speech-in-noise, and music) were recorded separately in three listening environments (audiometric test booth, living room, and a music/lecture hall) through hearing aids placed on a Knowles Electronics Manikin for Acoustics Research. Judgments were made on eight separate sound quality dimensions (brightness, clarity, fullness, loudness, nearness, overall impression, smoothness, and spaciousness) for each of the three stimuli in three listening environments. Results revealed a distinct binaural preference for all eight sound quality dimensions independent of listening environment. Binaural preferences were strongest for overall impression, fullness, and spaciousness. Stimulus type effect was significant only for fullness and spaciousness, where binaural preferences were strongest for speech-in-quiet. After binaural preference data were obtained, subjects ranked each sound quality dimension with respect to its importance for binaural listening relative to monaural. Clarity was ranked highest in importance and brightness was ranked least important. The key to demonstration of improved binaural hearing aid sound quality may be the use of a paired-comparison format.

  5. Binaural loudness for artificial-head measurements in directional sound fields

    DEFF Research Database (Denmark)

    Sivonen, Ville Pekka; Ellermeier, Wolfgang

    2008-01-01

    The effect of the sound incidence angle on loudness was investigated for fifteen listeners who matched the loudness of sounds coming from five different incidence angles in the horizontal plane to that of the same sound with frontal incidence. The stimuli were presented via binaural synthesis by using head-related transfer functions measured for an artificial head. The results, which exhibited marked individual differences, show that loudness depends on the direction from which a sound reaches the listener. The average results suggest a relatively simple rule for combining the two signals at the ears of an artificial head for binaural loudness predictions.

  6. Understanding Animal Detection of Precursor Earthquake Sounds.

    Science.gov (United States)

    Garstang, Michael; Kelley, Michael C

    2017-08-31

    We use recent research to provide an explanation of how animals might detect earthquakes before they occur. While the intrinsic value of such warnings is immense, we show that the complexity of the process may result in inconsistent responses of animals to the possible precursor signal. Using the results of our research, we describe a logical but complex sequence of geophysical events triggered by precursor earthquake crustal movements that ultimately result in a sound signal detectable by animals. The sound heard by animals occurs only when metal or other surfaces (glass) respond to vibrations produced by electric currents induced by distortions of the earth's electric fields caused by the crustal movements. A combination of existing measurement systems combined with more careful monitoring of animal response could nevertheless be of value, particularly in remote locations.

  7. Masking release by combined spatial and masker-fluctuation effects in the open sound field.

    Science.gov (United States)

    Middlebrooks, John C

    2017-12-01

    In a complex auditory scene, signals of interest can be distinguished from masking sounds by differences in source location [spatial release from masking (SRM)] and by differences between masker-alone and masker-plus-signal envelopes. This study investigated interactions between those factors in release of masking of 700-Hz tones in an open sound field. Signal and masker sources were colocated in front of the listener, or the signal source was shifted 90° to the side. In Experiment 1, the masker contained a 25-Hz-wide on-signal band plus flanking bands having envelopes that were either mutually uncorrelated or were comodulated. Comodulation masking release (CMR) was largely independent of signal location at a higher masker sound level, but at a lower level CMR was reduced for the lateral signal location. In Experiment 2, a brief signal was positioned at the envelope maximum (peak) or minimum (dip) of a 50-Hz-wide on-signal masker. Masking was released in dip more than in peak conditions only for the 90° signal. Overall, open-field SRM was greater in magnitude than binaural masking release reported in comparable closed-field studies, and envelope-related release was somewhat weaker. Mutual enhancement of masking release by spatial and envelope-related effects tended to increase with increasing masker level.

  8. Zero sound and quasiwave: separation in the magnetic field

    International Nuclear Information System (INIS)

    Bezuglyj, E.V.; Bojchuk, A.V.; Burma, N.G.; Fil', V.D.

    1995-01-01

    Theoretical and experimental results on the behavior of the longitudinal and transverse electron sound in a weak magnetic field are presented. It is shown theoretically that the effects of the magnetic field on zero sound velocity and ballistic transfer are opposite in sign and have sufficiently different dependences on the sample width, excitation frequency and relaxation time. This permits us to separate experimentally the Fermi-liquid and ballistic contributions in the electron sound signals. For the first time the ballistic transfer of the acoustic excitation by the quasiwave has been observed in zero magnetic field

  9. Sustained Magnetic Responses in Temporal Cortex Reflect Instantaneous Significance of Approaching and Receding Sounds.

    Directory of Open Access Journals (Sweden)

    Dominik R Bach

    Rising sound intensity often signals an approaching sound source and can serve as a powerful warning cue, eliciting phasic attention, perception biases and emotional responses. How the evaluation of approaching sounds unfolds over time remains elusive. Here, we capitalised on the temporal resolution of magnetoencephalography (MEG) to investigate in humans the dynamic encoding of approaching and receding sounds. We compared magnetic responses to intensity envelopes of complex sounds to those of white noise sounds, in which intensity change is not perceived as approaching. Sustained magnetic fields over temporal sensors tracked intensity change in complex sounds in an approximately linear fashion, an effect not seen for intensity change in white noise sounds, or for overall intensity. Hence, these fields are likely to track approach/recession, but not the apparent (instantaneous) distance of the sound source, or its intensity as such. As a likely source of this activity, the bilateral inferior temporal gyrus and right temporo-parietal junction emerged. Our results indicate that discrete temporal cortical areas parametrically encode behavioural significance in moving sound sources, where the signal unfolded in a manner reminiscent of evidence accumulation. This may help our understanding of how acoustic percepts are evaluated as behaviourally relevant, where our results highlight a crucial role of cortical areas.

  10. Interpretation of time-domain electromagnetic soundings in the Calico Hills area, Nevada Test Site, Nye County, Nevada

    Science.gov (United States)

    Kauahikaua, J.

    A controlled source, time domain electromagnetic (TDEM) sounding survey was conducted in the Calico Hills area of the Nevada Test Site (NTS). The geoelectric structure was determined as an aid in the evaluation of the site for possible future storage of spent nuclear fuel or high level nuclear waste. The data were initially interpreted with a simple scheme that produces an apparent resistivity versus depth curve from the vertical magnetic field data. These curves are qualitatively interpreted much like standard Schlumberger resistivity sounding curves. Final interpretation made use of a layered earth Marquardt inversion computer program. The results combined with those from a set of Schlumberger soundings in the area show that there is a moderately resistive basement at a depth no greater than 800 meters. The basement resistivity is greater than 100 ohm meters.

  11. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12-158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long-term authority to import/export natural gas from/to...

  12. 30 CFR 7.408 - Test for flame resistance of signaling cables.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Test for flame resistance of signaling cables..., Signaling Cables, and Cable Splice Kits § 7.408 Test for flame resistance of signaling cables. (a) Test... either end support and the center support. (6) After subjecting the test specimen to external flame for...

  13. Surface return direction-of-arrival analysis for radar ice sounding surface clutter suppression

    DEFF Research Database (Denmark)

    Nielsen, Ulrik; Dall, Jørgen

    2015-01-01

    Airborne radar ice sounding is challenged by surface clutter masking the depth signal of interest. Surface clutter may even be prohibitive for potential space-based ice sounding radars. To some extent the radar antenna suppresses the surface clutter, and a multi-phase-center antenna in combination with coherent signal processing techniques can improve the suppression, in particular if the direction of arrival (DOA) of the clutter signal is estimated accurately. This paper deals with data-driven DOA estimation. By using P-band data from the ice shelf in Antarctica it is demonstrated that a varying

  14. High frequency components of tracheal sound are emphasized during prolonged flow limitation

    International Nuclear Information System (INIS)

    Tenhunen, M; Huupponen, E; Saastamoinen, A; Kulkas, A; Himanen, S-L; Rauhala, E

    2009-01-01

    A nasal pressure transducer, which is used to study nocturnal airflow, also provides information about the inspiratory flow waveform. A round flow shape is present during normal breathing. A flattened, non-round shape is found during hypopneas, and it can also appear in prolonged episodes. The significance of this prolonged flow limitation is still not established. The tracheal sound spectrum has been analyzed further in order to obtain additional information about breathing during sleep. Increased sound frequencies above 500 Hz have been linked to obstruction of the upper airway. The aim of the present study was to examine the tracheal sound signal content of prolonged flow limitation and to find out whether prolonged flow limitation is accompanied by abundant high-frequency activity. Sleep recordings of 36 consecutive patients were examined. Tracheal sound spectral analysis was performed on 10 min episodes of prolonged flow limitation, normal breathing and periodic apnea-hypopnea breathing. The highest total spectral amplitude, indicating the loudest sounds, occurred during flow-limited breathing, which also presented the loudest sounds in all frequency bands above 100 Hz. In addition, the tracheal sound signal during flow-limited breathing contained proportionally more high-frequency activity than normal breathing and even periodic apnea-hypopnea breathing.
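
    A hedged sketch of how band-limited spectral amplitudes of a tracheal sound episode could be computed with a Welch spectrum, for comparison between flow-limited, normal and periodic apnea-hypopnea breathing; the band edges, FFT length and function name are assumptions, not the authors' exact analysis.

```python
import numpy as np
from scipy.signal import welch

def band_powers(tracheal_sound, fs,
                bands=((100, 300), (300, 500), (500, 1000), (1000, 2000))):
    """Integrated Welch power of a tracheal sound recording in each frequency band."""
    f, pxx = welch(tracheal_sound, fs=fs, nperseg=2048)
    powers = {}
    for lo, hi in bands:
        mask = (f >= lo) & (f < hi)
        powers[(lo, hi)] = np.trapz(pxx[mask], f[mask])
    return powers
```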

  15. 33 CFR 83.36 - Signals to attract attention (Rule 36).

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Signals to attract attention... SECURITY INLAND NAVIGATION RULES RULES Sound and Light Signals § 83.36 Signals to attract attention (Rule 36). If necessary to attract the attention of another vessel, any vessel may make light or sound...

  16. Exploring science with sound: sonification and the use of sonograms as data analysis tool

    CERN Multimedia

    CERN. Geneva; Williams, Genevieve

    2017-01-01

    Resonances, periodicity, patterns and spectra are well-known notions that play crucial roles in particle physics, and that have always been at the junction between sound/music analysis and scientific exploration. Detecting the shape of a particular energy spectrum, studying the stability of a particle beam in a synchrotron, and separating signals from a noisy background are just a few examples where the connection with sound can be very strong, all sharing the same concepts of oscillations, cycles and frequency. This seminar will focus on analysing data and their relations by translating measurements into audible signals and using the natural capability of the ear to distinguish, characterise and analyse waveform shapes, amplitudes and relations. This process is called data sonification, and one of the main tools to investigate the structure of the sound is the sonogram (sometimes also called a spectrogram). A sonogram is a visual representation of how the spectrum of a certain sound signal changes with time...
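
    A minimal example of computing and displaying a sonogram (spectrogram) with SciPy; the window length, overlap and the chirp test signal are illustrative choices only.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

def plot_sonogram(x, fs):
    """Show how the spectrum of a signal changes with time."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=768)
    plt.pcolormesh(t, f, 10 * np.log10(sxx + 1e-12), shading='auto')
    plt.xlabel('Time [s]')
    plt.ylabel('Frequency [Hz]')
    plt.colorbar(label='Level [dB]')
    plt.show()

# Example: a 2 s test tone sweeping from 200 Hz to 2 kHz
fs = 16_000
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * (200 * t + 450 * t ** 2))   # instantaneous frequency 200 + 900*t
plot_sonogram(x, fs)
```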

  17. Time domain acoustic contrast control implementation of sound zones for low-frequency input signals

    DEFF Research Database (Denmark)

    Schellekens, Daan H. M.; Møller, Martin Bo; Olsen, Martin

    2016-01-01

    Sound zones are two or more regions within a listening space where listeners are provided with personal audio. Acoustic contrast control (ACC) is a sound zoning method that maximizes the average squared sound pressure in one zone constrained to constant pressure in other zones. State-of-the-art time domain broadband acoustic contrast control (BACC) methods are designed for anechoic environments. These methods are not able to realize a flat frequency response in a limited frequency range within a reverberant environment. Sound field control in a limited frequency range is a requirement to accommodate the effective working range of the loudspeakers. In this paper, a new BACC method is proposed which results in an implementation realizing a flat frequency response in the target zone. This method is applied in a bandlimited low-frequency scenario where the loudspeaker layout surrounds two
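
    For orientation, a sketch of the classical narrowband ACC solution (not the time-domain BACC method proposed in the paper): at each frequency, the contrast-maximizing loudspeaker weights are the dominant generalized eigenvector of the bright-zone and dark-zone spatial correlation matrices. The matrix names and the regularization term are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def acc_weights(G_bright, G_dark, reg=1e-6):
    """Acoustic contrast control source weights at a single frequency.

    G_bright: (M_b x L) transfer matrix from L loudspeakers to bright-zone mics
    G_dark:   (M_d x L) transfer matrix from the loudspeakers to dark-zone mics
    Returns the weight vector maximizing bright-zone over dark-zone energy,
    i.e. the dominant eigenvector of the generalized problem A q = lambda B q.
    """
    A = G_bright.conj().T @ G_bright                              # bright-zone energy
    B = G_dark.conj().T @ G_dark + reg * np.eye(G_dark.shape[1])  # regularized dark-zone energy
    _, eigvecs = eigh(A, B)             # generalized eigenvalues in ascending order
    q = eigvecs[:, -1]                  # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)
```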

  18. Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.

    Science.gov (United States)

    Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael

    2014-04-01

    The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.

  19. Adaptive RD Optimized Hybrid Sound Coding

    NARCIS (Netherlands)

    Schijndel, N.H. van; Bensa, J.; Christensen, M.G.; Colomes, C.; Edler, B.; Heusdens, R.; Jensen, J.; Jensen, S.H.; Kleijn, W.B.; Kot, V.; Kövesi, B.; Lindblom, J.; Massaloux, D.; Niamut, O.A.; Nordén, F.; Plasberg, J.H.; Vafin, R.; Virette, D.; Wübbolt, O.

    2008-01-01

    Traditionally, sound codecs have been developed with a particular application in mind, their performance being optimized for specific types of input signals, such as speech or audio (music), and application constraints, such as low bit rate, high quality, or low delay. There is, however, an

  20. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    Science.gov (United States)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated by an infinite vibrating plate attenuates exponentially with increasing distance from the surface of the vibrating plate. However, a discontinuity of the sound field exists at the edge of a finite vibrating plate in practice, which broadens the wavenumber spectrum. A sound wave radiates beyond the evanescent sound field because of this broadening of the wavenumber spectrum. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on a window function of the kind utilized in the field of signal analysis for reducing the broadening of the frequency spectrum. An optimization calculation is necessary to design a window function suitable for suppressing sound radiation while securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. Therefore, we investigated a suitable method for calculating the sound pressure level in the far field to confirm how the distribution of the sound pressure level depends on the window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinite far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by the method used to calculate the distribution of the sound pressure level at an infinite far field using the wavenumber spectrum on the vibrating plate. According to the result of comparing the distributions of the sound pressure level in the cases with and without the window function, it was confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was
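
    The effect of the window can be illustrated by comparing the wavenumber spectra of a uniform and a Hann-weighted velocity distribution on a finite plate; the plate length, sampling and zero-padding below are assumptions, and the Hann window merely stands in for the optimized window of the paper.

```python
import numpy as np

def wavenumber_spectrum(velocity_profile, dx):
    """Normalized wavenumber spectrum (dB) of a particle-velocity distribution."""
    n = len(velocity_profile)
    spectrum = np.fft.fftshift(np.fft.fft(velocity_profile, 8 * n))  # zero-padded FFT
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(8 * n, d=dx))     # wavenumber [rad/m]
    mag = np.abs(spectrum)
    return k, 20 * np.log10(mag / mag.max() + 1e-12)

# Uniform (rectangular) vs. Hann-weighted velocity distribution on a 0.3 m plate
n = 256
dx = 0.3 / n
k_u, s_u = wavenumber_spectrum(np.ones(n), dx)
k_w, s_w = wavenumber_spectrum(np.hanning(n), dx)
# The weighted profile has far lower spectral side lobes, so less energy falls
# inside the radiation circle |k| < omega/c and leaks to the far field.
```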

  1. Measurement and classification of heart and lung sounds by using LabView for educational use.

    Science.gov (United States)

    Altrabsheh, B

    2010-01-01

    This study presents the design, development and implementation of a simple low-cost method of phonocardiography signal detection. Human heart and lung signals are detected by using a simple microphone connected to a personal computer; the signals are recorded and analysed using LabView software. Amplitude and frequency analyses are carried out for various phonocardiography pathological cases. Methods for automatic classification of normal and abnormal heart sounds, murmurs and lung sounds are presented. Various cases of heart and lung sound measurement are recorded and analysed. The measurements can be saved for further analysis. The method in this study can be used by doctors as a detection aid and may be useful for teaching purposes at medical and nursing schools.

  2. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals was observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for
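
    A sketch of tolerance-window scoring in the spirit of the evaluation described: detected boundaries are matched to annotated ones within a window (here ±0.1 s, an assumption) and an F1-style score is computed. The matching rule is illustrative, not the exact Challenge metric.

```python
import numpy as np

def segmentation_f1(reference, detected, tolerance=0.1):
    """F1 score for detected segment boundaries against annotated ones.

    reference, detected: boundary times in seconds
    tolerance: half-width of the window within which a detection counts as correct
    """
    reference = np.asarray(sorted(reference), dtype=float)
    matched = np.zeros(len(reference), dtype=bool)
    tp = 0
    for d in sorted(detected):
        if len(reference) == 0:
            break
        idx = int(np.argmin(np.abs(reference - d)))
        if not matched[idx] and abs(reference[idx] - d) <= tolerance:
            matched[idx] = True
            tp += 1
    fp = len(detected) - tp
    fn = len(reference) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + sensitivity
    return 2 * precision * sensitivity / denom if denom else 0.0
```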

  3. Blast noise classification with common sound level meter metrics.

    Science.gov (United States)

    Cvengros, Robert M; Valente, Dan; Nykaza, Edward T; Vipperman, Jeffrey S

    2012-08-01

    A common set of signal features measurable by a basic sound level meter is analyzed, and the quality of information carried in subsets of these features is examined for its ability to discriminate military blast and non-blast sounds. The analysis is based on over 120 000 human-classified signals compiled from seven different datasets. The study implements linear and Gaussian radial basis function (RBF) support vector machines (SVM) to classify blast sounds. Using the orthogonal centroid dimension reduction technique, intuition is developed about the distribution of blast and non-blast feature vectors in high dimensional space. Recursive feature elimination (SVM-RFE) is then used to eliminate features containing redundant information and rank features according to their ability to separate blasts from non-blasts. Finally, the accuracy of the linear and RBF SVM classifiers is listed for each of the experiments in the dataset, and the weights are given for the linear SVM classifier.
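
    A hedged sketch of the SVM-RFE workflow with scikit-learn, with a synthetic placeholder matrix standing in for the sound-level-meter metrics; the feature count, labels and selected subset size are illustrative only.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: rows are signals, columns are sound-level-meter metrics
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # 1 = blast

# Rank features with a linear SVM and recursive feature elimination (SVM-RFE)
rfe = RFE(SVC(kernel='linear', C=1.0), n_features_to_select=4)
rfe.fit(StandardScaler().fit_transform(X), y)
print("feature ranking:", rfe.ranking_)

# Cross-validated accuracy of linear and RBF SVM classifiers on the selected subset
X_sel = X[:, rfe.support_]
for clf in (SVC(kernel='linear'), SVC(kernel='rbf')):
    scores = cross_val_score(make_pipeline(StandardScaler(), clf), X_sel, y, cv=5)
    print(clf.kernel, "SVM accuracy:", scores.mean())
```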

  4. Time reversal signal processing in acoustic emission testing

    Czech Academy of Sciences Publication Activity Database

    Převorovský, Zdeněk; Krofta, Josef; Kober, Jan; Dvořáková, Zuzana; Chlada, Milan; Dos Santos, S.

    2014-01-01

    Roč. 19, č. 12 (2014) ISSN 1435-4934. [European Conference on Non-Destructive Testing (ECNDT 2014) /11./. Praha, 06.10.2014-10.10.2014] Institutional support: RVO:61388998 Keywords: acoustic emission (AE) * ultrasonic testing (UT) * signal processing * source location * time reversal acoustics * acoustic emission * signal processing and transfer Subject RIV: BI - Acoustics http://www.ndt.net/events/ECNDT2014/app/content/Slides/637_Prevorovsky.pdf

  5. Background noise exerts diverse effects on the cortical encoding of foreground sounds.

    Science.gov (United States)

    Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E

    2017-08-01

    In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal-in-noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood. We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may

  6. Oscillation-based test in mixed-signal circuits

    CERN Document Server

    Sánchez, Gloria Huertas; Rueda, Adoración Rueda

    2007-01-01

    This book presents the development and experimental validation of the structural test strategy called Oscillation-Based Test (OBT for short). The results presented here assert, not only from a theoretical point of view, but also on the basis of wide experimental support, that OBT is an efficient defect-oriented test solution, complementing the existing functional test techniques for mixed-signal circuits.

  7. Underwater Sound Levels at a Wave Energy Device Testing Facility in Falmouth Bay, UK.

    Science.gov (United States)

    Garrett, Joanne K; Witt, Matthew J; Johanning, Lars

    2016-01-01

    Passive acoustic monitoring devices were deployed at FaBTest in Falmouth Bay, UK, a marine renewable energy device testing facility during trials of a wave energy device. The area supports considerable commercial shipping and recreational boating along with diverse marine fauna. Noise monitoring occurred during (1) a baseline period, (2) installation activity, (3) the device in situ with inactive power status, and (4) the device in situ with active power status. This paper discusses the preliminary findings of the sound recording at FabTest during these different activity periods of a wave energy device trial.

  8. Heart Sound Biometric System Based on Marginal Spectrum Analysis

    Science.gov (United States)

    Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin

    2013-01-01

    This work presents a heart sound biometric system based on marginal spectrum analysis, which is a new feature extraction technique for identification purposes. This heart sound identification system is comprised of signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of the optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients result in a significant increase in the recognition rate of 94.40% compared with that of the traditional Fourier spectrum (84.32%) based on a database of 280 heart sounds from 40 participants. PMID:23429515

  9. Influence of multi-microphone signal enhancement algorithms on auditory movement detection in acoustically complex situations

    DEFF Research Database (Denmark)

    Lundbeck, Micha; Hartog, Laura; Grimm, Giso

    2017-01-01

    The influence of hearing aid (HA) signal processing on the perception of spatially dynamic sounds has not been systematically investigated so far. Previously, we observed that interfering sounds impaired the detectability of left-right source movements and reverberation that of near-far source movements for elderly hearing-impaired (EHI) listeners (Lundbeck et al., 2017). Here, we explored potential ways of improving these deficits with HAs. To that end, we carried out acoustic analyses to examine the impact of two beamforming algorithms and a binaural coherence-based noise reduction scheme on the cues underlying movement perception. While binaural cues remained mostly unchanged, there were greater monaural spectral changes and increases in signal-to-noise ratio and direct-to-reverberant sound ratio as a result of the applied processing. Based on these findings, we conducted a listening test...

  10. FPGA based mixed-signal circuit novel testing techniques

    International Nuclear Information System (INIS)

    Pouros, Sotirios; Vassios, Vassilios; Papakostas, Dimitrios; Hristov, Valentin

    2013-01-01

    Fault detection techniques for electronic circuits, especially modern mixed-signal circuits, are continually evolving and being customized around the world to meet industry needs. The paper presents techniques used for fault detection in mixed-signal circuits. Moreover, the paper covers standardized methods, along with current innovations for external testing such as Design for Testability (DfT) and Built-In Self-Test (BIST) systems. Finally, the research team introduces a circuit implementation scheme using an FPGA.

  11. Teaching Acoustic Properties of Materials in Secondary School: Testing Sound Insulators

    Science.gov (United States)

    Hernandez, M. I.; Couso, D.; Pinto, R.

    2011-01-01

    Teaching the acoustic properties of materials is a good way to teach physics concepts, extending them into the technological arena related to materials science. This article describes an innovative approach for teaching sound and acoustics in combination with sound insulating materials in secondary school (15-16-year-old students). Concerning the…

  12. Letter-Sound Knowledge: Exploring Gender Differences in Children When They Start School Regarding Knowledge of Large Letters, Small Letters, Sound Large Letters, and Sound Small Letters

    Directory of Open Access Journals (Sweden)

    Hermundur Sigmundsson

    2017-09-01

    This study explored whether there is a gender difference in letter-sound knowledge when children start school. 485 children aged 5–6 years completed an assessment of letter-sound knowledge, i.e., large letters; sound of large letters; small letters; sound of small letters. The findings indicate a significant difference between girls and boys in all four factors tested in this study, in favor of the girls. There are still no clear explanations for the basis of a presumed gender difference in letter-sound knowledge. An origin in neuro-biological factors cannot be excluded; however, the fact that girls have probably been exposed to more language experience and stimulation than boys lends support to explanations derived from environmental aspects.

  13. Sound signatures and production mechanisms of three species of pipefishes (Family: Syngnathidae

    Directory of Open Access Journals (Sweden)

    Adam Chee Ooi Lim

    2015-12-01

    Background. Syngnathid fishes produce three kinds of sounds, named click, growl and purr. These sounds are generated by different mechanisms to give a consistent signal pattern or signature which is believed to play a role in intraspecific and interspecific communication. Commonly known sounds are produced when the fish feeds (click, purr) or is under duress (growl). While there are more acoustic studies on seahorses, pipefishes have not received much attention. Here we document the differences in feeding click signals between three species of pipefishes and relate them to cranial morphology and kinesis, or the sound-producing mechanism. Methods. The feeding clicks of two species of freshwater pipefishes, Doryichthys martensii and Doryichthys deokhathoides, and one species of estuarine pipefish, Syngnathoides biaculeatus, were recorded by a hydrophone in acoustically dampened tanks. The acoustic signals were analysed using a time-scale distribution (or scalogram) based on the wavelet transform. A detailed time-varying analysis of the spectral contents of the localized acoustic signal was obtained by jointly interpreting the oscillogram, scalogram and power spectrum. The heads of both Doryichthys species were prepared for microtomographical scans which were analysed using 3D imaging software. Additionally, the cranial bones of all three species were examined using a clearing and double-staining method for histological studies. Results. The sound characteristics of the feeding click of the pipefish are species-specific, appearing to depend on three bones: the supraoccipital, 1st postcranial plate and 2nd postcranial plate. The sounds are generated when the head of the Doryichthys pipefishes flexes backward during the feeding strike, as the supraoccipital slides backwards, striking and pushing the 1st postcranial plate against (and striking) the 2nd postcranial plate. In the Syngnathoides pipefish, in the absence of the 1st postcranial plate, the

  14. Hear where we are sound, ecology, and sense of place

    CERN Document Server

    Stocker, Michael

    2013-01-01

    Throughout history, hearing and sound perception have been typically framed in the context of how sound conveys information and how that information influences the listener. Hear Where We Are inverts this premise and examines how humans and other hearing animals use sound to establish acoustical relationships with their surroundings. This simple inversion reveals a panoply of possibilities by which we can re-evaluate how hearing animals use, produce, and perceive sound. Nuance in vocalizations become signals of enticement or boundary setting; silence becomes a field ripe in auditory possibilities; predator/prey relationships are infused with acoustic deception, and sounds that have been considered territorial cues become the fabric of cooperative acoustical communities. This inversion also expands the context of sound perception into a larger perspective that centers on biological adaptation within acoustic habitats. Here, the rapid synchronized flight patterns of flocking birds and the tight maneuvering of s...

  15. Interpretation of time-domain electromagnetic soundings in the Calico Hills area, Nevada Test Site, Nye County, Nevada

    International Nuclear Information System (INIS)

    Kauahikaua, J.

    1981-01-01

    A controlled source, time-domain electromagnetic (TDEM) sounding survey was conducted in the Calico Hills area of the Nevada Test Site (NTS). The goal of this survey was the determination of the geoelectric structure as an aid in the evaluation of the site for possible future storage of spent nuclear fuel or high-level nuclear waste. The data were initially interpreted with a simple scheme that produces an apparent resistivity versus depth curve from the vertical magnetic field data. These curves can be qualitatively interpreted much like standard Schlumberger resistivity sounding curves. Final interpretation made use of a layered-earth Marquardt inversion computer program (Kauahikaua, 1980). The results combined with those from a set of Schlumberger soundings in the area show that there is a moderately resistive basement at a depth no greater than 800 meters. The basement resistivity is greater than 100 ohm-meters

  16. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation was shorter-lived, and the modulation produced by sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  17. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.
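
    One possible way to realize the integrality manipulation of Exp. 1b is to modulate the background sound with the smoothed Hilbert envelope of the spoken word, as sketched below; the envelope cutoff and RMS matching are assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def impose_envelope(word, background, fs, cutoff_hz=30.0):
    """Modulate a background sound with the intensity envelope of a spoken word."""
    n = min(len(word), len(background))
    word, background = word[:n], background[:n]
    # Smoothed Hilbert envelope of the word
    env = np.abs(hilbert(word))
    b, a = butter(2, cutoff_hz / (fs / 2))
    env = filtfilt(b, a, env)
    env /= env.max() + 1e-12
    # Scale the background sample by sample, then restore its original RMS
    modulated = background * env
    modulated *= np.sqrt(np.mean(background ** 2) / (np.mean(modulated ** 2) + 1e-12))
    return modulated
```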

  18. Binaural loudness summation for directional sounds

    DEFF Research Database (Denmark)

    Sivonen, Ville Pekka; Ellermeier, Wolfgang

    2006-01-01

    the binaural loudness summation of the at-ear signals. Even though the effects of HRTFs were taken into account, considerable individual differences in the binaural summation of loudness remained. In order to create conditions in which the directional at-ear changes were identical for all participants, the present experiment employed 'generic' HRTFs to create directional sounds via binaural synthesis. When inspecting the results of the listening tests, however, large individual differences were still evident, as in the earlier study. The generality of this finding was further corroborated by running an independent, inexperienced sample of ten participants exclusively being exposed to the present generic HRTFs. Despite the individual differences, the average results suggest a relatively simple rule for combining the binaural input when carrying out acoustical measurements using an artificial head...

  19. Sound stream segregation: a neuromorphic approach to solve the "cocktail party problem" in real-time.

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and

  20. Towards parameter-free classification of sound effects in movies

    Science.gov (United States)

    Chu, Selina; Narayanan, Shrikanth; Kuo, C.-C. J.

    2005-08-01

    The problem of identifying intense events via multimedia data mining in films is investigated in this work. Movies are mainly characterized by dialog, music, and sound effects. We begin our investigation with detecting interesting events through sound effects. Sound effects are neither speech nor music, but are closely associated with interesting events such as car chases and gun shots. In this work, we utilize low-level audio features including MFCC and energy to identify sound effects. It was shown in previous work that the Hidden Markov model (HMM) works well for speech/audio signals. However, this technique requires a careful choice in designing the model and choosing correct parameters. In this work, we introduce a framework that avoids this necessity and works well with semi- and non-parametric learning algorithms.
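
    A minimal sketch of extracting the low-level features named above (MFCCs plus frame energy), assuming the librosa package; the frame sizes and number of coefficients are illustrative.

```python
import numpy as np
import librosa

def frame_features(path, n_mfcc=13, frame_length=2048, hop_length=512):
    """MFCCs plus RMS frame energy for sound-effect classification."""
    y, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=frame_length, hop_length=hop_length)
    energy = librosa.feature.rms(y=y, frame_length=frame_length,
                                 hop_length=hop_length)
    return np.vstack([mfcc, energy])   # shape: (n_mfcc + 1, n_frames)
```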

  1. Application of wavelet analysis to signal processing methods for eddy-current test

    International Nuclear Information System (INIS)

    Chen, G.; Yoneyama, H.; Yamaguchi, A.; Uesugi, N.

    1998-01-01

    This study deals with the application of wavelet analysis to the detection and characterization of defects from eddy-current and ultrasonic testing signals with a low signal-to-noise ratio. Presented in this paper are the methods for processing eddy-current testing signals of heat exchanger tubes of a steam generator in a nuclear power plant. The results of processing eddy-current testing signals of tube testpieces with artificial flaws show that the flaw signals corrupted by noise and/or non-defect signals can be effectively detected and characterized by using the wavelet methods. (author)
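
    One common wavelet-based approach to such low-SNR signals is soft thresholding of the detail coefficients, sketched below with PyWavelets; the wavelet, decomposition depth and universal threshold are assumptions, not the processing chain used in the paper.

```python
import numpy as np
import pywt

def wavelet_denoise(ect_signal, wavelet='db4', level=5):
    """Suppress noise in an eddy-current testing signal by wavelet thresholding."""
    coeffs = pywt.wavedec(ect_signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(ect_signal)))   # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft')
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[:len(ect_signal)]
```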

  2. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! Series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  3. A Neural Network Model for Prediction of Sound Quality

    DEFF Research Database (Denmark)

    Nielsen,, Lars Bramsløw

    An artificial neural network structure has been specified, implemented and optimized for the purpose of predicting the perceived sound quality for normal-hearing and hearing-impaired subjects. The network was implemented by means of commercially available software and optimized to predict results obtained in subjective sound quality rating experiments based on input data from an auditory model. Various types of input data and data representations from the auditory model were used as input data for the chosen network structure, which was a three-layer perceptron. This network was trained by means of … the physical signal parameters and the subjectively perceived sound quality. No simple objective-subjective relationship was evident from this analysis.

  4. Production of grooming-associated sounds by chimpanzees (Pan troglodytes) at Ngogo: variation, social learning, and possible functions.

    Science.gov (United States)

    Watts, David P

    2016-01-01

    Chimpanzees (Pan troglodytes) use some communicative signals flexibly and voluntarily, with use influenced by learning. These signals include some vocalizations and also sounds made using the lips, oral cavity, and/or teeth, but not the vocal tract, such as "attention-getting" sounds directed at humans by captive chimpanzees and lip smacking during social grooming. Chimpanzees at Ngogo, in Kibale National Park, Uganda, make four distinct sounds while grooming others. Here, I present data on two of these ("splutters" and "teeth chomps") and consider whether social learning contributes to variation in their production and whether they serve social functions. Higher congruence in the use of these two sounds between dyads of maternal relatives than dyads of non-relatives implies that social learning occurs and mostly involves vertical transmission, but the results are not conclusive and it is unclear which learning mechanisms may be involved. In grooming between adult males, tooth chomps and splutters were more likely in long than in short bouts; in bouts that were bidirectional rather than unidirectional; in grooming directed toward high-ranking males than toward low-ranking males; and in bouts between allies than in those between non-allies. Males were also more likely to make these sounds while they were grooming other males than while they were grooming females. These results are expected if the sounds promote social bonds and induce tolerance of proximity and of grooming by high-ranking males. However, the alternative hypothesis that the sounds are merely associated with motivation to groom, with no additional social function, cannot be ruled out. Limited data showing that bouts accompanied by teeth chomping or spluttering at their initiation were longer than bouts for which this was not the case point toward a social function, but more data are needed for a definitive test. Comparison to other research sites shows that the possible existence of grooming

  5. Physically based sound synthesis and control of jumping sounds on an elastic trampoline

    DEFF Research Database (Denmark)

    Turchet, Luca; Pugliese, Roberto; Takala, Tapio

    2013-01-01

    This paper describes a system to interactively sonify the foot-floor contacts resulting from jumping on an elastic trampoline. The sonification was achieved by means of a synthesis engine based on physical models reproducing the sounds of jumping on several surface materials. The engine was controlled in real-time by processing the signal captured by a contact microphone which was attached to the membrane of the trampoline in order to detect each jump. A user study was conducted to evaluate the quality of the interactive sonification. Results proved the success of the proposed algorithms...

  6. Artificial intelligence techniques used in respiratory sound analysis--a systematic review.

    Science.gov (United States)

    Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian

    2014-02-01

    Artificial intelligence (AI) has recently been established as an alternative method to many conventional methods. The implementation of AI techniques for respiratory sound analysis can assist medical professionals in the diagnosis of lung pathologies. This article highlights the importance of AI techniques in the implementation of computer-based respiratory sound analysis. Articles on computer-based respiratory sound analysis using AI techniques were identified by searches conducted on various electronic resources, such as the IEEE, Springer, Elsevier, PubMed, and ACM digital library databases. Brief descriptions of the types of respiratory sounds and their respective characteristics are provided. We then analyzed each of the previous studies to determine the specific respiratory sounds/pathology analyzed, the number of subjects, the signal processing method used, the AI techniques used, and the performance of the AI technique used in the analysis of respiratory sounds. A detailed description of each of these studies is provided. In conclusion, this article provides recommendations for further advancements in respiratory sound analysis.

  7. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds produced by Javanese students. Focusing on the consonant sounds, the data were collected by recording students’ pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. Those patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  8. ANALYZING THE PATTERN OF L1 SOUNDS ON L2 SOUNDS PRODUCED BY JAVANESE STUDENTS OF STKIP PGRI JOMBANG

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds produced by Javanese students. Focusing on the consonant sounds, the data were collected by recording students’ pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. Those patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  9. A description of externally recorded womb sounds in human subjects during gestation.

    Science.gov (United States)

    Parga, Joanna J; Daland, Robert; Kesavan, Kalpashri; Macey, Paul M; Zeltzer, Lonnie; Harper, Ronald M

    2018-01-01

    Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants, and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500-5,000 Hz) and mid-frequency (100-500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10-100 Hz) with gestational age. Variation appeared between mothers, suggesting a resonant chamber role for intra
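
    The band summary reported above (low 10-100 Hz, mid 100-500 Hz, high 500-5,000 Hz) can be reproduced from any recording with a long-term average spectrum followed by band integration. The sketch below is a generic version of that calculation using a Welch spectrum estimate; the segment length is an arbitrary assumption.

```python
import numpy as np
from scipy.signal import welch

def band_levels(x, fs, bands=((10, 100), (100, 500), (500, 5000))):
    """Long-term average spectrum summarized as energy per frequency band (dB)."""
    f, pxx = welch(x, fs=fs, nperseg=4096)       # power spectral density estimate
    out = {}
    for lo, hi in bands:
        sel = (f >= lo) & (f < hi)
        out[(lo, hi)] = 10 * np.log10(np.trapz(pxx[sel], f[sel]) + 1e-20)
    return out
```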

  10. Differential Intracochlear Sound Pressure Measurements in Human Temporal Bones with an Off-the-Shelf Sensor

    Directory of Open Access Journals (Sweden)

    Martin Grossöhmichen

    2016-01-01

    Full Text Available The standard method to determine the output level of acoustic and mechanical stimulation to the inner ear is measurement of vibration response of the stapes in human cadaveric temporal bones (TBs) by laser Doppler vibrometry. However, this method is reliable only if the intact ossicular chain is stimulated. For other stimulation modes an alternative method is needed. The differential intracochlear sound pressure between scala vestibuli (SV) and scala tympani (ST) is assumed to correlate with excitation. Using a custom-made pressure sensor it has been successfully measured and used to determine the output level of acoustic and mechanical stimulation. To make this method generally accessible, an off-the-shelf pressure sensor (Samba Preclin 420 LP, Samba Sensors) was tested here for intracochlear sound pressure measurements. During acoustic stimulation, intracochlear sound pressures were simultaneously measurable in SV and ST between 0.1 and 8 kHz with sufficient signal-to-noise ratios with this sensor. The pressure differences were comparable to results obtained with custom-made sensors. Our results demonstrated that the pressure sensor Samba Preclin 420 LP is usable for measurements of intracochlear sound pressures in SV and ST and for the determination of differential intracochlear sound pressures.

  11. Vespertilionid bats control the width of their biosonar sound beam dynamically during prey pursuit

    DEFF Research Database (Denmark)

    Jakobsen, Lasse; Surlykke, Annemarie

    2010-01-01

    Animals using sound for communication emit directional signals, focusing most acoustic energy in one direction. Echolocating bats are listening for soft echoes from insects. Therefore, a directional biosonar sound beam greatly increases detection probability in the forward direction and decreases...

  12. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.

  13. Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation

  14. Sound stream segregation: a neuromorphic approach to solve the ‘cocktail party problem’ in real-time

    Directory of Open Access Journals (Sweden)

    Chetan Singh Thakur

    2015-09-01

    Full Text Available The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the ‘cocktail party effect’. It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77 and 55 dB for simple tone, complex tone and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for

  15. Sound absorption coefficient of coal bottom ash concrete for railway application

    Science.gov (United States)

    Ramzi Hannan, N. I. R.; Shahidan, S.; Maarof, Z.; Ali, N.; Abdullah, S. R.; Ibrahim, M. H. Wan

    2017-11-01

    A porous concrete is able to reduce the sound waves that pass through it. When a sound wave strikes a material, a portion of the sound energy is reflected back, another portion is absorbed by the material, and the rest is transmitted. The larger the portion of the sound wave that is absorbed, the more the noise level can be lowered. This study investigates the sound absorption coefficient of coal bottom ash (CBA) concrete compared to the sound absorption coefficient of normal concrete by carrying out the impedance tube test. Hence, this paper presents the results of the impedance tube test of the CBA concrete and normal concrete.
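
    For readers unfamiliar with the impedance tube test, the normal-incidence sound absorption coefficient is commonly obtained with the two-microphone transfer-function method (ISO 10534-2 style): the complex transfer function between the two microphones gives the reflection coefficient R, and the absorption coefficient is 1 - |R|^2. The sketch below is a generic form of that calculation, not the authors' specific setup; microphone spacing and distances are placeholders.

```python
import numpy as np

def absorption_coefficient(h12, freq, s, x1, c=343.0):
    """Two-microphone transfer-function method (generic ISO 10534-2 style sketch).

    h12 : complex transfer function P2/P1 between the two microphones
    freq: frequency vector in Hz
    s   : microphone spacing in m
    x1  : distance from the sample face to the farther microphone in m
    """
    k = 2 * np.pi * freq / c
    h_i = np.exp(-1j * k * s)          # incident-wave transfer function
    h_r = np.exp(+1j * k * s)          # reflected-wave transfer function
    refl = (h12 - h_i) / (h_r - h12) * np.exp(2j * k * x1)
    return 1.0 - np.abs(refl) ** 2     # normal-incidence absorption coefficient
```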

  16. An experimental study on the sound and frequency of the Chinese ancient variable bell

    International Nuclear Information System (INIS)

    Chen Dongsheng; Hu Haining; Xing Lirong; Liu Yongsheng

    2009-01-01

    This paper describes an interesting sound phenomenon from a modern copy of the Chinese ancient variable bell which can emit distinctly different sounds at different temperatures. By means of audition, spectrum-analyser software and a PC, the sound signals of the variable bell are collected and the fundamental spectra are shown on the PC. The configuration is simple and cheap, suitable for demonstration and laboratory exercises.
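
    The spectral measurement described, identifying the fundamental of the bell's tone from a recording, reduces to an FFT and a peak search; a minimal sketch with an assumed sampling rate and a synthetic decaying tone is given below.

```python
import numpy as np

def fundamental_frequency(x, fs):
    """Return the frequency (Hz) of the strongest spectral peak of a recording."""
    window = np.hanning(x.size)
    spectrum = np.abs(np.fft.rfft(x * window))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin

# Toy usage: a decaying 440 Hz "bell" tone sampled at 44.1 kHz.
fs = 44100
t = np.arange(0, 1.0, 1.0 / fs)
tone = np.exp(-3 * t) * np.sin(2 * np.pi * 440 * t)
print(fundamental_frequency(tone, fs))           # ~440 Hz
```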

  17. An experimental study on the sound and frequency of the Chinese ancient variable bell

    Energy Technology Data Exchange (ETDEWEB)

    Chen Dongsheng; Hu Haining; Xing Lirong; Liu Yongsheng [Department of Maths and Physics, Shanghai University of Electric Power, 200090 Shanghai (China)], E-mail: cds781@hotmail.com

    2009-05-15

    This paper describes an interesting sound phenomenon from a modern copy of the Chinese ancient variable bell which can emit distinctly different sounds at different temperatures. By means of audition, spectrum-analyser software and a PC, the sound signals of the variable bell are collected and the fundamental spectra are shown on the PC. The configuration is simple and cheap, suitable for demonstration and laboratory exercises.

  18. Acoustic Method for Testing the Quality of Sterilized Male Tsetse Flies Glossina Pallidipes

    Energy Technology Data Exchange (ETDEWEB)

    Kratochvil, H [Department of Evolutionary Biology, University of Vienna, Halsriegelstr. 34, Vienna A-1090 (Austria); Noll, A [Institut fuer Schallforschung, Oe Ak d Wiss, Wohllebengasse 12-14, Vienna A-1040 (Austria); Bolldorf, J [Umweltbundesamt, Spittelauer Laende 5, Vienna A-1090 (Austria); Parker, A G [Joint FAO/IAEA Programme of Nuclear Techniques in Food and Agriculture, FAO/IAEA Agriculture and Biotechnology Laboratory, Seibersdorf A-2444 (Austria)

    2012-07-15

    Tsetse flies are able to emit different acoustic signals. An acoustic method to test the quality of sterilized male tsetse flies was developed. Differences in the sound characteristics between males and females, between sterilized and unsterilized males, and between males sterilized in air and nitrogen, were determined. Also, the acoustic parameters (frequency, time, sound pressure level) of the sounds that are useful as criteria for quality control were determined. It was demonstrated that only the so-called 'feeding sounds' can be used as a quality criterion. Both sexes emitted feeding sounds while feeding on a host. These sounds were also used to find sexual partners, and had an effect on male copulation success. An acoustic sound analysis programme was developed; it automatically measured sound activity (only feeding sounds) under standard conditions (random sample, relative humidity, temperature, light intensity). (author)

  19. Sensory illusions: Common mistakes in physics regarding sound, light and radio waves

    Science.gov (United States)

    Briles, T. M.; Tabor-Morris, A. E.

    2013-03-01

    Optical illusions are well known as effects that we see that are not representative of reality. Sensory illusions are similar but can involve other senses than sight, such as hearing or touch. One mistake commonly noted among instructors is that students often mis-identify radio signals as sound waves and not as part of the electromagnetic spectrum. A survey of physics students from multiple high schools highlights the frequency of this common misconception, as well as other nuances on this misunderstanding. Many students appear to conclude that, since they experience radio broadcasts as sound, then sound waves are the actual transmission of radio signals and not, as is actually true, a representation of those waves as produced by the translator box, the radio. Steps to help students identify and correct sensory illusion misconceptions are discussed.

  20. Noise and noise disturbances from wind power plants - Tests with interactive control of sound parameters for more comfortable and less perceptible sounds

    International Nuclear Information System (INIS)

    Persson-Waye, K.; Oehrstroem, E.; Bjoerkman, M.; Agge, A.

    2001-12-01

    In experimental pilot studies, a methodology has been worked out for interactively varying sound parameters in wind power plants. In the tests, 24 persons varied the center frequency of different bandwidths, the frequency of a sine tone and the amplitude modulation of a sine tone in order to create as comfortable a sound as possible. The variations build on the noise from the two wind turbines Bonus and Wind World and were performed at a constant dBA level. The results showed that the majority preferred a low-frequency tone (94 Hz and 115 Hz for Wind World and Bonus, respectively). The mean of the most comfortable amplitude modulation varied between 18 and 22 Hz, depending on the fundamental frequency. The mean of the center frequency for the different bandwidths varied from 785 to 1104 Hz. In order to study the influence of the wind velocity on the acoustic character of the noise, a long-time measurement program has been performed. A remotely controlled system has been developed, in which wind velocity, wind direction, temperature and humidity are registered simultaneously with the noise. Long-time registrations have been performed for four different wind turbines

  1. Design of Meter-Scale Antenna and Signal Detection System for Underground Magnetic Resonance Sounding in Mines.

    Science.gov (United States)

    Yi, Xiaofeng; Zhang, Jian; Fan, Tiehu; Tian, Baofeng; Jiang, Chuandong

    2018-03-13

    Magnetic resonance sounding (MRS) is a novel geophysical method to detect groundwater directly. By applying this method to underground projects in mines and tunnels, warning information can be provided on water bodies that are hidden in front prior to excavation and thus reduce the risk of casualties and accidents. However, unlike its application to ground surfaces, the application of MRS to underground environments is constrained by the narrow space, quite weak MRS signal, and complex electromagnetic interferences with high intensities in mines. Focusing on the special requirements of underground MRS (UMRS) detection, this study proposes the use of an antenna with different turn numbers, which employs a separated transmitter and receiver. We designed a stationary coil with stable performance parameters and with a side length of 2 m, a matching circuit based on a Q-switch and a multi-stage broad/narrowband mixed filter that can cancel out most electromagnetic noise. In addition, noises in the pass-band are further eliminated by adopting statistical criteria and harmonic modeling and stacking, all of which together allow weak UMRS signals to be reliably detected. Finally, we conducted a field case study of the UMRS measurement in the Wujiagou Mine in Shanxi Province, China, with known water bodies. Our results show that the method proposed in this study can be used to obtain UMRS signals in narrow mine environments, and the inverted hydrological information generally agrees with the actual situation. Thus, we conclude that the UMRS method proposed in this study can be used for predicting hazardous water bodies at a distance of 7-9 m in front of the wall for underground mining projects.
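
    Two of the noise-cancellation steps mentioned, harmonic (power-line) modeling and stacking of repeated soundings, can be illustrated independently of the UMRS hardware. The sketch below fits and subtracts mains harmonics by least squares and then averages repeated records; the 50 Hz mains frequency, harmonic count, and record layout are assumptions, not details taken from the paper.

```python
import numpy as np

def remove_powerline(x, fs, f0=50.0, n_harm=10):
    """Least-squares fit of mains harmonics (sine/cosine pairs) and subtract them."""
    t = np.arange(x.size) / fs
    cols = []
    for h in range(1, n_harm + 1):
        cols += [np.sin(2 * np.pi * h * f0 * t), np.cos(2 * np.pi * h * f0 * t)]
    basis = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return x - basis @ coeffs

def stack(records):
    """Average repeated soundings; SNR improves roughly with sqrt(N) for random noise."""
    return np.mean(np.asarray(records), axis=0)
```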

  2. Design of Meter-Scale Antenna and Signal Detection System for Underground Magnetic Resonance Sounding in Mines

    Directory of Open Access Journals (Sweden)

    Xiaofeng Yi

    2018-03-01

    Full Text Available Magnetic resonance sounding (MRS) is a novel geophysical method to detect groundwater directly. By applying this method to underground projects in mines and tunnels, warning information can be provided on water bodies that are hidden in front prior to excavation and thus reduce the risk of casualties and accidents. However, unlike its application to ground surfaces, the application of MRS to underground environments is constrained by the narrow space, quite weak MRS signal, and complex electromagnetic interferences with high intensities in mines. Focusing on the special requirements of underground MRS (UMRS) detection, this study proposes the use of an antenna with different turn numbers, which employs a separated transmitter and receiver. We designed a stationary coil with stable performance parameters and with a side length of 2 m, a matching circuit based on a Q-switch and a multi-stage broad/narrowband mixed filter that can cancel out most electromagnetic noise. In addition, noises in the pass-band are further eliminated by adopting statistical criteria and harmonic modeling and stacking, all of which together allow weak UMRS signals to be reliably detected. Finally, we conducted a field case study of the UMRS measurement in the Wujiagou Mine in Shanxi Province, China, with known water bodies. Our results show that the method proposed in this study can be used to obtain UMRS signals in narrow mine environments, and the inverted hydrological information generally agrees with the actual situation. Thus, we conclude that the UMRS method proposed in this study can be used for predicting hazardous water bodies at a distance of 7–9 m in front of the wall for underground mining projects.

  3. Search for fourth sound propagation in supersolid 4He

    International Nuclear Information System (INIS)

    Aoki, Y.; Kojima, H.; Lin, X.

    2008-01-01

    A systematic study is carried out to search for fourth sound propagation in solid 4He samples below 500 mK down to 40 mK between 25 and 56 bar using the techniques of heat pulse generator and titanium superconducting transition edge bolometer. If solid 4He is endowed with superfluidity below 200 mK, as indicated by recent torsional oscillator experiments, theories predict fourth sound propagation in such a supersolid state. If found, fourth sound would provide convincing evidence for superfluidity and a new tool for studying the new phase. The search for a fourth sound-like mode is based on the response of the bolometers to heat pulses traveling through cylindrical samples of solids grown with different crystal qualities. Bolometers with increasing sensitivity are constructed. The heater generator amplitude is reduced to the sensitivity limit to search for any critical velocity effects. The fourth sound velocity is expected to vary as ∝ √(ρs/ρ). Searches for a signature in the bolometer response with such a characteristic temperature dependence are made. The measured response signal has not so far revealed any signature of a new propagating mode within a temperature excursion of 5 μK from the background signal shape. Possible reasons for this negative result are discussed. Prior to the fourth sound search, the temperature dependence of heat pulse propagation was studied as it transformed from 'second sound' in the normal solid 4He to transverse ballistic phonon propagation. Our work extends the studies of [V. Narayanamurti and R. C. Dynes, Phys. Rev. B 12, 1731 (1975)] to higher pressures and to lower temperatures. The measured transverse ballistic phonon propagation velocity is found to remain constant (within the 0.3% scatter of the data) below 100 mK at all pressures and reveals no indication of an onset of supersolidity. The overall dynamic thermal response of solid to heat input is found to depend strongly on the sample preparation procedure

  4. A method for estimating the orientation of a directional sound source from source directivity and multi-microphone recordings: principles and application

    DEFF Research Database (Denmark)

    Guarato, Francesco; Jakobsen, Lasse; Vanderelst, Dieter

    2011-01-01

    Taking into account directivity of real sound sources makes it possible to try solving an interesting and biologically relevant problem: estimating the orientation in three-dimensional space of a directional sound source. The source, of known directivity, produces a broadband signal (in the ultra...

  5. Automatic Bowel Motility Evaluation Technique for Noncontact Sound Recordings

    Directory of Open Access Journals (Sweden)

    Ryunosuke Sato

    2018-06-01

    Full Text Available Information on bowel motility can be obtained via magnetic resonance imaging (MRI) and X-ray imaging. However, these approaches require expensive medical instruments and are unsuitable for frequent monitoring. Bowel sounds (BS) can be conveniently obtained using electronic stethoscopes and have recently been employed for the evaluation of bowel motility. More recently, our group proposed a novel method to evaluate bowel motility on the basis of BS acquired using a noncontact microphone. However, the method required manually detecting BS in the sound recordings, and manual segmentation is inconvenient and time consuming. To address this issue, herein, we propose a new method to automatically evaluate bowel motility for noncontact sound recordings. Using simulations for the sound recordings obtained from 20 human participants, we showed that the proposed method achieves an accuracy of approximately 90% in automatic bowel sound detection when acoustic feature power-normalized cepstral coefficients are used as inputs to artificial neural networks. Furthermore, we showed that bowel motility can be evaluated based on the three acoustic features in the time domain extracted by our method: BS per minute, signal-to-noise ratio, and sound-to-sound interval. The proposed method has the potential to contribute towards the development of noncontact evaluation methods for bowel motility.
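
    The classification stage described (acoustic features fed to an artificial neural network) can be sketched with a generic multilayer perceptron. The feature matrix, labels, and network size below are placeholders and scikit-learn is assumed rather than the authors' toolchain; the power-normalized cepstral coefficients themselves would come from a separate feature extractor.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: (n_frames, n_features) acoustic features, y: 1 = bowel sound, 0 = background.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))              # placeholder feature matrix
y = rng.integers(0, 2, size=2000)            # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("frame accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```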

  6. A Test for the Presence of a Signal

    OpenAIRE

    Rolke, Wolfgang A.; Lopez, Angel M.

    2006-01-01

    We describe a statistical hypothesis test for the presence of a signal based on the likelihood ratio statistic. We derive the test for a case of interest and also show that for that case the test works very well, even far out in the tails of the distribution. We also study extensions of the test to cases where there are multiple channels.

  7. Diagnostic validity of methods for assessment of swallowing sounds: a systematic review.

    Science.gov (United States)

    Taveira, Karinna Veríssimo Meira; Santos, Rosane Sampaio; Leão, Bianca Lopes Cavalcante de; Neto, José Stechman; Pernambuco, Leandro; Silva, Letícia Korb da; De Luca Canto, Graziela; Porporatti, André Luís

    2018-02-03

    Oropharyngeal dysphagia is a highly prevalent comorbidity in neurological patients and presents a serious health threat, which may lead to outcomes of aspiration pneumonia, ranging from hospitalization to death. This assessment proposes a non-invasive, acoustic-based method to differentiate between individuals with and without signals of penetration and aspiration. This systematic review evaluated the diagnostic validity of different methods for assessment of swallowing sounds, when compared to the Videofluoroscopic Swallowing Study (VFSS), to detect oropharyngeal dysphagia. Articles in which the primary objective was to evaluate the accuracy of swallowing sounds were searched in five electronic databases with no language or time limitations. Accuracy measurements described in the studies were transformed to construct receiver operating characteristic curves and forest plots with the aid of Review Manager v. 5.2 (The Nordic Cochrane Centre, Copenhagen, Denmark). The methodology of the selected studies was evaluated using the Quality Assessment Tool for Diagnostic Accuracy Studies-2. The final electronic search revealed 554 records, however only 3 studies met the inclusion criteria. The accuracy values (area under the curve) were 0.94 for microphone, 0.80 for Doppler, and 0.60 for stethoscope. Based on limited evidence and low methodological quality (few studies with small sample sizes were included), of all the index tests found for this systematic review, the Doppler showed excellent diagnostic accuracy for the discrimination of swallowing sounds, the microphone showed good accuracy in discriminating the swallowing sounds of dysphagic patients, and the stethoscope performed best as a screening test. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
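
    The accuracy values quoted (area under the ROC curve for each index test) come from standard diagnostic-accuracy arithmetic. The snippet below shows how such figures are computed from per-patient index-test scores against a VFSS reference standard, using scikit-learn and made-up example data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Reference standard (1 = dysphagia on VFSS) and index-test scores (illustrative data).
truth  = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.7, 0.6, 0.1, 0.85, 0.35])

auc = roc_auc_score(truth, scores)
pred = (scores >= 0.5).astype(int)                  # dichotomize at an arbitrary cut-off
tn, fp, fn, tp = confusion_matrix(truth, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```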

  8. Objective Scaling of Sound Quality for Normal-Hearing and Hearing-Impaired Listeners

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    A new method for the objective estimation of sound quality for both normal-hearing and hearing-impaired listeners has been presented: OSSQAR (Objective Scaling of Sound Quality and Reproduction). OSSQAR is based on three main parts, which have been carried out and documented separately: 1) Subjective sound quality ratings of clean and distorted speech and music signals, by normal-hearing and hearing-impaired listeners, to provide reference data, 2) An auditory model of the ear, including the effects of hearing loss, based on existing psychoacoustic knowledge, coupled to 3) An artificial neural network, which was trained to predict the sound quality ratings. OSSQAR predicts the perceived sound quality on two independent perceptual rating scales: Clearness and Sharpness. These two scales were shown to be the most relevant for assessment of sound quality, and they were interpreted the same way...

  9. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...

  10. A Coincidental Sound Track for "Time Flies"

    Science.gov (United States)

    Cardany, Audrey Berger

    2014-01-01

    Sound tracks serve a valuable purpose in film and video by helping tell a story, create a mood, and signal coming events. Holst's "Mars" from "The Planets" yields a coincidental soundtrack to Eric Rohmann's Caldecott-winning book, "Time Flies." This pairing provides opportunities for upper elementary and…

  11. Design and preliminary test results at Mach 5 of an axisymmetric slotted sound shield. [for supersonic wind tunnels (noise reduction in wind tunnel nozzles)

    Science.gov (United States)

    Beckwith, I. E.; Spokowski, A. J.; Harvey, W. D.; Stainback, P. C.

    1975-01-01

    The basic theory and sound attenuation mechanisms, the design procedures, and preliminary experimental results are presented for a small axisymmetric sound shield for supersonic wind tunnels. The shield consists of an array of small diameter rods aligned nearly parallel to the entrance flow with small gaps between the rods for boundary layer suction. Results show that at the lowest test Reynolds number (based on rod diameter) of 52,000 the noise shield reduced the test section noise by about 60 percent (or 8 dB attenuation) but no attenuation was measured for the higher range of test Reynolds numbers from 73,000 to 190,000. These results are below expectations based on data reported elsewhere on a flat sound shield model. The smaller attenuation from the present tests is attributed to insufficient suction at the gaps to prevent feedback of vacuum manifold noise into the shielded test flow and to insufficient suction to prevent transition of the rod boundary layers to turbulent flow at the higher Reynolds numbers. Schlieren photographs of the flow are shown.

  12. Repeatability and reproducibility of in situ measurements of sound reflection and airborne sound insulation index of noise barriers

    NARCIS (Netherlands)

    Garai, M.; Schoen, E.; Behler, G.; Bragado, B.; Chudalla, M.; Conter, M.; Defrance, J.; Demizieux, P.; Glorieux, C.; Guidorzi, P.

    2014-01-01

    In Europe, in situ measurements of sound reflection and airborne sound insulation of noise barriers are usually done according to CEN/TS 1793-5. This method has been improved substantially during the EU funded QUIESST collaborative project. Within the same framework, an inter-laboratory test has

  13. Properties of sound attenuation around a two-dimensional underwater vehicle with a large cavitation number

    International Nuclear Information System (INIS)

    Ye Peng-Cheng; Pan Guang

    2015-01-01

    Due to the high speed of underwater vehicles, cavitation is inevitably generated, and the sound signal is attenuated as it traverses the cavity region around the vehicle. The linear wave propagation is studied to obtain the influence of the bubbly liquid on acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients for various vapor volume fractions, the attenuation of sound intensity is calculated under large cavitation number conditions. The result shows that the sound intensity attenuation is fairly small under these conditions. Consequently, the intensity attenuation can be neglected in engineering. (paper)

  14. Signal quality measures for unsupervised blood pressure measurement

    International Nuclear Information System (INIS)

    Abdul Sukor, J; Redmond, S J; Lovell, N H; Chan, G S H

    2012-01-01

    Accurate systolic and diastolic pressure estimation, using automated blood pressure measurement, is difficult to achieve when the transduced signals are contaminated with noise or interference, such as movement artifact. This study presents an algorithm for automated signal quality assessment in blood pressure measurement by determining the feasibility of accurately detecting systolic and diastolic pressures when corrupted with various levels of movement artifact. The performance of the proposed algorithm is compared to a manually annotated reference scoring (RS). Based on visual representations and audible playback of Korotkoff sounds, the creation of the RS involved two experts identifying sections of the recorded sounds and annotating sections of noise contamination. The experts determined the systolic and diastolic pressure in 100 recorded Korotkoff sound recordings, using a simultaneous electrocardiograph as a reference signal. The recorded Korotkoff sounds were acquired from 25 healthy subjects (16 men and 9 women) with a total of four measurements per subject. Two of these measurements contained purposely induced noise artifact caused by subject movement. Morphological changes in the cuff pressure signal and the width of the Korotkoff pulse were extracted features which were believed to be correlated with the noise presence in the recorded Korotkoff sounds. Verification of reliable Korotkoff pulses was also performed using extracted features from the oscillometric waveform as recorded from the inflatable cuff. The time between an identified noise section and a verified Korotkoff pulse was the key feature used to determine the validity of possible systolic and diastolic pressures in noise contaminated Korotkoff sounds. The performance of the algorithm was assessed based on the ability to: verify if a signal was contaminated with any noise; the accuracy, sensitivity and specificity of this noise classification, and the systolic and diastolic pressure

  15. Light aircraft sound transmission studies - Noise reduction model

    Science.gov (United States)

    Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.

    1987-01-01

    Experimental tests conducted on the fuselage of a single-engine Piper Cherokee light aircraft suggest that the cabin interior noise can be reduced by increasing the transmission loss of the dominant sound transmission paths and/or by increasing the cabin interior sound absorption. The validity of using a simple room equation model to predict the cabin interior sound-pressure level for different fuselage and exterior sound field conditions is also presented. The room equation model is based on the sound power flow balance for the cabin space and utilizes the measured transmitted sound intensity data. The room equation model predictions were considered good enough to be used for preliminary acoustical design studies.
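
    The room equation referred to relates the interior sound pressure level to the transmitted sound power and the cabin absorption. A generic diffuse-field power-balance version is sketched below; it is not the authors' calibrated model, and the material and geometry values are placeholders.

```python
import numpy as np

def cabin_spl(w_transmitted, surface_area, alpha_bar, rho=1.21, c=343.0, p_ref=2e-5):
    """Diffuse-field estimate of interior SPL from transmitted sound power.

    w_transmitted : transmitted sound power in watts
    surface_area  : total interior surface area in m^2
    alpha_bar     : average absorption coefficient of the interior surfaces
    """
    room_constant = surface_area * alpha_bar / (1.0 - alpha_bar)   # m^2
    p_squared = 4.0 * rho * c * w_transmitted / room_constant      # reverberant-field pressure^2
    return 10.0 * np.log10(p_squared / p_ref ** 2)

# Example: 1 mW transmitted into a small cabin with 10 m^2 of surfaces at alpha = 0.3.
print(cabin_spl(1e-3, 10.0, 0.3))   # interior SPL in dB re 20 uPa, roughly 90 dB
```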

  16. Construction Of Critical Thinking Skills Test Instrument Related The Concept On Sound Wave

    Science.gov (United States)

    Mabruroh, F.; Suhandi, A.

    2017-02-01

    This study aimed to construct a test instrument for the critical thinking skills of high school students related to the concept of sound waves. The research used a mixed-methods approach with a sequential exploratory design, consisting of: 1) a preliminary study; 2) design and review of the test instrument. The test instrument takes the form of essay questions and consists of 18 questions divided into 5 indicators and 8 sub-indicators of the critical thinking skills described by Ennis, with questions that are qualitative and contextual. The preliminary study phase included: a) policy studies; b) a survey of the schools; and c) literature studies. The design and review phase consisted of two steps. The draft design of the test instrument included: a) analysis of the depth of the teaching materials; b) selection of indicators and sub-indicators of critical thinking skills; c) analysis of the indicators and sub-indicators of critical thinking skills; d) implementation of the indicators and sub-indicators of critical thinking skills; and e) writing descriptions of the test instrument. The subsequent review of the test instrument consisted of: a) writing up the test instrument; b) a validity test by experts; and c) revision of the test instrument based on the validators' input.

  17. Tinnitus (Phantom Sound: Risk coming for future

    Directory of Open Access Journals (Sweden)

    Suresh Rewar

    2015-01-01

    Full Text Available The word 'tinnitus' comes from the Latin word tinnire, meaning “to ring” or “a ringing.” Tinnitus is the perception of sound in the absence of any corresponding external sound. Tinnitus can take the form of continuous buzzing, hissing, or ringing, or a combination of these or other characteristics. Tinnitus affects 10% to 25% of the adult population. Tinnitus is classified into objective and subjective categories. Subjective tinnitus consists of meaningless sounds that are not associated with a physical sound source, and only the person who has the tinnitus can hear it. Objective tinnitus is the result of a sound that can be heard by the physician. Tinnitus is not a disease in itself but a common symptom, and because it involves the perception of sound or sounds, it is commonly associated with the hearing system. In fact, various parts of the hearing system, including the inner ear, are often responsible for this symptom. Tinnitus can place a considerable burden on patients, leading to sleep disturbances, concentration problems, fatigue, depression, anxiety disorders, and sometimes even suicide. The evaluation of tinnitus always begins with a thorough history and physical examination, with further testing performed when indicated. Diagnostic testing should include audiography and speech discrimination testing; when indicated, computed tomography angiography or magnetic resonance angiography should be performed. All patients with tinnitus can benefit from patient education and preventive measures, and oftentimes the physician's reassurance and assistance with the psychologic aftereffects of tinnitus can be the therapy most valuable to the patient. There are no specific medications for the treatment of tinnitus. Sedatives and some other medications may prove helpful in the early stages. The ultimate goal of neuro-imaging is to identify subtypes of tinnitus in order to better inform treatment strategies.

  18. Industry-Oriented Laboratory Development for Mixed-Signal IC Test Education

    Science.gov (United States)

    Hu, J.; Haffner, M.; Yoder, S.; Scott, M.; Reehal, G.; Ismail, M.

    2010-01-01

    The semiconductor industry is lacking qualified integrated circuit (IC) test engineers to serve in the field of mixed-signal electronics. The absence of mixed-signal IC test education at the collegiate level is cited as one of the main sources for this problem. In response to this situation, the Department of Electrical and Computer Engineering at…

  19. An X-ray Experiment with Two-Stage Korean Sounding Rocket

    Directory of Open Access Journals (Sweden)

    Uk-Won Nam

    1998-12-01

    Full Text Available The test results of the X-ray observation system, which has been developed at the Korea Astronomy Observatory over 3 years (1995-1997), are presented. The instrument, which is composed of detector and signal processing parts, is designed for future observations of compact X-ray sources. The performance of the instrument was tested by mounting it on the two-stage Korean Sounding Rocket, which was launched from the Taean rocket flight center on June 11, 1998 at 10:00 KST. Telemetry data were received from individual parts of the instrument for 32 and 55.7 sec, respectively, after the launch of the rocket. In this paper, the results of the data analysis based on the telemetry data and a discussion of the performance of the instrument are reported.

  20. Repeatability study of replicate crash tests: A signal analysis approach.

    Science.gov (United States)

    Seppi, Jeremy; Toczyski, Jacek; Crandall, Jeff R; Kerrigan, Jason

    2017-10-03

    To provide an objective basis on which to evaluate the repeatability of vehicle crash test methods, a recently developed signal analysis method was used to evaluate correlation of sensor time history data between replicate vehicle crash tests. The goal of this study was to evaluate the repeatability of rollover crash tests performed with the Dynamic Rollover Test System (DRoTS) relative to other vehicle crash test methods. Test data from DRoTS tests, deceleration rollover sled (DRS) tests, frontal crash tests, frontal offset crash tests, small overlap crash tests, small overlap impact (SOI) crash tests, and oblique crash tests were obtained from the literature and publicly available databases (the NHTSA vehicle database and the Insurance Institute for Highway Safety TechData) to examine crash test repeatability. Signal analysis of the DRoTS tests showed that force and deformation time histories had good to excellent repeatability, whereas vehicle kinematics showed only fair repeatability due to the vehicle mounting method for one pair of tests and slightly dissimilar mass properties (2.2%) in a second pair of tests. Relative to the DRS, the DRoTS tests showed very similar or higher levels of repeatability in nearly all vehicle kinematic data signals with the exception of global X' (road direction of travel) velocity and displacement due to the functionality of the DRoTS fixture. Based on the average overall scoring metric of the dominant acceleration, DRoTS was found to be as repeatable as all other crash tests analyzed. Vertical force measures showed good repeatability and were on par with frontal crash barrier forces. Dynamic deformation measures showed good to excellent repeatability as opposed to poor repeatability seen in SOI and oblique deformation measures. Using the signal analysis method as outlined in this article, the DRoTS was shown to have the same or better repeatability of crash test methods used in government regulatory and consumer evaluation test
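
    A much simpler stand-in for the signal-analysis scoring used in the study is a correlation and peak comparison between paired time histories from replicate tests; the sketch below computes such a crude score for two equally sampled channels and is not the published rating metric.

```python
import numpy as np

def simple_repeatability(a, b):
    """Crude agreement score between two replicate test signals of equal length."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    corr = np.corrcoef(a, b)[0, 1]                               # shape similarity
    peak_ratio = min(a.max(), b.max()) / max(a.max(), b.max())   # peak similarity
    return {"correlation": corr, "peak_ratio": peak_ratio}
```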

  1. Deformation of a sound field caused by a manikin

    DEFF Research Database (Denmark)

    Weinrich, Søren G.

    1981-01-01

    …around the head at distances of 1 cm to 2 m, measured from the tip of the nose. The signals were pure tones at 1, 2, 4, 6, 8, and 10 kHz. It was found that the presence of the manikin caused changes in the SPL of the sound field of at most ±2.5 dB at a distance of 1 m from the surface of the manikin. Only over an interval of approximately 20° behind the manikin (i.e., opposite the sound source) did the manikin cause much larger changes, up to 9 dB. These changes are caused by destructive interference between sounds coming from opposite sides of the manikin. In front of the manikin, the changes...

  2. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

    Full Text Available As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, the sound directly collected from an industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is firstly decomposed by wavelet transform (WT) to obtain coefficients of each level. As the wavelet threshold functions proposed by Donoho were discontinuous, many modified functions with continuous first- and second-order derivatives were presented to realize adaptive denoising. However, the function-based denoising process is time-consuming and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced to the process. Moreover, to avoid falling into local extremes, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. Then, the sound signal of a motor was recorded in a soundproof laboratory, and Gaussian white noise was added into the signal. The simulation results illustrated the effectiveness and superiority of the proposed approach by a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer in a coal mining working face was performed to demonstrate the practical effect.
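
    Independent of the optimization step, the basic wavelet-threshold denoising pipeline the paper builds on (decompose, threshold the detail coefficients, reconstruct) can be sketched with PyWavelets as follows; the wavelet, decomposition level, and universal-threshold rule are common defaults, not the paper's adaptive choices.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_denoise(x, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients with the universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from finest details
    thresh = sigma * np.sqrt(2 * np.log(len(x)))              # Donoho's universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(x)]
```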

  3. Sounding rockets explore the ionosphere

    International Nuclear Information System (INIS)

    Mendillo, M.

    1990-01-01

    It is suggested that small, expendable, solid-fuel rockets used to explore ionospheric plasma can offer insight into all the processes and complexities common to space plasma. NASA's sounding rocket program for ionospheric research focuses on the flight of instruments to measure parameters governing the natural state of the ionosphere. Parameters include input functions, such as photons, particles, and composition of the neutral atmosphere; resultant structures, such as electron and ion densities, temperatures and drifts; and emerging signals such as photons and electric and magnetic fields. Systematic study of the aurora is also conducted by these rockets, allowing sampling at relatively high spatial and temporal rates as well as investigation of parameters, such as energetic particle fluxes, not accessible to ground based systems. Recent active experiments in the ionosphere are discussed, and future sounding rocket missions are cited

  4. Method of test signal design for estimating the aircraft aerodynamic parameters

    Science.gov (United States)

    Belokon', S. A.; Zolotukhin, Yu. N.; Filippov, M. N.

    2017-07-01

    A method of test signal design is proposed for studying the aircraft aerodynamic characteristics with the use of the technology of dynamically scaled free-flight models. Simultaneous excitation of all input channels in a prescribed frequency band by a set of mutually orthogonal signals is applied to increase the efficiency. A modified method of calculating the set of mutually orthogonal sinusoidal signals with a small normalized peak factor is presented. Results of simulating the aircraft motion in the MATLAB/Simulink environment with the use of the developed method of test signal design are reported.
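
    Mutually orthogonal multisine excitations with a low peak factor, of the kind the paper designs, are often built by giving each input channel a disjoint set of harmonics of a common base frequency and using Schroeder phases; the sketch below is such a generic construction under assumed frequencies and channel names, not the modified method of the paper.

```python
import numpy as np

def schroeder_multisine(freqs, fs, duration):
    """Multisine with Schroeder phases to keep the peak (crest) factor low."""
    t = np.arange(int(duration * fs)) / fs
    n = len(freqs)
    phases = -np.pi * np.arange(n) * (np.arange(n) - 1) / n    # Schroeder phase rule
    x = sum(np.cos(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
    return x / np.max(np.abs(x))                               # normalize amplitude

# Orthogonal inputs: give each control channel a disjoint set of harmonics of 0.1 Hz
# over a 10 s record (the record length equals the base period, hence orthogonality).
fs, duration, f0 = 100.0, 10.0, 0.1
elevator = schroeder_multisine(f0 * np.arange(1, 20, 3), fs, duration)   # harmonics 1, 4, 7, ...
aileron  = schroeder_multisine(f0 * np.arange(2, 20, 3), fs, duration)   # harmonics 2, 5, 8, ...
rudder   = schroeder_multisine(f0 * np.arange(3, 20, 3), fs, duration)   # harmonics 3, 6, 9, ...
```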

  5. Assessing and optimizing infra-sound networks to monitor volcanic eruptions

    International Nuclear Information System (INIS)

    Tailpied, Dorianne

    2016-01-01

    Understanding infra-sound signals is essential to monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty, and also to demonstrate the potential of the global monitoring infra-sound network for civil and scientific applications. The main objective of this thesis is to develop a robust tool to estimate and optimize the performance of any infra-sound network to monitor explosive sources such as volcanic eruptions. Unlike previous studies, the developed method has the advantage of considering realistic atmospheric specifications along the propagation path, the source frequency and the noise levels at the stations. It allows prediction of the attenuation and the minimum detectable source amplitude. By simulating the performance of any infra-sound network, it is then possible to define the optimal configuration of the network to monitor a specific region during a given period. When carefully adding a station to the existing network, performance can be improved by a factor of 2. However, it is not always possible to complete the network. A good knowledge of detection capabilities at large distances is thus essential. To provide a more realistic picture of the performance, we integrate the atmospheric longitudinal variability along the infra-sound propagation path in our simulations. This thesis also contributes by providing a confidence index that takes into account the uncertainties related to propagation and atmospheric models. At high frequencies, the error can reach 40 dB. Volcanic eruptions are natural, powerful and valuable calibration sources of infra-sound that are detected worldwide. In this study, the well-instrumented volcanoes Yasur, in Vanuatu, and Etna, in Italy, offer a unique opportunity to validate our attenuation model. In particular, accurate comparisons between near-field recordings and far-field detections of these volcanoes have helped to highlight the potential of our simulation tool to remotely monitor volcanoes. Such work could significantly help to prevent

  6. Metagenomic profiling of microbial composition and antibiotic resistance determinants in Puget Sound.

    Science.gov (United States)

    Port, Jesse A; Wallace, James C; Griffith, William C; Faustman, Elaine M

    2012-01-01

    Human-health relevant impacts on marine ecosystems are increasing on both spatial and temporal scales. Traditional indicators for environmental health monitoring and microbial risk assessment have relied primarily on single species analyses and have provided only limited spatial and temporal information. More high-throughput, broad-scale approaches to evaluate these impacts are therefore needed to provide a platform for informing public health. This study uses shotgun metagenomics to survey the taxonomic composition and antibiotic resistance determinant content of surface water bacterial communities in the Puget Sound estuary. Metagenomic DNA was collected at six sites in Puget Sound in addition to one wastewater treatment plant (WWTP) that discharges into the Sound and pyrosequenced. A total of ~550 Mbp (1.4 million reads) were obtained, 22 Mbp of which could be assembled into contigs. While the taxonomic and resistance determinant profiles across the open Sound samples were similar, unique signatures were identified when comparing these profiles across the open Sound, a nearshore marina and WWTP effluent. The open Sound was dominated by α-Proteobacteria (in particular Rhodobacterales sp.), γ-Proteobacteria and Bacteroidetes while the marina and effluent had increased abundances of Actinobacteria, β-Proteobacteria and Firmicutes. There was a significant increase in the antibiotic resistance gene signal from the open Sound to marina to WWTP effluent, suggestive of a potential link to human impacts. Mobile genetic elements associated with environmental and pathogenic bacteria were also differentially abundant across the samples. This study is the first comparative metagenomic survey of Puget Sound and provides baseline data for further assessments of community composition and antibiotic resistance determinants in the environment using next generation sequencing technologies. In addition, these genomic signals of potential human impact can be used to guide initial

  7. Metagenomic profiling of microbial composition and antibiotic resistance determinants in Puget Sound.

    Directory of Open Access Journals (Sweden)

    Jesse A Port

    Full Text Available Human-health relevant impacts on marine ecosystems are increasing on both spatial and temporal scales. Traditional indicators for environmental health monitoring and microbial risk assessment have relied primarily on single species analyses and have provided only limited spatial and temporal information. More high-throughput, broad-scale approaches to evaluate these impacts are therefore needed to provide a platform for informing public health. This study uses shotgun metagenomics to survey the taxonomic composition and antibiotic resistance determinant content of surface water bacterial communities in the Puget Sound estuary. Metagenomic DNA was collected at six sites in Puget Sound in addition to one wastewater treatment plant (WWTP) that discharges into the Sound and pyrosequenced. A total of ~550 Mbp (1.4 million reads) were obtained, 22 Mbp of which could be assembled into contigs. While the taxonomic and resistance determinant profiles across the open Sound samples were similar, unique signatures were identified when comparing these profiles across the open Sound, a nearshore marina and WWTP effluent. The open Sound was dominated by α-Proteobacteria (in particular Rhodobacterales sp.), γ-Proteobacteria and Bacteroidetes while the marina and effluent had increased abundances of Actinobacteria, β-Proteobacteria and Firmicutes. There was a significant increase in the antibiotic resistance gene signal from the open Sound to marina to WWTP effluent, suggestive of a potential link to human impacts. Mobile genetic elements associated with environmental and pathogenic bacteria were also differentially abundant across the samples. This study is the first comparative metagenomic survey of Puget Sound and provides baseline data for further assessments of community composition and antibiotic resistance determinants in the environment using next generation sequencing technologies. In addition, these genomic signals of potential human impact can be used

  8. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    Science.gov (United States)

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face many hardships in shopping, reading, finding objects and other daily tasks. We therefore developed a portable auditory guidance system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment-understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the blind user through an earphone. The user is able to recognize the type, motion state and location of the objects of interest with the help of SoundView. Compared with other visual-assistance techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.

  9. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  10. Advantages of the non-stationary approach: test on eddy current signals

    International Nuclear Information System (INIS)

    Brunel, P.

    1993-12-01

    Conventional signal processing is often unsuitable for the interpretation of intrinsically non-stationary signals, such as surveillance or non-destructive testing signals. In these cases, "advanced" methods are required. This report presents two applications of non-stationary signal processing methods to the complex signals obtained in eddy current non-destructive testing of steam generator tubes. The first application consists in segmenting the absolute channel, which can be likened to a piecewise constant signal. The Page-Hinkley cumulative sum algorithm is used, enabling detection of mean amplitude jumps of unknown size in a piecewise constant signal disturbed by white noise. Results are comparable to those obtained with the empirical method currently in use. As easy to implement as the latter, the Page-Hinkley algorithm has the added advantage of being well formalized and of identifying whether the jumps in the mean are positive or negative. The second application concerns assistance in detecting characteristic fault transients in the differential channels, using the continuous wavelet transform. The useful signal and noise spectra are fairly close, but not strictly identical; the continuous wavelet transform allows these frequency differences to be exploited. The method was tested on synthetic signals obtained by summing noise and real defect signals. Using the continuous wavelet transform reduces the minimum signal-to-noise ratio required to detect a transient by 5 dB compared with direct detection on the original signal. Finally, a summary of non-stationary methods applied to our data is presented. The two investigations described confirm that non-stationary methods may be considered interesting signal and image analysis tools and an efficient complement to conventional methods. (author). 24 figs., 13 refs.
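
    For readers unfamiliar with it, the Page-Hinkley test is a cumulative-sum detector for jumps in the mean of a noisy, piecewise-constant signal, which is exactly the model used here for the absolute channel. The sketch below is a minimal, generic Python implementation on synthetic data; the drift and threshold values are illustrative, not the settings used in the report.

```python
import numpy as np

def page_hinkley(x, delta=0.05, lam=5.0):
    """Flag upward jumps in the mean of a noisy, piecewise-constant signal.
    delta: tolerated drift (roughly half the smallest jump of interest);
    lam: detection threshold. Returns the sample indices of detections.
    Running the same test on -x flags downward jumps, giving the jump sign."""
    mean, g, g_min = 0.0, 0.0, 0.0
    alarms = []
    for k, xk in enumerate(x, start=1):
        mean += (xk - mean) / k                 # running mean of the samples seen so far
        g += xk - mean - delta                  # cumulative deviation above the mean
        g_min = min(g_min, g)
        if g - g_min > lam:                     # jump in the mean detected
            alarms.append(k - 1)
            mean, g, g_min = xk, 0.0, 0.0       # restart the statistics after a detection
    return alarms

# Synthetic absolute-channel-like signal: two mean levels plus white noise.
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(300), 0.8 * np.ones(300)]) + 0.2 * rng.standard_normal(600)
print(page_hinkley(signal))   # one alarm shortly after sample 300
```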

  11. Sound frequency and aural selectivity in sound-contingent visual motion aftereffect.

    Directory of Open Access Journals (Sweden)

    Maori Kobayashi

    Full Text Available BACKGROUND: One possible strategy to evaluate whether signals in different modalities originate from a common external event or object is to form associations between inputs from different senses. This strategy would be quite effective because signals in different modalities from a common external event would then be aligned spatially and temporally. Indeed, it has been demonstrated that after adaptation to visual apparent motion paired with alternating auditory tones, the tones begin to trigger illusory motion perception to a static visual stimulus, where the perceived direction of visual lateral motion depends on the order in which the tones are replayed. The mechanisms underlying this phenomenon remain unclear. One important approach to understanding the mechanisms is to examine whether the effect has some selectivity in auditory processing. However, it has not yet been determined whether this aftereffect can be transferred across sound frequencies and between ears. METHODOLOGY/PRINCIPAL FINDINGS: Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers for illusory motion perception. However, the aftereffect was observed only when the adapter and test tones were presented at the same frequency and to the same ear. CONCLUSIONS/SIGNIFICANCE: These findings suggest that the auditory processing underlying the establishment of novel audiovisual associations is selective, potentially but not necessarily indicating that this processing occurs at an early stage.

  12. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine the auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are judged to be the same or different. Rhesus macaques seem to have relatively poor short-term memory for auditory stimuli, and we examine whether particular sound types are more favorable for memory performance. Experiment 1 suggests that memory performance with vocalization sound types (particularly monkey vocalizations) is significantly better than with non-vocalization sound types, and that male monkeys outperform female monkeys overall. Experiment 2, controlling for the number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, for species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting that the biological and/or ethological meaning of these sounds is more effective for auditory memory.

  13. Evaluative conditioning induces changes in sound valence

    Directory of Open Access Journals (Sweden)

    Anna C. Bolders

    2012-04-01

    Full Text Available Evaluative Conditioning (EC) has hardly been tested in the auditory domain, but it is a potentially valuable research tool. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruence effects on an affective priming task (APT) for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US or whether extinction occurs. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results warrant the use of EC to study the processing of short environmental sounds with acquired valence, even if this requires repeated stimulus presentations. This paves the way for studying the processing of affective environmental sounds while effectively controlling low-level stimulus properties.

  14. Generation of on-line test signals for nuclear instrumentation for PFBR

    International Nuclear Information System (INIS)

    Ram, Rajit; Bhatnagar, P.V.; Rajesh, M.G.; Das, Debashis

    2010-01-01

    The neutron flux monitoring system for PFBR employs pulse signal processing in the start-up and intermediate power ranges of reactor operation and Campbell signal processing in the intermediate and full power ranges. Both the pulse signal processing unit and the Campbell signal processing unit incorporate an FPGA that generates a pulse/white-noise signal for on-line testing and diagnostics of the channels. In the pulse channel, a signal with a fixed, linearly or exponentially varying pulse rate is generated over three decades of reactor operation. In the Campbell channel, Poisson-distributed noise varying linearly or exponentially is generated over four decades of reactor operation. Several Poisson-distributed random pulse trains are summed and amplified to obtain the white-noise signal. An exponentially increasing gain pattern, generated with MATLAB, is used to increase the RMS value of the generated noise. The paper discusses the successful testing and validation of the pulse and Campbell channels using the generated pulse/white-noise signals over a wide range of operation of the nuclear instrumentation. (author)
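
    As a rough, software-only illustration of the Campbell-channel test signal described above, the sketch below sums several independent Poisson-distributed pulse trains and applies an exponentially increasing gain to sweep the RMS value over several decades. It is a hedged approximation of the FPGA behaviour in Python; the sample rate, pulse rates and number of trains are assumptions, not design values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1.0e6                      # simulated sample rate, Hz (assumed)
n = int(0.1 * fs)               # 100 ms of signal

def poisson_pulse_train(rate_hz):
    """0/1 pulse train whose pulses arrive approximately as a Poisson process
    (Bernoulli thinning at the sample rate) with the given mean rate."""
    return (rng.random(n) < rate_hz / fs).astype(float)

# Summing many independent trains gives, by Campbell's theorem / the central
# limit theorem, an approximately Gaussian white noise whose variance scales
# with the total pulse rate.
summed = sum(poisson_pulse_train(2.0e4) for _ in range(16))
noise = summed - summed.mean()

# An exponentially increasing gain pattern sweeps the RMS value over four
# decades, mimicking the MATLAB-generated gain used to exercise the channel.
gain = np.logspace(0, 4, n)
test_signal = gain * noise
print(round(noise.std(), 4), round(test_signal.std(), 1))
```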

  15. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Performance Verification Report: Initial Comprehensive Performance Test Report, P/N 1331200-2-IT, S/N 105/A2

    Science.gov (United States)

    Platt, R.

    1999-01-01

    This is the Performance Verification Report, Initial Comprehensive Performance Test Report, P/N 1331200-2-IT, S/N 105/A2, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A). The specification establishes the requirements for the Comprehensive Performance Test (CPT) and Limited Performance Test (LPT) of the Advanced Microwave Sounding Unit-A2 (AMSU-A2), referred to herein as the unit. The unit is defined on Drawing 1331200. The several phases of this test procedure are shown in Figure 1; although a sequence is indicated there, the phases may be performed in any order.

  16. The influence of signal type on the internal auditory representation of a room

    Science.gov (United States)

    Teret, Elizabeth

    Currently, architectural acousticians make no real distinction between a room impulse response and the auditory system's internal representation of a room. In the absence of a good model for the auditory representation of a room, it is implicitly assumed that our internal representation of a room is independent of the sound source needed to make the room characteristics audible. The extent to which this assumption holds true is examined with perceptual tests. Listeners are presented with various pairs of signals (music, speech, and noise) convolved with synthesized impulse responses of different reverberation times. They are asked to adjust the reverberation of one of the signals to match the other. Analysis of the data shows that the source signal significantly influences perceived reverberance. Listeners are less accurate when matching the reverberation times of different signals than they are with identical signals. Additional testing shows that the perception of reverberation can be linked to the existence of transients in the signal.

  17. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Engineering Test Report: Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A1, S/N 109

    Science.gov (United States)

    Valdez, A.

    2000-01-01

    This is the Engineering Test Report, Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A1, S/N 109, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).

  18. Heart sounds analysis using probability assessment

    Czech Academy of Sciences Publication Activity Database

    Plešinger, Filip; Viščor, Ivo; Halámek, Josef; Jurčo, Juraj; Jurák, Pavel

    2017-01-01

    Roč. 38, č. 8 (2017), s. 1685-1700 ISSN 0967-3334 R&D Projects: GA ČR GAP102/12/2034; GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01 Institutional support: RVO:68081731 Keywords : heart sounds * FFT * machine learning * signal averaging * probability assessment Subject RIV: FS - Medical Facilities ; Equipment OBOR OECD: Medical engineering Impact factor: 2.058, year: 2016

  19. Measurement of sound velocity profiles in fluids for process monitoring

    International Nuclear Information System (INIS)

    Wolf, M; Kühnicke, E; Lenz, M; Bock, M

    2012-01-01

    In ultrasonic measurements, the time of flight to the object interface is often the only information that is analysed; conventionally it is only possible to determine the distance or the sound velocity if the other value is known. The current paper deals with a novel method to measure the sound propagation path length and the sound velocity simultaneously in media with moving scattering particles. Since the focal position also depends on the sound velocity, it can be used as a second parameter. Via calibration curves it is possible to determine the focal position and sound velocity from the measured time of flight to the focus, which corresponds to the maximum of the averaged echo signal amplitude. To move the focal position along the acoustic axis, an annular array is used. This allows the sound velocity to be measured with local resolution, without any prior knowledge of the acoustic medium and without a reference reflector. In previous publications the functional efficiency of this method was shown for media with constant velocities. In this work the accuracy of these measurements is improved. Furthermore, first measurements and simulations are introduced for non-homogeneous media; for this purpose an experimental set-up was created to generate a linear temperature gradient, which also causes a gradient of the sound velocity.
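
    The core of the procedure, as described above, is to locate the time of flight at which the averaged echo amplitude peaks and then read the focal position and local sound velocity off previously established calibration curves. The sketch below illustrates that lookup step only; the calibration values and the toy envelope are hypothetical, and the actual curves would come from the annular-array simulations or reference measurements described by the authors.

```python
import numpy as np

# Hypothetical calibration curves: time of flight to the focus [us] versus
# focal distance [mm] and local sound velocity [m/s].
tof_cal = np.array([20.0, 25.0, 30.0, 35.0, 40.0])            # us
focus_cal = np.array([15.0, 19.0, 23.0, 27.0, 31.0])          # mm
velocity_cal = np.array([1430.0, 1460.0, 1490.0, 1520.0, 1550.0])  # m/s

def estimate_from_echo(t_us, envelope):
    """Locate the maximum of the averaged echo envelope and interpolate the
    calibration curves to obtain focal position and local sound velocity."""
    t_focus = t_us[np.argmax(envelope)]
    focus = np.interp(t_focus, tof_cal, focus_cal)
    velocity = np.interp(t_focus, tof_cal, velocity_cal)
    return t_focus, focus, velocity

# Toy averaged echo envelope peaking near 28 us.
t = np.linspace(15, 45, 600)
envelope = np.exp(-((t - 28.0) / 3.0) ** 2)
print(estimate_from_echo(t, envelope))
```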

  20. Coherent Surface Clutter Suppression Techniques with Topography Estimation for Multi-Phase-Center Radar Ice Sounding

    DEFF Research Database (Denmark)

    Nielsen, Ulrik; Dall, Jørgen; Kristensen, Steen Savstrup

    2012-01-01

    Radar ice sounding enables measurement of the thickness and internal structures of the large ice sheets on Earth. Surface clutter masking the signal of interest is a major obstacle in ice sounding. Algorithms for surface clutter suppression based on multi-phase-center radars are presented. These ...

  1. Phonocardiography Signal Processing

    CERN Document Server

    Abbas, Abbas K

    2009-01-01

    The auscultation method is an important diagnostic indicator for hemodynamic anomalies. Heart sound classification and analysis play an important role in the auscultative diagnosis. The term phonocardiography refers to the tracing technique of heart sounds and the recording of cardiac acoustics vibration by means of a microphone-transducer. Therefore, understanding the nature and source of this signal is important to give us a tendency for developing a competent tool for further analysis and processing, in order to enhance and optimize cardiac clinical diagnostic approach. This book gives the

  2. Meaning From Environmental Sounds: Types of Signal-Referent Relations and Their Effect on Recognizing Auditory Icons

    Science.gov (United States)

    Keller, Peter; Stevens, Catherine

    2004-01-01

    This article addresses the learnability of auditory icons, that is, environmental sounds that refer either directly or indirectly to meaningful events. Direct relations use the sound made by the target event whereas indirect relations substitute a surrogate for the target. Across 3 experiments, different indirect relations (ecological, in which…

  3. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs. This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.

  4. Analysis of sound data streamed over the network

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2013-01-01

    Full Text Available In this paper we examine the difference between an original sound recording and the signal captured after streaming that recording over a network loaded with heavy traffic. Several kinds of failures occur in the captured recording as a result of network congestion. We try to find a method for evaluating the correctness of streamed audio. The usual metrics are based on human perception of the signal, such as "the signal is clear, without audible failures", "the signal has some failures but is understandable", or "the signal is inarticulate". These approaches need to be statistically evaluated on a broad set of respondents, which is time and resource consuming. We instead propose metrics based on signal properties that allow us to compare the original and captured recordings. In this paper we use the Dynamic Time Warping algorithm (Müller, 2007), commonly used for time series comparison. Some other time series exploration approaches can be found in (Fejfar, 2011) and (Fejfar, 2012). The data were acquired in our network laboratory, with network traffic simulated by downloading files and streaming audio and video simultaneously. Our former experiment inspected Quality of Service (QoS) and its impact on failures in the received audio data stream; this experiment focuses on the comparison of sound recordings rather than on the network mechanisms. We focus in this paper on a real-time audio stream, such as a telephone call, where it is not possible to stream audio in advance to a "pool" and it is necessary to achieve as small a delay as possible between recording of the speaker's voice and replay to the listener. We use the RTP protocol for streaming audio.
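
    As a minimal illustration of the comparison idea, the sketch below computes a classic Dynamic Time Warping distance between short-time energy contours of the original and the captured recording. It is only a generic DTW implementation on toy data, not the authors' actual metric or feature choice; the frame size and the simulated dropout are assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def frame_energies(x, frame=1024):
    """Short-time energy contour used as a compact feature for comparison."""
    k = len(x) // frame
    return np.array([np.sum(x[i * frame:(i + 1) * frame] ** 2) for i in range(k)])

# Toy example: the "streamed" copy has a small delay and a dropout.
rng = np.random.default_rng(2)
original = rng.standard_normal(40 * 1024)
streamed = np.concatenate([np.zeros(2048), original.copy()])
streamed[10 * 1024:12 * 1024] = 0.0                 # simulated packet loss
print(dtw_distance(frame_energies(original), frame_energies(streamed)))
```

    A larger DTW distance then indicates that the streamed copy deviates more from the original, even when the two recordings are slightly misaligned in time.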

  5. Neogene and Quaternary geology of a stratigraphic test hole on Horn Island, Mississippi Sound

    Science.gov (United States)

    Gohn, Gregory S.; Brewster-Wingard, G. Lynn; Cronin, Thomas M.; Edwards, Lucy E.; Gibson, Thomas G.; Rubin, Meyer; Willard, Debra A.

    1996-01-01

    During April and May, 1991, the U.S. Geological Survey (USGS) drilled a 510-ft-deep, continuously cored, stratigraphic test hole on Horn Island, Mississippi Sound, as part of a field study of the Neogene and Quaternary geology of the Mississippi coastal area. The USGS drilled two new holes at the Horn Island site. The first hole was continuously cored to a depth of 510 ft; coring stopped at this depth due to mechanical problems. To facilitate geophysical logging, an unsampled second hole was drilled to a depth of 519 ft at the same location.

  6. Sound Synthesis of Objects Swinging through Air Using Physical Models

    Directory of Open Access Journals (Sweden)

    Rod Selfridge

    2017-11-01

    Full Text Available A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.

  7. AUX: a scripting language for auditory signal processing and software packages for psychoacoustic experiments and education.

    Science.gov (United States)

    Kwon, Bomjun J

    2012-06-01

    This article introduces AUX (AUditory syntaX), a scripting syntax specifically designed to describe auditory signals and processing, to the members of the behavioral research community. The syntax is based on descriptive function names and intuitive operators suitable for researchers and students without substantial training in programming, who wish to generate and examine sound signals using a written script. In this article, the essence of AUX is discussed and practical examples of AUX scripts specifying various signals are illustrated. Additionally, two accompanying Windows-based programs and development libraries are described. AUX Viewer is a program that generates, visualizes, and plays sounds specified in AUX. AUX Viewer can also be used for class demonstrations or presentations. Another program, Psycon, allows a wide range of sound signals to be used as stimuli in common psychophysical testing paradigms, such as the adaptive procedure, the method of constant stimuli, and the method of adjustment. AUX Library is also provided, so that researchers can develop their own programs utilizing AUX. The philosophical basis of AUX is to separate signal generation from the user interface needed for experiments. AUX scripts are portable and reusable; they can be shared by other researchers, regardless of differences in actual AUX-based programs, and reused for future experiments. In short, the use of AUX can be potentially beneficial to all members of the research community-both those with programming backgrounds and those without.

  8. Lung and Heart Sounds Analysis: State-of-the-Art and Future Trends.

    Science.gov (United States)

    Padilla-Ortiz, Ana L; Ibarra, David

    2018-01-01

    Lung sounds, which include all sounds produced during the mechanics of respiration, may be classified into normal breath sounds and adventitious sounds. Normal breath sounds occur when no respiratory problems exist, whereas adventitious lung sounds (wheeze, rhonchi, crackle, etc.) are usually associated with certain pulmonary pathologies. Heart and lung sounds heard with a stethoscope are the result of mechanical interactions that reflect the operation of the cardiac and respiratory systems, respectively. In this article, we review the research conducted during the last six years on lung and heart sounds, instrumentation and data sources (sensors and databases), technological advances, and perspectives in processing and data analysis. Our review suggests that chronic obstructive pulmonary disease (COPD) and asthma are the most common respiratory diseases reported on in the literature; related diseases that are less analyzed include chronic bronchitis, idiopathic pulmonary fibrosis, congestive heart failure, and parenchymal pathology. New findings associated with advances in the electronic stethoscope are presented for the auscultatory heart sound signal-processing chain, including analysis and clarification of the resulting sounds to support a diagnosis based on a quantifiable medical assessment. The availability of high-precision automatic interpretation of heart and lung sounds opens interesting possibilities for cardiovascular diagnosis, as well as potential for intelligent diagnosis of heart and lung diseases.

  9. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Engineering Test Report: Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A1, S/N 108 2

    Science.gov (United States)

    Valdez, A.

    2000-01-01

    This is the Engineering Test Report, Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A1, S/N 108, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).

  10. New perspectives on mechanisms of sound generation in songbirds

    DEFF Research Database (Denmark)

    Goller, Franz; Larsen, Ole Næsbye

    2002-01-01

    The physical mechanisms of sound generation in the vocal organ, the syrinx, of songbirds have been investigated mostly with indirect methods. Recent direct endoscopic observation identified vibrations of the labia as the principal sound source. This model suggests sound generation in a pulse-tone mechanism similar to human phonation with the labia forming a pneumatic valve. The classical avian model proposed that vibrations of the thin medial tympaniform membranes are the primary sound generating mechanism. As a direct test of these two hypotheses we ablated the medial tympaniform membranes in two ... Indirect (... atmosphere) as well as direct (labial vibration during tonal sound) measurements of syringeal vibrations support a vibration-based sound-generating mechanism even for tonal sounds.

  11. Silent oceans: ocean acidification impoverishes natural soundscapes by altering sound production of the world's noisiest marine invertebrate.

    Science.gov (United States)

    Rossi, Tullio; Connell, Sean D; Nagelkerken, Ivan

    2016-03-16

    Soundscapes are multidimensional spaces that carry meaningful information for many species about the location and quality of nearby and distant resources. Because soundscapes are the sum of the acoustic signals produced by individual organisms and their interactions, they can be used as a proxy for the condition of whole ecosystems and their occupants. Ocean acidification resulting from anthropogenic CO2 emissions is known to have profound effects on marine life. However, despite the increasingly recognized ecological importance of soundscapes, there is no empirical test of whether ocean acidification can affect biological sound production. Using field recordings obtained from three geographically separated natural CO2 vents, we show that forecasted end-of-century ocean acidification conditions can profoundly reduce the biological sound level and frequency of snapping shrimp snaps. Snapping shrimp were among the noisiest marine organisms and the suppression of their sound production at vents was responsible for the vast majority of the soundscape alteration observed. To assess mechanisms that could account for these observations, we tested whether long-term exposure (two to three months) to elevated CO2 induced a similar reduction in the snapping behaviour (loudness and frequency) of snapping shrimp. The results indicated that the soniferous behaviour of these animals was substantially reduced in both frequency (snaps per minute) and sound level of snaps produced. As coastal marine soundscapes are dominated by biological sounds produced by snapping shrimp, the observed suppression of this component of soundscapes could have important and possibly pervasive ecological consequences for organisms that use soundscapes as a source of information. This trend towards silence could be of particular importance for those species whose larval stages use sound for orientation towards settlement habitats. © 2016 The Author(s).
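
    The snap statistics reported above (snaps per minute and the sound level of the snaps) can, in principle, be estimated from a hydrophone recording with simple amplitude-threshold peak detection. The sketch below is only a generic illustration of that kind of measurement on a toy recording; the threshold, minimum inter-snap gap and reference level are assumptions, not the processing actually used in the study.

```python
import numpy as np
from scipy.signal import find_peaks

def snap_statistics(x, fs, threshold_db=-20.0, min_gap_s=0.01):
    """Count snapping-shrimp-like transients and report their mean level.
    threshold_db is relative to the recording's peak amplitude (illustrative)."""
    envelope = np.abs(x)
    threshold = np.max(envelope) * 10 ** (threshold_db / 20)
    peaks, props = find_peaks(envelope, height=threshold, distance=int(min_gap_s * fs))
    snaps_per_minute = len(peaks) / (len(x) / fs) * 60.0
    level_db = 20 * np.log10(np.mean(props["peak_heights"]) + 1e-12)  # dB re full scale
    return snaps_per_minute, level_db

# Toy recording: low-level background noise plus a few sharp clicks.
fs = 48000
rng = np.random.default_rng(3)
x = 0.01 * rng.standard_normal(fs * 10)
for t0 in [0.7, 2.1, 4.4, 7.9, 9.2]:
    x[int(t0 * fs)] += 1.0
print(snap_statistics(x, fs))   # about 30 snaps per minute on this toy signal
```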

  12. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Engineering Test Report: Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A2, S/N 108, 08

    Science.gov (United States)

    Valdez, A.

    2000-01-01

    This is the Engineering Test Report, Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A2, S/N 108, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).

  13. Experimental study on utilization of air-borne jet sound in coolant leak detector

    International Nuclear Information System (INIS)

    Hayamizu, Y.; Kitahara, T.; Hayashi, T.; Nishimura, M.

    1975-10-01

    Studies have been undertaken to develop a new coolant leak detection method that uses a microphone to pick up the jet sound generated when pressurized high-temperature water is discharged from the pressure boundary into the atmosphere. Leakage was simulated with three shapes, two machine-made circular holes and longitudinal and transverse slits, in an inlet tube of a blowdown test facility. The measured power level of the jet sound was in agreement with theoretical values calculated from Lighthill's equation. The new method was confirmed to be applicable, and the signal-to-noise ratio can be evaluated theoretically at the design stage. Detection of a small coolant leakage of 1 kg/sec is possible even in a recirculation pump room with large background noise from the pump, provided a suitable isolation wall, such as hot boxes, is installed between the monitored pipes and the pump. (auth.)

  14. Full-Band Quasi-Harmonic Analysis and Synthesis of Musical Instrument Sounds with Adaptive Sinusoids

    Directory of Open Access Journals (Sweden)

    Marcelo Caetano

    2016-05-01

    Full Text Available Sinusoids are widely used to represent the oscillatory modes of musical instrument sounds in both analysis and synthesis. However, musical instrument sounds feature transients and instrumental noise that are poorly modeled with quasi-stationary sinusoids, requiring spectral decomposition and further dedicated modeling. In this work, we propose a full-band representation that fits sinusoids across the entire spectrum. We use the extended adaptive Quasi-Harmonic Model (eaQHM) to iteratively estimate amplitude- and frequency-modulated (AM–FM) sinusoids able to capture challenging features such as sharp attacks, transients, and instrumental noise. We use the signal-to-reconstruction-error ratio (SRER) as the objective measure for the analysis and synthesis of 89 musical instrument sounds from different instrumental families. We compare against quasi-stationary sinusoids and exponentially damped sinusoids. First, we show that the SRER increases with adaptation in eaQHM. Then, we show that full-band modeling with eaQHM captures partials at the higher frequency end of the spectrum that are neglected by spectral decomposition. Finally, we demonstrate that a frame size equal to three periods of the fundamental frequency results in the highest SRER with AM–FM sinusoids from eaQHM. A listening test confirmed that the musical instrument sounds resynthesized from full-band analysis with eaQHM are virtually perceptually indistinguishable from the original recordings.
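
    The SRER used as the objective measure above is conventionally defined as the ratio, in dB, of the RMS of the original signal to the RMS of the reconstruction error. A minimal sketch of that definition, evaluated on a toy signal rather than the paper's instrument corpus:

```python
import numpy as np

def srer_db(original, reconstruction):
    """Signal-to-reconstruction-error ratio in dB:
    20*log10( RMS(original) / RMS(original - reconstruction) )."""
    rms = lambda v: np.sqrt(np.mean(np.square(v)))
    return 20.0 * np.log10(rms(original) / rms(original - reconstruction))

# Toy check: a sinusoid reconstructed with a 1% amplitude error.
t = np.linspace(0, 1, 44100, endpoint=False)
x = np.sin(2 * np.pi * 440 * t)
x_hat = 0.99 * x
print(srer_db(x, x_hat))        # approximately 40 dB
```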

  15. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum- and difference-frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite-amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time-domain solution of the KZK equation.

  16. PREFACE: Aerodynamic sound Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated in writing the papers by the jet noise emitted by the newly commercialized jet-engined airplanes at that time. The technology of aerodynamic sound is ultimately directed at environmental problems, and the theory should therefore always be applied to newly emerging public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University organized the first one. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which remains important today and is expected to reshape the theoretical model of the generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research. They apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the

  17. Development of Optophone with No Diaphragm and Application to Sound Measurement in Jet Flow

    Directory of Open Access Journals (Sweden)

    Yoshito Sonoda

    2012-01-01

    Full Text Available The optophone with no diaphragm, which can detect sound waves without disturbing the air flow or the sound field, is presented as a novel sound measurement technique, and the present status of its development is reviewed in this paper. The method is based on Fourier optics: the sound signal is obtained by detecting the ultra-small diffracted light generated by the phase modulation that sound imposes on a laser beam. The principle and theory, originally developed as a plasma diagnostic technique to measure electron density fluctuations in nuclear fusion research, are briefly introduced. Based on the theoretical analysis, the properties and merits of this wave-optical sound detection are presented, and the fundamental experiments and results obtained so far are reviewed. It is shown that sounds from about 100 Hz to 100 kHz can be detected simultaneously by a visible laser beam, and that the method is very useful for sound measurement in aeroacoustics. Finally, the main remaining problems of the optophone for practical use in sound and noise measurement, and the technology expected in the future, are briefly outlined.

  18. Visualization of the hot chocolate sound effect by spectrograms

    Science.gov (United States)

    Trávníček, Z.; Fedorchenko, A. I.; Pavelka, M.; Hrubý, J.

    2012-12-01

    We present an experimental and a theoretical analysis of the hot chocolate effect. The sound effect is evaluated using time-frequency signal processing, resulting in a quantitative visualization by spectrograms. This method allows us to capture the whole phenomenon, namely to quantify the dynamics of the rising pitch. A general form of the time dependence of the bubble volume fraction is proposed. We show that the effect occurs due to the nonlinear dependence of the speed of sound in the gas/liquid mixture on the volume fraction of the bubbles and the nonlinear time dependence of the volume fraction of the bubbles.
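
    The nonlinear dependence mentioned above is commonly modelled with Wood's equation for the low-frequency sound speed of a bubbly liquid; whether the authors use exactly this formulation is not stated in the abstract, so the sketch below is only a standard illustration with assumed water/air property values.

```python
import numpy as np

def wood_sound_speed(phi, rho_l=1000.0, c_l=1480.0, rho_g=1.2, c_g=340.0):
    """Low-frequency sound speed of a bubbly liquid (Wood's equation):
    1/(rho_mix * c_mix^2) = phi/(rho_g*c_g^2) + (1-phi)/(rho_l*c_l^2)."""
    rho_mix = phi * rho_g + (1 - phi) * rho_l
    compressibility = phi / (rho_g * c_g ** 2) + (1 - phi) / (rho_l * c_l ** 2)
    return 1.0 / np.sqrt(rho_mix * compressibility)

# Even a fraction of a percent of gas slashes the mixture sound speed, which is
# why the pitch rises so sharply as the bubbles dissolve after stirring.
for phi in [0.0, 0.001, 0.01, 0.05]:
    print(phi, round(wood_sound_speed(phi), 1), "m/s")
```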

  19. Sound recovery via intensity variations of speckle pattern pixels selected with variance-based method

    Science.gov (United States)

    Zhu, Ge; Yao, Xu-Ri; Qiu, Peng; Mahmood, Waqas; Yu, Wen-Kai; Sun, Zhi-Bin; Zhai, Guang-Jie; Zhao, Qing

    2018-02-01

    In general, sound waves cause the objects they encounter along their path to vibrate. If a laser beam illuminates the rough surface of such an object, it is scattered into a speckle pattern that vibrates with these sound waves. Here, an efficient variance-based method is proposed to recover the sound information from speckle patterns captured by a high-speed camera. The method selects, from a small region of the speckle patterns, the pixels whose gray-value variations over time have large variances. The gray-value variations of these pixels are summed according to a simple model to recover the sound with a high signal-to-noise ratio. At the same time, our method significantly simplifies the computation compared with the traditional digital image correlation technique. The effectiveness of the proposed method has been verified on a variety of objects. The experimental results illustrate that the proposed method is robust to the quality of the speckle patterns and takes more than an order of magnitude less time to process the same number of speckle patterns. In our experiment, a sound signal of 1.876 s duration is recovered from various objects with a time consumption of only 5.38 s.
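
    A minimal sketch of the selection-and-summation step as described above, on a synthetic speckle sequence: per-pixel temporal variances are computed, the highest-variance pixels are kept, and their mean-removed gray-value variations are summed into a waveform. The region size, number of selected pixels and toy vibration are assumptions for illustration, not the authors' settings.

```python
import numpy as np

def recover_sound(frames, n_pixels=50):
    """frames: array of shape (T, H, W) with gray values from a high-speed camera.
    Select the n_pixels with the largest temporal variance and sum their
    mean-removed gray-value variations to form the recovered sound waveform."""
    T = frames.shape[0]
    flat = frames.reshape(T, -1).astype(float)
    variances = flat.var(axis=0)
    idx = np.argsort(variances)[-n_pixels:]          # highest-variance pixels
    selected = flat[:, idx]
    signal = (selected - selected.mean(axis=0)).sum(axis=1)
    return signal / np.max(np.abs(signal))           # normalised waveform

# Toy speckle sequence: a 440 Hz vibration modulates a small block of pixels.
fs, T = 20000, 2000
rng = np.random.default_rng(4)
base = rng.random((32, 32))
tone = np.sin(2 * np.pi * 440 * np.arange(T) / fs)
frames = base[None, :, :] + 0.02 * rng.standard_normal((T, 32, 32))
frames[:, 10:14, 10:14] += 0.3 * tone[:, None, None]
print(np.corrcoef(recover_sound(frames), tone)[0, 1])   # close to 1
```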

  20. Tracheal sound parameters of respiratory cycle phases show differences between flow-limited and normal breathing during sleep

    International Nuclear Information System (INIS)

    Kulkas, A; Huupponen, E; Virkkala, J; Saastamoinen, A; Rauhala, E; Tenhunen, M; Himanen, S-L

    2010-01-01

    The objective of the present work was to develop new computational parameters to examine the characteristics of the respiratory cycle phases from the tracheal breathing sound signal during sleep. Tracheal sound data from 14 patients (10 males and 4 females) were examined. From each patient, a 10 min section of normal breathing and a 10 min section of flow-limited breathing during sleep were analysed. The computationally determined proportional durations of the respiratory phases were investigated first. In addition, the phase durations and breathing sound amplitude levels were used to calculate the area under the breathing sound envelope signal during the inspiration and expiration phases. An inspiratory sound index was then developed to give the percentage of this area during the inspiratory phase with respect to the combined area of the inspiratory and expiratory phases. The proportional duration of the inspiratory phase showed statistically significantly higher values during flow-limited breathing than during normal breathing, and the inspiratory pause displayed the opposite difference. The inspiratory sound index also showed statistically significantly higher values during flow-limited breathing than during normal breathing. The presented novel computational parameters could contribute to the examination of sleep-disordered breathing or serve as a screening tool.
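
    The inspiratory sound index described above reduces to a ratio of areas under the sound envelope. The sketch below computes it for a toy envelope with hypothetical inspiration/expiration boundaries; in practice the envelope and the phase segmentation would come from the recorded tracheal sound, and the breath timings here are purely illustrative.

```python
import numpy as np

def inspiratory_sound_index(envelope, fs, insp_segments, exp_segments):
    """Area under the tracheal-sound envelope during inspiration as a percentage
    of the combined inspiratory + expiratory area.
    insp_segments / exp_segments: lists of (start_s, end_s) phase boundaries."""
    def area(segments):
        return sum(np.sum(envelope[int(a * fs):int(b * fs)]) / fs for a, b in segments)
    a_in, a_ex = area(insp_segments), area(exp_segments)
    return 100.0 * a_in / (a_in + a_ex)

# Toy envelope: two 4 s breaths with a louder inspiratory bump than expiratory bump.
fs = 100
t = np.arange(0, 8, 1 / fs)
envelope = (0.05 + 0.6 * np.exp(-((t % 4) - 1.0) ** 2 / 0.3)
                 + 0.4 * np.exp(-((t % 4) - 2.8) ** 2 / 0.3))
insp = [(0.0, 1.8), (4.0, 5.8)]
exp_ = [(1.8, 3.6), (5.8, 7.6)]
print(inspiratory_sound_index(envelope, fs, insp, exp_))
```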

  1. Emotional cues, emotional signals, and their contrasting effects on listener valence

    DEFF Research Database (Denmark)

    Christensen, Justin

    2015-01-01

    ... and of benefit to both the sender and the receiver of the signal, otherwise they would cease to have the intended effect of communication. In contrast with signals, animal cues are much more commonly unimodal as they are unintentional by the sender. In my research, I investigate whether subjects exhibit ... are more emotional cues (e.g. sadness or calmness). My hypothesis is that musical and sound stimuli that are mimetic of emotional signals should combine to elicit a stronger response when presented as a multimodal stimulus as opposed to a unimodal stimulus, whereas musical or sound stimuli that are mimetic of emotional cues interact in less clear and less cohesive manners with their corresponding haptic signals. For my investigations, subjects listen to samples from the International Affective Digital Sounds Library[2] and selected musical works on speakers in combination with a tactile transducer ...

  2. [Realization of Heart Sound Envelope Extraction Implemented on LabVIEW Based on Hilbert-Huang Transform].

    Science.gov (United States)

    Tan, Zhixiang; Zhang, Yi; Zeng, Deping; Wang, Hua

    2015-04-01

    This paper presents a heart sound envelope extraction system implemented in LabVIEW and based on the Hilbert-Huang transform (HHT). A sound card was first used to collect the heart sound, and the complete program for signal acquisition, preprocessing and envelope extraction was then implemented in LabVIEW on the basis of HHT theory. Finally, a case study showed that the system can collect heart sounds, preprocess them and extract the envelope easily. The system retains and displays the characteristics of the heart sound envelope well, and its program and methods are relevant to other research areas, such as vibration and voice analysis.
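
    In the full HHT approach the signal is first decomposed by empirical mode decomposition and the Hilbert transform is then applied to selected intrinsic mode functions. The sketch below is a deliberately simplified stand-in that skips the EMD step: it band-passes a toy phonocardiogram, takes the Hilbert analytic envelope and smooths it. The filter band, smoothing window and synthetic signal are assumptions, not the authors' LabVIEW settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def heart_sound_envelope(x, fs, band=(25.0, 140.0), smooth_s=0.02):
    """Band-pass the phonocardiogram, take the Hilbert analytic envelope and
    smooth it with a short moving average (simplified stand-in for full HHT)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    env = np.abs(hilbert(filtered))
    win = np.ones(int(smooth_s * fs)) / int(smooth_s * fs)
    return np.convolve(env, win, mode="same")

# Toy phonocardiogram: short tone bursts standing in for S1 and S2 sounds.
fs = 2000
t = np.arange(0, 2, 1 / fs)
pcg = sum(np.exp(-((t - t0) / 0.02) ** 2) * np.sin(2 * np.pi * 60 * t)
          for t0 in [0.1, 0.4, 0.9, 1.2, 1.7])
pcg += 0.05 * np.random.default_rng(5).standard_normal(len(t))
envelope = heart_sound_envelope(pcg, fs)
print(envelope.max(), envelope.argmax() / fs)   # envelope peak lands on one of the bursts
```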

  3. Physics of thermo-acoustic sound generation

    Science.gov (United States)

    Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.

    2013-09-01

    We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.

  4. HIRS-AMTS satellite sounding system test - Theoretical and empirical vertical resolving power. [High resolution Infrared Radiation Sounder - Advanced Moisture and Temperature Sounder

    Science.gov (United States)

    Thompson, O. E.

    1982-01-01

    The present investigation is concerned with the vertical resolving power of satellite-borne temperature sounding instruments. Information is presented on the capabilities of the High Resolution Infrared Radiation Sounder (HIRS) and a proposed sounding instrument called the Advanced Moisture and Temperature Sounder (AMTS). Two quite different methods for assessing the vertical resolving power of satellite sounders are discussed. The first is the theoretical method of Conrath (1972), which was patterned after the work of Backus and Gilbert (1968). The Backus-Gilbert-Conrath (BGC) approach includes a formalism for deriving a retrieval algorithm that optimizes the vertical resolving power. However, a retrieval algorithm constructed in the BGC optimal fashion is not necessarily optimal as far as actual temperature retrievals are concerned. Thus, an independent criterion for vertical resolving power is discussed. The criterion is based on actual retrievals of signal structure in the temperature field.

  5. Monitoring of aquifer pump tests with Magnetic Resonance Sounding (MRS)

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Auken, Esben; Bauer-Gottwein, Peter

    2009-01-01

    Magnetic Resonance Sounding (MRS) can provide valuable data to constrain and calibrate groundwater flow and transport models. With this non-invasive geophysical technique, field measurements of water content and hydraulic conductivity can be obtained. We developed a hydrogeophysical forward

  6. Human female orgasm as evolved signal: a test of two hypotheses.

    Science.gov (United States)

    Ellsworth, Ryan M; Bailey, Drew H

    2013-11-01

    We present the results of a study designed to empirically test predictions derived from two hypotheses regarding human female orgasm behavior as an evolved communicative trait or signal. One hypothesis tested was the female fidelity hypothesis, which posits that human female orgasm signals a woman's sexual satisfaction and therefore her likelihood of future fidelity to a partner. The other was the sire choice hypothesis, which posits that women's orgasm behavior signals increased chances of fertilization. To test the two hypotheses of human female orgasm, we administered a questionnaire to 138 females and 121 males who reported that they were currently in a romantic relationship. Key predictions of the female fidelity hypothesis were not supported. In particular, orgasm was not associated with female sexual fidelity, nor was orgasm associated with male perceptions of partner sexual fidelity. However, faked orgasm was associated with female sexual infidelity and lower male relationship satisfaction. Overall, the results were in greater support of the sire choice signaling hypothesis than the female fidelity hypothesis. The results also suggest that male satisfaction with, investment in, and sexual fidelity to a mate are benefits that favored the selection of orgasmic signaling in ancestral females.

  7. Is Einsteinian no-signalling violated in Bell tests?

    Science.gov (United States)

    Kupczynski, Marian

    2017-11-01

    Relativistic invariance is a physical law verified in several domains of physics. The impossibility of faster-than-light influences is not questioned by quantum theory; in quantum electrodynamics, in quantum field theory and in the standard model, relativistic invariance is incorporated by construction. Quantum mechanics predicts strong long-range correlations between the outcomes of spin projection measurements performed in distant laboratories. In spite of these strong correlations, the marginal probability distributions should not depend on what was measured in the other laboratory, a property called, in short, no-signalling. In several experiments performed to test various Bell-type inequalities, some unexplained dependence of the empirical marginal probability distributions on the distant settings was observed. In this paper we demonstrate that a particular identification and selection procedure for pairing distant outcomes is the most probable cause of this apparent violation of the no-signalling principle. This unexpected setting dependence therefore does not prove the existence of superluminal influences, and the Einsteinian no-signalling principle has to be tested differently in dedicated experiments. We propose a detailed protocol describing how such experiments should be designed in order to be conclusive. We also explain how magical quantum correlations may be explained in a locally causal way.

  8. Is Einsteinian no-signalling violated in Bell tests?

    Directory of Open Access Journals (Sweden)

    Kupczynski Marian

    2017-11-01

    Full Text Available Relativistic invariance is a physical law verified in several domains of physics. The impossibility of faster-than-light influences is not questioned by quantum theory; in quantum electrodynamics, in quantum field theory and in the standard model, relativistic invariance is incorporated by construction. Quantum mechanics predicts strong long-range correlations between the outcomes of spin projection measurements performed in distant laboratories. In spite of these strong correlations, the marginal probability distributions should not depend on what was measured in the other laboratory, a property called, in short, no-signalling. In several experiments performed to test various Bell-type inequalities, some unexplained dependence of the empirical marginal probability distributions on the distant settings was observed. In this paper we demonstrate that a particular identification and selection procedure for pairing distant outcomes is the most probable cause of this apparent violation of the no-signalling principle. This unexpected setting dependence therefore does not prove the existence of superluminal influences, and the Einsteinian no-signalling principle has to be tested differently in dedicated experiments. We propose a detailed protocol describing how such experiments should be designed in order to be conclusive. We also explain how magical quantum correlations may be explained in a locally causal way.
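
    The no-signalling condition discussed in these two records is an empirically checkable statement: Alice's marginal outcome frequencies should not depend on Bob's setting (and vice versa). Below is a minimal sketch of such a check on hypothetical coincidence counts, using a chi-square homogeneity test; the counts, the binary outcome coding and the choice of test are illustrative assumptions, not the paper's proposed protocol.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical coincidence counts N[a, b, x, y]: Alice outcome a, Bob outcome b,
# under Alice setting x and Bob setting y (outcomes coded 0/1).
rng = np.random.default_rng(6)
N = rng.integers(400, 600, size=(2, 2, 2, 2))

def alice_marginal_table(N, x):
    """Counts of Alice's outcomes under setting x, split by Bob's setting y.
    Under no-signalling the two columns are draws from the same distribution."""
    return np.array([[N[a, :, x, y].sum() for y in (0, 1)] for a in (0, 1)])

for x in (0, 1):
    table = alice_marginal_table(N, x)
    chi2, p, _, _ = chi2_contingency(table)
    print(f"Alice setting {x}: chi2 = {chi2:.2f}, p = {p:.3f}")
# Small p-values would indicate an apparent dependence of Alice's marginals on
# Bob's setting, which the paper attributes to the pairing/selection procedure.
```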

  9. MISSED: an environment for mixed-signal microsystem testing and diagnosis

    NARCIS (Netherlands)

    Kerkhoff, Hans G.; Docherty, G.

    1993-01-01

    A tight link between design and test data is proposed for speeding up test-pattern generation and diagnosis during mixed-signal prototype verification. Test requirements are already incorporated at the behavioral level and specified with increased detail at lower hierarchical levels. A strict

  10. Effect of Listening to the Al-Quran on Heart Sound

    Science.gov (United States)

    Daud, N. F.; Sharif, Z.

    2018-03-01

    This paper investigates the effect of listening to selected verses of the Al-Quran on heart sounds. The heart sound signal is acquired using Thinklabs Phonocardiography software, and its frequency components are then extracted using MATLAB 7.11.0. Frequency components during diastole are compared for two sessions: before and during listening. Diastole is the period in which the chambers of the heart fill with blood while the heart muscle is in a relaxed condition. The study finds that the frequency of the heart sound during listening to the Al-Quran is lower than that before listening. This indicates that a state of calmness can be achieved by listening to these selected verses of the Al-Quran.

  11. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound...... that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology....

  12. Using Employer Hiring Behavior to Test the Educational Signaling Hypothesis

    NARCIS (Netherlands)

    Albrecht, J.W.; van Ours, J.C.

    2001-01-01

This paper presents a test of the educational signaling hypothesis. If employers use education as a signal in the hiring process, they will rely more on education when less is otherwise known about applicants. We find that employers are more likely to lower educational standards when an informal, more

  13. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Engineering Test Report: AMSU-A2 METSAT Instrument (S/N 108) Acceptance Level Vibration Tests of Dec 1999/Jan 2000 (S/O 784077, OC-454)

    Science.gov (United States)

    Heffner, R.

    2000-01-01

    This is the Engineering Test Report, AMSU-A2 METSAT Instrument (S/N 108) Acceptance Level Vibration Test of Dec 1999/Jan 2000 (S/O 784077, OC-454), for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).

  14. Acoustic analysis of swallowing sounds: a new technique for assessing dysphagia.

    Science.gov (United States)

    Santamato, Andrea; Panza, Francesco; Solfrizzi, Vincenzo; Russo, Anna; Frisardi, Vincenza; Megna, Marisa; Ranieri, Maurizio; Fiore, Pietro

    2009-07-01

To perform acoustic analysis of swallowing sounds, using a microphone and a notebook computer system, in healthy subjects and patients with dysphagia affected by neurological diseases, testing the positive/negative predictive value of a pathological pattern of swallowing sounds for penetration/aspiration. Diagnostic test study, prospective, not blinded, with penetration/aspiration evaluated by fibreoptic endoscopy of swallowing as the criterion standard. Data from a previously recorded database of normal swallowing sounds for 60 healthy subjects, stratified by gender, age, and bolus consistency, were compared with those of 15 patients with dysphagia from a university hospital referral centre who were affected by various neurological diseases. Mean duration of the swallowing sounds and post-swallowing apnoea were recorded. Penetration/aspiration was verified by fibreoptic endoscopy of swallowing in all patients with dysphagia. The mean duration of swallowing sounds for a liquid bolus of 10 ml water was significantly different between patients with dysphagia and healthy subjects. We also described patterns of swallowing sounds and tested the negative/positive predictive values of post-swallowing apnoea for penetration/aspiration verified by fibreoptic endoscopy of swallowing (sensitivity 0.67 (95% confidence interval 0.24-0.94); specificity 1.00 (95% confidence interval 0.56-1.00)). The proposed technique for recording and measuring swallowing sounds could be incorporated into the bedside evaluation, but it should not replace the use of more diagnostic and valuable measures.
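For readers who want to reproduce the kind of figures quoted in this record, the helper below computes sensitivity, specificity and predictive values from a 2x2 table. The example counts are hypothetical, chosen only to be consistent with the reported sensitivity of 0.67 and specificity of 1.00; they are not the study's raw data.

    def diagnostic_summary(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 diagnostic table."""
        sens = tp / (tp + fn) if (tp + fn) else float("nan")
        spec = tn / (tn + fp) if (tn + fp) else float("nan")
        ppv = tp / (tp + fp) if (tp + fp) else float("nan")
        npv = tn / (tn + fn) if (tn + fn) else float("nan")
        return {"sensitivity": sens, "specificity": spec, "PPV": ppv, "NPV": npv}

    # Hypothetical counts (15 patients) consistent with the reported values:
    print(diagnostic_summary(tp=6, fp=0, fn=3, tn=6))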

  15. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

  16. Unsupervised Feature Learning for Heart Sounds Classification Using Autoencoder

    Science.gov (United States)

    Hu, Wei; Lv, Jiancheng; Liu, Dongbo; Chen, Yao

    2018-04-01

Cardiovascular disease seriously threatens the health of many people. It is usually diagnosed during cardiac auscultation, which is a fast and efficient method of cardiovascular disease diagnosis. In recent years, deep learning approaches using unsupervised learning have made significant breakthroughs in many fields. However, to our knowledge, deep learning has not yet been used for heart sound classification. In this paper, we first use the average Shannon energy to extract the envelope of the heart sounds, then find the highest point of S1 to extract the cardiac cycle. We convert the time-domain signals of the cardiac cycle into spectrograms and apply principal component analysis whitening to reduce the dimensionality of the spectrogram. Finally, we apply a two-layer autoencoder to extract the features of the spectrogram. The experimental results demonstrate that the features from the autoencoder are suitable for heart sound classification.
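The envelope step mentioned in this record (average Shannon energy) is easy to prototype. The sketch below is a minimal NumPy version, assuming a mono, amplitude-normalised heart-sound signal; the frame and hop lengths are assumptions rather than the paper's settings.

    import numpy as np

    def shannon_energy_envelope(x, fs, frame_ms=20, hop_ms=10):
        """Average Shannon energy envelope of a heart-sound signal."""
        x = x / (np.max(np.abs(x)) + 1e-12)                 # normalise to [-1, 1]
        frame = int(fs * frame_ms / 1000)
        hop = int(fs * hop_ms / 1000)
        env = []
        for start in range(0, len(x) - frame + 1, hop):
            p = x[start:start + frame] ** 2
            env.append(-np.mean(p * np.log(p + 1e-12)))     # average Shannon energy
        env = np.asarray(env)
        return (env - env.mean()) / (env.std() + 1e-12)     # standardised envelope

    # The strongest peaks of the returned envelope are candidate S1 locations,
    # from which individual cardiac cycles can be cut out.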

  17. Effects of small variations of speed of sound in optoacoustic tomographic imaging

    International Nuclear Information System (INIS)

    Deán-Ben, X. Luís; Ntziachristos, Vasilis; Razansky, Daniel

    2014-01-01

Purpose: Speed of sound differences between the imaged object and the surrounding coupling medium may reduce the resolution and overall quality of optoacoustic tomographic reconstructions obtained by assuming a uniform acoustic medium. In this work, the authors investigate the effects of acoustic heterogeneities and discuss potential benefits of accounting for those during the reconstruction procedure. Methods: The time shift of optoacoustic signals in an acoustically heterogeneous medium is studied theoretically by comparing different continuous and discrete wave propagation models. A modification of filtered back-projection reconstruction is subsequently implemented by considering a straight acoustic rays model for ultrasound propagation. The results obtained with this reconstruction procedure are compared numerically and experimentally to those obtained assuming a heuristically fitted uniform speed of sound in both full-view and limited-view optoacoustic tomography scenarios. Results: The theoretical analysis shows that the errors in the time-of-flight of the signals predicted by the straight acoustic rays model tend to be generally small. When using this model for reconstructing simulated data, the resulting images accurately represent the theoretical ones. On the other hand, significant deviations in the location of the absorbing structures are found when using a uniform speed of sound assumption. The experimental results obtained with tissue-mimicking phantoms and a mouse postmortem are found to be consistent with the numerical simulations. Conclusions: Accurate analysis of the effects of small speed of sound variations demonstrates that accounting for differences in the speed of sound improves optoacoustic reconstruction results in realistic imaging scenarios involving acoustic heterogeneities in tissues and surrounding media.
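A toy calculation makes the size of the effect concrete. The numbers below (path lengths and speeds of sound) are arbitrary assumptions, not values from the study; they simply show how a straight-ray, two-medium model shifts the predicted time-of-flight relative to a single heuristically fitted speed of sound.

    # Straight-ray model through tissue + coupling water vs. a uniform medium.
    d_tissue, d_water = 0.010, 0.020        # path lengths in metres (assumed)
    c_tissue, c_water = 1570.0, 1510.0      # speeds of sound in m/s (assumed)
    c_uniform = 1530.0                      # heuristically fitted uniform value

    t_two_media = d_tissue / c_tissue + d_water / c_water
    t_uniform = (d_tissue + d_water) / c_uniform
    print(f"two-media time of flight: {t_two_media * 1e6:.3f} us")
    print(f"uniform-medium estimate : {t_uniform * 1e6:.3f} us")
    print(f"time shift              : {(t_uniform - t_two_media) * 1e9:.1f} ns")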

  18. Non-contact test of coating by means of laser-induced ultrasonic excitation and holographic sound representation. Beruehrungslose Pruefung von Beschichtungen mittels laserinduzierter Ultraschallanregung und holographischer Schallabbildung

    Energy Technology Data Exchange (ETDEWEB)

    Crostack, H A; Pohl, K Y [QZ-DO Qualitaetszentrum Dortmund GmbH und Co. KG (Germany); Radtke, U [Dortmund Univ. (Germany). Fachgebiet Qualitaetskontrolle

    1991-01-01

In order to circumvent the problems of introducing and picking off sound, which occur in conventional ultrasonic testing, a completely non-contact test process was developed. The ultrasonic surface wave required for the test is generated without contact by absorption of laser beams. The recording of the ultrasound also occurs by a non-contact holographic interferometry technique, which permits a large scale representation of the sound. Using the example of MCrAlY and ZrO₂ layers, the suitability of the process for testing thermally sprayed coatings on metal substrates is identified. The possibilities and limits of the process for the detection and description of delamination and cracks are shown. (orig.).

  19. Non-stationarity of resonance signals from magnetospheric and ionospheric plasmas

    International Nuclear Information System (INIS)

    Higel, Bernard

    1975-01-01

Rocket observations of resonance signals from ionospheric plasma were made during EIDI relaxation sounding experiments. It appeared that their amplitude, phase, and frequency characteristics are not stationary as a function of the receipt time. The measurement of these non-stationary signals increases the interest presented by resonance phenomena in spatial plasma diagnostics, but this measurement is not easy for frequency non-stationarities. A new, entirely numerical method is proposed for automatic recognition of these signals. It will be used for the selection and real-time processing of signals of the same type to be observed during relaxation sounding experiments on board the future GEOS satellite. In this method a statistical discrimination is done on the values taken by several parameters associated with the non-stationarities of the observed resonance signals [fr]

  20. Using a Sound Field to Reduce the Risks of Bird-Strike: An Experimental Approach.

    Science.gov (United States)

    Swaddle, John P; Ingrassia, Nicole M

    2017-07-01

Each year, billions of birds collide with large human-made structures, such as buildings, towers, and turbines, causing substantial mortality. Such bird-strike, which is projected to increase, poses risks to populations of birds and causes significant economic costs to many industries. Mitigation technologies have been deployed in an attempt to reduce bird-strike, but have been met with limited success. One reason for bird-strike may be that birds fail to pay adequate attention to the space directly in front of them when in level, cruising flight. A warning signal projected in front of a potential strike surface might attract visual attention and reduce the risks of collision. We tested this idea in captive zebra finches (Taeniopygia guttata) that were trained to fly down a long corridor and through an open wooden frame. Once birds were trained, they each experienced three treatments at unpredictable times and in a randomized order: a loud sound field projected immediately in front of the open wooden frame; a mist net (i.e., a benign strike surface) placed inside the wooden frame; and both the loud sound and the mist net. We found that birds slowed their flight approximately 20% more when the sound field was projected in front of the mist net compared with when the mist net was presented alone. This reduction in velocity would equate to a substantial reduction in the force of any collision. In addition to slowing down, birds increased the angle of attack of their body and tail, potentially allowing for more maneuverable flight. Concomitantly, the only cases where birds avoided the mist net occurred in the sound-augmented treatment. Interestingly, the sound field by itself did not demonstrably alter flight. Although our study was conducted in a limited setting, the alterations of flight associated with our sound field have implications for reducing bird-strike in nature and we encourage researchers to test our ideas in field trials.

  1. Improving auscultatory proficiency using computer simulated heart sounds

    Directory of Open Access Journals (Sweden)

    Hanan Salah EL-Deen Mohamed EL-Halawany

    2016-09-01

Full Text Available This study aimed to examine the effects of 'Heart Sounds', a web-based program, on improving fifth-year medical students' auscultation skill in a medical school in Egypt. This program was designed for medical students to master cardiac auscultation skills in addition to their usual clinical medical courses. Pre- and post-tests were performed to assess students' auscultation skill improvement. Upon completing the training, students were required to complete a questionnaire to reflect on the learning experience they developed through the 'Heart Sounds' program. Results from pre- and post-tests revealed a significant improvement in students' auscultation skills. In examining male and female students' pre- and post-test results, we found that both male and female students had achieved a remarkable improvement in their auscultation skills. On the other hand, students stated clearly that the learning experience they had with the 'Heart Sounds' program was different from any other traditional way of teaching. They stressed that the program had significantly improved their auscultation skills and enhanced their self-confidence in their ability to practice those skills. It is also recommended that the 'Heart Sounds' learning experience should be extended by assessing students' practical improvement in real-life situations.

  2. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

The method presented in this thesis combines ultrasound techniques with magnetic resonance tomography (MRT). An ultrasonic wave generates a static force in the sound-propagation direction in absorbing media. At sound intensities of a few W/cm² and sound frequencies in the lower MHz range, this force leads to a tissue shift in the micrometer range. The tissue shift depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence from Siemens Healthcare AG was modified so that it measures the tissue shift (indirectly), codes it as grey values, and presents it as a 2D image. By means of the grey values, the course of the sound beam in the tissue can be visualized, and sound obstacles (changes of the sound impedance) can additionally be detected. From the acquired MRT images, spatial changes of the tissue parameters sound absorption and elasticity can be detected. In this thesis, measurements are presented which show the feasibility and future potential of this method, especially for mammary-cancer diagnostics. [de]

  3. The influence of signal parameters on the sound source localization ability of a harbor porpoise (Phocoena phocoena)

    NARCIS (Netherlands)

    Kastelein, R.A.; Haan, D.de; Verboom, W.C.

    2007-01-01

It is unclear how well harbor porpoises can locate sound sources, and thus can locate acoustic alarms on gillnets. Therefore the ability of a porpoise to determine the location of a sound source was determined. The animal was trained to indicate the active one of 16 transducers in a 16-m-diam

  4. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus.

    Directory of Open Access Journals (Sweden)

    Mary Flaherty

Full Text Available The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with "d" or "t" and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal.

  5. Sound level measurements using smartphone "apps": Useful or inaccurate?

    Directory of Open Access Journals (Sweden)

    Daniel R Nast

    2014-01-01

    Full Text Available Many recreational activities are accompanied by loud concurrent sounds and decisions regarding the hearing hazards associated with these activities depend on accurate sound measurements. Sound level meters (SLMs are designed for this purpose, but these are technical instruments that are not typically available in recreational settings and require training to use properly. Mobile technology has made such sound level measurements more feasible for even inexperienced users. Here, we assessed the accuracy of sound level measurements made using five mobile phone applications or "apps" on an Apple iPhone 4S, one of the most widely used mobile phones. Accuracy was assessed by comparing application-based measurements to measurements made using a calibrated SLM. Whereas most apps erred by reporting higher sound levels, one application measured levels within 5 dB of a calibrated SLM across all frequencies tested.

  6. Statistical Signal Processing by Using the Higher-Order Correlation between Sound and Vibration and Its Application to Fault Detection of Rotational Machine

    Directory of Open Access Journals (Sweden)

    Hisako Masuike

    2008-01-01

Full Text Available In this study, a stochastic diagnosis method based on the changing information of not only a linear correlation but also a higher-order nonlinear correlation is proposed, in a form suitable for online signal processing in the time domain using a personal computer, especially in order to examine in detail the mutual relationship between sound and vibration emitted from rotational machines. More specifically, a conditional probability hierarchically reflecting various types of correlation information is theoretically derived by introducing an expression for the multidimensional probability distribution in orthogonal expansion series form. The effectiveness of the proposed theory is experimentally confirmed by applying it to observed data emitted from a rotational machine driven by an electric motor.
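As a very reduced illustration of the idea (a pair of correlation statistics rather than the paper's orthogonal-expansion formulation), the sketch below computes one linear and one simple higher-order cross-correlation coefficient between simultaneously recorded sound and vibration channels; the choice of nonlinear term and any fault-detection threshold are assumptions.

    import numpy as np

    def corr(a, b):
        """Normalised correlation coefficient between two equal-length signals."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    def sound_vibration_features(sound, vibration):
        return {
            "linear": corr(sound, vibration),               # ordinary (2nd-order) link
            "higher_order": corr(sound**2, vibration**2),   # a simple nonlinear link
        }

    # In use, a drift of these coefficients away from a healthy-machine baseline
    # would be flagged as a possible fault indication.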

  7. Effects of Interaural Level and Time Differences on the Externalization of Sound

    DEFF Research Database (Denmark)

    Dau, Torsten; Catic, Jasmina; Santurette, Sébastien

    Distant sound sources in our environment are perceived as externalized and are thus properly localized in both direction and distance. This is due to the acoustic filtering by the head, torso, and external ears, which provides frequency dependent shaping of binaural cues, such as interaural level...... differences (ILDs) and interaural time differences (ITDs). Further, the binaural cues provided by reverberation in an enclosed space may also contribute to externalization. While these spatial cues are available in their natural form when listening to real-world sound sources, hearing-aid signal processing...... is consistent with the physical analysis that showed that a decreased distance to the sound source also reduced the fluctuations in ILDs....
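The binaural cues named in this record can be estimated directly from a left/right signal pair. The sketch below is a generic estimator (broadband ILD from the RMS ratio, ITD from the cross-correlation peak), assuming synchronously sampled channels; it is not tied to any particular hearing-aid processing scheme.

    import numpy as np

    def ild_db(left, right):
        """Interaural level difference in dB (positive = left channel louder)."""
        rms_l = np.sqrt(np.mean(left ** 2)) + 1e-12
        rms_r = np.sqrt(np.mean(right ** 2)) + 1e-12
        return 20.0 * np.log10(rms_l / rms_r)

    def itd_seconds(left, right, fs, max_itd=1e-3):
        """ITD from the cross-correlation peak within +/- max_itd seconds."""
        n = min(len(left), len(right))
        left, right = left[:n], right[:n]
        max_lag = int(max_itd * fs)
        best_lag, best_val = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):      # lag = delay of right vs. left
            if lag >= 0:
                val = np.dot(left[lag:], right[:n - lag])
            else:
                val = np.dot(left[:n + lag], right[-lag:])
            if val > best_val:
                best_lag, best_val = lag, val
        return best_lag / fs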

  8. Generation of Long-time Complex Signals for Testing the Instruments for Detection of Voltage Quality Disturbances

    Science.gov (United States)

    Živanović, Dragan; Simić, Milan; Kokolanski, Zivko; Denić, Dragan; Dimcev, Vladimir

    2018-04-01

A software-supported procedure for the generation of long-time complex test sequences, suitable for testing instruments for the detection of standard voltage quality (VQ) disturbances, is presented in this paper. This solution for test signal generation includes significant improvements of the computer-based signal generator presented and described in a previously published paper [1]. The generator is based on virtual instrumentation software for defining the basic signal parameters, a data acquisition card NI 6343, and a power amplifier for amplification of the output voltage level to the nominal RMS voltage value of 230 V. Definition of the basic signal parameters in the LabVIEW application software is supported using Script files, which allows simple repetition of specific test signals and the combination of several different test sequences into a complex composite test waveform. The basic advantage of this generator compared to similar solutions for signal generation is the possibility of long-time test sequence generation according to predefined complex test scenarios, including various combinations of VQ disturbances defined in accordance with the European standard EN 50160. Experimental verification of the presented signal generator's capability is performed by testing the commercial power quality analyzer Fluke 435 Series II. This paper shows some characteristic complex test signals with various disturbances and the logged data obtained from the tested power quality analyzer.
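The script-driven approach described in this record can be mimicked in a few lines. The sketch below synthesises a 230 V RMS, 50 Hz waveform containing one scripted voltage dip; the dip depth, timing and sample rate are illustrative assumptions, not values prescribed by EN 50160, and streaming the array to a DAC/amplifier stage is left out.

    import numpy as np

    fs = 10_000                                  # samples per second (assumed)
    t = np.arange(0.0, 2.0, 1.0 / fs)            # a 2-second test sequence
    nominal_rms, f_line = 230.0, 50.0

    envelope = np.ones_like(t)
    envelope[(t >= 1.0) & (t < 1.2)] = 0.7       # scripted dip to 70 % of nominal
    u = np.sqrt(2) * nominal_rms * envelope * np.sin(2 * np.pi * f_line * t)

    # Several such scenario arrays can be concatenated into a long composite
    # test waveform before being sent to the output hardware.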

  9. Realtime synthesized sword-sounds in Wii computer games

    DEFF Research Database (Denmark)

    Böttcher, Niels

    This paper presents the current work carried out on an interactive sword fighting game, developed for the Wii controller. The aim of the work is to develop highly interactive action-sound, which is closely mapped to the physical actions of the player. The interactive sword sound is developed using...... a combination of granular synthesis and subtractive synthesis simulating wind. The aim of the work is to test if more interactive sound can affect the way humans interact physically with their body, when playing games with controllers such as the Wii remote....

  10. Low complexity lossless compression of underwater sound recordings.

    Science.gov (United States)

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
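A minimal sketch of this style of low-complexity coder is given below: a first-order predictor followed by Rice coding of the residuals, with the Rice parameter chosen by brute force. It is only meant to estimate an achievable compression factor; it is not the published algorithm, and a real implementation would work block-wise and write an actual bitstream.

    import numpy as np

    def rice_code_length(residuals, k):
        """Total bits needed to Rice-code signed residuals with parameter k."""
        u = np.where(residuals >= 0, 2 * residuals, -2 * residuals - 1)  # zigzag map
        return int(np.sum((u >> k) + 1 + k))

    def estimated_compression_factor(samples, bits_per_sample=16):
        samples = np.asarray(samples, dtype=np.int64)
        residuals = np.diff(samples, prepend=samples[0])     # first-order prediction
        best_bits = min(rice_code_length(residuals, k) for k in range(15))
        return (len(samples) * bits_per_sample) / best_bits

    # Example with a synthetic, strongly correlated 16-bit signal:
    x = (3000 * np.sin(2 * np.pi * 50 * np.arange(0, 1, 1 / 16000))).astype(np.int64)
    print(f"estimated compression factor: {estimated_compression_factor(x):.2f}")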

  11. Leading edge effect in laminar boundary layer excitation by sound

    International Nuclear Information System (INIS)

    Leehey, P.; Shapiro, P.

    1980-01-01

    Essentially plane pure tone sound waves were directed downstream over a heavily damped smooth flat plate installed in a low turbulence (0.04%) subsonic wind tunnel. Laminar boundary layer disturbance growth rates were measured with and without sound excitation and compared with numerical results from spatial stability theory. The data indicate that the sound field and Tollmien-Schlichting (T-S) waves coexist with comparable amplitudes when the latter are damped; moreover, the response is linear. Higher early growth rates occur for excitation by sound than by stream turbulence. Theoretical considerations indicate that the boundary layer is receptive to sound excitation primarily at the test plate leading edge. (orig.)

  12. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

High frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For a more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of a laminated structure and handy ...

  13. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  14. The Process of Optimizing Mechanical Sound Quality in Product Design

    DEFF Research Database (Denmark)

    Eriksen, Kaare; Holst, Thomas

    2011-01-01

The research field concerning optimizing product sound quality is a relatively unexplored area, and may become difficult for designers to operate in. To some degree, sound is a highly subjective parameter, which is normally targeted at sound specialists. This paper describes the theoretical...... and practical background for managing a process of optimizing the mechanical sound quality in a product design by using simple tools and workshops systematically. The procedure is illustrated by a case study of a computer navigation tool (computer mouse or mouse). The process is divided into 4 phases, which...... clarify the importance of product sound, defining perceptive demands identified by users, and, finally, how to suggest mechanical principles for modification of an existing sound design. The optimized mechanical sound design is followed by tests on users of the product in its use context. The result

  15. Computerized Hammer Sounding Interpretation for Concrete Assessment with Online Machine Learning.

    Science.gov (United States)

    Ye, Jiaxing; Kobayashi, Takumi; Iwata, Masaya; Tsuda, Hiroshi; Murakawa, Masahiro

    2018-03-09

    Developing efficient Artificial Intelligence (AI)-enabled systems to substitute the human role in non-destructive testing is an emerging topic of considerable interest. In this study, we propose a novel hammering response analysis system using online machine learning, which aims at achieving near-human performance in assessment of concrete structures. Current computerized hammer sounding systems commonly employ lab-scale data to validate the models. In practice, however, the response signal patterns can be far more complicated due to varying geometric shapes and materials of structures. To deal with a large variety of unseen data, we propose a sequential treatment for response characterization. More specifically, the proposed system can adaptively update itself to approach human performance in hammering sounding data interpretation. To this end, a two-stage framework has been introduced, including feature extraction and the model updating scheme. Various state-of-the-art online learning algorithms have been reviewed and evaluated for the task. To conduct experimental validation, we collected 10,940 response instances from multiple inspection sites; each sample was annotated by human experts with healthy/defective condition labels. The results demonstrated that the proposed scheme achieved favorable assessment accuracy with high efficiency and low computation load.
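The sequential-updating idea can be prototyped with any online learner. The sketch below uses scikit-learn's SGDClassifier with partial_fit as a stand-in; the spectral feature extraction is a placeholder and the two-class healthy/defective labelling follows the annotation described above, but none of this is the authors' exact model.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    CLASSES = np.array([0, 1])                   # 0 = healthy, 1 = defective
    model = SGDClassifier(alpha=1e-4)

    def extract_features(response, n_bins=64):
        """Placeholder: coarse log-magnitude spectrum of one hammering response."""
        spectrum = np.abs(np.fft.rfft(response, n=1024))
        return np.log1p(spectrum[:n_bins])

    def update_with_batch(model, responses, labels, first_call=False):
        X = np.vstack([extract_features(r) for r in responses])
        y = np.asarray(labels)
        if first_call:
            model.partial_fit(X, y, classes=CLASSES)   # classes required on first call
        else:
            model.partial_fit(X, y)                    # incremental (online) update
        return model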

  16. Augmenting the Sound Experience at Music Festivals using Mobile Phones

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Stopczynski, Arkadiusz; Larsen, Jan

    2011-01-01

    In this paper we describe experiments carried out at the Nibe music festival in Denmark involving the use of mobile phones to augment the participants' sound experience at the concerts. The experiments involved N=19 test participants that used a mobile phone with a headset playing back sound...... “in-the-wild” experiments augmenting the sound experience at two concerts at this music festival....

  17. Incidental Learning of Sound Categories is Impaired in Developmental Dyslexia

    Science.gov (United States)

    Gabay, Yafit; Holt, Lori L.

    2015-01-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. PMID:26409017

  18. Sound-contingent visual motion aftereffect

    Directory of Open Access Journals (Sweden)

    Kobayashi Maori

    2011-05-01

Full Text Available Abstract Background After a prolonged exposure to a paired presentation of different types of signals (e.g., color and motion), one of the signals (color) becomes a driver for the other signal (motion). This phenomenon, which is known as contingent motion aftereffect, indicates that the brain can establish new neural representations even in the adult's brain. However, contingent motion aftereffect has been reported only in the visual or auditory domain. Here, we demonstrate that a visual motion aftereffect can be contingent on a specific sound. Results Dynamic random dots moving in an alternating right or left direction were presented to the participants. Each direction of motion was accompanied by an auditory tone of a unique and specific frequency. After a 3-minute exposure, the tones began to exert marked influence on the visual motion perception, and the percentage of dots required to trigger motion perception systematically changed depending on the tones. Furthermore, this effect lasted for at least 2 days. Conclusions These results indicate that a new neural representation can be rapidly established between auditory and visual modalities.

  19. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  1. A stethoscope with wavelet separation of cardiac and respiratory sounds for real time telemedicine implemented on field-programmable gate array

    Science.gov (United States)

    Castro, Víctor M.; Muñoz, Nestor A.; Salazar, Antonio J.

    2015-01-01

Auscultation is one of the most utilized physical examination procedures for listening to lung, heart and intestinal sounds during routine consults and emergencies. Heart and lung sounds overlap in the thorax. An algorithm was used to separate them based on the discrete wavelet transform with multi-resolution analysis, which decomposes the signal into approximations and details. The algorithm was implemented in software and in hardware to achieve real-time signal separation. The heart signal was found in detail eight and the lung signal in approximation six. The hardware was used to separate the signals with a delay of 256 ms. Sending wavelet decomposition data - instead of the separated full signal - allows telemedicine applications to function in real time over low-bandwidth communication channels.
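The separation step can be sketched with PyWavelets: reconstruct only the level-8 detail band as the heart-sound estimate and only the level-6 approximation as the lung-sound estimate, as described above. The wavelet family ('db4') and the absence of any further post-filtering are assumptions of this sketch, not details taken from the paper.

    import numpy as np
    import pywt

    def separate_heart_lung(mixed, wavelet="db4"):
        # Heart estimate: keep only the level-8 detail coefficients.
        coeffs8 = pywt.wavedec(mixed, wavelet, level=8)
        heart_coeffs = [np.zeros_like(c) for c in coeffs8]
        heart_coeffs[1] = coeffs8[1]                  # index 1 holds the level-8 detail
        heart = pywt.waverec(heart_coeffs, wavelet)[:len(mixed)]

        # Lung estimate: keep only the level-6 approximation coefficients.
        coeffs6 = pywt.wavedec(mixed, wavelet, level=6)
        lung_coeffs = [coeffs6[0]] + [np.zeros_like(c) for c in coeffs6[1:]]
        lung = pywt.waverec(lung_coeffs, wavelet)[:len(mixed)]
        return heart, lung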

  2. Free Flight Ground Testing of ADEPT in Advance of the Sounding Rocket One Flight Experiment

    Science.gov (United States)

    Smith, B. P.; Dutta, S.

    2017-01-01

    The Adaptable Deployable Entry and Placement Technology (ADEPT) project will be conducting the first flight test of ADEPT, titled Sounding Rocket One (SR-1), in just two months. The need for this flight test stems from the fact that ADEPT's supersonic dynamic stability has not yet been characterized. The SR-1 flight test will provide critical data describing the flight mechanics of ADEPT in ballistic flight. These data will feed decision making on future ADEPT mission designs. This presentation will describe the SR-1 scientific data products, possible flight test outcomes, and the implications of those outcomes on future ADEPT development. In addition, this presentation will describe free-flight ground testing performed in advance of the flight test. A subsonic flight dynamics test conducted at the Vertical Spin Tunnel located at NASA Langley Research Center provided subsonic flight dynamics data at high and low altitudes for multiple center of mass (CoM) locations. A ballistic range test at the Hypervelocity Free Flight Aerodynamics Facility (HFFAF) located at NASA Ames Research Center provided supersonic flight dynamics data at low supersonic Mach numbers. Execution and outcomes of these tests will be discussed. Finally, a hypothesized trajectory estimate for the SR-1 flight will be presented.

  3. Signal and image processing for monitoring and testing at EDF

    International Nuclear Information System (INIS)

    Georgel, B.; Garreau, D.

    1992-04-01

    The quality of monitoring and non destructive testing devices in plants and utilities today greatly depends on the efficient processing of signal and image data. In this context, signal or image processing techniques, such as adaptive filtering or detection or 3D reconstruction, are required whenever manufacturing nonconformances or faulty operation have to be recognized and identified. This paper reviews the issues of industrial image and signal processing, by briefly considering the relevant studies and projects under way at EDF. (authors). 1 fig., 11 refs

  4. The relationship between target quality and interference in sound zones

    DEFF Research Database (Denmark)

    Baykaner, Khan; Coleman, Phillip; Mason, Russell

    2015-01-01

    Sound zone systems aim to control sound fields in such a way that multiple listeners can enjoy different audio programs within the same room with minimal acoustic interference. Often, there is a trade-off between the acoustic contrast achieved between the zones and the fidelity of the reproduced...... audio program in the target zone. A listening test was conducted to obtain subjective measures of distraction, target quality, and overall quality of listening experience for ecologically valid programs within a sound zoning system. Sound zones were reproduced using acoustic contrast control, planarity...

  5. Experiments on the use of sound as a fish deterrent

    International Nuclear Information System (INIS)

    Turnpenny, A.W.H.; Thatcher, K.P.; Wood, R.; Loeffelman, P.H.

    1993-01-01

    This report describes a series of experimental studies into the potential use of acoustic stimuli to deter fish from water intakes at thermal and hydroelectric power stations. The aim was to enlarge the range of candidate signals for testing, and to apply these in more rigorous laboratory trials and to a wider range of estuarine and marine fish species than was possible in previous initial preliminary studies. The trials were also required to investigate the degree to which fish might become habituated to the sound signals, consequently reducing their effectiveness. The species of fish which were of interest in this study were the Atlantic salmon (Salmo salar), sea trout (Salmo trutta), the shads (Alosa fallax, A. alosa), the European eel (Anguilla anguilla), bass (Dicentrarchus labrax), herring (Clupea harengus), whiting (Merlangius merlangus) and cod (Gadus morhua). All of these species are considered to be of conservation and/or commercial importance in Britain today and are potentially vulnerable to capture by nuclear, fossil-fuelled and tidal generating stations. Based on the effectiveness of the signals observed in these trials, a properly developed and sited acoustic fish deterrent system is expected to reduce fish impingement significantly at water intakes. Field trials at an estuarine power station are recommended. (author)

  6. [Courtship behavior, communicative sound production and resistance to stress in Drosophila mutants with defective agnostic gene, coding for LIMK1].

    Science.gov (United States)

    Popov, A V; Kaminskaia, A N; Savvateeva-Popova, E V

    2009-01-01

To elucidate the role of one of the main elements of the signal cascade of actin remodeling--LIM-kinase 1 (LIMK1)--in the control of animal behavior, we studied the characteristics of courtship behavior, the parameters of acoustic communicative signals and their resistance to heat shock (HS, 37 degrees C, 30 min) in Drosophila melanogaster males from the strain with a mutation in the locus agnostic (agn(ts3)) containing gene CG1848 for LIMK1. The data obtained were compared with the results of our previous similar investigation on wild-type CS males (Popov et al., 2006). Flies were divided into 4 groups. The males of the control groups were not subjected to heat shock. The rest of the males were subjected to heat shock either at the beginning of larval development, when predominantly mushroom body neuroblasts are dividing (groups HS1), or at the prepupal stage, when the brain central complex is developing (groups HS2), or at the imago stage one hour before the test (groups HS3). All males were tested at the age of 5 days. Virgin and fertilized CS females were used as courtship objects. Comparison of the control groups of the two strains--CS and agnostic--has shown that the mutation agn(ts3) has no influence on the main parameters of courtship behavior of intact (not subjected to HS) males (courtship latency, the rapidity of achieving copulation, courtship efficiency) but leads to lowered sexual activity, increased duration of sound trains in the songs and a slight increase in the rate and stability of the singing pacemakers. Agnostic males, in comparison to wild-type males, are more resistant to HS given 1 hour before the test. After HS their courtship intensity does not decrease, and the main parameters of their courtship behavior and communicative sound signals, in comparison to wild-type males, either do not change or appear to be even better stabilized. The frequency of distorted sound pulses (an indicator of frequency of impairments in the activity pattern of neuro

  7. Frog sound identification using extended k-nearest neighbor classifier

    Science.gov (United States)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

Frog sound identification based on vocalization becomes important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest-neighbor and mutual sharing of neighborhood concepts, with the aim of improving the classification performance. It makes a prediction based on who the nearest neighbors of the testing sample are and who considers the testing sample as their nearest neighbor. In order to evaluate the classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds have been segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted by Mel Frequency Cepstrum Coefficients (MFCC). The experimental results have shown that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers, KNN, FKNN, KGNN and MKNN, for all cases.
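As a rough companion to this record, the sketch below extracts a mean MFCC vector per segmented call (librosa is assumed to be available) and classifies it with a nearest-neighbour vote that weights mutual neighbours more heavily. This is a simplified reading of the extended-KNN idea, not the authors' exact algorithm.

    import numpy as np
    import librosa

    def mfcc_vector(y, sr, n_mfcc=13):
        """Mean MFCC vector of one segmented frog call."""
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

    def mutual_knn_predict(X_train, y_train, x_test, k=5):
        d_test = np.linalg.norm(X_train - x_test, axis=1)
        nearest = np.argsort(d_test)[:k]                   # neighbours of the test call
        votes = {}
        for i in nearest:
            # Would the test call also rank among the k nearest neighbours of i?
            d_i = np.linalg.norm(X_train - X_train[i], axis=1)
            d_i[i] = np.inf                                # ignore the self-distance
            kth_radius = np.partition(d_i, k - 1)[k - 1]
            weight = 2.0 if d_test[i] <= kth_radius else 1.0   # mutual neighbours count double
            votes[y_train[i]] = votes.get(y_train[i], 0.0) + weight
        return max(votes, key=votes.get)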

  8. Emission of sound from the mammalian inner ear

    Science.gov (United States)

    Reichenbach, Tobias; Stefanovic, Aleksandra; Nin, Fumiaki; Hudspeth, A. J.

    2013-03-01

The mammalian inner ear, or cochlea, not only acts as a detector of sound but can also produce tones itself. These otoacoustic emissions are a striking manifestation of the mechanical active process that sensitizes the cochlea and sharpens its frequency discrimination. It remains uncertain how these signals propagate back to the middle ear, from which they are emitted as sound. Although reverse propagation might occur through waves on the cochlear basilar membrane, experiments suggest the existence of a second component in otoacoustic emissions. We have combined theoretical and experimental studies to show that mechanical signals can also be transmitted by waves on Reissner's membrane, a second elastic structure within the cochlea. We have developed a theoretical description of wave propagation on the parallel Reissner's and basilar membranes and its role in the emission of distortion products. By scanning laser interferometry we have measured traveling waves on Reissner's membrane in the gerbil, guinea pig, and chinchilla. The results accord with the theory and thus support a role for Reissner's membrane in otoacoustic emission. T. R. holds a Career Award at the Scientific Interface from the Burroughs Wellcome Fund; A. J. H. is an Investigator of Howard Hughes Medical Institute.

  9. Fractal dimension to classify the heart sound recordings with KNN and fuzzy c-mean clustering methods

    Science.gov (United States)

    Juniati, D.; Khotimah, C.; Wardani, D. E. K.; Budayasa, K.

    2018-01-01

Heart abnormalities can be detected from heart sounds. A heart sound can be heard directly with a stethoscope or indirectly by a phonocardiograph, a machine for recording heart sounds. This paper presents the implementation of fractal dimension theory to classify phonocardiograms into a normal heart sound, a murmur, or an extrasystole. The main algorithm used to calculate the fractal dimension was Higuchi's algorithm. There were two steps in classifying the phonocardiograms: feature extraction and classification. For feature extraction, we used the Discrete Wavelet Transform to decompose the heart sound signal into several sub-bands depending on the selected level. After the decomposition process, the signal was processed using the Fast Fourier Transform (FFT) to determine the spectral frequency. The fractal dimension of the FFT output was calculated using Higuchi's algorithm. The classification of the fractal dimensions of all phonocardiograms was done with KNN and fuzzy c-means clustering methods. Based on the research results, the best accuracy obtained was 86.17%, using feature extraction by DWT decomposition at level 3 with k_max = 50, 5-fold cross-validation, and 5 neighbors in the K-NN algorithm. Meanwhile, for fuzzy c-means clustering, the accuracy was 78.56%.
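Higuchi's algorithm itself is short enough to reproduce. The sketch below is a generic implementation for a 1-D sequence (such as the FFT magnitudes described above) with k_max = 50 as used in the study; the surrounding DWT/FFT steps and the classifiers are omitted.

    import numpy as np

    def higuchi_fd(x, k_max=50):
        """Higuchi fractal dimension of a 1-D signal."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        log_inv_k, log_len = [], []
        for k in range(1, k_max + 1):
            lengths = []
            for m in range(k):                         # subsequences x[m], x[m+k], ...
                idx = np.arange(m, n, k)
                if len(idx) < 2:
                    continue
                dist = np.sum(np.abs(np.diff(x[idx])))
                norm = (n - 1) / ((len(idx) - 1) * k)  # Higuchi's normalisation factor
                lengths.append(dist * norm / k)
            if lengths:
                log_inv_k.append(np.log(1.0 / k))
                log_len.append(np.log(np.mean(lengths)))
        slope, _ = np.polyfit(log_inv_k, log_len, 1)   # slope of log L(k) vs. log(1/k)
        return slope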

  10. Computer soundcard as an AC signal generator and oscilloscope for the physics laboratory

    Science.gov (United States)

    Sinlapanuntakul, Jinda; Kijamnajsuk, Puchong; Jetjamnong, Chanthawut; Chotikaprakhan, Sutharat

    2018-01-01

The purpose of this paper is to develop both an AC signal generator and a dual-channel oscilloscope based on a standard personal computer equipped with a sound card, as part of the laboratory work for fundamental physics and introductory electronics classes. The setup turns the computer into a two-channel measurement device that provides the sample rate, simultaneous sampling, frequency range, filters and other essential capabilities required to perform amplitude, phase and frequency measurements of AC signals. The AC signal is generated simultaneously from the same computer sound card output in any waveform, such as sine, square, triangle, sawtooth, pulsed, swept sine, white noise, etc. This converts an inexpensive PC sound card into a powerful device, which allows the students to measure physical phenomena with their own PCs either at home or at the university. A graphical user interface software was developed for control and analysis, including facilities for data recording, signal processing and real-time measurement display. The result is an expanded utility for self-learning by students in the field of electronics, covering both AC and DC circuits, as well as sound and vibration experiments.
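A minimal version of the generator-plus-oscilloscope idea can be put together with the 'sounddevice' package, as sketched below. A physical loopback from the sound-card output to its two inputs is assumed, and no absolute voltage calibration is attempted; this illustrates the concept rather than the authors' GUI software.

    import numpy as np
    import sounddevice as sd

    fs = 48_000                                      # sound-card sample rate
    t = np.arange(0.0, 1.0, 1.0 / fs)
    stimulus = 0.5 * np.sin(2 * np.pi * 440.0 * t).astype(np.float32)

    # Play the test tone and record two input channels at the same time.
    recorded = sd.playrec(stimulus, samplerate=fs, channels=2)
    sd.wait()                                        # block until playback/recording end

    for ch in range(recorded.shape[1]):
        rms = np.sqrt(np.mean(recorded[:, ch] ** 2))
        print(f"channel {ch}: RMS = {rms:.4f} (full scale = 1.0)")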

  11. MO-FG-BRA-02: A Feasibility Study of Integrating Breathing Audio Signal with Surface Surrogates for Respiratory Motion Management

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Y; Zhu, X; Zheng, D; Li, S; Ma, R; Zhang, M; Fan, Q; Wang, X; Verma, V; Zhou, S [University of Nebraska Medical Center, Omaha, NE (United States); Tang, X [Memorial Sloan Kettering Cancer Center, West Harrison, NY (United States)

    2016-06-15

Purpose: Tracking the surrogate placed on the patient's skin surface sometimes leads to problematic signals for certain patients, such as shallow breathers. This in turn impairs the 4D CT image quality and dosimetric accuracy. In this pilot study, we explored the feasibility of monitoring human breathing motion by integrating a breathing sound signal with surface surrogates. Methods: The breathing sound signals were acquired through a microphone attached near the volunteer's nostrils, and the breathing curves were analyzed using a low-pass filter. Simultaneously, the Real-time Position Management™ (RPM) system from Varian was employed on a volunteer to monitor respiratory motion, including both shallow and deep breathing modes. A similar experiment was performed using the Calypso system, with three beacons taped on the volunteer's abdominal region to capture breathing motion. The period of each breathing curve was calculated with autocorrelation functions. The coherence and consistency between breathing signals acquired using the different methods were examined. Results: Clear breathing patterns were revealed by the sound signal, which was coherent with the signals obtained from both the RPM system and the Calypso system. For shallow breathing, the periods of the breathing cycle were 3.00±0.19 sec (sound) and 3.00±0.21 sec (RPM); for deep breathing, the periods were 3.49±0.11 sec (sound) and 3.49±0.12 sec (RPM). Compared with the 4.54±0.66 sec period recorded by the Calypso system, the sound signal measured 4.64±0.54 sec. The additional sound signal could supplement surface monitoring and provide new parameters to model hysteretic lung motion. Conclusion: Our preliminary study shows that the breathing sound signal can provide a comparable way to the RPM system to evaluate respiratory motion. Its instantaneous and robust characteristics make it potentially usable either independently or as an auxiliary method to manage respiratory motion in radiotherapy.
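The autocorrelation-based period estimate mentioned above is straightforward to prototype. The sketch below low-pass filters a crude amplitude envelope of the microphone signal and reads the breathing period off the first dominant autocorrelation peak; the cut-off frequency, decimation rate and search range are assumptions, not the study's parameters.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def breathing_period(audio, fs, env_fs=100, cutoff_hz=2.0,
                         min_period_s=1.5, max_period_s=8.0):
        """Estimate the breathing period (s) from a nostril microphone signal."""
        envelope = np.abs(audio)                           # crude amplitude envelope
        sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
        breath = sosfiltfilt(sos, envelope)

        step = max(1, int(fs // env_fs))                   # decimate for a fast autocorrelation
        breath = breath[::step]
        breath = breath - np.mean(breath)
        eff_fs = fs / step

        ac = np.correlate(breath, breath, mode="full")[len(breath) - 1:]
        lo, hi = int(min_period_s * eff_fs), int(max_period_s * eff_fs)
        lag = lo + int(np.argmax(ac[lo:hi]))               # first dominant peak in range
        return lag / eff_fs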

  13. Sound production and pectoral spine locking in a Neotropical catfish (Iheringichthys labrosus, Pimelodidae

    Directory of Open Access Journals (Sweden)

    Javier S. Tellechea

Full Text Available Catfishes may have two sonic organs: pectoral spines for stridulation and swimbladder drumming muscles. The aim of this study was to characterize the sound production of the catfish Iheringichthys labrosus. Both male and female I. labrosus emit two different types of sounds: stridulatory sounds (655.8 ± 230 Hz), consisting of a train of pulses, and drumming sounds (220 ± 46 Hz), which are composed of single-pulse harmonic signals. Stridulatory sounds are emitted during abduction of the pectoral spine. At the base of the spine there is a dorsal process that bears a series of ridges on its latero-ventral surface, and by pressing the ridges against the groove (with an unspecialized rough surface) during a fin sweep, the animal produces a series of short pulses. Drumming sounds are produced by an extrinsic sonic muscle, originating on a flat tendon of the transverse process of the fourth vertebra and inserting on the rostral and ventral surface of the swimbladder. The sounds produced by both mechanisms are emitted in distress situations. Distress was induced by manipulating the fish in a laboratory tank while sounds were recorded. Our results indicate that the catfish initially emits a stridulatory sound, which is followed by a drumming sound. Simultaneous production of stridulatory and drumming sounds was also observed. The catfish drumming sounds were lower in dominant frequency than the stridulatory sounds, and also exhibited a small degree of dominant frequency modulation. Another behaviour observed in this catfish was pectoral spine locking. This reaction was always observed before distress sound production. As other authors have outlined, our results suggest that in the catfish I. labrosus stridulatory and drumming sounds may function primarily as distress calls.

  14. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

Sound absorption denotes the transformation of sound energy into heat. It is, for instance, employed to design the acoustics of rooms. The noise emitted by machinery and plants must be reduced before it reaches a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components at the walls with well-defined absorption characteristics, which are adjusted to the corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise-intensive environments into the neighbourhood.

  15. A review of intelligent systems for heart sound signal analysis.

    Science.gov (United States)

    Nabih-Ali, Mohammed; El-Dahshan, El-Sayed A; Yahia, Ashraf S

    2017-10-01

    Intelligent computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of physicians and reduce the time required for accurate diagnosis, providing them with a suggested diagnosis of heart disease. The objective of this paper is to review recently published preprocessing, feature extraction and classification techniques for phonocardiogram (PCG) signal analysis and the state of the art in the field. The literature reviewed here shows the potential of machine learning techniques as a design tool for PCG CAD systems and reveals that CAD for PCG signal analysis is still an open problem. Related studies are compared in terms of their datasets, feature extraction techniques and classifiers. Current achievements and limitations in developing CAD systems for PCG signal analysis using machine learning techniques are presented and discussed. In the light of this review, a number of future research directions for PCG signal analysis are provided.

  16. Effects of Sound on the Behavior of Wild, Unrestrained Fish Schools.

    Science.gov (United States)

    Roberts, Louise; Cheesman, Samuel; Hawkins, Anthony D

    2016-01-01

    To assess and manage the impact of man-made sounds on fish, we need information on how behavior is affected. Here, wild unrestrained pelagic fish schools were observed under quiet conditions using sonar. Fish were exposed to synthetic piling sounds at different levels using custom-built sound projectors, and behavioral changes were examined. In some cases, the depth of schools changed after noise playback; full dispersal of schools was also evident. The methods we developed for examining the responses of unrestrained fish to sound exposure have proved successful and may allow further testing of the relationship between responsiveness and sound level.

  17. Improving Robustness against Environmental Sounds for Directing Attention of Social Robots

    DEFF Research Database (Denmark)

    Thomsen, Nicolai Bæk; Tan, Zheng-Hua; Lindberg, Børge

    2015-01-01

    This paper presents a multi-modal system for deciding where to direct the attention of a social robot in a dialog scenario that is robust against environmental sounds (door slamming, phone ringing, etc.) and short speech segments. The method combines voice activity detection (VAD) and sound source localization (SSL) and furthermore applies post-processing to the SSL output to filter out short sounds. The system is tested against a baseline system in four different real-world experiments in which different sounds are used as interference. The results are promising and show a clear improvement.
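
    A minimal sketch of the combination logic described above: frame-wise VAD decisions gate the SSL estimates, and sound bursts shorter than a minimum duration are discarded before the robot redirects its attention. Frame length, the duration threshold and the averaging of direction-of-arrival (DOA) estimates are illustrative assumptions; the paper's actual VAD and SSL algorithms are not reproduced here.

      from dataclasses import dataclass

      @dataclass
      class AttentionFilter:
          frame_ms: float = 32.0        # duration of one analysis frame
          min_speech_ms: float = 400.0  # ignore bursts shorter than this

          def __post_init__(self):
              self._run = 0             # consecutive speech frames so far
              self._angles = []         # DOA estimates collected during the run

          def update(self, vad_is_speech, doa_deg):
              """Feed one frame; return an attention direction only when a
              sufficiently long speech segment ends, so door slams and other
              short bursts never redirect the robot."""
              if vad_is_speech:
                  self._run += 1
                  self._angles.append(doa_deg)
                  return None
              decision = None
              if self._run * self.frame_ms >= self.min_speech_ms:
                  decision = sum(self._angles) / len(self._angles)   # average DOA
              self._run, self._angles = 0, []
              return decision

      # Example: a 0.5 s utterance from 100 degrees followed by silence.
      f = AttentionFilter()
      frames = [(True, 100.0)] * 16 + [(False, 0.0)]
      print([f.update(v, a) for v, a in frames][-1])   # -> 100.0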

  18. Behavioral responses by Icelandic White-Beaked Dolphins (Lagenorhynchus albirostris) to playback sounds

    DEFF Research Database (Denmark)

    Rasmussen, Marianne H.; Atem, Ana; Miller, Lee A.

    2016-01-01

    The aim of this study was to investigate how wild white-beaked dolphins (Lagenorhynchus albirostris) respond to the playback of novel, anthropogenic sounds. We used amplitude-modulated tones and synthetic pulse-bursts (some authors in the literature use the term “burst pulse” meaning a bu…). … a response and a change in the natural behavior of a marine mammal—in this case, wild white-beaked dolphins… The estimated received levels for tonal signals were from 110 to 160 dB and for pulse-bursts were 153 to 166 dB re 1 μPa (peak-to-peak). Playback of a file with no signal served as a no-sound control in all experiments. The animals responded to all acoustic signals with nine different behavioral responses: (1) circling the array, (2) turning around and approaching the camera, (3) underwater tail slapping, (4) emitting bubbles, (5) turning their belly towards the set-up, (6) emitting pulse-bursts towards the loudspeaker, (7) an increase in swim speed, (8) a change in swim direction, and (9) jumping. A total of 157…

  19. Sound Exposure During Outdoor Music Festivals

    Science.gov (United States)

    Tronstad, Tron V.; Gelderblom, Femke B.

    2016-01-01

    Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing-loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals where only one was regulated by the Norwegian guideline for concert and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure. PMID:27569410

  20. Sound exposure during outdoor music festivals

    Directory of Open Access Journals (Sweden)

    Tron V Tronstad

    2016-01-01

    Full Text Available Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival’s duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing-loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals where only one was regulated by the Norwegian guideline for concert and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization’s recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization’s recommendations. The results also show that front-of-house measurements reliably predict participant exposure.

  1. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or another medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  2. From the Bob/Kirk effect to the Benoit/Éric effect: Testing the mechanism of name sound symbolism in two languages.

    Science.gov (United States)

    Sidhu, David M; Pexman, Penny M; Saint-Aubin, Jean

    2016-09-01

    Although it is often assumed that language involves an arbitrary relationship between form and meaning, many studies have demonstrated that nonwords like maluma are associated with round shapes, while nonwords like takete are associated with sharp shapes (i.e., the Maluma/Takete effect, Köhler, 1929/1947). The majority of the research on sound symbolism has used nonwords, but Sidhu and Pexman (2015) recently extended this effect to existing labels: real English first names (i.e., the Bob/Kirk effect). In the present research we tested whether the effects of name sound symbolism generalize to French speakers (Experiment 1) and French names (Experiment 2). In addition, we assessed the underlying mechanism of name sound symbolism, investigating the roles of phonology and orthography in the effect. Results showed that name sound symbolism does generalize to French speakers and French names. Further, this robust effect remained the same when names were presented in a curved vs. angular font (Experiment 3), or when the salience of orthographic information was reduced through auditory presentation (Experiment 4). Together these results suggest that the Bob/Kirk effect is pervasive, and that it is based on fundamental features of name phonemes. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Digital signal processor for silicon audio playback devices; Silicon audio saisei kikiyo digital signal processor

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    The TC9446F series of digital audio signal processors (DSPs) has been developed for silicon audio playback devices that use memory media such as flash memory, for DVD players, and for AV devices such as TV sets. It supports AAC (advanced audio coding, 2ch) and MP3 (MPEG1 Layer3), the audio compression techniques used for transmitting music over the Internet. It also supports compression formats such as Dolby Digital, DTS (digital theater system) and MPEG2 audio, which are adopted for DVDs. It can carry built-in audio signal processing programs, e.g., Dolby ProLogic, equalization, sound field control, and 3D sound. The TC9446XB has been newly added to the line-up; it adopts an FBGA (fine pitch ball grid array) package for portable audio devices. (translated by NEDO)

  4. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    Science.gov (United States)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution
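
    An illustrative sketch of the random, dynamic down-sampling idea described above: each inversion iteration draws a fresh random subset of soundings, and the subset grows as the regularization is relaxed. The scaling rule, parameter names and minimum subset size are assumptions for illustration, not the authors' adaptive algorithm.

      import numpy as np

      def pick_soundings(n_total, beta, beta_max, n_min=200, rng=None):
          """Indices of the soundings to use in the current inversion iteration."""
          rng = np.random.default_rng() if rng is None else rng
          # Few soundings while regularization is strong, more as beta decreases.
          frac = min(1.0, max(n_min / n_total, 1.0 - beta / beta_max))
          n_use = max(n_min, int(round(frac * n_total)))
          return rng.choice(n_total, size=min(n_use, n_total), replace=False)

      # Example: 50 000 soundings, regularization relaxed over iterations.
      for beta in (1.0, 0.3, 0.05):
          print(beta, len(pick_soundings(50_000, beta, beta_max=1.0)))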

  5. Recycling ceramic industry wastes in sound absorbing materials

    Directory of Open Access Journals (Sweden)

    C. Arenas

    2016-10-01

    Full Text Available The scope of this investigation is to develop a material mainly composed (80% w/w) of ceramic wastes that can be applied in the manufacture of road-traffic noise reducing devices. The product has been characterized with respect to its acoustic, physical and mechanical properties by measuring the sound absorption coefficient at normal incidence, the open void ratio, the density and the compressive strength. Since the sound absorbing behavior of a porous material is related to the size of the pores and the thickness of the specimen tested, the influence of the particle grain size of the ceramic waste and of the sample thickness on the properties of the final product has been analyzed. The results obtained have been compared to a porous concrete made of crushed granite aggregate, a reference commercial material traditionally used in similar applications. Compositions with coarse particles showed greater sound absorption than compositions made with finer particles, besides presenting better sound absorption behavior than the reference porous concrete. Therefore, a ceramic waste-based porous concrete can potentially be recycled in the field of highway noise barriers.

  6. Puget Sound Tidal Energy In-Water Testing and Development Project Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Collar, Craig W

    2012-11-16

    Tidal energy represents potential for the generation of renewable, emission free, environmentally benign, and cost effective energy from tidal flows. A successful tidal energy demonstration project in Puget Sound, Washington may enable significant commercial development resulting in important benefits for the northwest region and the nation. This project promoted the United States Department of Energy's Wind and Hydropower Technologies Program's goals of advancing the commercial viability, cost-competitiveness, and market acceptance of marine hydrokinetic systems. The objective of the Puget Sound Tidal Energy Demonstration Project is to conduct in-water testing and evaluation of tidal energy technology as a first step toward potential construction of a commercial-scale tidal energy power plant. The specific goal of the project phase covered by this award was to conduct all activities necessary to complete engineering design and obtain construction approvals for a pilot demonstration plant in the Admiralty Inlet region of the Puget Sound. Public Utility District No. 1 of Snohomish County (The District) accomplished the objectives of this award through four tasks: Detailed Admiralty Inlet Site Studies, Plant Design and Construction Planning, Environmental and Regulatory Activities, and Management and Reporting. Pre-Installation studies completed under this award provided invaluable data used for site selection, environmental evaluation and permitting, plant design, and construction planning. However, these data gathering efforts are not only important to the Admiralty Inlet pilot project. Lessons learned, in particular environmental data gathering methods, can be applied to future tidal energy projects in the United States and other parts of the world. The District collaborated extensively with project stakeholders to complete the tasks for this award. This included Federal, State, and local government agencies, tribal governments, environmental groups, and

  7. Contemporary methods for realization and estimation of efficiency of 3D audio technology application for sound interface improvement of an aircraft cabin

    Directory of Open Access Journals (Sweden)

    O. N. Korsun

    2014-01-01

    Full Text Available The high information load on the crew is one of the main problems of modern piloted aircraft; research on improving the form in which data are presented, especially in critical situations, is therefore a challenge. The article considers one opportunity to improve the interface of a modern pilot's cabin: the use of spatial sound (3D audio) technology. 3D audio is a technology that recreates spatially directed sound in earphones or via loudspeakers. Spatial audio cues that convey not only a warning but also the direction from which the danger comes can reduce the response time to an event and therefore increase the situational safety of flight. Since these cues are assumed to be delivered through the pilot's headset, realization of the technology via earphones is discussed. The prevailing hypothesis explaining the human ability to localize a sound source asserts that the listener evaluates the distortion of the signal spectrum caused by interaction with the head and auricle, which depends on the position of the source. These spectral variations are described by the Head Related Impulse Response (HRIR) and the Head Related Transfer Function (HRTF). HRIRs are measured on humans or dummies; at present the most comprehensive public HRIR library is the CIPIC HRTF Database of the CIPIC Interface Laboratory at UC Davis. To obtain the 3D audio effect, the mono signal is filtered through linear digital filters whose impulse responses are the listener-dependent HRIRs of the left and right ear for the chosen direction; the two results are combined into a stereo file and reproduced over the earphones. This scheme was implemented in Matlab, and the resulting software was used in experiments to estimate the quantitative characteristics of the technology. For processing and subsequent experiments the following sound signals were chosen: a fragment of the classical music piece "Polovetsky
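
    A minimal Python sketch of the filtering chain described above: the mono cue is convolved with the left- and right-ear HRIRs for the chosen direction and written as a stereo file. The HRIR arrays would normally be taken from a database such as the CIPIC HRTF Database; here they are random placeholders, and the sample rate and file name are illustrative assumptions.

      import numpy as np
      from scipy.signal import fftconvolve
      from scipy.io import wavfile

      def spatialize(mono, hrir_left, hrir_right):
          """Convolve a mono signal with per-ear HRIRs; return an (N, 2) stereo array."""
          left = fftconvolve(mono, hrir_left)
          right = fftconvolve(mono, hrir_right)
          stereo = np.stack([left, right], axis=1)
          return stereo / np.max(np.abs(stereo))        # normalize to avoid clipping

      fs = 44100
      mono = np.random.randn(fs)                        # placeholder for the audio cue
      hrir_l = np.random.randn(200)                     # placeholder left-ear HRIR
      hrir_r = np.random.randn(200)                     # placeholder right-ear HRIR
      wavfile.write("cue_3d.wav", fs, spatialize(mono, hrir_l, hrir_r).astype(np.float32))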

  8. Sound absorption study on acoustic panel from kapok fiber and egg tray

    Science.gov (United States)

    Kaamin, Masiri; Mahir, Nurul Syazwani Mohd; Kadir, Aslila Abd; Hamid, Nor Baizura; Mokhtar, Mardiha; Ngadiman, Norhayati

    2017-12-01

    Noise is sound, especially sound that is loud, unpleasant or disruptive. The level of noise can be reduced by using sound absorption panels. Panels currently on the market use synthetic fibers that can be harmful to consumers' health, which has drawn attention to natural fibers as sound absorbing materials. This study was therefore conducted to investigate the potential of sound absorption panels made from egg trays and kapok fiber. The impedance tube test was used to obtain the sound absorption coefficient (SAC). The results showed good sound absorption at low frequencies from 0 Hz up to 900 Hz, where the maximum absorption coefficient was 0.950, while the maximum absorption at high frequencies was 0.799. The material produced a noise reduction coefficient (NRC) of 0.57, indicating that it is highly absorbing. In addition, a reverberation room test was carried out to obtain the reverberation time (RT) in seconds. Overall the panel showed good results at low frequencies between 0 and 1500 Hz; in that frequency range the maximum reverberation time for the panel was 3.784 seconds, compared to 5.798 seconds for an empty room. This study indicates that kapok fiber and egg trays have potential as cheap, environmentally friendly absorption-panel materials for absorbing sound at low frequencies.
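
    A worked example of the noise reduction coefficient quoted above: the NRC is the average of the sound absorption coefficients at 250, 500, 1000 and 2000 Hz, rounded to the nearest 0.05. The coefficient values in the example are illustrative, not the panel's measured data.

      def nrc(sac_250, sac_500, sac_1000, sac_2000):
          """Noise reduction coefficient from four octave-band absorption coefficients."""
          mean = (sac_250 + sac_500 + sac_1000 + sac_2000) / 4.0
          return round(mean * 20) / 20          # round to the nearest 0.05

      print(nrc(0.35, 0.60, 0.70, 0.55))        # -> 0.55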

  9. Hearing Loss Signals Need for Diagnosis

    Science.gov (United States)

    ... you’re talking loudly? Thinking about ordering a hearing aid or sound amplifier from a magazine or ...

  10. Development of a Student-Centered Instrument to Assess Middle School Students' Conceptual Understanding of Sound

    Science.gov (United States)

    Eshach, Haim

    2014-01-01

    This article describes the development and field test of the Sound Concept Inventory Instrument (SCII), designed to measure middle school students' concepts of sound. The instrument was designed based on known students' difficulties in understanding sound and the history of science related to sound and focuses on two main aspects of sound: sound…

  11. SOUND-SPEED TOMOGRAPHY USING FIRST-ARRIVAL TRANSMISSION ULTRASOUND FOR A RING ARRAY

    Energy Technology Data Exchange (ETDEWEB)

    HUANG, LIANJIE [Los Alamos National Laboratory]; QUAN, YOULI [Los Alamos National Laboratory]

    2007-01-31

    Sound-speed tomography images can be used for cancer detection and diagnosis. Tumors have generally higher sound speeds than the surrounding tissue. Quality and resolution of tomography images are primarily determined by the insonification/illumination aperture of ultrasound and the capability of the tomography method for accurately handling heterogeneous nature of the breast. We investigate the capability of an efficient time-of-flight tomography method using transmission ultrasound from a ring array for reconstructing sound-speed images of the breast. The method uses first arrival times of transmitted ultrasonic signals emerging from non-beamforming ultrasound transducers located around a ring. It properly accounts for ray bending within the breast by solving the eikonal equation using a finite-difference scheme. We test and validate the time-of-flight transmission tomography method using synthetic data for numerical breast phantoms containing various objects. In our simulation, the objects are immersed in water within a ring array. Two-dimensional synthetic data are generated using a finite-difference scheme to solve acoustic-wave equation in heterogeneous media. We study the reconstruction accuracy of the tomography method for objects with different sizes and shapes as well as different perturbations from the surrounding medium. In addition, we also address some specific data processing issues related to the tomography. Our tomography results demonstrate that the first-arrival transmission tomography method can accurately reconstruct objects larger than approximately five wavelengths of the incident ultrasound using a ring array.
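
    A minimal sketch of one ingredient of the method above: picking the first-arrival time of a transmitted ultrasound trace by thresholding its envelope. The threshold level, sampling rate and synthetic trace are assumptions for illustration; the eikonal-equation tomography itself is not reproduced here.

      import numpy as np
      from scipy.signal import hilbert

      def first_arrival(trace, fs, rel_threshold=0.2):
          """Time (s) at which the signal envelope first exceeds a fraction of its maximum."""
          env = np.abs(hilbert(trace))
          idx = np.argmax(env > rel_threshold * env.max())
          return idx / fs

      # Synthetic trace: a 1 MHz burst arriving 60 microseconds after time zero.
      fs = 5e6                                    # 5 MHz sampling rate (illustrative)
      t = np.arange(0, 200e-6, 1 / fs)
      trace = np.zeros_like(t)
      start = int(60e-6 * fs)
      trace[start:] = np.sin(2 * np.pi * 1e6 * t[:t.size - start])
      print(f"picked arrival = {first_arrival(trace, fs) * 1e6:.1f} microseconds")   # about 60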

  12. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...

  13. [Synchronous playing and acquiring of heart sounds and electrocardiogram based on labVIEW].

    Science.gov (United States)

    Dan, Chunmei; He, Wei; Zhou, Jing; Que, Xiaosheng

    2008-12-01

    This paper describes a comprehensive system that acquires heart sounds and the electrocardiogram (ECG) in parallel, synchronizes their display, and plays the heart sounds so that auscultation and phonocardiogram reading can be tied together. The hardware system, built around a C8051F340 microcontroller, acquires the heart sound and ECG synchronously and then sends them to their respective indicators. Heart sounds are displayed and played simultaneously by controlling the moment of writing to the indicator and to the sound output device. In clinical testing, heart sounds could be successfully located with respect to the ECG and played in real time.

  14. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  15. Brief report: sound output of infant humidifiers.

    Science.gov (United States)

    Royer, Allison K; Wilson, Paul F; Royer, Mark C; Miyamoto, Richard T

    2015-06-01

    The sound pressure levels (SPLs) of common infant humidifiers were determined to identify the likely sound exposure to infants and young children. This primary investigative research study was completed at a tertiary-level academic medical center otolaryngology and audiology laboratory. Five commercially available humidifiers were obtained from brick-and-mortar infant supply stores. Sound levels were measured at 20-, 100-, and 150-cm distances at all available humidifier settings. Two of 5 (40%) humidifiers tested had SPL readings greater than the recommended hospital infant nursery levels (50 dB) at distances up to 100 cm. In this preliminary study, it was demonstrated that humidifiers marketed for infant nurseries may produce appreciably high decibel levels. Further characterization of the effect of humidifier design on SPLs and further elucidation of ambient sound levels associated with hearing risk are necessary before definitive conclusions and recommendations can be made. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.

  16. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  17. Detecting interferences with iOS applications to measure speed of sound

    Science.gov (United States)

    Yavuz, Ahmet; Kağan Temiz, Burak

    2016-01-01

    Traditional experiments measuring the speed of sound consist of studying harmonics by changing the length of a glass tube closed at one end. In these experiments, the sound source and observer are outside the tube. In this paper, we propose a modification of this old experiment by studying destructive interference in a pipe using a headset, an iPhone and an iPad. The iPhone is used as the emitter with a signal generator application, and the iPad as the receiver with a spectrogram application. Two experiments are carried out: one with the emitter inside the tube and the receiver outside, and vice versa. We conclude that it is possible to measure the speed of sound adequately and easily, even with a cup or a can of coke, using the method described in this paper.
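
    A worked example of the kind of calculation behind such measurements, assuming the classic tube analysis: consecutive resonance (or interference) frequencies in a tube of length L are spaced by delta_f = v / (2 L), so v = 2 * L * delta_f. The frequencies and tube length below are illustrative, not data from the paper.

      import numpy as np

      def speed_of_sound(resonance_freqs_hz, tube_length_m):
          """Estimate v from the average spacing of consecutive resonance frequencies."""
          delta_f = np.mean(np.diff(np.sort(resonance_freqs_hz)))
          return 2.0 * tube_length_m * delta_f

      # Illustrative minima found with a spectrogram app for a 0.35 m tube.
      print(f"v = {speed_of_sound([490, 980, 1470, 1960], 0.35):.0f} m/s")   # about 343 m/s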

  18. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Full Text Available Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  19. SIBYLLE: an expert system for the interpretation in real time of mono-dimensional signals; application to vocal signal

    International Nuclear Information System (INIS)

    Minault, Sophie

    1987-01-01

    This report presents an interactive tool for the computer-aided building of signal processing and interpretation systems. The tool includes three main parts: an expert system, a rule compiler, and a real-time procedural system. The expert system allows the acquisition of knowledge about the signal; this knowledge is formalized as a set of rewriting (syntactic) rules and entered through an interactive interface. The compiler compiles the knowledge base (the set of rules) and generates a procedural system equivalent to the expert system. The generated procedural system is fixed but much faster than the expert system: it can work in real time. The expert system is used during the experimental phase on a small corpus of data, where the knowledge base is tested and, if necessary, modified through the interactive interface. Once the knowledge base is stable enough, the procedural system is generated and tested on a larger data corpus. This allows significant statistical studies to be performed, which generally lead to corrections at the expert-system level. The whole constitutes a tool that combines the flexibility of expert systems with the speed of procedural systems. It has been used to build a set of recognition-rule modules for the vocal signal: a sound/silence detection module, a voiced/unvoiced segmentation module, and a synchronous pitch detection module. Its possibilities are not limited to the vocal signal but extend to the processing of any one-dimensional signal. A feasibility study has been carried out for an electrocardiogram application. (author) [fr]

  20. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

    This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situated-ness of sound. Existing discourses on “spatial sound” privile...

  1. Bubbles that Change the Speed of Sound

    Science.gov (United States)

    Planinšič, Gorazd; Etkina, Eugenia

    2012-11-01

    The influence of bubbles on sound has long attracted the attention of physicists. In his 1920 book Sir William Bragg described sound absorption caused by foam in a glass of beer tapped by a spoon. Frank S. Crawford described and analyzed the change in the pitch of sound in a similar experiment and named the phenomenon the "hot chocolate effect." In this paper we describe a simple and robust experiment that allows an easy audio and visual demonstration of the same effect (unfortunately without the chocolate) and offers several possibilities for student investigations. In addition to the demonstration of the above effect, the experiments described below provide an excellent opportunity for students to devise and test explanations with simple equipment.
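
    A hedged sketch of why a small volume fraction of bubbles lowers the sound speed so dramatically, the effect demonstrated above: Wood's equation mixes the densities and compressibilities of the liquid and the gas. The property values are nominal ones for water and air, not measurements from the experiment.

      import numpy as np

      def wood_speed(void_fraction, rho_l=1000.0, c_l=1480.0, rho_g=1.2, c_g=343.0):
          """Low-frequency sound speed (m/s) in a liquid with a given gas volume fraction."""
          kappa_l = 1.0 / (rho_l * c_l**2)             # compressibility of the liquid
          kappa_g = 1.0 / (rho_g * c_g**2)             # compressibility of the gas
          rho_mix = (1 - void_fraction) * rho_l + void_fraction * rho_g
          kappa_mix = (1 - void_fraction) * kappa_l + void_fraction * kappa_g
          return 1.0 / np.sqrt(rho_mix * kappa_mix)

      for phi in (0.0, 0.001, 0.01, 0.1):
          print(f"void fraction {phi:5.3f}: c = {wood_speed(phi):6.0f} m/s")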

  2. Basic live sound reinforcement a practical guide for starting live audio

    CERN Document Server

    Biederman, Raven

    2013-01-01

    Access and interpret manufacturer spec information, find shortcuts for plotting measure and test equations, and learn how to begin your journey towards becoming a live sound professional. Land and perform your first live sound gigs with this guide that gives you just the right amount of information. Don't get bogged down in details intended for complex and expensive equipment and Madison Square Garden-sized venues. Basic Live Sound Reinforcement is a handbook for audio engineers and live sound enthusiasts performing in small venues from one-mike coffee shops to clubs. With their combined ye

  3. Transformer sound level caused by core magnetostriction and winding stress displacement variation

    Directory of Open Access Journals (Sweden)

    Chang-Hung Hsu

    2017-05-01

    Full Text Available Magnetostriction caused by the varying excitation of the magnetic core, together with the current conducted by the windings wired to the core, has a significant impact on a power transformer. This paper presents the sound of factory transformers measured in no-load tests before on-site delivery, and it also discusses the winding characteristics obtained from full-load tests. Simulations and measurements are performed for several transformers with capacities ranging from 15 to 60 MVA and voltages from 132 kV (high voltage) to 33 kV (low voltage). The study compares the sound levels of the transformers in the no-load test (core/magnetostriction) and the full-load test (winding/displacement ε). The difference between the simulated and the measured sound levels is about 3 dB. The results show that the sound level depends on several parameters, including winding displacement, capacity, and the mass of the core and windings. Comparative results for the magnetic induction of the cores and the electromagnetic force of the windings under no-load and full-load conditions are examined.

  4. Incidental learning of sound categories is impaired in developmental dyslexia.

    Science.gov (United States)

    Gabay, Yafit; Holt, Lori L

    2015-12-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. Copyright © 2015 Elsevier Ltd. All rights

  5. Cortical processing of dynamic sound envelope transitions.

    Science.gov (United States)

    Zhou, Yi; Wang, Xiaoqin

    2010-12-08

    Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.

  6. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  7. Sound production to electric discharge: sonic muscle evolution in progress in Synodontis spp. catfishes (Mochokidae).

    Science.gov (United States)

    Boyle, Kelly S; Colleye, Orphal; Parmentier, Eric

    2014-09-22

    Elucidating the origins of complex biological structures has been one of the major challenges of evolutionary studies. Within vertebrates, the capacity to produce regular coordinated electric organ discharges (EODs) has evolved independently in different fish lineages. Intermediate stages, however, are not known. We show that, within a single catfish genus, some species are able to produce sounds, electric discharges or both signals (though not simultaneously). We highlight that both acoustic and electric communication result from actions of the same muscle. In parallel to their abilities, the studied species show different degrees of myofibril development in the sonic and electric muscle. The lowest myofibril density was observed in Synodontis nigriventris, which produced EODs but no swim bladder sounds, whereas the greatest myofibril density was observed in Synodontis grandiops, the species that produced the longest sound trains but did not emit EODs. Additionally, S. grandiops exhibited the lowest auditory thresholds. Swim bladder sounds were similar among species, while EODs were distinctive at the species level. We hypothesize that communication with conspecifics favoured the development of species-specific EOD signals and suggest an evolutionary explanation for the transition from a fast sonic muscle to electrocytes. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  8. Sensory augmentation: integration of an auditory compass signal into human perception of space

    Science.gov (United States)

    Schumann, Frank; O’Regan, J. Kevin

    2017-01-01

    Bio-mimetic approaches to restoring sensory function show great promise in that they rapidly produce perceptual experience, but have the disadvantage of being invasive. In contrast, sensory substitution approaches are non-invasive, but may lead to cognitive rather than perceptual experience. Here we introduce a new non-invasive approach that leads to fast and truly perceptual experience like bio-mimetic techniques. Instead of building on existing circuits at the neural level as done in bio-mimetics, we piggy-back on sensorimotor contingencies at the stimulus level. We convey head orientation to geomagnetic North, a reliable spatial relation not normally sensed by humans, by mimicking sensorimotor contingencies of distal sounds via head-related transfer functions. We demonstrate rapid and long-lasting integration into the perception of self-rotation. Short training with amplified or reduced rotation gain in the magnetic signal can expand or compress the perceived extent of vestibular self-rotation, even with the magnetic signal absent in the test. We argue that it is the reliability of the magnetic signal that allows vestibular spatial recalibration, and the coding scheme mimicking sensorimotor contingencies of distal sounds that permits fast integration. Hence we propose that contingency-mimetic feedback has great potential for creating sensory augmentation devices that achieve fast and genuinely perceptual experiences. PMID:28195187

  9. Pervasive Sound Sensing: A Weakly Supervised Training Approach.

    Science.gov (United States)

    Kelly, Daniel; Caulfield, Brian

    2016-01-01

    Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and time consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework, to identify a target sound, and a bag trimming algorithm, which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one and results show that classifiers, trained using the automatically segmented training sets, were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving an average F-measure of 0.969 and 0.87 for two weakly supervised datasets.

  10. Spectral integration in speech and non-speech sounds

    Science.gov (United States)

    Jacewicz, Ewa

    2005-04-01

    Spectral integration (or formant averaging) was proposed in vowel perception research to account for the observation that a reduction of the intensity of one of two closely spaced formants (as in /u/) produced a predictable shift in vowel quality [Delattre et al., Word 8, 195-210 (1952)]. A related observation was reported in psychoacoustics, indicating that when the components of a two-tone periodic complex differ in amplitude and frequency, its perceived pitch is shifted toward that of the more intense tone [Helmholtz, App. XIV (1875/1948)]. Subsequent research in both fields focused on the frequency interval that separates these two spectral components, in an attempt to determine the size of the bandwidth for spectral integration to occur. This talk will review the accumulated evidence for and against spectral integration within the hypothesized limit of 3.5 Bark for static and dynamic signals in speech perception and psychoacoustics. Based on similarities in the processing of speech and non-speech sounds, it is suggested that spectral integration may reflect a general property of the auditory system. A larger frequency bandwidth, possibly close to 3.5 Bark, may be utilized in integrating acoustic information, including speech, complex signals, or sound quality of a violin.
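
    A quick way to check the 3.5 Bark criterion discussed above: convert two component frequencies to the Bark scale with the Zwicker and Terhardt approximation and compare their separation. The example frequencies are illustrative.

      import math

      def hz_to_bark(f_hz):
          """Zwicker & Terhardt approximation of the Bark critical-band scale."""
          return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

      def within_integration_band(f1_hz, f2_hz, limit_bark=3.5):
          return abs(hz_to_bark(f1_hz) - hz_to_bark(f2_hz)) <= limit_bark

      print(within_integration_band(2000, 2500))   # closely spaced components -> True
      print(within_integration_band(500, 2500))    # widely separated components -> False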

  11. Sound at the zoo: Using animal monitoring, sound measurement, and noise reduction in zoo animal management.

    Science.gov (United States)

    Orban, David A; Soltis, Joseph; Perkins, Lori; Mellen, Jill D

    2017-05-01

    A clear need for evidence-based animal management in zoos and aquariums has been expressed by industry leaders. Here, we show how individual animal welfare monitoring can be combined with measurement of environmental conditions to inform science-based animal management decisions. Over the last several years, Disney's Animal Kingdom® has been undergoing significant construction and exhibit renovation, warranting institution-wide animal welfare monitoring. Animal care and science staff developed a model that tracked animal keepers' daily assessments of an animal's physical health, behavior, and responses to husbandry activity; these data were matched to different external stimuli and environmental conditions, including sound levels. A case study of a female giant anteater and her environment is presented to illustrate how this process worked. Associated with this case, several sound-reducing barriers were tested for efficacy in mitigating sound. Integrating daily animal welfare assessment with environmental monitoring can lead to a better understanding of animals and their sensory environment and positively impact animal welfare. © 2017 Wiley Periodicals, Inc.

  12. Long Range Sound Propagation over Sea: Application to Wind Turbine Noise

    Energy Technology Data Exchange (ETDEWEB)

    Boue, Matieu

    2007-12-13

    At Oeland, an array of 8 microphones created an acoustical antenna directed towards the sound sources. Wind and temperature data were measured at the source location, and during one measurement period (June 2005) wind and temperature profiles were also mapped in the reception area. In order to increase the signal-to-noise ratio, different signal enhancement methods were tested, including a Kalman filter technique and periodic time averaging. The most accurate results were obtained by combining the Kalman filter model with a Fast Fourier Transform (FFT); sound pressure levels as low as a few dB could be detected with this algorithm. The final results, expressed as a transmission loss ('damping in sound pressure level corrected for atmospheric damping') between the source and the receiver, have been compared to simultaneously measured wind and temperature profiles. The transmission loss data have also been expressed as statistical distributions from which, e.g., the average value can be obtained. This average, based on data for the summer periods of June 2005/2006, has been compared with the Swedish Environmental Protection Agency recommendation. It is found that the breaking point for cylindrical propagation is close to 700 m instead of the 200 m assumed in the recommendation. This is a significant difference, and it shows that the Swedish recommendation probably uses too small a value for the expected breaking point. In general, the value of the breaking point can depend on the location and on the part of the year over which the average is taken; how large the variation due to such factors can be is still unknown. Only more measurements, and perhaps simulations combined with the wind database available in Sweden, can provide an answer.
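
    A sketch of the geometrical-spreading model implied by the "breaking point" discussed above: spherical spreading (20 log10 r) out to a transition distance and cylindrical spreading (10 log10 r) beyond it. Atmospheric absorption is ignored here, and the model is a textbook simplification rather than the study's full propagation analysis.

      import math

      def spreading_loss_db(r_m, r_break_m):
          """Transmission loss (dB re 1 m): spherical out to r_break, cylindrical beyond."""
          if r_m <= r_break_m:
              return 20.0 * math.log10(r_m)
          return 20.0 * math.log10(r_break_m) + 10.0 * math.log10(r_m / r_break_m)

      # At 3 km the assumed breaking point changes the predicted loss noticeably.
      print(f"break at 200 m: {spreading_loss_db(3000, 200):.1f} dB")   # about 57.8 dB
      print(f"break at 700 m: {spreading_loss_db(3000, 700):.1f} dB")   # about 63.2 dB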

  13. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the every-day understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communi-cations, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  14. A neurally inspired musical instrument classification system based upon the sound onset.

    Science.gov (United States)

    Newton, Michael J; Smith, Leslie S

    2012-06-01

    Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.

  15. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related...... to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy...... of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences....

  16. Xinyinqin: a computer-based heart sound simulator.

    Science.gov (United States)

    Zhan, X X; Pei, J H; Xiao, Y H

    1995-01-01

    "Xinyinqin" is the Chinese phoneticized name of the Heart Sound Simulator (HSS). The "qin" in "Xinyinqin" is the Chinese name of a category of musical instruments, which means that the operation of HSS is very convenient--like playing an electric piano with the keys. HSS is connected to the GAME I/O of an Apple microcomputer. The generation of sound is controlled by a program. Xinyinqin is used as a teaching aid of Diagnostics. It has been applied in teaching for three years. In this demonstration we will introduce the following functions of HSS: 1) The main program has two modules. The first one is the heart auscultation training module. HSS can output a heart sound selected by the student. Another program module is used to test the student's learning condition. The computer can randomly simulate a certain heart sound and ask the student to name it. The computer gives the student's answer an assessment: "correct" or "incorrect." When the answer is incorrect, the computer will output that heart sound again for the student to listen to; this process is repeated until she correctly identifies it. 2) The program is convenient to use and easy to control. By pressing the S key, it is able to output a slow heart rate until the student can clearly identify the rhythm. The heart rate, like the actual rate of a patient, can then be restored by hitting any key. By pressing the SPACE BAR, the heart sound output can be stopped to allow the teacher to explain something to the student. The teacher can resume playing the heart sound again by hitting any key; she can also change the content of the training by hitting RETURN key. In the future, we plan to simulate more heart sounds and incorporate relevant graphs.

  17. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    Full Text Available We have recently demonstrated that alternating left-right sound sources induce motion perception to static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception, Hidaka et al., 2009. The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in the situation where auditory positional information would have little influence on the perceived position of visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  18. Four odontocete species change hearing levels when warned of impending loud sound.

    Science.gov (United States)

    Nachtigall, Paul E; Supin, Alexander Ya; Pacini, Aude F; Kastelein, Ronald A

    2018-03-01

    Hearing sensitivity change was investigated when a warning sound preceded a loud sound in the false killer whale (Pseudorca crassidens), the bottlenose dolphin (Tursiops truncatus), the beluga whale (Delphinapterus leucas) and the harbor porpoise (Phocoena phocoena). Hearing sensitivity was measured using pip-train test stimuli and auditory evoked potential recording. When the test/warning stimuli preceded a loud sound, hearing thresholds before the loud sound increased relative to the baseline by 13 to 17 dB. Experiments with multiple frequencies of exposure and shift provided evidence of different amounts of hearing change depending on frequency, indicating that the hearing sensation level changes were not likely due to a simple stapedial reflex. © 2017 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  19. Multichannel sound reinforcement systems at work in a learning environment

    Science.gov (United States)

    Malek, John; Campbell, Colin

    2003-04-01

    Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.

  20. Performance evaluation of heart sound cancellation in FPGA hardware implementation for electronic stethoscope.

    Science.gov (United States)

    Chao, Chun-Tang; Maneetien, Nopadon; Wang, Chi-Jo; Chiou, Juing-Shian

    2014-01-01

    This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera Cyclone II-EP2C70F89 shows that the proposed ALE used 45% of the chip's resources. Experiments with the proposed prototype were made using the DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives.

  1. Performance Evaluation of Heart Sound Cancellation in FPGA Hardware Implementation for Electronic Stethoscope

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2014-01-01

    Full Text Available This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera Cyclone II–EP2C70F89 shows that the proposed ALE used 45% of the chip's resources. Experiments with the proposed prototype were made using the DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives.
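
    As an illustration of the adaptive line enhancer named above, the sketch below implements a plain LMS-based ALE in Python/NumPy. It is not the authors' FPGA design; the delay, tap count and step size are illustrative assumptions, and which output (prediction or error) carries the heart-sound-reduced breath signal depends on the chosen decorrelation delay.

        import numpy as np

        def adaptive_line_enhancer(x, delay=32, taps=64, mu=1e-3):
            # Predict the correlated (quasi-periodic) part of x from a delayed
            # copy of itself; the prediction error keeps the broadband residual.
            x = np.asarray(x, dtype=float)
            n = len(x)
            w = np.zeros(taps)                     # adaptive FIR weights
            y = np.zeros(n)                        # predicted (narrowband) component
            e = np.zeros(n)                        # prediction error
            for i in range(delay + taps, n):
                u = x[i - delay - taps:i - delay][::-1]   # delayed tap vector
                y[i] = np.dot(w, u)
                e[i] = x[i] - y[i]
                w = w + 2.0 * mu * e[i] * u        # LMS weight update
            return y, e                            # predicted component, residual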

  2. Listening to an audio drama activates two processing networks, one for all sounds, another exclusively for speech.

    Directory of Open Access Journals (Sweden)

    Robert Boldt

    Full Text Available Earlier studies have shown considerable intersubject synchronization of brain activity when subjects watch the same movie or listen to the same story. Here we investigated the across-subjects similarity of brain responses to speech and non-speech sounds in a continuous audio drama designed for blind people. Thirteen healthy adults listened for ∼19 min to the audio drama while their brain activity was measured with 3 T functional magnetic resonance imaging (fMRI). An intersubject-correlation (ISC) map, computed across the whole experiment to assess the stimulus-driven extrinsic brain network, indicated statistically significant ISC in temporal, frontal and parietal cortices, cingulate cortex, and amygdala. Group-level independent component (IC) analysis was used to parcel out the brain signals into functionally coupled networks, and the dependence of the ICs on external stimuli was tested by comparing them with the ISC map. This procedure revealed four extrinsic ICs, of which two (covering non-overlapping areas of the auditory cortex) were modulated by both speech and non-speech sounds. The two other extrinsic ICs, one left-hemisphere-lateralized and the other right-hemisphere-lateralized, were speech-related and comprised the superior and middle temporal gyri, temporal poles, and the left angular and inferior orbital gyri. In areas of low ISC, four ICs that were defined as intrinsic fluctuated similarly to the time courses of either the speech-sound-related or all-sounds-related extrinsic ICs. These ICs included the superior temporal gyrus, the anterior insula, and the frontal, parietal and midline occipital cortices. Taken together, substantial intersubject synchronization of cortical activity was observed in subjects listening to an audio drama, with results suggesting that speech is processed in two separate networks, one dedicated to the processing of speech sounds and the other to both speech and non-speech sounds.
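
    A minimal sketch of the voxelwise intersubject-correlation computation described above (not the authors' pipeline; the data layout is an assumption) could look like this in Python/NumPy:

        import numpy as np

        def intersubject_correlation(data):
            # data: (n_subjects, n_timepoints, n_voxels) array of BOLD time courses.
            data = np.asarray(data, dtype=float)
            z = data - data.mean(axis=1, keepdims=True)
            z = z / z.std(axis=1, keepdims=True)        # z-score each voxel time course
            n_subj = data.shape[0]
            pair_corrs = [
                (z[i] * z[j]).mean(axis=0)              # Pearson r per voxel for pair (i, j)
                for i in range(n_subj) for j in range(i + 1, n_subj)
            ]
            return np.mean(pair_corrs, axis=0)          # mean pairwise ISC per voxel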

  3. Effect of thermal-treatment sequence on sound absorbing and mechanical properties of porous sound-absorbing/thermal-insulating composites

    Directory of Open Access Journals (Sweden)

    Huang Chen-Hung

    2016-01-01

    Full Text Available Due to recent rapid commercial and industrial development, factories have added large amounts of mechanical equipment, and the noise from its operation disturbs people at home. Beyond factory noise, noise from neighborhoods, transportation and construction sites also degrades quality of life. This study addresses the preparation technique and property evaluation of porous sound-absorbing/thermal-insulating composites. Hollow three-dimensional crimp PET fibers blended with low-melting PET fibers were fabricated into hollow PET/low-melting PET nonwovens by opening, blending, carding, lapping and needle-bonding processes. The hollow PET/low-melting PET nonwovens were then laminated into sound-absorbing/thermal-insulating composites by changing the sequence of needle-bonding and thermal treatment. The optimal thermal-treatment sequence was determined from tensile strength, tearing strength, sound-absorption coefficient and thermal conductivity tests of the porous composites.

  4. Phonaesthemes and sound symbolism in Swedish brand names

    Directory of Open Access Journals (Sweden)

    Åsa Abelin

    2015-01-01

    Full Text Available This study examines the prevalence of sound symbolism in Swedish brand names. A general principle of brand name design is that effective names should be distinctive, recognizable, easy to pronounce and meaningful. Much money is invested in designing powerful brand names, where the emotional impact of the names on consumers is also relevant and it is important to avoid negative connotations. Customers prefer brand names, which say something about the product, as this reduces product uncertainty (Klink, 2001). Therefore, consumers might prefer sound symbolic names. It has been shown that people associate the sounds of the nonsense words maluma and takete with round and angular shapes, respectively. By extension, more complex shapes and textures might activate words containing certain sounds. This study focuses on semantic dimensions expected to be relevant to product names, such as mobility, consistency, texture and shape. These dimensions are related to the senses of sight, hearing and touch and are also interesting from a cognitive linguistic perspective. Cross-modal assessment and priming experiments with pictures and written words were performed and the results analysed in relation to brand name databases and to sound symbolic sound combinations in Swedish (Abelin, 1999). The results show that brand names virtually never contain pejorative, i.e. depreciatory, consonant clusters, and that certain sounds and sound combinations are overrepresented in certain content categories. Assessment tests show correlations between pictured objects and phoneme combinations in newly created words (non-words). The priming experiment shows that object images prime newly created words as expected, based on the presence of compatible consonant clusters.

  5. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2015-01-01

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearing-impaired listener, but can only partially reveal the mechanisms that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing-impaired perception for sound textures – temporally...

  6. A New Built-in Self Test Scheme for Phase-Locked Loops Using Internal Digital Signals

    Science.gov (United States)

    Kim, Youbean; Kim, Kicheol; Kim, Incheol; Kang, Sungho

    Testing PLLs (phase-locked loops) is becoming an important issue that affects both time-to-market and production cost of electronic systems. Though a PLL is the most common mixed-signal building block, it is very difficult to test due to internal analog blocks and signals. In this paper, we propose a new PLL BIST (built-in self test) using the distorted frequency detector that uses only internal digital signals. The proposed BIST does not need to load any analog nodes of the PLL. Therefore, it provides an efficient defect-oriented structural test scheme, reduced area overhead, and improved test quality compared with previous approaches.

  7. Reduction of noise in the neonatal intensive care unit using sound-activated noise meters.

    Science.gov (United States)

    Wang, D; Aubertin, C; Barrowman, N; Moreau, K; Dunn, S; Harrold, J

    2014-11-01

    To determine if sound-activated noise meters providing direct audit and visual feedback can reduce sound levels in a level 3 neonatal intensive care unit (NICU). Sound levels (in dB) were compared between a 2-month period with noise meters present but without visual signal fluctuation and a subsequent 2 months with the noise meters providing direct audit and visual feedback. There was a significant increase in the percentage of time the sound level in the NICU was below 50 dB across all patient care areas (9.9%, 8.9% and 7.3%). This improvement was not observed in the desk area where there are no admitted patients. There was no change in the percentage of time the NICU was below 45 or 55 dB. Sound-activated noise meters seem effective in reducing sound levels in patient care areas. Conversations may have moved to non-patient care areas preventing a similar change there. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  8. Memory for product sounds: the effect of sound and label type.

    Science.gov (United States)

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between auditory, visual, and the semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative for memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinders the memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that the memory performances for product sounds are task-dependent.

  9. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false In Prince William Sound: Prince William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST....1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  10. Discrimination of fundamental frequency of synthesized vowel sounds in a noise background

    NARCIS (Netherlands)

    Scheffers, M.T.M.

    1984-01-01

    An experiment was carried out, investigating the relationship between the just noticeable difference of fundamental frequency (jndf0) of three stationary synthesized vowel sounds in noise and the signal-to-noise ratio. To this end the S/N ratios were measured at which listeners could just

  11. Enhancement of acoustical performance of hollow tube sound absorber

    International Nuclear Information System (INIS)

    Putra, Azma; Khair, Fazlin Abd; Nor, Mohd Jailani Mohd

    2016-01-01

    This paper presents the acoustical performance of hollow structures utilizing recycled lollipop sticks as acoustic absorbers. The hollow cross section of the structures is arranged facing the sound incidence. The effects of different stick lengths and of an air gap on the acoustical performance are studied. The absorption coefficient was measured using the impedance tube method. It is found that the sound absorption performance improves when natural kapok fiber is inserted into the voids between the hollow structures. Results reveal that by inserting the kapok fibers, both the absorption bandwidth and the absorption coefficient increase. For a test sample backed by a rigid surface, the best sound absorption is obtained with fibers inserted at both the front and back of the absorber, whereas for a test sample with an air gap it is achieved with fibers introduced only at the back of the absorber.

  12. Enhancement of acoustical performance of hollow tube sound absorber

    Energy Technology Data Exchange (ETDEWEB)

    Putra, Azma, E-mail: azma.putra@utem.edu.my; Khair, Fazlin Abd, E-mail: fazlinabdkhair@student.utem.edu.my; Nor, Mohd Jailani Mohd, E-mail: jai@utem.edu.my [Centre for Advanced Research on Energy, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, Durian Tunggal Melaka 76100 Malaysia (Malaysia)

    2016-03-29

    This paper presents the acoustical performance of hollow structures utilizing recycled lollipop sticks as acoustic absorbers. The hollow cross section of the structures is arranged facing the sound incidence. The effects of different stick lengths and of an air gap on the acoustical performance are studied. The absorption coefficient was measured using the impedance tube method. It is found that the sound absorption performance improves when natural kapok fiber is inserted into the voids between the hollow structures. Results reveal that by inserting the kapok fibers, both the absorption bandwidth and the absorption coefficient increase. For a test sample backed by a rigid surface, the best sound absorption is obtained with fibers inserted at both the front and back of the absorber, whereas for a test sample with an air gap it is achieved with fibers introduced only at the back of the absorber.

  13. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  14. Timbral aspects of reproduced sound in small rooms. I

    DEFF Research Database (Denmark)

    Bech, Søren

    1995-01-01

    This paper reports some of the influences of individual reflections on the timbre of reproduced sound. A single loudspeaker with frequency-independent directivity characteristics, positioned in a listening room of normal size with frequency-independent absorption coefficients of the room surfaces, has been simulated using an electroacoustic setup. The model included the direct sound, 17 individual reflections, and the reverberant field. The threshold of detection and just-noticeable differences for an increase in level were measured for individual reflections using eight subjects for noise and speech. The results have shown that the first-order floor and ceiling reflections are likely to individually contribute to the timbre of reproduced speech. For a noise signal, additional reflections from the left sidewall will contribute individually. The level of the reverberant field has been found...

  15. A deafening flash! Visual interference of auditory signal detection.

    Science.gov (United States)

    Fassnidge, Christopher; Cecconi Marcotti, Claudia; Freeman, Elliot

    2017-03-01

    In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded 'Yes' when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual 'Morse-code' sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Color improves ‘visual’ acuity via sound

    Directory of Open Access Journals (Sweden)

    Shelly eLevy-Tzedek

    2014-11-01

    Full Text Available Visual-to-auditory sensory substitution devices (SSDs) convey visual information via sound, with the primary goal of making visual information accessible to blind and visually impaired individuals. We developed the EyeMusic SSD, which transforms shape, location and color information into musical notes. We tested the 'visual' acuity of 23 individuals (13 blind and 10 blindfolded sighted) on the Snellen tumbling-E test, with the EyeMusic. Participants were asked to determine the orientation of the letter ‘E’. The test was repeated twice: in one test, the letter ‘E’ was drawn with a single color (white), and in the other test, with two colors (red and white). In the latter case, the vertical line in the letter, when upright, was drawn in red, with the three horizontal lines drawn in white. We found no significant differences in performance between the blind and the sighted groups. We found a significant effect of the added color on the ‘visual’ acuity. The highest acuity participants reached in the monochromatic test was 20/800, whereas with the added color, acuity doubled to 20/400. We conclude that color improves 'visual' acuity via sound.

  17. Transmission experiment by the simulated LMFBR model and propagation analysis of acoustic signals

    International Nuclear Information System (INIS)

    Kobayashi, Kenji; Yasuda, Tsutomu; Araki, Hitoshi.

    1981-01-01

    Acoustic transducers to detect boiling of sodium may be installed in the upper structure and at the upper position of the reactor vessel wall under constricted conditions. A set of experiments on the transmission of acoustic vibration to various points of the vessel was performed using the half-scale hydraulic flow test facility simulating the reactor vessel over the frequency range 20-100 kHz. Acoustic signals from a sound source installed in the core were measured at each point by both hydrophones in the vessel and vibration pickups on the vessel wall. In these experiments the transmission of signals to each detector point was clearly observed above the background noise level. These data have been summarized in terms of the transmission loss and are compared with the background noise level of the flow to estimate the feasibility of detecting sodium boiling sound. In the experiments with the simulation model, the signal-to-noise ratio was found to be about 13 dB for the hydrophone in the upper structure, 8 dB for the accelerometer and 16 dB for the AE-sensor at the upper position on the vessel. Sound waves emitted by sodium boiling and propagating along the wall of the vessel may also be predicted theoretically. The result of the analysis suggests a capability of detection at the upper position of the reactor vessel wall. Leaky Lamb waves of the first symmetric (L1) and antisymmetric (F1) modes and the shear horizontal wave (SH) have been derived, in light of the attenuation due to coupling to liquid sodium, as the traveling modes over the frequency range 10-100 kHz for vessel wall thicknesses up to 50 mm. The leaky Lamb wave (L1) and the (SH) mode have been proposed theoretically, under some assumptions, as the most suitable for detecting the boiling sound of sodium propagating along the vessel wall. (author)

  18. Can road traffic mask sound from wind turbines? Response to wind turbine sound at different levels of road traffic sound

    International Nuclear Information System (INIS)

    Pedersen, Eja; Berg, Frits van den; Bakker, Roel; Bouma, Jelte

    2010-01-01

    Wind turbines are favoured in the switch-over to renewable energy. Suitable sites for further developments could be difficult to find as the sound emitted from the rotor blades calls for a sufficient distance to residents to avoid negative effects. The aim of this study was to explore whether road traffic sound could mask wind turbine sound or, in contrast, increase annoyance due to wind turbine noise. Annoyance of road traffic and wind turbine noise was measured in the WINDFARMperception survey in the Netherlands in 2007 (n=725) and related to calculated levels of sound. The presence of road traffic sound did not in general decrease annoyance with wind turbine noise, except when levels of wind turbine sound were moderate (35-40 dB(A) Lden) and the road traffic sound level exceeded that level by at least 20 dB(A). Annoyance with both noises was intercorrelated, but this correlation was probably due to the influence of individual factors. Furthermore, visibility and attitude towards wind turbines were significantly related to noise annoyance of modern wind turbines. The results can be used for the selection of suitable sites, possibly favouring already noise-exposed areas if wind turbine sound levels are sufficiently low.

  19. Flight Performance Evaluation of Three GPS Receivers for Sounding Rocket Tracking

    Science.gov (United States)

    Bull, Barton; Diehl, James; Montenbruck, Oliver; Markgraf, Markus; Bauer, Frank (Technical Monitor)

    2002-01-01

    In preparation for the European Space Agency Maxus-4 mission, a sounding rocket test flight was carried out at Esrange, near Kiruna, Sweden on February 19, 2001 to validate existing ground facilities and range safety installations. Due to the absence of a dedicated scientific payload, the flight offered the opportunity to test multiple GPS receivers and assess their performance for the tracking of sounding rockets. The receivers included an Ashtech G12 HDMA receiver, a BAE (Canadian Marconi) Allstar receiver and a Mitel Orion receiver. All of them provide C/A code tracking on the L1 frequency to determine the user position and make use of Doppler measurements to derive the instantaneous velocity. Among the receivers, the G12 has been optimized for use under highly dynamic conditions and has earlier been flown successfully on NASA sounding rockets. The Allstar is representative of common single frequency receivers for terrestrial applications and received no particular modification, except for the disabling of the common altitude and velocity constraints that would otherwise inhibit its use for space application. The Orion receiver, finally, employs the same Mitel chipset as the Allstar, but has received various firmware modifications by DLR to safeguard it against signal losses and improve its tracking performance. While the two NASA receivers were driven by a common wrap-around antenna, the DLR experiment made use of a switchable antenna system comprising a helical antenna in the tip of the rocket and two blade antennas attached to the body of the vehicle. During the boost a peak acceleration of roughly 17 g was achieved, which resulted in a velocity of about 1100 m/s at the end of the burn. At apogee, the rocket reached an altitude of over 80 km. A detailed analysis of the attained flight data is given together with an evaluation of different receiver designs and antenna concepts.

  20. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Performance Verification Reports: Final Comprehensive Performance Test Report, P/N: 1356006-1, S.N: 202/A2

    Science.gov (United States)

    Platt, R.

    1998-01-01

    This is the Performance Verification Report. The process specification establishes the requirements for the comprehensive performance test (CPT) and limited performance test (LPT) of the Earth Observing System Advanced Microwave Sounding Unit-A2 (EOS/AMSU-A2), referred to as the unit. The unit is defined on drawing 1356006.

  1. The meaning of city noises: Investigating sound quality in Paris (France)

    Science.gov (United States)

    Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie; Guastavino, Catherine; Maffiolo, Valerie

    2004-05-01

    The sound quality of Paris (France) was investigated by using field inquiries in actual environments (open questionnaires) and using recordings under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and of mathematical analyses of similarity judgments. Results show that auditory judgments mainly rely on source identification. The appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces has been demonstrated, in relation to pleasantness judgments: soundscapes with human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This calls for considering both quantitative and qualitative descriptions to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on inherent properties of the acoustic signal.

  2. Application of Carbon Nanotube Assemblies for Sound Generation and Heat Dissipation

    Science.gov (United States)

    Kozlov, Mikhail; Haines, Carter; Oh, Jiyoung; Lima, Marcio; Fang, Shaoli

    2011-03-01

    Nanotech approaches were explored for the efficient transformation of an electrical signal into sound, heat, cooling action, and mechanical strain. The studies are based on the aligned arrays of multi-walled carbon nanotubes (MWNT forests) that can be grown on various substrates using a conventional CVD technique. They form a three-dimensional conductive network that possesses uncommon electrical, thermal, acoustic and mechanical properties. When heated with an alternating current or a near-IR laser modulated in the 0.01-20 kHz range, the nanotube forests produce loud, audible sound. High generated sound pressure and broad frequency response (beyond 20 kHz) show that the forests act as efficient thermo-acoustic (TA) transducers. They can generate intense third and fourth TA harmonics that reveal peculiar interference-like patterns from ac-dc voltage scans. A strong dependence of the patterns on forest height can be used for characterization of carbon nanotube assemblies and for evaluation of properties of thermal interfaces. Because of good coupling with surrounding air, the forests provide excellent dissipation of heat produced by IC chips. Thermoacoustic converters based on forests can be used for thermo- and photo-acoustic sound generation, amplification and noise cancellation.

  3. Structure-borne sound structural vibrations and sound radiation at audio frequencies

    CERN Document Server

    Cremer, L; Petersson, Björn AT

    2005-01-01

    "Structure-Borne Sound" is a thorough introduction to structural vibrations with emphasis on audio frequencies and the associated radiation of sound. The book presents in-depth discussions of fundamental principles and basic problems, in order to enable the reader to understand and solve his own problems. It includes chapters dealing with measurement and generation of vibrations and sound, various types of structural wave motion, structural damping and its effects, impedances and vibration responses of the important types of structures, as well as with attenuation of vibrations, and sound radiation.

  4. Externalization versus Internalization of Sound in Normal-hearing and Hearing-impaired Listeners

    DEFF Research Database (Denmark)

    Ohl, Björn; Laugesen, Søren; Buchholz, Jörg

    2010-01-01

    The externalization of sound, i.e. the perception of auditory events as being located outside of the head, is a natural phenomenon for normal-hearing listeners, when perceiving sound coming from a distant physical sound source. It is potentially useful for hearing in background noise, but the relevant cues might be distorted by a hearing impairment and also by the processing of the incoming sound through hearing aids. In this project, two intuitive tests in natural real-life surroundings were developed, which capture the limits of the perception of externalization. For this purpose...

  5. Heart sounds analysis using probability assessment.

    Science.gov (United States)

    Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P

    2017-07-31

    This paper describes a method for automated discrimination of heart sounds recordings according to the Physionet Challenge 2016. The goal was to decide if the recording refers to normal or abnormal heart sounds or if it is not possible to decide (i.e. 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.
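
    The S1/S2 detection step described above (amplitude envelope in the 15-90 Hz band followed by peak picking) can be sketched in Python with SciPy; the filter order, threshold and minimum peak spacing below are illustrative assumptions, not the paper's exact rules.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert, find_peaks

        def detect_s1_s2(pcg, fs):
            # Band-pass the phonocardiogram to 15-90 Hz, take the amplitude
            # envelope, and pick prominent, well-separated peaks as S1/S2 candidates.
            b, a = butter(4, [15.0 / (fs / 2), 90.0 / (fs / 2)], btype="band")
            band = filtfilt(b, a, np.asarray(pcg, dtype=float))
            env = np.abs(hilbert(band))
            env /= env.max()
            peaks, _ = find_peaks(env, height=0.2, distance=int(0.25 * fs))
            return peaks, env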

  6. Sub-Audible Speech Recognition Based upon Electromyographic Signals

    Science.gov (United States)

    Jorgensen, Charles C. (Inventor); Lee, Diana D. (Inventor); Agabon, Shane T. (Inventor)

    2012-01-01

    Method and system for processing and identifying a sub-audible signal formed by a source of sub-audible sounds. Sequences of samples of sub-audible sound patterns ("SASPs") for known words/phrases in a selected database are received for overlapping time intervals, and Signal Processing Transforms ("SPTs") are formed for each sample, as part of a matrix of entry values. The matrix is decomposed into contiguous, non-overlapping two-dimensional cells of entries, and neural net analysis is applied to estimate reference sets of weight coefficients that provide sums with optimal matches to reference sets of values. The reference sets of weight coefficients are used to determine a correspondence between a new (unknown) word/phrase and a word/phrase in the database.

  7. Classification of Real and Imagined Sounds in Early Visual Cortex

    Directory of Open Access Journals (Sweden)

    Petra Vetter

    2011-10-01

    Full Text Available Early visual cortex has been thought to be mainly involved in the detection of low-level visual features. Here we show that complex natural sounds can be decoded from early visual cortex activity, in the absence of visual stimulation, both when sounds are actually presented and when they are merely imagined. Blindfolded subjects listened to three complex natural sounds (bird singing, people talking, traffic noise; Exp. 1) or received word cues (“forest”, “people”, “traffic”; Exp. 2) to imagine the associated scene. fMRI BOLD activation patterns from retinotopically defined early visual areas were fed into a multivariate pattern classification algorithm (a linear support vector machine). Actual sounds were discriminated above chance in V2 and V3 and imagined sounds were decoded in V1. Cross-classification, i.e. training the classifier on real sounds and testing it on imagined sounds and vice versa, was also successful. Two further experiments showed that an orthogonal working memory task does not interfere with sound classification in early visual cortex (Exp. 3); however, an orthogonal visuo-spatial imagery task does (Exp. 4). These results demonstrate that early visual cortex activity contains content-specific information from hearing and from imagery, challenging the view of a strict modality-specific function of early visual cortex.
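
    The cross-classification scheme mentioned above (train on patterns evoked by real sounds, test on imagined ones, and vice versa) reduces to a few lines with a linear SVM; this is a generic sketch with hypothetical feature matrices, not the authors' analysis code.

        from sklearn.svm import LinearSVC

        def cross_classify(X_real, y_real, X_imag, y_imag):
            # Fit on real-sound patterns, score on imagined-sound patterns, and back;
            # chance level is 1/3 for three sound categories.
            real_to_imag = LinearSVC().fit(X_real, y_real).score(X_imag, y_imag)
            imag_to_real = LinearSVC().fit(X_imag, y_imag).score(X_real, y_real)
            return real_to_imag, imag_to_real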

  8. 3-D Sound for Virtual Reality and Multimedia

    Science.gov (United States)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  9. Signal processing for non-destructive testing of railway tracks

    Science.gov (United States)

    Heckel, Thomas; Casperson, Ralf; Rühe, Sven; Mook, Gerhard

    2018-04-01

    Increased speed, heavier loads, altered material and modern drive systems result in an increasing number of rail flaws. The appearance of these flaws also changes continually due to the rapid change in damage mechanisms of modern rolling stock. Hence, interpretation has become difficult when evaluating non-destructive rail testing results. Due to the changed interplay between detection methods and flaws, the recorded signals may result in unclassified types of rail flaws. Methods for automatic rail inspection (according to defect detection and classification) undergo continual development. Signal processing is a key technology to master the challenge of classification and maintain resolution and detection quality, independent of operation speed. The basic ideas of signal processing, based on the Glassy-Rail-Diagram for classification purposes, are presented herein. Examples of the detection of damage caused by rolling contact fatigue are also given, and synergetic effects of combined evaluation of diverse inspection methods are shown.

  10. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  11. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  12. Sound quality assessment of wood for xylophone bars.

    Science.gov (United States)

    Aramaki, Mitsuko; Baillères, Henri; Brancheriau, Loïc; Kronland-Martinet, Richard; Ystad, Sølvi

    2007-04-01

    Xylophone sounds produced by striking wooden bars with a mallet are strongly influenced by the mechanical properties of the wood species chosen by the xylophone maker. In this paper, we address the relationship between the sound quality, based on the timbre attribute of impacted wooden bars, and the physical parameters characterizing wood species. For this, a methodology is proposed that associates an analysis-synthesis process and a perceptual classification test. Sounds generated by impacting 59 wooden bars of different species but with the same geometry were recorded and classified by a renowned instrument maker. The sounds were further digitally processed and adjusted to the same pitch before being once again classified. The processing is based on a physical model ensuring the main characteristics of the wood are preserved during the sound transformation. Statistical analysis of both classifications showed the influence of the pitch in the xylophone maker's judgement and pointed out the importance of two timbre descriptors: the frequency-dependent damping and the spectral bandwidth. These descriptors are linked with physical and anatomical characteristics of wood species, providing new clues in the choice of attractive wood species from a musical point of view.
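
    As a rough illustration of the spectral-bandwidth descriptor mentioned above, the sketch below computes the spectral centroid and spread of a recorded bar sound in Python/NumPy; the exact descriptor definition used in the study may differ.

        import numpy as np

        def spectral_centroid_bandwidth(x, fs):
            # Magnitude spectrum of the windowed bar sound, then the first two
            # spectral moments: centroid and bandwidth (spread) in Hz.
            x = np.asarray(x, dtype=float)
            spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            p = spec / spec.sum()
            centroid = np.sum(freqs * p)
            bandwidth = np.sqrt(np.sum(((freqs - centroid) ** 2) * p))
            return centroid, bandwidth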

  13. Generation and control of sound bullets with a nonlinear acoustic lens.

    Science.gov (United States)

    Spadoni, Alessandro; Daraio, Chiara

    2010-04-20

    Acoustic lenses are employed in a variety of applications, from biomedical imaging and surgery to defense systems and damage detection in materials. Focused acoustic signals, for example, enable ultrasonic transducers to image the interior of the human body. Currently however the performance of acoustic devices is limited by their linear operational envelope, which implies relatively inaccurate focusing and low focal power. Here we show a dramatic focusing effect and the generation of compact acoustic pulses (sound bullets) in solid and fluid media, with energies orders of magnitude greater than previously achievable. This focusing is made possible by a tunable, nonlinear acoustic lens, which consists of ordered arrays of granular chains. The amplitude, size, and location of the sound bullets can be controlled by varying the static precompression of the chains. Theory and numerical simulations demonstrate the focusing effect, and photoelasticity experiments corroborate it. Our nonlinear lens permits a qualitatively new way of generating high-energy acoustic pulses, which may improve imaging capabilities through increased accuracy and signal-to-noise ratios and may lead to more effective nonintrusive scalpels, for example, for cancer treatment.

  14. Effect of Temperature on Ultrasonic Signal Propagation for Extra Virgin Olive Oil Adulteration

    Science.gov (United States)

    Alias, N. A.; Hamid, S. B. Abdul; Sophian, A.

    2017-11-01

    Fraud involving the adulteration of extra virgin olive oil has become significant nowadays due to the increasing cost of supply and the attention given to the benefits of extra virgin olive oil for human consumption. This paper presents the effects of temperature variation on the spectra formed using the pulse-echo technique with an ultrasound signal. Several methods have been introduced to characterize the adulteration of extra virgin olive oil with other fluid samples, such as mass chromatography, standard ASTM methods (density test, distillation test and evaporation test) and mass spectrometry. The pulse-echo ultrasound method is a non-destructive method used to analyse the sound wave signal captured by an oscilloscope. In this paper, a non-destructive technique utilizing ultrasound to characterize the adulteration level of extra virgin olive oil is presented. It is observed that the frequency spectra of samples with different ratios and at varying temperatures show significant percentage differences, from 30% up to 70% depending on temperature variation, and can thus be used for sample characterization.

  15. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    Science.gov (United States)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to

  16. Sound quality measures for speech in noise through a commercial hearing aid implementing digital noise reduction.

    Science.gov (United States)

    Ricketts, Todd A; Hornsby, Benjamin W Y

    2005-05-01

    This brief report discusses the effect of digital noise reduction (DNR) processing on aided speech recognition and sound quality measures in 14 adults fitted with a commercial hearing aid. Measures of speech recognition and sound quality were obtained in two different speech-in-noise conditions (71 dBA speech, +6 dB SNR and 75 dBA speech, +1 dB SNR). The results revealed that the presence or absence of DNR processing did not impact speech recognition in noise (either positively or negatively). Paired comparisons of sound quality for the same speech in noise signals, however, revealed a strong preference for DNR processing. These data suggest that at least one implementation of DNR processing is capable of providing improved sound quality, for speech in noise, in the absence of improved speech recognition.

  17. The development of infants' use of property-poor sounds to individuate objects.

    Science.gov (United States)

    Wilcox, Teresa; Smith, Tracy R

    2010-12-01

    There is evidence that infants as young as 4.5 months use property-rich but not property-poor sounds as the basis for individuating objects (Wilcox, Woods, Tuggy, & Napoli, 2006). The current research sought to identify the age at which infants demonstrate the capacity to use property-poor sounds. Using the task of Wilcox et al., infants aged 7 and 9 months were tested. The results revealed that 9- but not 7-month-olds demonstrated sensitivity to property-poor sounds (electronic tones) in an object individuation task. Additional results confirmed that the younger infants were sensitive to property-rich sounds (rattle sounds). These are the first positive results obtained with property-poor sounds in infants and lay the foundation for future research to identify the underlying basis for the developmental hierarchy favoring property-rich over property-poor sounds and possible mechanisms for change. Copyright © 2010 Elsevier Inc. All rights reserved.

  18. Photoacoustic signal and noise analysis for Si thin plate: signal correction in frequency domain.

    Science.gov (United States)

    Markushev, D D; Rabasović, M D; Todorović, D M; Galović, S; Bialkowski, S E

    2015-03-01

    Methods for photoacoustic signal measurement, rectification, and analysis for 85 μm-thick Si samples in the 20-20 000 Hz modulation frequency range are presented. Methods for frequency-dependent amplitude and phase signal rectification in the presence of coherent and incoherent noise as well as distortion due to microphone characteristics are presented. Signal correction is accomplished using inverse system response functions deduced by comparing real to ideal signals for a sample with well-known bulk parameters and dimensions. The system response is a piece-wise construction, each component being due to a particular effect of the measurement system. Heat transfer and elastic effects are modeled using standard Rosencwaig-Gersho and elastic-bending theories. Thermal diffusion, thermoelastic, and plasmaelastic signal components are calculated and compared to measurements. The differences between theory and experiment are used to detect and correct signal distortion and to determine detector and sound-card characteristics. Corrected signal analysis is found to faithfully reflect known sample parameters.
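
    The frequency-domain correction described above amounts to dividing out an instrument response estimated from a reference sample. A minimal Python sketch, assuming the measured and ideal responses are available as complex spectra sampled at the same modulation frequencies:

        import numpy as np

        def instrument_response(measured_ref, ideal_ref):
            # Complex response of the measurement chain, deduced from a reference
            # sample whose ideal (theoretical) spectrum is known.
            return np.asarray(measured_ref, dtype=complex) / np.asarray(ideal_ref, dtype=complex)

        def correct_spectrum(measured, response):
            # Divide out the instrument response; return rectified amplitude and phase.
            corrected = np.asarray(measured, dtype=complex) / response
            return np.abs(corrected), np.angle(corrected)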

  19. A comparative study of the SVM and K-nn machine learning algorithms for the diagnosis of respiratory pathologies using pulmonary acoustic signals.

    Science.gov (United States)

    Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian

    2014-06-27

    Pulmonary acoustic parameters extracted from recorded respiratory sounds provide valuable information for the detection of respiratory pathologies. The automated analysis of pulmonary acoustic signals can serve as a differential diagnosis tool for medical professionals, a learning tool for medical students, and a self-management tool for patients. In this context, we intend to evaluate and compare the performance of the support vector machine (SVM) and K-nearest neighbour (K-nn) classifiers in diagnosing respiratory pathologies using respiratory sounds from the R.A.L.E database. The pulmonary acoustic signals used in this study were obtained from the R.A.L.E lung sound database. The pulmonary acoustic signals were manually categorised into three different groups, namely normal, airway obstruction pathology, and parenchymal pathology. The mel-frequency cepstral coefficient (MFCC) features were extracted from the pre-processed pulmonary acoustic signals. The MFCC features were analysed by one-way ANOVA and then fed separately into the SVM and K-nn classifiers. The performances of the classifiers were analysed using the confusion matrix technique. The statistical analysis of the MFCC features using one-way ANOVA showed that the extracted MFCC features are significantly different (p < 0.001). The classification accuracies of the SVM and K-nn classifiers were found to be 92.19% and 98.26%, respectively. Although the data used to train and test the classifiers are limited, the classification accuracies found are satisfactory. The K-nn classifier was better than the SVM classifier for the discrimination of pulmonary acoustic signals from pathological and normal subjects obtained from the R.A.L.E database.
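
    The feature-extraction and classification pipeline described above can be sketched with librosa and scikit-learn; the file paths, labels, SVM kernel and number of neighbours below are placeholders and assumptions, not the parameters used in the study.

        import numpy as np
        import librosa
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        def mfcc_features(path, n_mfcc=13):
            # Summarise a lung-sound recording by the mean of its MFCC frames.
            y, sr = librosa.load(path, sr=None)
            return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

        def compare_classifiers(files, labels):
            # 'files' and 'labels' (normal / airway obstruction / parenchymal)
            # are placeholders to be supplied by the user.
            X = np.array([mfcc_features(f) for f in files])
            y = np.array(labels)
            for name, clf in [("SVM", SVC(kernel="rbf")),
                              ("K-nn", KNeighborsClassifier(n_neighbors=5))]:
                scores = cross_val_score(clf, X, y, cv=5)
                print(f"{name}: mean accuracy = {scores.mean():.3f}")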

  20. Computerised respiratory sounds can differentiate smokers and non-smokers.

    Science.gov (United States)

    Oliveira, Ana; Sen, Ipek; Kahya, Yasemin P; Afreixo, Vera; Marques, Alda

    2017-06-01

    Cigarette smoking is often associated with the development of several respiratory diseases; however, if diagnosed early, the changes in the lung tissue caused by smoking may be reversible. Computerised respiratory sounds have been shown to be sensitive in detecting changes within the lung tissue before any other measure, but it is unknown whether they can detect changes in the lungs of healthy smokers. This study investigated the differences between computerised respiratory sounds of healthy smokers and non-smokers. Healthy smokers and non-smokers were recruited from a university campus. Respiratory sounds were recorded simultaneously at 6 chest locations (right and left anterior, lateral and posterior) using air-coupled electret microphones. Airflow (1.0-1.5 l/s) was recorded with a pneumotachograph. Breathing phases were detected using airflow signals and respiratory sounds with validated algorithms. Forty-four participants were enrolled: 18 smokers (mean age 26.2, SD = 7 years; mean FEV1 % predicted 104.7, SD = 9) and 26 non-smokers (mean age 25.9, SD = 3.7 years; mean FEV1 % predicted 96.8, SD = 20.2). Smokers presented significantly higher frequency at maximum sound intensity during inspiration (M = 117, SD = 16.2 Hz vs. M = 106.4, SD = 21.6 Hz; t(43) = -2.62, p = 0.0081, dz = 0.55), lower expiratory sound intensities (maximum intensity: M = 48.2, SD = 3.8 dB vs. M = 50.9, SD = 3.2 dB; t(43) = 2.68, p = 0.001, dz = -0.78; mean intensity: M = 31.2, SD = 3.6 dB vs. M = 33.7, SD = 3 dB; t(43) = 2.42, p = 0.001, dz = 0.75) and a higher number of inspiratory crackles (median [interquartile range] 2.2 [1.7-3.7] vs. 1.5 [1.2-2.2], p = 0.081, U = 110, r = -0.41) than non-smokers. Significant differences between computerised respiratory sounds of smokers and non-smokers have been found. Changes in respiratory sounds are often the earliest sign of disease. Thus, computerised respiratory sounds

  1. A comparison between swallowing sounds and vibrations in patients with dysphagia

    Science.gov (United States)

    Movahedi, Faezeh; Kurosu, Atsuko; Coyle, James L.; Perera, Subashan

    2017-01-01

    Cervical auscultation refers to the observation and analysis of sounds or vibrations captured during swallowing using either a stethoscope or acoustic/vibratory detectors. Microphones and accelerometers have recently become two common sensors used in modern cervical auscultation methods. There are open questions about whether swallowing signals recorded by these two sensors provide unique or complementary information about swallowing function, or whether they present interchangeable information. The aim of this study is to present a broad comparison of swallowing signals recorded by a microphone and a tri-axial accelerometer from 72 patients (mean age 63.94 ± 12.58 years, 42 male, 30 female), who underwent videofluoroscopic examination. The participants swallowed one or more boluses of thickened liquids of different consistencies, including thin liquids, nectar-thick liquids, and pudding. A comfortable self-selected volume from a cup or a volume controlled by the examiner from a 5 ml spoon was given to the participants. A comprehensive set of features was extracted in the time, information-theoretic, and frequency domains from each of the 881 swallows presented in this study. The swallowing sounds exhibited significantly higher frequency content and kurtosis values than the swallowing vibrations. In addition, the Lempel-Ziv complexity was lower for swallowing sounds than for swallowing vibrations. To conclude, the information provided by microphones and accelerometers about swallowing function is unique and these two transducers are not interchangeable. Consequently, the selection of transducer would be a vital step in future studies. PMID:28495001
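
    The Lempel-Ziv complexity mentioned above counts the number of new phrases encountered while scanning a symbol sequence. Below is a small illustrative implementation (LZ76 phrase counting on a median-binarised signal); the paper's exact variant and preprocessing may differ.

        import numpy as np

        def binarize(x):
            # Binarise a signal around its median before computing LZ complexity.
            x = np.asarray(x, dtype=float)
            return (x > np.median(x)).astype(int)

        def lempel_ziv_complexity(sequence):
            # LZ76 phrase counting: a new phrase ends as soon as the current
            # substring has not been seen in the preceding part of the string.
            s = "".join(map(str, sequence))
            n, i, phrases = len(s), 0, 0
            while i < n:
                k = 1
                while i + k <= n and s[i:i + k] in s[:i + k - 1]:
                    k += 1
                phrases += 1
                i += k
            return phrases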

  2. Efficient Geometric Sound Propagation Using Visibility Culling

    Science.gov (United States)

    Chandak, Anish

    2011-07-01

    Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. We can generate smooth, artifact-free output audio signals by applying
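
    The image-source method named above replaces each reflecting surface by a mirrored copy of the source. The sketch below handles only the special case of first-order reflections in an empty, axis-aligned rectangular room, without the visibility culling that is the subject of the thesis.

        import numpy as np

        SPEED_OF_SOUND = 343.0  # m/s

        def first_order_image_sources(src, room):
            # Mirror the source across each of the six walls of an axis-aligned
            # rectangular room with walls at 0 and room[d] along axis d.
            images = []
            for d in range(3):
                for wall in (0.0, float(room[d])):
                    img = np.array(src, dtype=float)
                    img[d] = 2.0 * wall - img[d]
                    images.append(img)
            return images

        def path_delays(src, listener, room):
            # Propagation delay (s) of the direct path and each first-order reflection.
            listener = np.asarray(listener, dtype=float)
            paths = [np.array(src, dtype=float)] + first_order_image_sources(src, room)
            return [np.linalg.norm(p - listener) / SPEED_OF_SOUND for p in paths]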

  3. Snoring classified: The Munich-Passau Snore Sound Corpus.

    Science.gov (United States)

    Janott, Christoph; Schmitt, Maximilian; Zhang, Yue; Qian, Kun; Pandit, Vedhas; Zhang, Zixing; Heiser, Clemens; Hohenhorst, Winfried; Herzog, Michael; Hemmert, Werner; Schuller, Björn

    2018-03-01

    Snoring can be excited in different locations within the upper airways during sleep. It was hypothesised that the excitation locations are correlated with distinct acoustic characteristics of the snoring noise. To verify this hypothesis, a database of snore sounds is developed, labelled with the location of sound excitation. Video and audio recordings taken during drug induced sleep endoscopy (DISE) examinations from three medical centres have been semi-automatically screened for snore events, which subsequently have been classified by ENT experts into four classes based on the VOTE classification. The resulting dataset containing 828 snore events from 219 subjects has been split into Train, Development, and Test sets. An SVM classifier has been trained using low level descriptors (LLDs) related to energy, spectral features, mel frequency cepstral coefficients (MFCC), formants, voicing, harmonic-to-noise ratio (HNR), spectral harmonicity, pitch, and microprosodic features. An unweighted average recall (UAR) of 55.8% could be achieved using the full set of LLDs including formants. Best performing subset is the MFCC-related set of LLDs. A strong difference in performance could be observed between the permutations of train, development, and test partition, which may be caused by the relatively low number of subjects included in the smaller classes of the strongly unbalanced data set. A database of snoring sounds is presented which are classified according to their sound excitation location based on objective criteria and verifiable video material. With the database, it could be demonstrated that machine classifiers can distinguish different excitation location of snoring sounds in the upper airway based on acoustic parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
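
    For reference, the unweighted average recall (UAR) reported above is simply the mean of the per-class recalls; with scikit-learn it can be computed as macro-averaged recall.

        from sklearn.metrics import recall_score

        def unweighted_average_recall(y_true, y_pred):
            # Mean of the per-class recalls, insensitive to class imbalance.
            return recall_score(y_true, y_pred, average="macro")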

  4. A further test of relevance of ASEL and CSEL in the determination of the rating sound level for shooting sounds

    NARCIS (Netherlands)

    Vos, J.

    1998-01-01

    In a previous study on the annoyance caused by shooting sounds [Proceedings Internoise '96, Vol. 5, 2231-2236], it was shown that an almost perfect prediction of the annoyance, as rated indoors with the windows closed, was obtained on the basis of the weighted sum of the outdoor A-weighted and

  5. Sound, music and gender in mobile games

    DEFF Research Database (Denmark)

    Machin, David; Van Leeuwen, T.

    2016-01-01

    resource, they can communicate very specific meanings and carry ideologies. In this paper, using multimodal critical discourse analysis, we analyse the sounds and music in two proto-games that are played on mobile devices: Genie Palace Divine and Dragon Island Race. While visually the two games are highly...... and impersonal and specific kinds of social relations which, we show, is highly gendered. It can also signal priorities, ideas and values, which in both cases, we show, relate to a world where there is simply no time to stop and think. © 2016, equinox publishing....

  6. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear-but our uses for light and sound go far beyond simply seeing a photo or hearing a song. A concentrated beam of light, lasers are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  7. Sound localization under perturbed binaural hearing.

    NARCIS (Netherlands)

    Wanrooij, M.M. van; Opstal, A.J. van

    2007-01-01

    This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband

  8. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with the sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material reveal the form...... geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form....

  9. Effects of stratification and fluctuations on sound propagation in the deep ocean

    International Nuclear Information System (INIS)

    March, R.H.

    1979-01-01

    It is noted that even in a homogeneous ocean, the effects of non-thermal noise and sound absorption limit the maximum effective range of detection of acoustic signals from particle cascades to distances of 2 to 10 kilometers, depending on the surface conditions prevailing and the directional characteristics of the detector. In the present paper, the effects of stratification and fluctuations in the sound velocity profile in the deep ocean over distances of this order are examined. Attention is given to two effects of potential significance, refraction and scintillation. It is found that neither effect has any significant consequences at ranges of less than 10 km

  10. Identification of Bearing Failure Using Signal Vibrations

    Science.gov (United States)

    Yani, Irsyadi; Resti, Yulia; Burlian, Firmansyah

    2018-04-01

    Vibration analysis can be used to identify damage to mechanical systems such as journal bearings. Failure can be identified by observing the vibration spectrum obtained by measuring the vibration signal occurring in a mechanical system. The bearing is one of the machine elements commonly used in mechanical systems. The main purpose of this research is to monitor the bearing condition and to identify bearing failure in a mechanical system by observing the resulting vibration. Data collection techniques based on recordings of the sound caused by the vibration of the mechanical system were used in this study; a database of bearing failure types was then created from the recorded vibration-induced sounds. The next step is to group the bearing damage by type based on the databases obtained. The results show that the percentage of success in identifying bearing damage is 98%.
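
    As a hedged illustration of the spectrum-based identification described above, the sketch below computes a windowed FFT amplitude spectrum of a synthetic sound recording and lists its strongest spectral lines; a bearing defect would appear as an extra peak at its characteristic frequency. The shaft and defect frequencies used here are invented for illustration and are not values from the study.

      import numpy as np
      from numpy.fft import rfft, rfftfreq

      fs = 20000                          # assumed sampling rate of the recording
      t = np.arange(0, 1.0, 1 / fs)
      shaft_hz, defect_hz = 25.0, 162.0   # hypothetical shaft and defect tones
      x = (np.sin(2 * np.pi * shaft_hz * t)
           + 0.4 * np.sin(2 * np.pi * defect_hz * t)
           + 0.1 * np.random.default_rng(1).normal(size=t.size))

      spec = np.abs(rfft(x * np.hanning(x.size)))   # windowed amplitude spectrum
      freqs = rfftfreq(x.size, d=1 / fs)

      # Report the strongest bins; an extra line near defect_hz flags the fault.
      for k in np.argsort(spec)[-5:][::-1]:
          print(f"{freqs[k]:8.1f} Hz   amplitude {spec[k]:8.1f}")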

  11. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  12. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  13. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  14. Neural Network Based Recognition of Signal Patterns in Application to Automatic Testing of Rails

    Directory of Open Access Journals (Sweden)

    Tomasz Ciszewski

    2006-01-01

    Full Text Available The paper describes the application of a neural network for the recognition of signal patterns in measuring data gathered by the railroad ultrasound testing car. Digital conversion of the measuring signal allows large quantities of data to be stored and processed. The elaboration of smart, effective and automatic procedures recognizing the obtained patterns on the basis of the measured signal amplitude is presented. The test covers only two classes of pattern recognition. In the authors' opinion, if a sufficiently large quantity of training data is provided, the presented method is applicable to a system that recognizes many classes.

  15. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  16. Wavelet analysis to decompose a vibration simulation signal to improve pre-distribution testing of packaging

    Science.gov (United States)

    Griffiths, K. R.; Hicks, B. J.; Keogh, P. S.; Shires, D.

    2016-08-01

    In general, vehicle vibration is non-stationary and has a non-Gaussian probability distribution; yet existing testing methods for packaging design employ Gaussian distributions to represent vibration induced by road profiles. This frequently results in over-testing and/or over-design of the packaging to meet a specification and correspondingly leads to wasteful packaging and product waste, which represent $15bn per year in the USA and €3bn per year in the EU. The purpose of the paper is to enable a measured non-stationary acceleration signal to be replaced by a constructed signal that includes as far as possible any non-stationary characteristics from the original signal. The constructed signal consists of a concatenation of decomposed shorter duration signals, each having its own kurtosis level. Wavelet analysis is used for the decomposition process into inner and outlier signal components. The constructed signal has a similar PSD to the original signal, without incurring excessive acceleration levels. This allows an improved and more representative simulated input signal to be generated that can be used on the current generation of shaker tables. The wavelet decomposition method is also demonstrated experimentally through two correlation studies. It is shown that significant improvements over current international standards for packaging testing are achievable; hence the potential for more efficient packaging system design is possible.
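
    The decomposition step can be illustrated as follows: a synthetic, shock-contaminated vibration record is split with a discrete wavelet transform, the kurtosis of each level is reported, and an "outlier" part is separated from a near-Gaussian "inner" part by thresholding the detail coefficients. Wavelet family, level and threshold below are assumptions for the sketch, not the paper's choices.

      import numpy as np
      import pywt
      from scipy.stats import kurtosis

      rng = np.random.default_rng(0)
      fs = 1000
      t = np.arange(0, 10, 1 / fs)
      # Synthetic vibration: Gaussian road background plus sparse shocks,
      # giving the non-Gaussian, high-kurtosis character discussed above.
      x = rng.normal(size=t.size)
      x[rng.choice(t.size, 20, replace=False)] += rng.normal(scale=8.0, size=20)

      coeffs = pywt.wavedec(x, 'db4', level=5)      # multi-level DWT
      print("level   excess kurtosis")
      for i, c in enumerate(coeffs):
          name = 'a5' if i == 0 else f'd{6 - i}'
          print(f"{name:>5}   {kurtosis(c):6.2f}")

      # Outlier part: large detail coefficients only; inner part: the rest.
      thr = 3.0 * np.median(np.abs(coeffs[-1]))
      outlier = [np.zeros_like(coeffs[0])] + [np.where(np.abs(c) > thr, c, 0.0)
                                              for c in coeffs[1:]]
      inner = [c - o for c, o in zip(coeffs, outlier)]
      x_out = pywt.waverec(outlier, 'db4')[:x.size]
      x_in = pywt.waverec(inner, 'db4')[:x.size]
      print("kurtosis  original %.2f   inner %.2f   outlier %.2f"
            % (kurtosis(x), kurtosis(x_in), kurtosis(x_out)))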

  17. Signal quality enhancement using higher order wavelets for ultrasonic TOFD signals from austenitic stainless steel welds.

    Science.gov (United States)

    Praveen, Angam; Vijayarekha, K; Abraham, Saju T; Venkatraman, B

    2013-09-01

    The time of flight diffraction (TOFD) technique is a well-developed ultrasonic non-destructive testing (NDT) method and has been applied successfully for accurate sizing of defects in metallic materials. This technique, developed in the early 1970s as a means for accurate sizing and positioning of cracks in nuclear components, became very popular in the late 1990s and is today widely used in various industries for weld inspection. One of the main advantages of TOFD is that, apart from being a fast technique, it provides a higher probability of detection for linear defects. Since TOFD is based on diffraction of sound waves from the extremities of the defect, compared to reflection from planar faces as in pulse echo and phased array, the resultant signal is quite weak and the signal to noise ratio (SNR) low. In many cases the defect signal is submerged in this noise, making detection, positioning and sizing difficult. Several signal processing methods such as digital filtering, Split Spectrum Processing (SSP), Hilbert Transform and Correlation techniques have been developed in order to suppress unwanted noise and enhance the quality of the defect signal, which can thus be used for characterization of defects and the material. Wavelet Transform based thresholding techniques have been applied largely for de-noising of ultrasonic signals. In this paper, however, higher order wavelets are used for analyzing the de-noising performance for TOFD signals obtained from Austenitic Stainless Steel welds. It is observed that higher order wavelets give greater SNR improvement compared to the lower order wavelets. Copyright © 2013 Elsevier B.V. All rights reserved.
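
    The de-noising comparison can be sketched on a synthetic TOFD-like trace: soft universal thresholding of wavelet detail coefficients is applied with Daubechies wavelets of increasing order, and the SNR gain is reported. Sampling rate, pulse shape and threshold rule below are illustrative assumptions, not the parameters of the study.

      import numpy as np
      import pywt

      rng = np.random.default_rng(2)
      fs = 100e6                                   # assumed 100 MHz digitiser
      t = np.arange(0, 20e-6, 1 / fs)
      # Synthetic trace: a 5 MHz tone-burst "tip diffraction" echo buried in noise.
      echo = np.exp(-((t - 8e-6) / 0.5e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
      x = echo + 0.8 * rng.normal(size=t.size)

      def snr_db(clean, est):
          return 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - est) ** 2))

      def denoise(sig, wavelet, level=5):
          coeffs = pywt.wavedec(sig, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
          thr = sigma * np.sqrt(2 * np.log(sig.size))           # universal threshold
          den = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
          return pywt.waverec(den, wavelet)[:sig.size]

      print(f"input SNR        : {snr_db(echo, x):5.1f} dB")
      for w in ('db2', 'db8', 'db20'):             # lower- vs higher-order wavelets
          print(f"denoised, {w:<5} : {snr_db(echo, denoise(x, w)):5.1f} dB")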

  18. Suppression of grasshopper sound production by nitric oxide-releasing neurons of the central complex

    Science.gov (United States)

    Weinrich, Anja; Kunst, Michael; Wirmer, Andrea; Holstein, Gay R.

    2008-01-01

    The central complex of acridid grasshoppers integrates sensory information pertinent to reproduction-related acoustic communication. Activation of nitric oxide (NO)/cyclic GMP-signaling by injection of NO donors into the central complex of restrained Chorthippus biguttulus females suppresses muscarine-stimulated sound production. In contrast, sound production is released by aminoguanidine (AG)-mediated inhibition of nitric oxide synthase (NOS) in the central body, suggesting a basal release of NO that suppresses singing in this situation. Using anti-citrulline immunocytochemistry to detect recent NO production, subtypes of columnar neurons with somata located in the pars intercerebralis and tangential neurons with somata in the ventro-median protocerebrum were distinctly labeled. Their arborizations in the central body upper division overlap with expression patterns for NOS and with the site of injection where NO donors suppress sound production. Systemic application of AG increases the responsiveness of unrestrained females to male calling songs. Identical treatment with the NOS inhibitor that increased male song-stimulated sound production in females induced a marked reduction of citrulline accumulation in central complex columnar and tangential neurons. We conclude that behavioral situations that are unfavorable for sound production (like being restrained) activate NOS-expressing central body neurons to release NO and elevate the behavioral threshold for sound production in female grasshoppers. PMID:18574586

  19. Testing the potential of an elevated temperature IRSL signal from K-feldspar

    DEFF Research Database (Denmark)

    Buylaert, Jan-Pieter; Murray, A.S.; Thomsen, Kristina Jørkov

    2009-01-01

    on laboratory tests (recycling ratio, recuperation, dose recovery) we show that our SAR protocol is suitable for these samples. The observed post-IR IR fading rates (mean g2days = 1.62 ± 0.06%/decade, n = 24; assuming logarithmic fading) are significantly lower than those measured at 50 °C (mean g2days = 3...... the conventional IRSL signal stimulated at 50 °C and detected in the blue–violet region of the spectrum. One of these was the post-IR IR signal in which first an IR bleach is carried out at a low temperature (e.g. 100 s at 50 °C) and a remaining IRSL signal is measured at an elevated temperature (100 s at 225 °C......; detection in the blue–violet region). It is the latter signal that is of interest in this paper. We test such a post-IR IR dating protocol on K-feldspar extracts from a variety of locations and depositional environments and compare the results with those from the conventional IR at 50 °C protocol. Based...

  20. Sound field separation with cross measurement surfaces.

    Directory of Open Access Journals (Sweden)

    Jin Mao

    Full Text Available With conventional near-field acoustical holography, it is impossible to identify sound pressure when the coherent sound sources are located on the same side of the array. This paper proposes a solution, using cross measurement surfaces to separate the sources based on the equivalent source method. Each equivalent source surface is built in the center of the corresponding original source with a spherical surface. According to the different transfer matrices between equivalent sources and points on holographic surfaces, the weighting of each equivalent source from coherent sources can be obtained. Numerical and experimental studies have been performed to test the method. For the sound pressure including noise after separation in the experiment, the calculation accuracy can be improved by reconstructing the pressure with Tikhonov regularization and the L-curve method. On the whole, a single source can be effectively separated from coherent sources using cross measurement.
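
    The Tikhonov-regularised reconstruction mentioned above can be illustrated with a toy equivalent-source problem: monopole transfer functions between a few equivalent sources and a hologram plane are inverted with a fixed regularisation parameter. Geometry, frequency and the parameter value are invented for the sketch; the paper selects the parameter with the L-curve method, which is omitted here.

      import numpy as np

      rng = np.random.default_rng(3)
      k = 2 * np.pi * 1000 / 343.0          # wavenumber at an assumed 1 kHz
      src = rng.uniform(-0.2, 0.2, size=(8, 3))                       # equivalent sources
      mic = np.c_[rng.uniform(-0.5, 0.5, (32, 2)), np.full(32, 0.3)]  # hologram points

      def green(a, b):
          """Free-field monopole transfer matrix between two point sets."""
          r = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
          return np.exp(-1j * k * r) / (4 * np.pi * r)

      G = green(mic, src)                                        # M x N transfer matrix
      q_true = rng.normal(size=8) + 1j * rng.normal(size=8)      # true source weights
      p = G @ q_true
      p += 0.05 * np.abs(p).mean() * (rng.normal(size=32) + 1j * rng.normal(size=32))

      # Tikhonov-regularised least squares; lambda fixed here for illustration.
      lam = 1e-3 * np.linalg.norm(G, 2) ** 2
      q_est = np.linalg.solve(G.conj().T @ G + lam * np.eye(8), G.conj().T @ p)
      print("relative error:", np.linalg.norm(q_est - q_true) / np.linalg.norm(q_true))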

  1. Measurement of acoustic emission signal energy. Calibration and tests

    International Nuclear Information System (INIS)

    Chretien, N.; Bernard, P.; Fayolle, J.

    1975-01-01

    The possibility of using an Audimat W device for analyzing the electric energy of signals delivered by a piezo-electric sensor for acoustic emission was investigated. The characteristics of the prototype device could be improved. The tests performed revealed that the 7075-T651 aluminium alloy can be used as a reference material [fr

  2. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Performance Verification Report: Final Comprehensive Performance Test Report, P/N 1331720-2TST, S/N 105/A1

    Science.gov (United States)

    Platt, R.

    1999-01-01

    This is the Performance Verification Report, Final Comprehensive Performance Test (CPT) Report, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A). This specification establishes the requirements for the CPT and Limited Performance Test (LPT) of the AMSU-1A, referred to herein as the unit. The sequence in which the several phases of this test procedure shall take place is shown.

  3. Sound absorption effects in a rectangular enclosure with the foamed aluminum sheet absorber

    International Nuclear Information System (INIS)

    Oh, Jae Eung; Chung, Jin Tai; Kim, Sang Hun; Chung, Kyung Ryul

    1998-01-01

    To find the optimal thickness of the sound absorber and the sound absorption effects of the selected thickness in the frequency range of interest, the analytical study identifies the interior and exterior sound field characteristics of a rectangular enclosure with foamed aluminum lining, and the experimental verification is performed with random noise input. Using a two-microphone impedance tube, we experimentally measure the absorption coefficient and the impedance of simple sound absorbing materials. The measured acoustical parameters of the test samples are applied to the theoretical analysis to predict the sound pressure field in the cavity. The sound absorption effects from measurements are compared with predicted ones both with and without the foamed aluminum lining in the cavity of the rectangular enclosure

  4. Wavelet modeling of signals for non-destructive testing of concretes

    International Nuclear Information System (INIS)

    Shao, Zhixue; Shi, Lihua; Cai, Jian

    2011-01-01

    In a non-destructive test of concrete structures, ultrasonic pulses are commonly used to detect damage or embedded objects from their reflections. A wavelet modeling method is proposed here to identify the main reflections and to remove the interferences in the detected ultrasonic waves. This method assumes that if the structure is stimulated by a wavelet function with good time–frequency localization ability, the detected signal is a combination of time-delayed and amplitude-attenuated wavelets. Therefore, modeling of the detected signal by wavelets can give a straightforward and simple model of the original signal. The central time and amplitude of each wavelet represent the position and amplitude of the reflections in the detected structure. A signal processing method is also proposed to estimate the structure response to wavelet excitation from its response to a high-voltage pulse with a sharp leading edge. A signal generation card with a compact peripheral component interconnect extension for instrumentation interface is designed to produce this high-voltage pulse. The proposed method is applied to synthesized aperture focusing technology of concrete specimens and the image results are provided

  5. Compressive Sensing Based Source Localization for Controlled Acoustic Signals Using Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    Wei Ke

    2017-01-01

    Full Text Available In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at a few points in the discrete spatial domain, our method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on the compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT) based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary for matching the changes of audio signals, and then the sparse solution could better represent location estimations. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate l0 norm minimization to enhance reconstruction performance for sparse signals in low signal-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation results and experimental results where substantial improvement for localization performance can be obtained in the noisy and reverberant conditions.
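
    The sparse-recovery core of such a method can be shown on a toy grid. The sketch below deliberately substitutes scikit-learn's Orthogonal Matching Pursuit for the paper's improved block-sparse approximate l0 algorithm, and a random matrix for the real propagation dictionary between grid points and the microphone arrays; it only illustrates how a few measurements can recover a spatially sparse source vector.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(4)
      n_grid, n_meas = 200, 40      # candidate source positions, measurements
      A = rng.normal(size=(n_meas, n_grid)) / np.sqrt(n_meas)   # stand-in dictionary

      x_true = np.zeros(n_grid)
      x_true[[37, 121]] = [1.0, 0.7]                # two active sources on the grid
      y = A @ x_true + 0.01 * rng.normal(size=n_meas)

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False).fit(A, y)
      print("true source indices:", [37, 121],
            " estimated:", sorted(np.flatnonzero(omp.coef_).tolist()))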

  6. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity...... and initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. A European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs...... 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24...

  7. Signals, processes, and systems an interactive multimedia introduction to signal processing

    CERN Document Server

    Karrenberg, Ulrich

    2013-01-01

    This is a very new concept for learning Signal Processing, not only from the physically-based scientific fundamentals, but also from the didactic perspective, based on modern results of brain research. The textbook together with the DVD form a learning system that provides investigative studies and enables the reader to interactively visualize even complex processes. The unique didactic concept is built on visualizing signals and processes on the one hand, and on graphical programming of signal processing systems on the other. The concept has been designed especially for microelectronics, computer technology and communication. The book allows the reader to develop, modify, and optimize useful applications using DasyLab - a professional and globally supported software package for metrology and control engineering. With the 3rd edition, the software is also suitable for 64 bit systems running on Windows 7. Real signals can be acquired, processed and played on the sound card of your computer. The book provides more than 200 pre-pr...

  8. Heat of combustion, sound speed and component fluctuations in natural gas

    International Nuclear Information System (INIS)

    Burstein, L.; Ingman, D.

    1998-01-01

    The heat of combustion and sound speed of natural gas were studied as a function of random fluctuation of the gas fractions. A method of sound speed determination was developed and used for over 50,000 possible variants of component concentrations in four- and five- component mixtures. A test on binary (methane-ethane) and multicomponent (Gulf Coast) gas mixtures under standard pressure and moderate temperatures shows satisfactory predictability of sound speed on the basis of the binary virial coefficients, sound speeds and heat capacities of the pure components. Uncertainty in the obtained values does not exceed that of the pure component data. The results of comparison between two natural gas mixtures - with and without nonflammable components - are reported
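
    A first approximation to the sound-speed calculation follows from the ideal-gas relation c = sqrt(gamma*R*T/M) with mole-fraction-weighted molar mass and heat capacity; the study additionally uses binary virial coefficients, which this sketch deliberately omits. Component data and the example composition are generic handbook values, not the paper's.

      import numpy as np

      R = 8.314462  # J/(mol K)

      # (molar mass kg/mol, ideal-gas cp J/(mol K) near 298 K) - illustrative values
      components = {
          'methane':  (0.01604, 35.7),
          'ethane':   (0.03007, 52.5),
          'nitrogen': (0.02801, 29.1),
          'CO2':      (0.04401, 37.1),
      }

      def ideal_sound_speed(mole_fractions, T=298.15):
          """Ideal-gas mixture sound speed c = sqrt(gamma R T / M)."""
          M = sum(x * components[name][0] for name, x in mole_fractions.items())
          cp = sum(x * components[name][1] for name, x in mole_fractions.items())
          gamma = cp / (cp - R)               # cv = cp - R for an ideal gas
          return np.sqrt(gamma * R * T / M)

      mix = {'methane': 0.95, 'ethane': 0.03, 'nitrogen': 0.01, 'CO2': 0.01}
      print(f"approximate sound speed: {ideal_sound_speed(mix):.0f} m/s")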

  9. Remembering that big things sound big: Sound symbolism and associative memory.

    Science.gov (United States)

    Preziosi, Melissa A; Coane, Jennifer H

    2017-01-01

    According to sound symbolism theory, individual sounds or clusters of sounds can convey meaning. To examine the role of sound symbolic effects on processing and memory for nonwords, we developed a novel set of 100 nonwords to convey largeness (nonwords containing plosive consonants and back vowels) and smallness (nonwords containing fricative consonants and front vowels). In Experiments 1A and 1B, participants rated the size of the 100 nonwords and provided definitions to them as if they were products. Nonwords composed of fricative/front vowels were rated as smaller than those composed of plosive/back vowels. In Experiment 2, participants studied sound symbolic congruent and incongruent nonword and participant-generated definition pairings. Definitions paired with nonwords that matched the size and participant-generated meanings were recalled better than those that did not match. When the participant-generated definitions were re-paired with other nonwords, this mnemonic advantage was reduced, although still reliable. In a final free association study, the possibility that plosive/back vowel and fricative/front vowel nonwords elicit sound symbolic size effects due to mediation from word neighbors was ruled out. Together, these results suggest that definitions that are sound symbolically congruent with a nonword are more memorable than incongruent definition-nonword pairings. This work has implications for the creation of brand names and how to create brand names that not only convey desired product characteristics, but also are memorable for consumers.

  10. An Antropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, to a talk about anthropology of sound, sound studies, musical canons and ideology.

  11. Path length entropy analysis of diastolic heart sounds.

    Science.gov (United States)

    Griffel, Benjamin; Zia, Mohammad K; Fridman, Vladamir; Saponieri, Cesare; Semmlow, John L

    2013-09-01

    Early detection of coronary artery disease (CAD) using the acoustic approach, a noninvasive and cost-effective method, would greatly improve the outcome of CAD patients. To detect CAD, we analyze diastolic sounds for possible CAD murmurs. We observed diastolic sounds to exhibit 1/f structure and developed a new method, path length entropy (PLE) and a scaled version (SPLE), to characterize this structure to improve CAD detection. We compare SPLE results to Hurst exponent, Sample entropy and Multiscale entropy for distinguishing between normal and CAD patients. SPLE achieved a sensitivity-specificity of 80%-81%, the best of the tested methods. However, PLE and SPLE are not sufficient to prove nonlinearity, and evaluation using surrogate data suggests that our cardiovascular sound recordings do not contain significant nonlinear properties. Copyright © 2013 Elsevier Ltd. All rights reserved.
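
    Of the reference measures named above, sample entropy is simple to reproduce; a plain implementation is sketched below on synthetic signals (the authors' path length entropy itself is not reproduced here). Template length m and tolerance r follow the usual defaults (m = 2, r = 0.2 times the standard deviation).

      import numpy as np

      def sample_entropy(x, m=2, r_factor=0.2):
          """Plain SampEn(m, r) of a 1-D signal (Chebyshev distance)."""
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std()

          def pair_count(mm):
              # all length-mm templates; count pairs within tolerance r
              tmpl = np.array([x[i:i + mm] for i in range(x.size - mm)])
              d = np.max(np.abs(tmpl[:, None, :] - tmpl[None, :, :]), axis=2)
              return (np.sum(d <= r) - tmpl.shape[0]) / 2       # exclude self-matches

          B, A = pair_count(m), pair_count(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      rng = np.random.default_rng(5)
      white = rng.normal(size=800)
      walk = np.cumsum(white)              # smoother, more "regular" signal
      print("SampEn white noise :", round(sample_entropy(white), 3))
      print("SampEn random walk :", round(sample_entropy(walk), 3))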

  12. The influence of (central) auditory processing disorder in speech sound disorders.

    Science.gov (United States)

    Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein

    2016-01-01

    Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and targeting of speech therapy in children with speech sound disorders. To study phonological measures and (central) auditory processing of children with speech sound disorder. Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to their (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. The comparison among the tests evaluated between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective to indicate the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  13. Sound absorption and morphology characteristic of porous concrete paving blocks

    Science.gov (United States)

    Halim, N. H. Abd; Nor, H. Md; Ramadhansyah, P. J.; Mohamed, A.; Hassan, N. Abdul; Ibrahim, M. H. Wan; Ramli, N. I.; Nazri, F. Mohamed

    2017-11-01

    In this study, the sound absorption and morphology characteristics of Porous Concrete Paving Blocks (PCPB) at different sizes of coarse aggregate were presented. Three different sizes of coarse aggregate were used: passing 10 mm retained 5 mm (as Control), passing 8 mm retained 5 mm (8 - 5) and passing 10 mm retained 8 mm (10 - 8). The sound absorption test was conducted with an impedance tube at different frequencies. It was found that the size of coarse aggregate affects the level of absorption of the specimens. PCPB 10 - 8 showed high sound absorption compared to the other blocks. The microstructure morphology of PCPB also gives a clearer view of the existing micro-cracks and voids inside the specimens, which affect the sound absorption results.

  14. Gefinex 400S (Sampo) EM-soundings at Olkiluoto 2008

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2008-09-01

    In the beginning of June 2008 the Geological Survey of Finland (GTK) carried out electromagnetic frequency soundings with Gefinex 400S equipment (Sampo) in the vicinity of ONKALO at the Olkiluoto site investigation area. The same sounding sites were first measured and marked in 2004, and the soundings have since been repeated yearly in the same season. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electric conductivity of the earth at ONKALO and the repository area. The measurements form two 1400 m long broadside profiles, which have a 200 m mutual distance and 200 m station separation. The profiles have been measured using 200, 500, and 800 m coil separations. Because of the strong electromagnetic noise, not all planned sites (48) could be measured. In 2008 the measurements were performed at the sites that were successful in 2007 (43 soundings). The numerous power lines and cables in the area generate local disturbances on the sounding curves, but the signal-to-noise ratio, even with long coil separations, and the repeatability of the results are reasonably good. However, the most suitable sites for monitoring purposes are those without strong surficial 3D effects. Comparison of the results of the 2004 to 2008 surveys shows differences on some ARD (apparent resistivity-depth) curves. These are mainly the result of modified man-made structures. The effects of changes in groundwater conditions are evidently slight. (orig.)

  15. Sound from charged particles in liquids

    International Nuclear Information System (INIS)

    Askar'yan, G.A.

    1980-01-01

    Two directions for the application of the sound produced by charged particles passing through liquids are considered: in biology, and for charged-particle registration. The application of this sound in radiology is determined by the contribution of its hypersound component (approximately 10^9 Hz) to the radiological effect of ionizing radiation on micro-organisms and cells. Large amplitudes and pressure gradients in a hypersound wave have a pronounced destructive, breaking effect on various microobjects (cells, bacteria, viruses). An essential peculiarity of these processes is the possibility of control by choosing conditions that change hypersound generation, propagation and effect. This may lead not only to control of radiation effects but may also explain and complete the analogy between the effects of ionizing radiation and ultrasound on bioobjects. The second direction is the acoustic registration of passing ionizing particles. It is based on the possibility of guaranteed signal reception from a shower with 10^15-10^16 eV energy in water at distances of hundreds of meters. The use of acoustic techniques for neutrino registration in the DUMAND project permits a detecting volume of water with a mass of 10^9 t and higher.

  16. Development of a Finite-Difference Time Domain (FDTD) Model for Propagation of Transient Sounds in Very Shallow Water.

    Science.gov (United States)

    Sprague, Mark W; Luczkovich, Joseph J

    2016-01-01

    This finite-difference time domain (FDTD) model for sound propagation in very shallow water uses pressure and velocity grids with both 3-dimensional Cartesian and 2-dimensional cylindrical implementations. Parameters, including water and sediment properties, can vary in each dimension. Steady-state and transient signals from discrete and distributed sources, such as the surface of a vibrating pile, can be used. The cylindrical implementation uses less computation but requires axial symmetry. The Cartesian implementation allows asymmetry. FDTD calculations compare well with those of a split-step parabolic equation. Applications include modeling the propagation of individual fish sounds, fish aggregation sounds, and distributed sources.
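
    The pressure-velocity leapfrog update at the core of any acoustic FDTD scheme is easy to show in one dimension. The sketch below is a generic staggered-grid textbook scheme with invented grid parameters and a rigid-walled domain; it is not the 2-D cylindrical or 3-D Cartesian implementation described above.

      import numpy as np

      c, rho = 1500.0, 1000.0        # water sound speed (m/s) and density (kg/m^3)
      dx = 0.05                      # grid spacing (m)
      dt = 0.5 * dx / c              # time step satisfying the CFL condition
      nx, nt = 400, 800
      kappa = rho * c ** 2           # bulk modulus

      p = np.zeros(nx)               # pressure at cell centres
      u = np.zeros(nx + 1)           # particle velocity at cell faces (rigid ends)
      src = nx // 4

      for n in range(nt):
          t = n * dt
          if t < 1e-3:               # transient source: one cycle of a 1 kHz burst
              p[src] += np.sin(2 * np.pi * 1000.0 * t)
          u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])   # velocity from pressure gradient
          p -= dt * kappa / dx * (u[1:] - u[:-1])         # pressure from velocity divergence

      print("peak |p| on the grid after %d steps: %.3f" % (nt, np.max(np.abs(p))))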

  17. The frequency range of TMJ sounds.

    Science.gov (United States)

    Widmalm, S E; Williams, W J; Djurdjanovic, D; McKay, D C

    2003-04-01

    There are conflicting opinions about the frequency range of temporomandibular joint (TMJ) sounds. Some authors claim that the upper limit is about 650 Hz. The aim was to test the hypothesis that TMJ sounds may contain frequencies well above 650 Hz but that significant amounts of their energy are lost if the vibrations are recorded using contact sensors and/or travel far through the head tissues. Time-frequency distributions of 172 TMJ clickings (three subjects) were compared between recordings with one microphone in the ear canal and a skin contact transducer above the clicking joint and between recordings from two microphones, one in each ear canal. The energy peaks of the clickings recorded with a microphone in the ear canal on the clicking side were often well above 650 Hz and always in a significantly higher area (range 117-1922 Hz, P 375 Hz) or in microphone recordings from the opposite ear canal (range 141-703 Hz). Future studies are required to establish normative frequency range values of TMJ sounds but need methods also capable of recording the high frequency vibrations.

  18. Experimental investigation of sound absorption of acoustic wedges for anechoic chambers

    Science.gov (United States)

    Belyaev, I. V.; Golubev, A. Yu.; Zverev, A. Ya.; Makashov, S. Yu.; Palchikovskiy, V. V.; Sobolev, A. F.; Chernykh, V. V.

    2015-09-01

    The results of measuring the sound absorption by acoustic wedges, which were performed in AC-3 and AC-11 reverberation chambers at the Central Aerohydrodynamic Institute (TsAGI), are presented. Wedges of different densities manufactured from superfine basaltic and thin mineral fibers were investigated. The results of tests of these wedges were compared to the sound absorption of wedges of the operating AC-2 anechoic facility at TsAGI. It is shown that basaltic-fiber wedges have better sound-absorption characteristics than the investigated analogs and can be recommended for facing anechoic facilities under construction.

  19. An integrative time-varying frequency detection and channel sounding method for dynamic plasma sheath

    Science.gov (United States)

    Shi, Lei; Yao, Bo; Zhao, Lei; Liu, Xiaotong; Yang, Min; Liu, Yanming

    2018-01-01

    The plasma sheath surrounding a hypersonic vehicle is a dynamic and time-varying medium, and it is almost impossible to calculate its time-varying physical parameters directly. In-flight detection of the degree of time variation is important for understanding the dynamic nature of the physical parameters and their effect on re-entry communication. In this paper, a constant envelope zero autocorrelation (CAZAC) sequence based time-varying frequency detection and channel sounding method is proposed to detect the time-varying property of the plasma sheath electron density and the wireless channel characteristics. The proposed method utilizes the CAZAC sequence, which has excellent autocorrelation and spreading gain characteristics, to realize dynamic time-varying detection/channel sounding at low signal-to-noise ratio in the plasma sheath environment. Theoretical simulation under a typical time-varying radio channel shows that the proposed method is capable of detecting time-variation frequencies up to 200 kHz and can trace the channel amplitude and phase in the time domain well under -10 dB. Experimental results obtained in an RF modulation discharge plasma device verified the time-variation detection ability in a practical dynamic plasma sheath. Meanwhile, a nonlinear effect of the dynamic plasma sheath on the communication signal is observed through the channel sounding results.
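
    A minimal sketch of correlation-based channel sounding with a CAZAC waveform is given below, using a Zadoff-Chu sequence (a standard CAZAC family) and a made-up two-tap channel at -10 dB SNR, with a few repeated soundings averaged. It only illustrates how the sequence's spreading gain allows sounding well below 0 dB; it does not reproduce the paper's detector or its time-variation estimator.

      import numpy as np

      def zadoff_chu(N, u=1):
          """Zadoff-Chu sequence (constant envelope, zero autocorrelation), N odd."""
          n = np.arange(N)
          return np.exp(-1j * np.pi * u * n * (n + 1) / N)

      rng = np.random.default_rng(6)
      N, n_rep = 255, 8
      s = zadoff_chu(N)

      # Hypothetical two-path channel and -10 dB SNR additive noise.
      h = np.zeros(N, dtype=complex)
      h[0], h[7] = 1.0, 0.4 * np.exp(1j * 0.8)
      clean = np.fft.ifft(np.fft.fft(s) * np.fft.fft(h))        # circular convolution
      noise_power = 10.0 * np.mean(np.abs(clean) ** 2)          # SNR = -10 dB

      h_acc = np.zeros(N, dtype=complex)
      for _ in range(n_rep):
          rx = clean + np.sqrt(noise_power / 2) * (rng.normal(size=N)
                                                   + 1j * rng.normal(size=N))
          # matched-filter (correlation) estimate; the correlation peak gain is N
          h_acc += np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(s))) / N
      h_est = h_acc / n_rep

      print("estimated dominant delay taps:",
            sorted(np.argsort(np.abs(h_est))[-2:].tolist()))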

  20. Reproduction-Related Sound Production of Grasshoppers Regulated by Internal State and Actual Sensory Environment

    Science.gov (United States)

    Heinrich, Ralf; Kunst, Michael; Wirmer, Andrea

    2012-01-01

    The interplay of neural and hormonal mechanisms activated by entero- and extero-receptors biases the selection of actions by decision making neuronal circuits. The reproductive behavior of acoustically communicating grasshoppers, which is regulated by short-term neural and longer-term hormonal mechanisms, has frequently been used to study the cellular and physiological processes that select particular actions from the species-specific repertoire of behaviors. Various grasshoppers communicate with species- and situation-specific songs in order to attract and court mating partners, to signal reproductive readiness, or to fend off competitors. Selection and coordination of type, intensity, and timing of sound signals is mediated by the central complex, a highly structured brain neuropil known to integrate multimodal pre-processed sensory information by a large number of chemical messengers. In addition, reproductive activity including sound production critically depends on maturation, previous mating experience, and oviposition cycles. In this regard, juvenile hormone released from the corpora allata has been identified as a decisive hormonal signal necessary to establish reproductive motivation in grasshopper females. Both regulatory systems, the central complex mediating short-term regulation and the corpora allata mediating longer-term regulation of reproduction-related sound production mutually influence each other’s activity in order to generate a coherent state of excitation that promotes or suppresses reproductive behavior in respective appropriate or inappropriate situations. This review summarizes our current knowledge about extrinsic and intrinsic factors that influence grasshopper reproductive motivation, their representation in the nervous system and their integrative processing that mediates the initiation or suppression of reproductive behaviors. PMID:22737107

  1. Reproduction-related sound production of grasshoppers regulated by internal state and actual sensory environment

    Directory of Open Access Journals (Sweden)

    Ralf eHeinrich

    2012-06-01

    Full Text Available The interplay of neural and hormonal mechanisms activated by entero- and exteroreceptors biases the selection of actions by decision making neuronal circuits. The reproductive behaviour of acoustically communicating grasshoppers, which is regulated by short-term neural and longer-term hormonal mechanisms, has frequently been used to study the cellular and physiological processes that select particular actions from the species-specific repertoire of behaviours. Various grasshoppers communicate with species- and situation-specific songs in order to attract and court mating partners, to signal reproductive readiness or to fend off competitors. Selection and coordination of type, intensity and timing of sound signals is mediated by the central complex, a highly structured brain neuropil known to integrate multimodal pre-processed sensory information by a large number of chemical messengers. In addition, reproductive activity including sound production critically depends on maturation, previous mating experience and oviposition cycles. In this regard, juvenile hormone released from the corpora allata has been identified as a decisive hormonal signal necessary to establish reproductive motivation in grasshopper females. Both regulatory systems, the central complex mediating short-term regulation and the corpora allata mediating longer-term regulation of reproduction-related sound production, mutually influence each other's activity in order to generate a coherent state of excitation that promotes or suppresses reproductive behaviour in respective appropriate or inappropriate situations. This review summarizes our current knowledge about extrinsic and intrinsic factors that influence grasshopper reproductive motivation, their representation in the nervous system and their integrative processing that mediates the initiation or suppression of reproductive behaviors.

  2. Acoustic-Seismic Coupling of Broadband Signals - Analysis of Potential Disturbances during CTBT On-Site Inspection Measurements

    Science.gov (United States)

    Liebsch, Mattes; Altmann, Jürgen

    2015-04-01

    For the verification of the Comprehensive Nuclear Test Ban Treaty (CTBT) the precise localisation of possible underground nuclear explosion sites is important. During an on-site inspection (OSI) sensitive seismic measurements of aftershocks can be performed, which, however, can be disturbed by other signals. To improve the quality and effectiveness of these measurements it is essential to understand those disturbances so that they can be reduced or prevented. In our work we focus on disturbing signals caused by airborne sources: When the sound of aircraft (as often used by the inspectors themselves) hits the ground, it propagates through pores in the soil. Its energy is transferred to the ground and soil vibrations are created which can mask weak aftershock signals. The understanding of the coupling of acoustic waves to the ground is still incomplete. However, it is necessary to improve the performance of an OSI, e.g. to address potential consequences for the sensor placement, the helicopter trajectories etc. We present our recent advances in this field. We performed several measurements to record sound pressure and soil velocity produced by various sources, e.g. broadband excitation by jet aircraft passing overhead and signals artificially produced by a speaker. For our experimental set-up microphones were placed close to the ground and geophones were buried in different depths in the soil. Several sensors were shielded from the directly incident acoustic signals by a box coated with acoustic damping material. While sound pressure under the box was strongly reduced, the soil velocity measured under the box was just slightly smaller than outside of it. Thus these soil vibrations were mostly created outside the box and travelled through the soil to the sensors. This information is used to estimate characteristic propagation lengths of the acoustically induced signals in the soil. In the seismic data we observed interference patterns which are likely caused by the

  3. Dynamic Testing of Signal Transduction Deregulation During Breast Cancer Initiation

    Science.gov (United States)

    2012-07-01

    Victoria Seewaldt, M.D., Duke University, Durham. Only fragments of this report abstract are recoverable: they describe nanobiosensor measurements in the attomole to zeptomole range, with internal dilution curves ensuring a high dynamic calibration range, and state that the nanobiosensor technology is translated to test for pathway deregulation in RPFNA cytology obtained from 10 high-risk women with cytological [text truncated].

  4. Diversity of fish sound types in the Pearl River Estuary, China

    Directory of Open Access Journals (Sweden)

    Zhi-Tao Wang

    2017-10-01

    Full Text Available Background Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Methods Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. Results We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse train structure. The pulses were characterized by an approximate 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were exclusive from each other, suggesting that they might be produced by different species. According to the literature, the two section signal types of 1 + 1 and 1 + N10 might belong to big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger's croaker (J. belangerii). Discussion Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. Additionally, prey and predator

  5. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound......-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound...... from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  6. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound
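
    As a generic illustration of active sound reduction, and explicitly not the patented system summarised above, the sketch below runs a single-channel feedforward LMS loop against a synthetic low-frequency noise; the secondary path is idealised as unit gain, and all signal and filter parameters are invented.

      import numpy as np

      rng = np.random.default_rng(7)
      fs, dur = 8000, 4.0
      t = np.arange(0, dur, 1 / fs)

      # Primary noise: a low-frequency tone pair plus broadband breath-like noise.
      x = (0.8 * np.sin(2 * np.pi * 90 * t) + 0.4 * np.sin(2 * np.pi * 180 * t)
           + 0.05 * rng.normal(size=t.size))
      d = np.convolve(x, [0.0, 0.9, 0.3], mode='full')[:t.size]  # primary path to the ear

      L, mu = 32, 0.01
      w = np.zeros(L)                 # adaptive filter weights
      buf = np.zeros(L)
      e = np.zeros(t.size)            # residual heard at the error microphone
      for n in range(t.size):
          buf = np.roll(buf, 1)
          buf[0] = x[n]               # reference microphone sample
          y = w @ buf                 # anti-noise from the secondary speaker
          e[n] = d[n] - y
          w += 2 * mu * e[n] * buf    # LMS weight update

      half = t.size // 2
      print("residual power, first half : %.4f" % np.mean(e[:half] ** 2))
      print("residual power, second half: %.4f" % np.mean(e[half:] ** 2))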

  7. What ears do for bats: a comparative study of pinna sound pressure transformation in chiroptera.

    Science.gov (United States)

    Obrist, M K; Fenton, M B; Eger, J L; Schlegel, P A

    1993-07-01

    Using a moveable loudspeaker and an implanted microphone, we studied the sound pressure transformation of the external ears of 47 species of bats from 13 families. We compared pinna gain, directionality of hearing and interaural intensity differences (IID) in echolocating and non-echolocating bats, in species using different echolocation strategies and in species that depend upon prey-generated sounds to locate their targets. In the Pteropodidae, two echolocating species had slightly higher directionality than a non-echolocating species. The ears of phyllostomid and vespertilionid species showed moderate directionality. In the Mormoopidae, the ear directionality of Pteronotus parnellii clearly matched the dominant spectral component of its echolocation calls, unlike the situation in three other species. Species in the Emballonuridae, Molossidae, Rhinopomatidae and two vespertilionids that use narrow-band search-phase echolocation calls showed increasingly sharp tuning of the pinna to the main frequency of their signals. Similar tuning was most evident in Hipposideridae and Rhinolophidae, species specialized for flutter detection via Doppler-shifted echoes of high-duty-cycle narrow-band signals. The large pinnae of bats that use prey-generated sounds to find their targets supply high sound pressure gain at lower frequencies. Increasing domination of a narrow spectral band in echolocation is reflected in the passive acoustic properties of the external ears (sharper directionality). The importance of IIDs for lateralization and horizontal localization is discussed by comparing the behavioural directional performance of bats with their bioacoustical features.

  8. Artificial neural networks for breathing and snoring episode detection in sleep sounds

    International Nuclear Information System (INIS)

    Emoto, Takahiro; Akutagawa, Masatake; Kinouchi, Yohsuke; Abeyratne, Udantha R; Chen, Yongjian; Kawata, Ikuji

    2012-01-01

    Obstructive sleep apnea (OSA) is a serious disorder characterized by intermittent events of upper airway collapse during sleep. Snoring is the most common nocturnal symptom of OSA. Almost all OSA patients snore, but not all snorers have the disease. Recently, researchers have attempted to develop automated snore analysis technology for the purpose of OSA diagnosis. These technologies commonly require, as the first step, the automated identification of snore/breathing episodes (SBE) in sleep sound recordings. Snore intensity may occupy a wide dynamic range (>95 dB) spanning from the barely audible to loud sounds. Low-intensity SBE sounds are sometimes seen buried within the background noise floor, even in high-fidelity sound recordings made within a sleep laboratory. The complexity of SBE sounds makes it a challenging task to develop automated snore segmentation algorithms, especially in the presence of background noise. In this paper, we propose a fundamentally novel approach based on artificial neural network (ANN) technology to detect SBEs. Working on clinical data, we show that the proposed method can detect SBE at a sensitivity and specificity exceeding 0.892 and 0.874 respectively, even when the signal is completely buried in background noise (SNR <0 dB). We compare the performance of the proposed technology with those of the existing methods (short-term energy, zero-crossing rates) and illustrate that the proposed method vastly outperforms conventional techniques. (paper)
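
    The two conventional detectors the ANN approach is compared against, short-term energy and zero-crossing rate, are easy to sketch. The snippet below frames a synthetic sleep-sound excerpt and flags high-energy frames as candidate snore/breathing episodes; frame lengths and the threshold are arbitrary choices, not values from the paper.

      import numpy as np

      def frame_features(x, fs, frame_ms=50, hop_ms=25):
          """Short-term energy and zero-crossing rate per frame."""
          frame, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
          feats = []
          for start in range(0, len(x) - frame, hop):
              seg = x[start:start + frame]
              energy = np.mean(seg ** 2)
              zcr = np.mean(np.abs(np.diff(np.sign(seg)))) / 2
              feats.append((start / fs, energy, zcr))
          return feats

      # Synthetic excerpt: background noise with one louder episode in the middle.
      rng = np.random.default_rng(8)
      fs = 8000
      x = 0.02 * rng.normal(size=fs * 3)
      x[fs:2 * fs] += 0.3 * np.sin(2 * np.pi * 140 * np.arange(fs) / fs)

      for t0, energy, zcr in frame_features(x, fs)[::8]:
          flag = "SBE?" if energy > 0.005 else ""
          print(f"t={t0:4.2f}s  energy={energy:8.5f}  zcr={zcr:4.2f}  {flag}")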

  9. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    Full Text Available The relationship between meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages we find commonalities among sound shapes for words referring to same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  10. Vehicle engine sound design based on an active noise control system

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, M. [Siemens VDO Automotive, Auburn Hills, MI (United States)

    2002-07-01

    A study has been carried out to identify the types of vehicle engine sounds that drivers prefer while driving at different locations and under different driving conditions. An active noise control system controlled the first sixteen orders and half orders of the sound at the air intake orifice of a vehicle engine. The active noise control system was used to change the engine sound to quiet, harmonic, high harmonic, spectral shaped and growl. Videos were made of the roads traversed, binaural recordings of vehicle interior sounds, and vibrations of the vehicle floor pan. Jury tapes were made up for day driving, nighttime driving and driving in the rain during the day for each of the sites. Jurors used paired comparisons to evaluate the vehicle interior sounds while sitting in a vehicle simulator developed by Siemens VDO that replicated videos of the road traversed, binaural recordings of the vehicle interior sounds and vibrations of the floor pan and seat. (orig.) [Translated from German] In a study, types of engine sounds were identified that drivers perceive as pleasant under different driving conditions. An active noise control system at the air intake, in the area of the air filter, modified the sound of the engine up to the 16.5th engine order by attenuating, amplifying and filtering the signal frequencies. During the drives, video recordings of the roads travelled, stereo recordings of the vehicle interior sounds, and recordings of the vibration amplitudes of the vehicle floor were made, for day and night drives and for day drives in the rain. To allow test subjects to assess the recorded sounds, a vehicle laboratory simulator with a driver's seat, screen, loudspeakers and mechanical excitation of the floor panel was built in order to reproduce the recorded signals as realistically as possible. (orig.)

  11. Context effects on processing widely deviant sounds in newborn infants

    Directory of Open Access Journals (Sweden)

    Gábor Péter Háden

    2013-09-01

    Full Text Available Detecting and orienting towards sounds carrying new information is a crucial feature of the human brain that supports adaptation to the environment. Rare, acoustically widely deviant sounds presented amongst frequent tones elicit large event-related brain potentials (ERPs) in neonates. Here we tested whether these discriminative ERP responses reflect only the activation of fresh afferent neuronal populations (i.e., neuronal circuits not affected by the tones) or whether they also index the processing of the contextual mismatch between the rare and the frequent sounds. In two separate experiments, we presented sleeping newborns with 150 different environmental sounds and the same number of white noise bursts. Both sounds served either as deviants in an oddball paradigm with a tone as the frequent standard stimulus (Novel/Noise deviant), or as the standard stimulus with the tone as deviant (Novel/Noise standard), or they were delivered alone with the same timing as the deviants in the oddball condition (Novel/Noise alone). Whereas the noise deviants elicited responses similar to those of the same sound presented alone, the responses elicited by environmental sounds in the corresponding conditions differed morphologically from each other. Thus, whereas the ERP response to the noise sounds can be explained by the different refractory states of stimulus-specific neuronal populations, the ERP response to environmental sounds indicated context-sensitive processing. These results provide evidence for an innate tendency towards context-dependent auditory processing as well as a basis for the different developmental trajectories of processing acoustic deviance and contextual novelty.

  12. Producing of Impedance Tube for Measurement of Acoustic Absorption Coefficient of Some Sound Absorber Materials

    Directory of Open Access Journals (Sweden)

    R. Golmohammadi

    2008-04-01

    Full Text Available Introduction & Objective: Noise is one of the most important harmful agents in the work environment. In spite of industrial improvements, exposure to noise above the permissible limit is counted as one of the health complications of workers. In Iran, exact information on the absorption coefficients of acoustic materials is not available; Iranian manufacturers do not have laboratories for measuring the sound absorbance of their products, and therefore the use of sound absorbers for noise control in industrial and non-industrial constructions is limited. The goal of this study was to design an impedance tube based on the pressure method for measurement of the sound absorption coefficient of acoustic materials. Materials & Methods: In this study, the design of the measuring system and the method of calculating sound absorption were based on available equipment and a relatively simple measurement of the sound absorption coefficient according to ISO 10534-1. The measuring system consisted of a heavy asbestos tube, a pure tone sound generator, and a calibrated sound level meter, and was used to measure some commonly used sound absorber materials. Results: In this study the sound absorption coefficients of 23 types of acoustic materials available in Iran were tested. The reliability of the results was tested by three repeated measurements. The results showed that the standard deviation of the sound absorption coefficients of the studied materials was small. Conclusion: The present study provided the technology for designing and producing an impedance tube for determining the absorption coefficients of acoustical materials in Iran.
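
    For the pressure (standing-wave) method of ISO 10534-1 referred to above, the normal-incidence absorption coefficient follows directly from the ratio of the maximum to the minimum sound pressure along the tube. The worked example below uses made-up probe readings, not measurements from the study.

      def absorption_from_swr(p_max, p_min):
          """Absorption coefficient from the standing wave ratio: alpha = 1 - |R|^2."""
          s = p_max / p_min              # standing wave ratio
          R = (s - 1.0) / (s + 1.0)      # reflection factor magnitude
          return 1.0 - R ** 2

      # Illustrative maximum/minimum pressures (Pa) at one test frequency.
      for p_max, p_min in [(2.0, 1.5), (2.0, 0.8), (2.0, 0.2)]:
          print(f"SWR = {p_max / p_min:5.2f}  ->  alpha = {absorption_from_swr(p_max, p_min):.2f}")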

  13. Sound propagation in elongated superfluid fermionic clouds

    International Nuclear Information System (INIS)

    Capuzzi, P.; Vignolo, P.; Federici, F.; Tosi, M. P.

    2006-01-01

    We use hydrodynamic equations to study sound propagation in a superfluid Fermi gas at zero temperature inside a strongly elongated cigar-shaped trap, with main attention to the transition from the BCS to the unitary regime. First, we treat the role of the radial density profile in the limit of a cylindrical geometry and then evaluate numerically the effect of the axial confinement in a configuration in which a hole is present in the gas density at the center of the trap. We find that in a strongly elongated trap the speed of sound in both the BCS and the unitary regime differs by a factor √(3/5) from that in a homogeneous three-dimensional superfluid. The predictions of the theory could be tested by measurements of sound-wave propagation in a setup such as that exploited by Andrews et al. [Phys. Rev. Lett. 79, 553 (1997)] for an atomic Bose-Einstein condensate
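
For orientation only, the scaling quoted in the abstract can be written out explicitly; the second relation, the homogeneous sound speed of the unitary gas in terms of the Fermi velocity and the Bertsch parameter, is standard background added here as context rather than a result of the paper:

\[
c_{\text{elongated}} \;=\; \sqrt{\tfrac{3}{5}}\; c_{\text{hom}},
\qquad
c_{\text{hom}}\big|_{\text{unitarity}} \;=\; v_F \sqrt{\tfrac{\xi}{3}},
\]

where \(v_F\) is the Fermi velocity and \(\xi\) is the universal (Bertsch) parameter.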

  14. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

Full Text Available In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  15. Frequency shifting approach towards textual transcription of heartbeat sounds.

    Science.gov (United States)

    Arvin, Farshad; Doraisamy, Shyamala; Safar Khorasani, Ehsan

    2011-10-04

Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format, which can be stored in very little memory compared with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied in order to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over long periods of time. This paper proposes a frequency shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The results show that the proposed shifting method significantly improves the performance of the transcription.
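
The record does not specify how the shift is implemented; one common way to move a low-frequency signal such as heart sounds upward in frequency is single-sideband modulation of the analytic signal. The sketch below is a generic version of that idea, not the authors' method; the sampling rate and shift amount are arbitrary placeholders.

```python
import numpy as np
from scipy.signal import hilbert

def frequency_shift(x, fs, shift_hz):
    """Shift the whole spectrum of a real signal x (sampled at fs Hz)
    upward by shift_hz using the analytic signal (single-sideband
    modulation). Returns a real signal of the same length."""
    analytic = hilbert(x)                      # suppress negative frequencies
    t = np.arange(len(x)) / fs
    shifted = analytic * np.exp(2j * np.pi * shift_hz * t)
    return shifted.real

# Toy example: a 30 Hz heart-sound-like tone shifted up by 200 Hz.
fs = 4000
t = np.arange(fs) / fs
heart_like = np.sin(2 * np.pi * 30 * t)
music_band = frequency_shift(heart_like, fs, shift_hz=200.0)
```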

  16. Frequency shifting approach towards textual transcription of heartbeat sounds

    Directory of Open Access Journals (Sweden)

    Safar Khorasani Ehsan

    2011-10-01

Full Text Available Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format, which can be stored in very little memory compared with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied in order to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over long periods of time. This paper proposes a frequency shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The results show that the proposed shifting method significantly improves the performance of the transcription.

  17. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

Few studies have dealt with students’ preconceptions of sound. The current research employs Reiner et al.’s (2000) substance schema to reveal new insights about students’ difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students’ thinking, but also to examine how students use the schema’s properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more globally coherent relations among the properties that the students use to explain sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students’ conceptions of sound do not seem to fit Reiner et al.’s schema in all respects. Our results also indicate that students’ conceptualization of sound lacks internal consistency. Analyzing our results with respect to local and global coherence, we found that students’ conception of sound is close to diSessa’s “loosely connected, fragmented collection of ideas.” The notion that sound is perceived only as a “sort of a material,” we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion concerning the implications of the results for instruction.

  18. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

The question of whether there is a natural connection between sound and meaning, or whether they are related only by convention, has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming more and more clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects such as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441.

  19. Experimental implementation of a low-frequency global sound equalization method based on free field propagation

    DEFF Research Database (Denmark)

    Santillan, Arturo Orozco; Pedersen, Christian Sejer; Lydolf, Morten

    2007-01-01

An experimental implementation of a global sound equalization method in a rectangular room using active control is described in this paper. The main purpose of the work has been to provide experimental evidence that sound can be equalized in a continuous three-dimensional region, the listening zone, which occupies a considerable part of the complete volume of the room. The equalization method, based on the simulation of a progressive plane wave, was implemented in a room with inner dimensions of 2.70 m x 2.74 m x 2.40 m. With this method, the sound was reproduced by a matrix of 4 x 5 loudspeakers in one of the walls. After traveling through the room, the sound wave was absorbed on the opposite wall, which had a similar arrangement of loudspeakers, by means of active control. A set of 40 digital FIR filters was used to modify the original input signal before it was fed to the loudspeakers, one...
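
As a minimal sketch of the filtering stage described above (a bank of FIR filters applied to one input signal, one filter per loudspeaker), the following is illustrative only: the placeholder filters are simple delays, whereas the 40 filters in the experiment were designed from the room's measured responses.

```python
import numpy as np
from scipy.signal import lfilter

def drive_signals(x, fir_bank):
    """Filter one input signal x with a bank of FIR filters, producing
    one drive signal per loudspeaker (rows of the returned array)."""
    return np.vstack([lfilter(h, [1.0], x) for h in fir_bank])

# Placeholder bank: 40 filters (4 x 5 loudspeakers on the source wall plus
# 4 x 5 on the absorbing wall), each just a k-sample delay for the example.
n_taps = 256
fir_bank = []
for k in range(40):
    h = np.zeros(n_taps)
    h[min(k, n_taps - 1)] = 1.0
    fir_bank.append(h)

x = np.random.randn(48000)          # one second of test signal at 48 kHz
y = drive_signals(x, fir_bank)      # shape: (40, 48000)
```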

  20. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for the analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking, with several examples shown. Various aspects of musical sound spectra such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis, "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment", are covered, and methods for timbre morphing are given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.
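
As a small illustration of one of the spectral descriptors listed (the spectral centroid), the sketch below computes it frame by frame from a short-time Fourier transform; the parameter choices are arbitrary and the book's own definitions may differ in detail.

```python
import numpy as np
from scipy.signal import stft

def spectral_centroid(x, fs, nperseg=1024):
    """Per-frame spectral centroid (Hz): amplitude-weighted mean frequency."""
    f, _, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)                                   # shape: (freqs, frames)
    return (f[:, None] * mag).sum(axis=0) / (mag.sum(axis=0) + 1e-12)

# Toy check: the centroid of a 440 Hz tone should sit near 440 Hz.
fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
print(spectral_centroid(tone, fs).mean())
```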