WorldWideScience

Sample records for heart sound localization

  1. Heart Sound Localization and Reduction in Tracheal Sounds by Gabor Time-Frequency Masking

    OpenAIRE

    SAATCI, Esra; Akan, Aydın

    2018-01-01

    Background and aim: Respiratory sounds, i.e. tracheal and lung sounds, have been of great interest due to their diagnostic values as well as the potential of their use in the estimation of the respiratory dynamics (mainly airflow). Thus the aim of the study is to present a new method to filter the heart sound interference from the tracheal sounds. Materials and methods: Tracheal sounds and airflow signals were collected by using an accelerometer from 10 healthy subjects. Tracheal sounds were then pr...
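
    The Gabor (STFT) time-frequency masking idea described in this record can be sketched as follows. This is an illustrative stand-in, not the authors' method: the `mask_heart_sound` name, the 150 Hz cutoff, and the -25 dB threshold are assumptions chosen for demonstration; heart sounds concentrate at low frequencies, so high-energy low-frequency bins are zeroed and the residual is resynthesized.

```python
import numpy as np
from scipy.signal import stft, istft

def mask_heart_sound(x, fs, f_cut=150.0, thresh_db=-25.0):
    """Zero STFT bins below f_cut whose level rises above thresh_db
    (relative to the spectrogram peak), then resynthesize the signal.
    A rough stand-in for Gabor time-frequency masking."""
    f, _, Z = stft(x, fs=fs, nperseg=256)
    level = 20 * np.log10(np.abs(Z) / (np.abs(Z).max() + 1e-12) + 1e-12)
    mask = np.ones(Z.shape)
    low = f < f_cut
    # keep only the quiet low-frequency bins; loud ones are heart-sound bursts
    mask[low, :] = (level[low, :] <= thresh_db).astype(float)
    _, y = istft(Z * mask, fs=fs, nperseg=256)
    return np.pad(y, (0, max(0, len(x) - len(y))))[:len(x)]
```

    With a synthetic mixture of a strong 40 Hz component and a weaker 400 Hz component, the mask removes most of the low-frequency energy while leaving the high-frequency content essentially intact.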

  2. Heart sound segmentation of pediatric auscultations using wavelet analysis.

    Science.gov (United States)

    Castro, Ana; Vinhoza, Tiago T V; Mattos, Sandra S; Coimbra, Miguel T

    2013-01-01

    Auscultation is widely applied in clinical activity; nonetheless, sound interpretation is dependent on clinician training and experience. Heart sound features such as spatial loudness, relative amplitude, murmurs, and localization of each component may be indicative of pathology. In this study we propose a segmentation algorithm to extract heart sound components (S1 and S2) based on their time and frequency characteristics. This algorithm takes advantage of knowledge of the heart cycle times (systolic and diastolic periods) and of the spectral characteristics of each component, through wavelet analysis. Data collected in a clinical environment and annotated by a clinician were used to assess the algorithm's performance. Heart sound components were correctly identified in 99.5% of the annotated events. S1 and S2 detection rates were 90.9% and 93.3% respectively. The median difference between annotated and detected events was 33.9 ms.
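
    The detection step behind this kind of segmentation can be sketched with a smoothed Shannon-energy envelope and peak picking. This is a simplified stand-in for the paper's wavelet analysis: the function names, the 20 ms smoothing window, the 0.4 relative threshold, and the 150 ms refractory gap are all illustrative assumptions.

```python
import numpy as np

def shannon_envelope(x, fs, win_ms=20):
    """Smoothed Shannon energy of a normalized signal."""
    x = x / (np.max(np.abs(x)) + 1e-12)
    e = -(x ** 2) * np.log(x ** 2 + 1e-10)
    w = int(fs * win_ms / 1000)
    return np.convolve(e, np.ones(w) / w, mode="same")

def detect_sounds(x, fs, min_gap_s=0.15):
    """Indices of envelope peaks separated by at least min_gap_s seconds."""
    env = shannon_envelope(x, fs)
    th = 0.4 * env.max()
    peaks, last = [], -np.inf
    for i in range(1, len(env) - 1):
        if env[i] > th and env[i] >= env[i - 1] and env[i] > env[i + 1]:
            if i - last > min_gap_s * fs:
                peaks.append(i)
                last = i
    return peaks
```

    On a synthetic phonocardiogram with four short bursts, the detector returns one peak per burst near its true onset time.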

  3. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

    Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time series event generated by the heart's mechanical system. From analysis of the heart sound S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician diagnose heart problems. The results show that most known heart sounds were successfully detected. There are some murmur cases where detection failed. This can be improved by adding more heuristics, such as setting initial parameters like the noise threshold accurately, and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
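
    One deterministic timing rule of the kind this abstract alludes to is that systole (S1 to S2) is shorter than diastole (S2 to the next S1). A minimal sketch under that assumption, with the `label_s1_s2` helper name invented here for illustration (this is not the paper's actual algorithm):

```python
def label_s1_s2(peak_times):
    """Label alternating heart sounds using the deterministic rule that
    systole (S1->S2) is shorter than diastole (S2->next S1)."""
    gaps = [b - a for a, b in zip(peak_times, peak_times[1:])]
    # if the first gap is the shorter of the first two, the sequence starts at S1
    start_s1 = gaps[0] < gaps[1]
    labels = []
    for i in range(len(peak_times)):
        is_s1 = (i % 2 == 0) == start_s1
        labels.append("S1" if is_s1 else "S2")
    return labels
```

    The rule works regardless of whether the recording happens to begin on S1 or on S2, since only the relative lengths of the first two intervals matter.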

  4. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    Heart sound is one of the most important physiological signals. However, the acquisition of heart sound signals can be disturbed by many external factors. The heart sound is a weak signal, and even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. Removing the noise mixed with the heart sound is therefore a key step. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB is presented. The signal processing functions of MATLAB are first used to transform noisy heart sound signals into the wavelet domain through the wavelet transform and to decompose them at multiple levels. Soft thresholding is then applied to the detail coefficients to eliminate noise, so that the denoised signal is significantly improved. The reconstructed signals are obtained by stepwise coefficient reconstruction from the processed detail coefficients. Lastly, 50 Hz power-line interference and 35 Hz mechanical and electrical interference are eliminated using a notch filter.
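
    The wavelet soft-thresholding step described here can be sketched with a single-level Haar transform (the abstract describes a multi-level MATLAB implementation; this one-level pure-Python version, with the `haar_denoise` name and threshold value chosen here, is only an illustrative sketch of the same idea):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet decomposition, soft thresholding of the
    detail coefficients, and reconstruction."""
    n = len(x) - len(x) % 2                      # even length for pairing
    a = (x[:n:2] + x[1:n:2]) / np.sqrt(2)        # approximation coefficients
    d = (x[:n:2] - x[1:n:2]) / np.sqrt(2)        # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)               # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

    With a zero threshold the transform reconstructs the input exactly; with a threshold of about twice the noise standard deviation, the mean squared error against the clean signal drops.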

  5. Xinyinqin: a computer-based heart sound simulator.

    Science.gov (United States)

    Zhan, X X; Pei, J H; Xiao, Y H

    1995-01-01

    "Xinyinqin" is the Chinese phoneticized name of the Heart Sound Simulator (HSS). The "qin" in "Xinyinqin" is the Chinese name of a category of musical instruments, which means that the operation of HSS is very convenient--like playing an electric piano with the keys. HSS is connected to the GAME I/O of an Apple microcomputer. The generation of sound is controlled by a program. Xinyinqin is used as a teaching aid of Diagnostics. It has been applied in teaching for three years. In this demonstration we will introduce the following functions of HSS: 1) The main program has two modules. The first one is the heart auscultation training module. HSS can output a heart sound selected by the student. Another program module is used to test the student's learning condition. The computer can randomly simulate a certain heart sound and ask the student to name it. The computer gives the student's answer an assessment: "correct" or "incorrect." When the answer is incorrect, the computer will output that heart sound again for the student to listen to; this process is repeated until she correctly identifies it. 2) The program is convenient to use and easy to control. By pressing the S key, it is able to output a slow heart rate until the student can clearly identify the rhythm. The heart rate, like the actual rate of a patient, can then be restored by hitting any key. By pressing the SPACE BAR, the heart sound output can be stopped to allow the teacher to explain something to the student. The teacher can resume playing the heart sound again by hitting any key; she can also change the content of the training by hitting RETURN key. In the future, we plan to simulate more heart sounds and incorporate relevant graphs.

  6. A recognition method research based on the heart sound texture map

    Directory of Open Access Journals (Sweden)

    Huizhong Cheng

    2016-06-01

    Full Text Available In order to improve the heart sound recognition rate and reduce recognition time, this paper introduces a new method for heart sound pattern recognition using the heart sound texture map. Based on the heart sound model, we define the heart sound time-frequency diagram and the heart sound texture map, study the principle and realization of the heart sound window function, and then discuss how to use the heart sound window function and the short-time Fourier transform to obtain a two-dimensional heart sound time-frequency diagram. We propose a corner-correlation recognition algorithm based on the heart sound texture map, according to the characteristics of heart sounds. The simulation results show that, compared with traditional window functions, the heart sound window function makes the textures of the first (S1) and second (S2) heart sounds clearer, and that the corner-correlation recognition algorithm based on the heart sound texture map can significantly improve the recognition rate and reduce the computational expense, making it an effective heart sound recognition method.

  7. Noise detection during heart sound recording using periodicity signatures

    International Nuclear Information System (INIS)

    Kumar, D; Carvalho, P; Paiva, R P; Henriques, J; Antunes, M

    2011-01-01

    Heart sound is a valuable biosignal for diagnosis of a large set of cardiac diseases. Ambient and physiological noise interference is one of the most common and highly probable incidents during heart sound acquisition. It tends to change the morphological characteristics of heart sound that may carry important information for heart disease diagnosis. In this paper, we propose a new method, applicable in real time, to detect ambient and internal body noise manifested in heart sound during acquisition. The algorithm is developed on the basis of the periodic nature of heart sounds and physiologically inspired criteria. A small segment of uncontaminated heart sound exhibiting periodicity in time as well as in the time-frequency domain is first detected and applied as a reference signal in discriminating noise from the sound. The proposed technique has been tested on a database of heart sounds collected from 71 subjects with several types of heart disease, with various noises induced during recording. The achieved average sensitivity and specificity are 95.88% and 97.56%, respectively.
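
    The periodicity signature this record relies on can be sketched with a normalized autocorrelation: a clean heart sound segment shows a strong autocorrelation peak at a plausible cardiac period, while a noise-dominated segment does not. The `periodicity_score` name and the 0.4-1.5 s period range are illustrative assumptions, not the paper's criteria.

```python
import numpy as np

def periodicity_score(seg, fs, lo=0.4, hi=1.5):
    """Peak of the normalized autocorrelation within plausible cardiac
    periods (lo..hi seconds); a high score suggests a clean, periodic
    segment, a low score suggests noise contamination."""
    seg = seg - seg.mean()
    ac = np.correlate(seg, seg, mode="full")[len(seg) - 1:]
    ac /= ac[0] + 1e-12
    i0, i1 = int(lo * fs), min(int(hi * fs), len(ac) - 1)
    return ac[i0:i1].max()
```

    A burst train repeating at 1 Hz scores high, while white noise scores near zero, so a simple threshold on this score can flag noisy segments.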

  8. Heart sounds: are you listening? Part 1.

    Science.gov (United States)

    Reimer-Kent, Jocelyn

    2013-01-01

    All nurses should have an understanding of heart sounds and be proficient in cardiac auscultation. Unfortunately, this skill is not part of many nursing school curricula, nor is it necessarily a required skill for employment. Yet being able to listen to and accurately describe heart sounds has tangible benefits for the patient, as it is an integral part of a complete cardiac assessment. In this two-part article, I will review the fundamentals of cardiac auscultation, how cardiac anatomy and physiology relate to heart sounds, and describe the various heart sounds. Whether you are a beginner or a seasoned nurse, it is never too early or too late to add this important diagnostic skill to your assessment tool kit.

  9. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    During the recording time of lung sound (LS) signals from the chest wall of a subject, there is always heart sound (HS) signal interfering with it. This obscures the features of lung sound signals and creates confusion on pathological states, if any, of the lungs. A novel method based on empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference from the desired lung sound signals. In this, the mixed signal is split into several components. Some of these components contain larger proportions of interfering signals like heart sound, environmental noise etc. and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart sound and lung sound. The proposed method is found to be superior in terms of time domain, frequency domain, and time-frequency domain representations and also in listening test performed by pulmonologist.

  10. A system for heart sounds classification.

    Directory of Open Access Journals (Sweden)

    Grzegorz Redlarski

    Full Text Available The future of quick and efficient disease diagnosis lies in the development of reliable non-invasive methods. For cardiac diseases, one of the major causes of death around the globe, a concept of an electronic stethoscope equipped with an automatic heart tone identification system appears to be the best solution. Thanks to advances in technology, the quality of phonocardiography signals is no longer an issue. However, appropriate algorithms for auto-diagnosis systems for heart diseases, capable of distinguishing most known pathological states, have not yet been developed. The main issues are the non-stationary character of phonocardiography signals and the wide range of distinguishable pathological heart sounds. In this paper a new heart sound classification technique, which might find use in medical diagnostic systems, is presented. It is shown that by combining Linear Predictive Coding coefficients, used for feature extraction, with a classifier built upon combining Support Vector Machine and the Modified Cuckoo Search algorithm, an improvement in performance of the diagnostic system, in terms of accuracy, complexity and range of distinguishable heart sounds, can be made. The developed system achieved accuracy above 93% for all considered cases, including simultaneous identification of twelve different heart sound classes. The system is compared with four major classification methods, proving its reliability.
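
    The Linear Predictive Coding features named here are conventionally computed with the Levinson-Durbin recursion on the signal's autocorrelation. A minimal sketch of that standard step (not the paper's implementation; the `lpc` function name and test parameters are chosen here):

```python
import numpy as np

def lpc(x, order):
    """LPC coefficients a[0..order] (a[0] = 1) via the Levinson-Durbin
    recursion; the prediction model is x[n] ~= -sum_{k>=1} a[k] x[n-k]."""
    # autocorrelation lags r[0..order]
    r = np.array([np.dot(x[:len(x) - i], x[i:]) for i in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]                                  # prediction error power
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[1:i][::-1])) / e   # reflection coeff.
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        e *= (1.0 - k * k)
    return a
```

    On data generated by a known second-order autoregressive model, the recovered coefficients match the generating model up to estimation noise.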

  11. A framework for automatic heart sound analysis without segmentation

    Directory of Open Access Journals (Sweden)

    Tungpimolrut Kanokvate

    2011-02-01

    Full Text Available Abstract Background A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion The proposed method showed promising results and high noise robustness for a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients, and further evaluating the method on this new training set.

  12. Research on fiber Bragg grating heart sound sensing and wavelength demodulation method

    Science.gov (United States)

    Zhang, Cheng; Miao, Chang-Yun; Gao, Hua; Gan, Jing-Meng; Li, Hong-Qiang

    2010-11-01

    Heart sound carries a great deal of physiological and pathological information about the heart and blood vessels. Heart sound detection is an important means of assessing the status of the heart, and has great significance for the early diagnosis of cardiopathy. In order to improve sensitivity and reduce noise, a heart sound measurement method based on a fiber Bragg grating was researched. Using the vibration principle of a plane round diaphragm, a fiber Bragg grating heart sound sensor structure was designed and a heart sound sensing mathematical model was established. A formula for heart sound sensitivity was deduced, and the theoretical sensitivity of the designed sensor is 957.11 pm/kPa. Based on the matched-grating method, an experimental system was built with which the excursion of the reflected wavelength of the sensing grating was detected and the heart sound information obtained. Experiments show that the designed sensor can detect the heart sound, with a reflected-wavelength variation range of about 70 pm. At a sampling frequency of 1 kHz, the heart sound waveform extracted using the db4 wavelet has the same characteristics as that from a standard heart sound sensor.
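
    A back-of-envelope check of the figures quoted in this record: with the theoretical sensitivity S = 957.11 pm/kPa and the observed wavelength excursion of about 70 pm, the implied sound pressure follows directly from dp = d_lambda / S.

```python
# Back-of-envelope check of the figures quoted in the abstract.
S = 957.11        # sensitivity in pm/kPa (theoretical value from the abstract)
d_lambda = 70.0   # observed reflected-wavelength excursion in pm
dp_kpa = d_lambda / S            # implied sound pressure in kPa
print(round(dp_kpa * 1000, 1))   # in Pa -> 73.1
```

    So the ~70 pm excursion corresponds to a pressure of roughly 0.073 kPa (about 73 Pa) at the diaphragm, consistent with the stated sensitivity.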

  13. Segmentation of heart sound recordings by a duration-dependent hidden Markov model

    International Nuclear Information System (INIS)

    Schmidt, S E; Graff, C; Toft, E; Struijk, J J; Holst-Hansen, C

    2010-01-01

    Digital stethoscopes offer new opportunities for computerized analysis of heart sounds. Segmentation of heart sound recordings into periods related to the first and second heart sound (S1 and S2) is fundamental in the analysis process. However, segmentation of heart sounds recorded with handheld stethoscopes in clinical environments is often complicated by background noise. A duration-dependent hidden Markov model (DHMM) is proposed for robust segmentation of heart sounds. The DHMM identifies the most likely sequence of physiological heart sounds, based on the duration of the events, the amplitude of the signal envelope and a predefined model structure. The DHMM was developed and tested with heart sounds recorded bedside with a commercially available handheld stethoscope from a population of patients referred for coronary arterioangiography. The DHMM identified 890 S1 and S2 sounds out of 901, which corresponds to 98.8% (CI: 97.8–99.3%) sensitivity in 73 test patients, and 13 misplaced sounds out of 903 identified sounds, which corresponds to 98.6% (CI: 97.6–99.1%) positive predictivity. These results indicate that the DHMM is an appropriate model of the heart cycle and suitable for segmentation of clinically recorded heart sounds.

  14. An open access database for the evaluation of heart sound algorithms.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.

  15. Heart sounds analysis via esophageal stethoscope system in beagles.

    Science.gov (United States)

    Park, Sang Hi; Shin, Young Duck; Bae, Jin Ho; Kwon, Eun Jung; Lee, Tae-Soo; Shin, Ji-Yun; Kim, Yeong-Cheol; Min, Gyeong-Deuk; Kim, Myoung hwan

    2013-10-01

    The esophageal stethoscope is minimally invasive, easy to handle, and provides a wealth of information. The purpose of this study is to investigate the correlation between blood pressure and heart sounds as measured by the esophageal stethoscope. Four male beagles weighing 10 to 12 kg were selected as experimental subjects. After general anesthesia, the esophageal stethoscope was inserted. After connecting the microphone, the heart sounds were visualized and recorded through self-developed equipment and software. The amplitudes of S1 and S2 were monitored in real time to examine changes as the blood pressure increased and decreased. The relationship between the ratio of S1 to S2 (S1/S2) and changes in blood pressure due to ephedrine was evaluated. The same experiment was performed with different concentrations of isoflurane. In the inotropic experiment, S1 showed a high correlation with changes in blood pressure. The relationship between S1/S2 and change in blood pressure showed a positive correlation in each experimental subject. In the volatile anesthetic experiment, the heart sounds decreased as MAC increased. Heart sounds were analyzed successfully with the esophageal stethoscope through the self-developed program and equipment. A proportional change in heart sounds was confirmed when blood pressure was changed using inotropics or volatile anesthetics. The esophageal stethoscope can achieve the closest proximity to the heart for hearing sounds in a non-invasive manner.

  16. [Synchronous playing and acquiring of heart sounds and electrocardiogram based on LabVIEW].

    Science.gov (United States)

    Dan, Chunmei; He, Wei; Zhou, Jing; Que, Xiaosheng

    2008-12-01

    This paper describes a comprehensive system that acquires heart sounds and the electrocardiogram (ECG) in parallel and synchronizes their display and playback, so that auscultation and phonocardiogram inspection can be tied together. The hardware system, with a C8051F340 as its core, acquires the heart sound and ECG synchronously and then sends them to their respective indicators. Heart sounds are displayed and played simultaneously by controlling the moments of writing to the indicator and the sound output device. In clinical testing, heart sounds were successfully located against the ECG and played in real time.

  17. Heart Sound Biometric System Based on Marginal Spectrum Analysis

    Science.gov (United States)

    Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin

    2013-01-01

    This work presents a heart sound biometric system based on marginal spectrum analysis, a new feature extraction technique for identification purposes. The heart sound identification system comprises signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients yield a significantly higher recognition rate of 94.40%, compared with 84.32% for the traditional Fourier spectrum, based on a database of 280 heart sounds from 40 participants. PMID:23429515

  18. Lung and Heart Sounds Analysis: State-of-the-Art and Future Trends.

    Science.gov (United States)

    Padilla-Ortiz, Ana L; Ibarra, David

    2018-01-01

    Lung sounds, which include all sounds that are produced during the mechanism of respiration, may be classified into normal breath sounds and adventitious sounds. Normal breath sounds occur when no respiratory problems exist, whereas adventitious lung sounds (wheeze, rhonchi, crackle, etc.) are usually associated with certain pulmonary pathologies. Heart and lung sounds that are heard using a stethoscope are the result of mechanical interactions that indicate operation of cardiac and respiratory systems, respectively. In this article, we review the research conducted during the last six years on lung and heart sounds, instrumentation and data sources (sensors and databases), technological advances, and perspectives in processing and data analysis. Our review suggests that chronic obstructive pulmonary disease (COPD) and asthma are the most common respiratory diseases reported on in the literature; related diseases that are less analyzed include chronic bronchitis, idiopathic pulmonary fibrosis, congestive heart failure, and parenchymal pathology. Some new findings regarding the methodologies associated with advances in the electronic stethoscope have been presented for the auscultatory heart sound signaling process, including analysis and clarification of resulting sounds to create a diagnosis based on a quantifiable medical assessment. The availability of automatic interpretation of high precision of heart and lung sounds opens interesting possibilities for cardiovascular diagnosis as well as potential for intelligent diagnosis of heart and lung diseases.

  19. Improving auscultatory proficiency using computer simulated heart sounds

    Directory of Open Access Journals (Sweden)

    Hanan Salah EL-Deen Mohamed EL-Halawany

    2016-09-01

    Full Text Available This study aimed to examine the effects of 'Heart Sounds', a web-based program, on improving fifth-year medical students' auscultation skills in a medical school in Egypt. The program was designed for medical students to master cardiac auscultation skills in addition to their usual clinical medical courses. Pre- and post-tests were performed to assess improvement in students' auscultation skills. Upon completing the training, students were required to complete a questionnaire reflecting on the learning experience they developed through the 'Heart Sounds' program. Results from the pre- and post-tests revealed a significant improvement in students' auscultation skills. In examining male and female students' pre- and post-test results, we found that both male and female students achieved a remarkable improvement in their auscultation skills. Students also stated clearly that the learning experience they had with the 'Heart Sounds' program was different from any traditional way of teaching. They stressed that the program had significantly improved their auscultation skills and enhanced their self-confidence in their ability to practice those skills. It is also recommended that the 'Heart Sounds' learning experience be extended by assessing students' practical improvement in real-life situations.

  20. Effect of Listening to the Al-Quran on Heart Sound

    Science.gov (United States)

    Daud, N. F.; Sharif, Z.

    2018-03-01

    This paper investigates the effect of listening to chosen verses of the Al-Quran on heart sounds. The heart sound signal is extracted using Thinklabs phonocardiography software, and the frequency components are then extracted using MATLAB 7.11.0. Frequency components during diastole are compared for two sessions: before and during listening. Diastole is the period in which the chambers of the heart fill with blood while the heart muscle is in a relaxed condition. From this study, it is found that the frequency of the heart sound while listening to the Al-Quran is lower than that before listening. This indicates that a state of calmness can be achieved by listening to these selected verses of the Al-Quran.
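
    The before/during frequency comparison described here reduces to finding the dominant spectral component of each recording. A minimal sketch of that step (the `dominant_frequency` name and the Hann window are choices made here, not details from the paper):

```python
import numpy as np

def dominant_frequency(x, fs):
    """Frequency (Hz) of the largest-magnitude bin in the windowed
    spectrum; comparing this value across sessions is the kind of
    before/during analysis the abstract describes."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs[int(np.argmax(spec))]
```

    For a pure 50 Hz tone sampled at 1 kHz, the function returns 50 Hz to within the FFT bin resolution.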

  1. Prototype electronic stethoscope vs. conventional stethoscope for auscultation of heart sounds.

    Science.gov (United States)

    Kelmenson, Daniel A; Heath, Janae K; Ball, Stephanie A; Kaafarani, Haytham M A; Baker, Elisabeth M; Yeh, Daniel D; Bittner, Edward A; Eikermann, Matthias; Lee, Jarone

    2014-08-01

    In an effort to decrease the spread of hospital-acquired infections, many hospitals currently use disposable plastic stethoscopes in patient rooms. As an alternative, this study examines a prototype electronic stethoscope that does not break the isolation barrier between clinician and patient and may also improve the diagnostic accuracy of the stethoscope exam. The study aimed to investigate whether the new prototype electronic stethoscope improved auscultation of heart sounds compared to the standard conventional isolation stethoscope. In a controlled, non-blinded, cross-over study, clinicians were randomized to identify heart sounds with both the prototype electronic stethoscope and a conventional stethoscope. The primary outcome was the score on a 10-question heart sound identification test. In total, 41 clinicians completed the study. Subjects performed significantly better in the identification of heart sounds when using the prototype electronic stethoscope (median = 9 [7-10] vs. 8 [6-9] points). Clinicians using the new prototype electronic stethoscope achieved greater accuracy in the identification of heart sounds and also universally favoured the new device over the conventional stethoscope.

  2. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two sounds emitted from different struck materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. The wood and bongo sounds are recorded by striking objects made of these materials, and are then analysed and processed. The Delta sound (click) is generated using the Adobe Audition software at a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound heard through the headphones, using a graphical user interface. Analysis of the recorded data reveals no significant differences, either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
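
    The rendering step this record describes, convolving a mono sound with a measured head-related transfer function for each ear, can be sketched in a few lines. The `binauralize` name is invented here, and in practice the HRIR pair comes from a measured database rather than the toy impulse responses used below:

```python
import numpy as np

def binauralize(x, hrir_l, hrir_r):
    """Render a mono signal at a virtual position by convolving it with a
    head-related impulse response (HRIR) pair, one per ear. Returns a
    2 x (len(x) + len(hrir) - 1) stereo array."""
    return np.stack([np.convolve(x, hrir_l), np.convolve(x, hrir_r)])
```

    With toy HRIRs that are pure delays, the output reproduces the interaural time difference exactly: the right channel is a delayed copy of the left.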

  3. Development of an Amplifier for Electronic Stethoscope System and Heart Sound Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. J.; Kang, D. K. [Chongju University, Chongju (Korea)

    2001-05-01

    A conventional stethoscope cannot store the sounds it picks up, so the doctor must diagnose from the instantaneous sounds heard at the time and cannot recall the state of the patient's stethoscopic sounds at the next examination. This prevents accurate and objective diagnosis. If an electronic stethoscope that can store the stethoscopic sound is developed, auscultation will be greatly improved. This study describes an amplifier for an electronic stethoscope system that can extract the heart sounds of a fetus as well as an adult and allows us to hear and record the sounds. Using the developed stethoscopic amplifier, clean heart sounds of fetuses and adults could be heard in noisy environments, such as a consultation room of a university hospital or a university laboratory. Notably, the heart sound of a 22-week fetus was heard through the developed electronic stethoscope. Pitch detection experiments on the detected heart sounds showed that the signal exhibits distinct periodicity. It is expected that the developed electronic stethoscope can substitute for conventional stethoscopes, and that if a proper analysis method for the stethoscopic signal is developed, a good electronic stethoscope system can be produced. (author). 17 refs., 6 figs.

  4. MECHANICAL HEART-VALVE PROSTHESES - SOUND LEVEL AND RELATED COMPLAINTS

    NARCIS (Netherlands)

    LAURENS, RRP; WIT, HP; EBELS, T

In a randomised study, we investigated the sound production of mechanical heart valve prostheses and the complaints related to this sound. The CarboMedics, Bjork-Shiley monostrut and St. Jude Medical prostheses were compared. A-weighted levels of the pulse-like sound produced by the prosthesis were

  5. [Analysis of the heart sound with arrhythmia based on nonlinear chaos theory].

    Science.gov (United States)

    Ding, Xiaorong; Guo, Xingming; Zhong, Lisha; Xiao, Shouzhong

    2012-10-01

In this paper, a new method based on nonlinear chaos theory is proposed to study arrhythmia, combining the correlation dimension and the largest Lyapunov exponent, by computing and analyzing these two parameters for 30 normal heart sound recordings and 30 recordings with arrhythmia. The results showed that the two parameters of the heart sounds with arrhythmia were higher than those of the normal ones, and that there was a significant difference between the two kinds of heart sounds. This is probably due to the irregularity of the arrhythmia, which decreases predictability and makes the signal more complex than a normal heart sound. Therefore, the correlation dimension and the largest Lyapunov exponent can be used to analyze arrhythmia and for its feature extraction.

  6. A Signal Processing Module for the Analysis of Heart Sounds and Heart Murmurs

    International Nuclear Information System (INIS)

    Javed, Faizan; Venkatachalam, P A; H, Ahmad Fadzil M

    2006-01-01

In this paper a Signal Processing Module (SPM) for the computer-aided analysis of heart sounds has been developed. The module reveals important information on cardiovascular disorders and can assist general physicians in reaching a more accurate and reliable diagnosis at early stages. It can compensate for the shortage of expert doctors in rural as well as urban clinics and hospitals. The module has five main blocks: Data Acquisition and Pre-processing, Segmentation, Feature Extraction, Murmur Detection and Murmur Classification. The heart sounds are first acquired using an electronic stethoscope, which is capable of transferring the signals to a nearby workstation over a wireless medium. The signals are then segmented into individual cycles, as well as individual components, using spectral analysis of the heart sound without any reference signal such as the ECG. Features are then extracted from the individual components using the spectrogram and are used as input to an MLP (multilayer perceptron) neural network trained to detect the presence of heart murmurs. Once a murmur is detected, it is classified into one of seven classes depending on its timing within the cardiac cycle, using the smoothed pseudo Wigner-Ville distribution. The module has been tested with real heart sounds from 40 patients and has proved to be quite efficient and robust while dealing with a large variety of pathological conditions.

  7. A Signal Processing Module for the Analysis of Heart Sounds and Heart Murmurs

    Energy Technology Data Exchange (ETDEWEB)

    Javed, Faizan; Venkatachalam, P A; H, Ahmad Fadzil M [Signal and Imaging Processing and Tele-Medicine Technology Research Group, Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, 31750 Tronoh, Perak (Malaysia)

    2006-04-01

In this paper a Signal Processing Module (SPM) for the computer-aided analysis of heart sounds has been developed. The module reveals important information on cardiovascular disorders and can assist general physicians in reaching a more accurate and reliable diagnosis at early stages. It can compensate for the shortage of expert doctors in rural as well as urban clinics and hospitals. The module has five main blocks: Data Acquisition and Pre-processing, Segmentation, Feature Extraction, Murmur Detection and Murmur Classification. The heart sounds are first acquired using an electronic stethoscope, which is capable of transferring the signals to a nearby workstation over a wireless medium. The signals are then segmented into individual cycles, as well as individual components, using spectral analysis of the heart sound without any reference signal such as the ECG. Features are then extracted from the individual components using the spectrogram and are used as input to an MLP (multilayer perceptron) neural network trained to detect the presence of heart murmurs. Once a murmur is detected, it is classified into one of seven classes depending on its timing within the cardiac cycle, using the smoothed pseudo Wigner-Ville distribution. The module has been tested with real heart sounds from 40 patients and has proved to be quite efficient and robust while dealing with a large variety of pathological conditions.

  8. Unsupervised Feature Learning for Heart Sounds Classification Using Autoencoder

    Science.gov (United States)

    Hu, Wei; Lv, Jiancheng; Liu, Dongbo; Chen, Yao

    2018-04-01

Cardiovascular disease seriously threatens the health of many people. It is usually diagnosed during cardiac auscultation, which is a fast and efficient method of cardiovascular disease diagnosis. In recent years, deep learning approaches based on unsupervised learning have made significant breakthroughs in many fields. However, to our knowledge, deep learning has not yet been used for heart sound classification. In this paper, we first use the average Shannon energy to extract the envelope of the heart sounds, then find the highest point of S1 to extract the cardiac cycle. We convert the time-domain signals of the cardiac cycle into spectrograms and apply principal component analysis whitening to reduce the dimensionality of the spectrogram. Finally, we apply a two-layer autoencoder to extract the features of the spectrogram. The experimental results demonstrate that the features from the autoencoder are suitable for heart sound classification.
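The average Shannon energy envelope named in this abstract can be sketched as follows; the frame and hop lengths are illustrative choices, not the paper's settings:

```python
import math

def shannon_envelope(x, frame=20, hop=10):
    """Average Shannon energy envelope of a signal, one value per frame:
    E = -(1/N) * sum(x^2 * log(x^2)).  The x^2*log(x^2) weighting
    emphasizes medium-intensity components (such as S1/S2) over both
    low-level noise and isolated high peaks."""
    peak = max(abs(v) for v in x) or 1.0
    x = [v / peak for v in x]                 # normalize to [-1, 1]
    env = []
    for start in range(0, len(x) - frame + 1, hop):
        e = 0.0
        for v in x[start:start + frame]:
            s = v * v
            if s > 0:
                e -= s * math.log(s)          # s = 0 contributes 0 in the limit
        env.append(e / frame)
    return env
```

Samples of full amplitude 1.0 contribute zero (log 1 = 0), which is exactly the suppression of extreme peaks that makes this envelope popular for S1/S2 detection.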

  9. Heart sounds analysis using probability assessment.

    Science.gov (United States)

    Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P

    2017-07-31

This paper describes a method for the automated discrimination of heart sound recordings according to the Physionet Challenge 2016. The goal was to decide if the recording refers to normal or abnormal heart sounds or if it is not possible to decide (i.e. 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.

  10. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

Full Text Available OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and frequencies among normal-hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between the speakers' axes in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p<0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  11. a New Approach to Physiologic Triggering in Medical Imaging Using Multiple Heart Sounds Alone.

    Science.gov (United States)

    Groch, Mark Walter

    A new method for physiological synchronization of medical image acquisition using both the first and second heart sound has been developed. Heart sounds gating (HSG) circuitry has been developed which identifies, individually, both the first (S1) and second (S2) heart sounds from their timing relationship alone, and provides two synchronization points during the cardiac cycle. Identification of first and second heart sounds from their timing relationship alone and application to medical imaging has, heretofore, not been performed in radiology or nuclear medicine. The heart sounds are obtained as conditioned analog signals from a piezoelectric transducer microphone placed on the patient's chest. The timing relationships between the S1 to S2 pulses and the S2 to S1 pulses are determined using a logic scheme capable of distinguishing the S1 and S2 pulses from the heart sounds themselves, using their timing relationships, and the assumption that initially the S1-S2 interval will be shorter than the S2-S1 interval. Digital logic circuitry is utilized to continually track the timing intervals and extend the S1/S2 identification to heart rates up to 200 beats per minute (where the S1-S2 interval is not shorter than the S2-S1 interval). Clinically, first heart sound gating may be performed to assess the systolic ejection portion of the cardiac cycle, with S2 gating utilized for reproduction of the diastolic filling portion of the cycle. One application of HSG used for physiologic synchronization is in multigated blood pool (MGBP) imaging in nuclear medicine. Heart sounds gating has been applied to twenty patients who underwent analysis of ventricular function in Nuclear Medicine, and compared to conventional ECG gated MGBP. Left ventricular ejection fractions calculated from MGBP studies using a S1 and a S2 heart sound trigger correlated well with conventional ECG gated acquisitions in patients adequately gated by HSG and ECG. Heart sounds gating provided superior
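The timing logic at the core of heart sounds gating (at normal rates, the systolic S1-S2 interval is assumed shorter than the diastolic S2-S1 interval) can be sketched for a sequence of detected sound onsets. This is a simplified reconstruction for illustration, not the gating circuitry described in the record:

```python
def label_heart_sounds(onsets):
    """Label alternating heart-sound onset times as S1 or S2 using timing
    alone: within a cardiac cycle, the S1->S2 (systolic) interval is
    assumed shorter than the S2->S1 (diastolic) interval."""
    if len(onsets) < 3:
        raise ValueError("need at least three onsets to compare intervals")
    first_gap = onsets[1] - onsets[0]
    second_gap = onsets[2] - onsets[1]
    # If the first interval is the shorter one, the sequence starts on S1.
    start = "S1" if first_gap < second_gap else "S2"
    other = "S2" if start == "S1" else "S1"
    return [start if i % 2 == 0 else other for i in range(len(onsets))]
```

A production implementation would, as the record notes, continually track the two intervals so that identification survives heart rates where the systolic interval is no longer the shorter one.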

  12. A noise reduction technique based on nonlinear kernel function for heart sound analysis.

    Science.gov (United States)

    Mondal, Ashok; Saxena, Ishan; Tang, Hong; Banerjee, Poulami

    2017-02-13

The main difficulty encountered in the interpretation of cardiac sounds is interference from noise. The contaminating noise obscures relevant information that is useful for the recognition of heart diseases. The unwanted signals are produced mainly by the lungs and the surrounding environment. In this paper, a novel heart sound de-noising technique is introduced, based on a combined framework of the wavelet packet transform (WPT) and singular value decomposition (SVD). The most informative node of the wavelet tree is selected based on a mutual information criterion. Next, the coefficients corresponding to the selected node are processed by the SVD technique to suppress the noisy component of the heart sound signal. To justify the efficacy of the proposed technique, several experiments have been conducted with a heart sound dataset, including normal and pathological cases at different signal-to-noise ratios. The significance of the method is validated by statistical analysis of the results. The biological information preserved in the de-noised heart sound (HS) signal is evaluated by the k-means clustering algorithm and a Fit Factor calculation. The overall results show that the proposed method is superior to the baseline methods.

  13. The Voice of the Heart: Vowel-Like Sound in Pulmonary Artery Hypertension

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    2018-04-01

Full Text Available Increased blood pressure in the pulmonary artery is referred to as pulmonary hypertension and often is linked to loud pulmonic valve closures. For the purpose of this paper, it was hypothesized that pulmonary circulation vibrations will create sounds similar to sounds created by vocal cords during speech and that subjects with pulmonary artery hypertension (PAH) could have unique sound signatures across four auscultatory sites. Using a digital stethoscope, heart sounds were recorded at the cardiac apex, 2nd left intercostal space (2LICS), 2nd right intercostal space (2RICS), and 4th left intercostal space (4LICS) in subjects undergoing simultaneous cardiac catheterization. From the collected heart sounds, the relative power of the frequency band, the energy of the sinusoid formants, and the entropy were extracted. PAH subjects were differentiated by applying linear discriminant analysis with leave-one-out cross-validation. The entropy of the first sinusoid formant decreased significantly in subjects with a mean pulmonary artery pressure (mPAp) ≥ 25 mmHg versus subjects with a mPAp < 25 mmHg, with a sensitivity of 84% and a specificity of 88.57%, within a 10-s optimized window length for heart sounds recorded at the 2LICS. The reduction in the entropy of the first sinusoid formant of heart sounds in PAH subjects suggests the existence of a vowel-like pattern. Pattern analysis revealed a unique sound signature, which could be used in non-invasive screening tools.

  14. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity and accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals was observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for
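Tolerance-window scoring of the kind described above can be sketched as follows. The greedy one-to-one matching rule is an assumption about the implementation (the paper's exact matching procedure may differ), and `tol` is the half-width of the window in seconds:

```python
def score_with_tolerance(annotated, detected, tol=0.1):
    """Match detected event times to annotated ones within +/- tol seconds,
    each annotation usable at most once, then report sensitivity (Se),
    positive predictive value (PPV) and their harmonic mean F1."""
    used = set()
    tp = 0
    for d in detected:
        for i, a in enumerate(annotated):
            if i not in used and abs(d - a) <= tol:
                used.add(i)      # consume this annotation
                tp += 1
                break
    se = tp / len(annotated) if annotated else 0.0
    ppv = tp / len(detected) if detected else 0.0
    f1 = 2 * se * ppv / (se + ppv) if (se + ppv) else 0.0
    return se, ppv, f1
```

Widening `tol` can only add matches, which is why the reported F1 rises monotonically with the tolerance window size.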

  15. [Realization of Heart Sound Envelope Extraction Implemented on LabVIEW Based on Hilbert-Huang Transform].

    Science.gov (United States)

    Tan, Zhixiang; Zhang, Yi; Zeng, Deping; Wang, Hua

    2015-04-01

In this paper we propose a heart sound envelope extraction system. The system was implemented in LabVIEW based on the Hilbert-Huang transform (HHT). We first used the sound card to collect the heart sound, and then implemented the complete system program of signal acquisition, pretreatment and envelope extraction in LabVIEW based on the theory of the HHT. Finally, we used a test case to demonstrate that the system could collect the heart sound, preprocess it and extract the envelope easily. The system retained and displayed the characteristics of the heart sound envelope well, and its program and methods are relevant to other research, such as studies of vibration and voice.
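The envelope step of an HHT-style pipeline rests on the analytic signal: zero the negative-frequency half of the spectrum, double the positive half, inverse-transform, and take the magnitude. A minimal sketch using a direct DFT (O(n²), chosen for clarity rather than speed) in place of LabVIEW's built-in transforms:

```python
import cmath
import math

def analytic_envelope(x):
    """Envelope |z[t]| of the analytic signal z = x + j*Hilbert(x),
    built by masking the DFT: keep DC (and Nyquist for even n),
    double positive frequencies, zero negative frequencies."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    H = [0.0] * n
    H[0] = 1.0
    if n % 2 == 0:
        H[n // 2] = 1.0
        for k in range(1, n // 2):
            H[k] = 2.0
    else:
        for k in range(1, (n + 1) // 2):
            H[k] = 2.0
    Z = [Xk * Hk for Xk, Hk in zip(X, H)]
    z = [sum(Z[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
         for t in range(n)]
    return [abs(v) for v in z]
```

For a pure cosine the envelope comes out flat at the cosine's amplitude, which is the sanity check usually applied to Hilbert-envelope code.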

  16. Heart sounds analysis using probability assessment

    Czech Academy of Sciences Publication Activity Database

    Plešinger, Filip; Viščor, Ivo; Halámek, Josef; Jurčo, Juraj; Jurák, Pavel

    2017-01-01

    Roč. 38, č. 8 (2017), s. 1685-1700 ISSN 0967-3334 R&D Projects: GA ČR GAP102/12/2034; GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01 Institutional support: RVO:68081731 Keywords : heart sounds * FFT * machine learning * signal averaging * probability assessment Subject RIV: FS - Medical Facilities ; Equipment OBOR OECD: Medical engineering Impact factor: 2.058, year: 2016

  17. Fractal dimension to classify the heart sound recordings with KNN and fuzzy c-mean clustering methods

    Science.gov (United States)

    Juniati, D.; Khotimah, C.; Wardani, D. E. K.; Budayasa, K.

    2018-01-01

Heart abnormalities can be detected from heart sounds. A heart sound can be heard directly with a stethoscope or indirectly through a phonocardiograph, a machine for recording heart sounds. This paper presents an implementation of fractal dimension theory to classify phonocardiograms into a normal heart sound, a murmur, or an extrasystole. The main algorithm used to calculate the fractal dimension was Higuchi's algorithm. There were two steps in classifying the phonocardiograms: feature extraction and classification. For feature extraction, we used the Discrete Wavelet Transform to decompose the heart sound signal into several sub-bands depending on the selected level. After the decomposition process, the signal was processed using the Fast Fourier Transform (FFT) to determine the spectral frequency. The fractal dimension of the FFT output was calculated using Higuchi's algorithm. The classification of the fractal dimensions of all phonocardiograms was done with the KNN and fuzzy c-means clustering methods. Based on the research results, the best accuracy obtained was 86.17%, with feature extraction by DWT decomposition at level 3, a kmax value of 50, 5-fold cross-validation and 5 neighbors in the KNN algorithm. For fuzzy c-means clustering, the accuracy was 78.56%.
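Higuchi's algorithm estimates the fractal dimension from how the mean curve length L(k) shrinks as the sampling interval k grows (L(k) ∝ k^(-D)). A sketch with a small default kmax (the paper uses kmax = 50):

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension: for each lag k, average the normalized
    curve lengths of the k decimated subsequences, then recover D as the
    least-squares slope of log L(k) versus log(1/k)."""
    n = len(x)
    points = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            pts = x[m::k]
            if len(pts) < 2:
                continue
            dist = sum(abs(pts[i] - pts[i - 1]) for i in range(1, len(pts)))
            norm = (n - 1) / ((len(pts) - 1) * k)   # length normalization
            lengths.append(dist * norm / k)
        if lengths:
            points.append((math.log(1.0 / k),
                           math.log(sum(lengths) / len(lengths))))
    # least-squares slope of log L(k) against log(1/k)
    mx = sum(p[0] for p in points) / len(points)
    my = sum(p[1] for p in points) / len(points)
    num = sum((p[0] - mx) * (p[1] - my) for p in points)
    den = sum((p[0] - mx) ** 2 for p in points)
    return num / den
```

A straight line has dimension 1, which gives a convenient exact check of the implementation.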

  18. [Study of biometric identification of heart sound base on Mel-Frequency cepstrum coefficient].

    Science.gov (United States)

    Chen, Wei; Zhao, Yihua; Lei, Sheng; Zhao, Zikai; Pan, Min

    2012-12-01

Heart sound is a physiological signal with individual characteristics generated by the heartbeat. For individual classification and recognition, in this paper we present a study that uses the wavelet transform for signal denoising and the Mel-frequency cepstrum coefficients (MFCC) as feature parameters, and we propose reducing the dimensionality through principal component analysis (PCA). We have carried out a preliminary study to test the feasibility of a biometric identification method using heart sounds. The results showed that under the selected experimental conditions, the system could reach a 90% recognition rate. This study can provide a reference for further research.

  19. Sound source localization and segregation with internally coupled ears

    DEFF Research Database (Denmark)

    Bee, Mark A; Christensen-Dalsgaard, Jakob

    2016-01-01

…to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating…

  20. Measurement and classification of heart and lung sounds by using LabView for educational use.

    Science.gov (United States)

    Altrabsheh, B

    2010-01-01

This study presents the design, development and implementation of a simple, low-cost method of phonocardiography signal detection. Human heart and lung signals are detected using a simple microphone connected to a personal computer; the signals are recorded and analysed using LabView software. Amplitude and frequency analyses are carried out for various phonocardiography pathological cases. Methods for the automatic classification of normal and abnormal heart sounds, murmurs and lung sounds are presented. Various cases of heart and lung sound measurement are recorded and analysed, and the measurements can be saved for further analysis. The method in this study can be used by doctors as a detection aid and may be useful for teaching purposes at medical and nursing schools.

  1. Sound Heart: Spiritual Nursing Care Model from Religious Viewpoint.

    Science.gov (United States)

    Asadzandi, Minoo

    2017-12-01

Different methods of epistemology create different philosophical views. None of the nursing theories have employed the revelational epistemology and the philosophical views of the Abrahamic religions. According to the Abrahamic religions, the universe and human beings have been created based on God's affection. Human beings deserve the position of God's representative on earth after achieving all ethical merits. Humans have the willpower to shape their destiny by choosing the manner of their relationship with God, people, themselves and the whole universe. They can adopt the right behavior by giving a divine color to their thoughts and intentions and thus attain peace and serenity in their heart. Health means having a sound heart (a calm spirit with a sense of hope and love, security and happiness) that is achievable through faith and piety. Moral vices lead to diseases. Human beings are able to purge their inside (heart) through establishing a relationship with God and then take action to reform the outside world. The worlds are run by God's will based on prudence and mercy. All events happen with God's authorization, and human beings have to respond to them. Nurses should try to recognize the patient's spiritual response to illness, which can appear as symptoms of an unsound heart (fear, sadness, disappointment, anger, jealousy, cruelty, grudge, suspicion, etc.) due to the pains caused by illness, and then alleviate the patient's suffering by appropriate approaches. Nurses help the patient achieve a sound heart through hope in divine mercy and love; they help the patient see good in any evil, relieve their fear and sadness by viewing their illness positively, and thus attain calm, satisfaction, peace and serenity in their heart and contentment with the divine fate. By inviting patients to religious morality, the model leads them to spiritual health.

  2. Cuffless and Continuous Blood Pressure Estimation from the Heart Sound Signals

    Directory of Open Access Journals (Sweden)

    Rong-Chao Peng

    2015-09-01

Full Text Available Cardiovascular diseases such as hypertension are among the leading causes of death, and early detection of cardiovascular disease is of great importance. However, traditional medical devices are often bulky and expensive, and unsuitable for home healthcare. In this paper, we propose an easy and inexpensive technique to estimate continuous blood pressure from heart sound signals acquired by the microphone of a smartphone. A cold-pressor experiment was performed on 32 healthy subjects, with a smartphone to acquire heart sound signals and a commercial device to measure continuous blood pressure. The Fourier spectrum of the second heart sound and the blood pressure were regressed using a support vector machine, and the accuracy of the regression was evaluated using 10-fold cross-validation. Statistical analysis showed that the mean correlation coefficients between the values predicted by the regression model and those measured by the commercial device were 0.707, 0.712, and 0.748 for systolic, diastolic, and mean blood pressure, respectively, and that the mean errors were less than 5 mmHg, with standard deviations less than 8 mmHg. These results suggest that this technique is of potential use for cuffless and continuous blood pressure monitoring and has promising applications in home healthcare services.

  3. Performance evaluation of heart sound cancellation in FPGA hardware implementation for electronic stethoscope.

    Science.gov (United States)

    Chao, Chun-Tang; Maneetien, Nopadon; Wang, Chi-Jo; Chiou, Juing-Shian

    2014-01-01

    This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera cyclone II-EP2C70F89 shows that the proposed ALE used 45% resources of the chip. Experiments with the proposed prototype were made using DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives.

  4. Performance Evaluation of Heart Sound Cancellation in FPGA Hardware Implementation for Electronic Stethoscope

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2014-01-01

    Full Text Available This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs. The adaptive line enhancer (ALE was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera cyclone II–EP2C70F89 shows that the proposed ALE used 45% resources of the chip. Experiments with the proposed prototype were made using DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives.
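The adaptive line enhancer used in the two records above can be sketched in software as a delayed-input LMS predictor; the order, delay and step size below are illustrative values, not the FPGA design's parameters. The predictor output tracks the correlated (quasi-periodic) component of the input, and the residual keeps the broadband remainder:

```python
def adaptive_line_enhancer(x, order=8, delay=5, mu=0.01):
    """LMS adaptive line enhancer: predict x[n] from delayed samples
    x[n-delay-k].  The prediction y converges to the narrowband
    (predictable) component; the residual e = x - y retains the
    broadband component (e.g. breath sound with heart sound reduced)."""
    w = [0.0] * order
    enhanced, residual = [], []
    for n in range(len(x)):
        # reference vector of delayed input samples (zeros before t = 0)
        ref = [x[n - delay - k] if n - delay - k >= 0 else 0.0
               for k in range(order)]
        y = sum(wk * rk for wk, rk in zip(w, ref))
        e = x[n] - y
        # LMS weight update
        w = [wk + 2 * mu * e * rk for wk, rk in zip(w, ref)]
        enhanced.append(y)
        residual.append(e)
    return enhanced, residual
```

For a perfectly periodic input whose period fits inside the filter's reach, the residual power decays toward zero as the weights converge.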

  5. Multi-point accelerometric detection and principal component analysis of heart sounds

    International Nuclear Information System (INIS)

    De Panfilis, S; Peccianti, M; Chiru, O M; Moroni, C; Vashkevich, V; Parisi, G; Cassone, R

    2013-01-01

Heart sounds are a fundamental physiological variable that provide a unique insight into cardiac semiotics. However, a deterministic and unambiguous association between sounds and events in cardiac dynamics has yet to be accomplished, owing to the many different overlapping events that contribute to the acoustic emission. Current computer-based capacities in terms of signal detection and processing allow one to move from standard cardiac auscultation, even in its improved forms such as electronic stethoscopes or hi-tech phonocardiography, to the extraction of previously unexplored information on cardiac activity. In this report, we present new equipment for the detection of heart sounds, based on a set of accelerometric sensors placed in contact with the chest skin on the precordial area, able to measure simultaneously the vibration induced on the chest surface by the heart's mechanical activity. By utilizing advanced algorithms for data treatment, such as wavelet decomposition and principal component analysis, we are able to condense the spatially extended acoustic information and to provide a synthetic representation of the heart activity. We applied our approach to 30 adults of mixed gender, age and health status, and correlated our results with standard echocardiographic examinations. We obtained a 93% concordance rate with echocardiography between healthy and unhealthy hearts, including minor abnormalities such as mitral valve prolapse. (fast track communication)

  6. Sound Localization Strategies in Three Predators

    DEFF Research Database (Denmark)

    Carr, Catherine E; Christensen-Dalsgaard, Jakob

    2015-01-01

In this paper, we compare some of the neural strategies for sound localization and encoding interaural time differences (ITDs) in three predatory species of Reptilia: alligators, barn owls and geckos. Birds and crocodilians are sister groups among the extant archosaurs, while geckos are lepidosaurs. Despite the similar organization of their auditory systems, archosaurs and lizards use different strategies for encoding the ITDs that underlie localization of sound in azimuth. Barn owls encode ITD information using a place map, which is composed of neurons serving as labeled lines tuned for preferred spatial locations, while geckos may use a meter strategy or population code composed of broadly sensitive neurons that represent ITD via changes in the firing rate.
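Whatever the neural encoding (place map or population rate code), the underlying cue is the ITD itself. A classical engineering estimate of it, via the lag that maximizes the interaural cross-correlation, can be sketched as:

```python
def estimate_itd(left, right, max_lag, fs):
    """Estimate the interaural time difference as the lag (in samples,
    converted to seconds) maximizing sum_n left[n] * right[n + lag].
    A positive result means the right-ear signal lags the left one,
    i.e. the sound reached the left ear first."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        acc = 0.0
        for n, l in enumerate(left):
            m = n + lag
            if 0 <= m < len(right):
                acc += l * right[m]
        if acc > best_val:
            best_val, best_lag = acc, lag
    return best_lag / fs
```

This brute-force search is O(len * max_lag); real-time systems would use FFT-based correlation, but the physiologically plausible lags (a few hundred microseconds) keep `max_lag` small anyway.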

  7. Comparison between users of a new methodology for heart sound auscultation.

    Science.gov (United States)

    Castro, Ana; Gomes, Pedro; Mattos, Sandra S; Coimbra, Miguel T

    2016-08-01

Auscultation is a routine exam and the first line of screening for heart pathologies. The objective of this study was to assess whether, using a new data collection system, the DigiScope Collector, with guided and automatic annotation of heart auscultation, users with different levels of expertise/experience could collect similar digital auscultations. Data were collected within the Heart Caravan Initiative (Paraíba, Brasil). Patients were divided into two study groups: Group 1 was evaluated by a third-year medical student (User 1) and an experienced nurse (User 2); Group 2 was evaluated by User 2 and an Information Technology professional (User 3). Patients were auscultated sequentially by the two users, according to the randomization. Features extracted from each data set included the length of the audio files, the number of repetitions per auscultation area, heart rate (HR), the first (S1) and second (S2) heart sound amplitudes, S2/S1, and the aortic (A2) and pulmonary (P2) components of the second heart sound and their relative amplitudes (P2/A2). Extracted features were compared between users using the paired-sample Wilcoxon test and Spearman correlations (P<0.05). Differences were found in the length of the auscultation (User 2 consistently presented a longer auscultation time). Correlation analysis showed significant correlations between the features extracted in both groups: S2/S1 in Group 1, and the S1, S2, A2, P2 and P2/A2 amplitudes, and HR in Group 2. Using the DigiScope Collector, we were able to collect similar digital auscultations, according to the features evaluated. This may indicate that in sites with limited access to specialized clinical care, auscultation files may be acquired and used in telemedicine for expert evaluation.

  8. The effect of brain lesions on sound localization in complex acoustic environments.

    Science.gov (United States)

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field was directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources, rather than in localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  9. Sound localization in the presence of one or two distracters

    NARCIS (Netherlands)

    Langendijk, E.H.A.; Kistler, D.J.; Wightman, F.L

    2001-01-01

    Localizing a target sound can be a challenge when one or more distracter sounds are present at the same time. This study measured the effect of distracter position on target localization for one distracter (17 positions) and two distracters (21 combinations of 17 positions). Listeners were

  10. Automatic moment segmentation and peak detection analysis of heart sound pattern via short-time modified Hilbert transform.

    Science.gov (United States)

    Sun, Shuping; Jiang, Zhongwei; Wang, Haibin; Fang, Yu

    2014-05-01

    This paper proposes a novel automatic method for the moment segmentation and peak detection analysis of heart sound (HS) patterns, with special attention to the characteristics of the envelopes of HS and considering the properties of the Hilbert transform (HT). The moment segmentation and peak location are accomplished in two steps. First, by applying the Viola integral waveform method in the time domain, the envelope (E(T)) of the HS signal is obtained with an emphasis on the first heart sound (S1) and the second heart sound (S2). Then, based on the characteristics of the E(T) and the properties of the HT of convex and concave functions, a novel method, the short-time modified Hilbert transform (STMHT), is proposed to automatically locate the moment segmentation and peak points for the HS by the zero crossing points of the STMHT. A fast algorithm for calculating the STMHT of E(T) can be expressed by multiplying the E(T) by an equivalent window (W(E)). According to the range of heart beats, and based on the numerical experiments and the important parameters of the STMHT, a moving window width of N = 1 s is validated for locating the moment segmentation and peak points for HS. The proposed moment segmentation and peak location method is validated by sounds from the Michigan HS database and sounds from clinical heart diseases, such as a ventricular septal defect (VSD), an atrial septal defect (ASD), tetralogy of Fallot (TOF), rheumatic heart disease (RHD), and so on. As a result, for the sounds where S2 can be separated from S1, the average accuracies achieved for the peak of S1 (AP₁), the peak of S2 (AP₂), the moment segmentation points from S1 to S2 (AT₁₂) and the cardiac cycle (ACC) are 98.53%, 98.31%, 98.36% and 97.37%, respectively. For the sounds where S1 cannot be separated from S2, the average accuracies achieved for the peak of S1 and S2 (AP₁₂) and the cardiac cycle (ACC) are 100% and 96.69%. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
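The envelope-plus-zero-crossing idea behind the STMHT can be illustrated with a much simpler sketch: extract the envelope of the signal (here via the analytic signal from `scipy.signal.hilbert`, standing in for the paper's Viola integral), smooth it, and pick S1/S2 candidates as well-separated envelope peaks. This is an illustrative Python sketch, not the authors' algorithm; the smoothing window, threshold and minimum peak gap are our assumptions.

```python
# Hypothetical envelope-based heart-sound peak detection, in the spirit of
# the STMHT record above (not the authors' exact algorithm).
import numpy as np
from scipy.signal import hilbert, find_peaks

def heart_sound_peaks(x, fs, min_gap_s=0.2):
    """Locate candidate S1/S2 peaks from the analytic-signal envelope."""
    env = np.abs(hilbert(x))                     # amplitude envelope E(T)
    # Smooth the envelope with a short moving average (assumed 20 ms window).
    w = max(1, int(0.02 * fs))
    env = np.convolve(env, np.ones(w) / w, mode="same")
    peaks, _ = find_peaks(env, height=0.3 * env.max(),
                          distance=int(min_gap_s * fs))
    return peaks, env

# Synthetic test signal: two tone bursts standing in for S1 and S2.
fs = 2000
t = np.arange(0, 1.0, 1 / fs)
x = (np.exp(-((t - 0.2) ** 2) / 1e-4)
     + np.exp(-((t - 0.5) ** 2) / 1e-4)) * np.sin(2 * np.pi * 60 * t)
peaks, env = heart_sound_peaks(x, fs)
print(len(peaks))   # two bursts -> two detected peaks
```

On real PCG recordings the detected peaks would still have to be labeled S1 versus S2, for instance by exploiting the systole/diastole duration asymmetry.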

  11. Horizontal sound localization in cochlear implant users with a contralateral hearing aid.

    Science.gov (United States)

    Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A

    2016-06-01

    Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
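The two binaural cues discussed in this record can be computed directly from a stereo recording: ITD as the lag maximizing the interaural cross-correlation, ILD as a broadband level ratio in dB. A minimal Python sketch on synthetic signals follows; the sign convention (negative lag means the left channel leads) and the test values are our choices, not tied to any CI/HA fitting.

```python
# Textbook computation of ITD (cross-correlation lag) and ILD (RMS ratio).
import numpy as np

def itd_ild(left, right, fs):
    # ITD: lag (seconds) maximizing the cross-correlation; with numpy's
    # convention a negative lag here means the left signal leads.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    itd = lag / fs
    # ILD: broadband level difference in dB (left re right).
    rms = lambda s: np.sqrt(np.mean(s ** 2))
    ild = 20 * np.log10(rms(left) / rms(right))
    return itd, ild

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
sig = np.sin(2 * np.pi * 500 * t)
delay = 20                      # right ear lags by 20 samples (~0.45 ms)
left = np.concatenate([sig, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), sig]) * 0.5   # right also ~6 dB softer
itd, ild = itd_ild(left, right, fs)
print(round(itd * 1e3, 2), round(ild, 1))   # left leads; ILD ~ 6 dB
```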

  12. Ambient Sound-Based Collaborative Localization of Indeterministic Devices

    NARCIS (Netherlands)

    Kamminga, Jacob Wilhelm; Le Viet Duc, L Duc; Havinga, Paul J.M.

    2016-01-01

    Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and

  13. Sound localization under perturbed binaural hearing.

    NARCIS (Netherlands)

    Wanrooij, M.M. van; Opstal, A.J. van

    2007-01-01

    This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband

  14. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals.

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-Jia; Li, Jian-Jun; Ting, Kin-Hung; Lu, Zhong-Lin; Whitfield-Gabrieli, Susan; Wang, Jun; Lee, Tatia M C

    2017-05-01

    Cross-modal learning requires the use of information from different sensory modalities. This study investigated how the prior visual experience of late blind individuals could modulate neural processes associated with learning of sound localization. Learning was realized by standardized training on sound localization processing, and experience was investigated by comparing brain activations elicited by a sound localization task in individuals with (late blind, LB) and without (early blind, EB) prior visual experience. After the training, EB showed decreased activation in the precuneus, which was functionally connected to a limbic-multisensory network. In contrast, LB showed increased activation of the precuneus. A subgroup of LB participants who demonstrated higher visuospatial working memory capabilities (LB-HVM) exhibited an enhanced precuneus-lingual gyrus network. This differential connectivity suggests that visuospatial working memory, developed through prior visual experience, enhanced learning of sound localization in LB-HVM. Active visuospatial navigation processes could have occurred in LB-HVM, compared to the retrieval of previously bound information from long-term memory in EB. The precuneus appears to play a crucial role in learning of sound localization, regardless of prior visual experience. Prior visual experience, however, could enhance cross-modal learning by extending binding to the integration of unprocessed information, mediated by the cognitive functions that these experiences develop.

  15. Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea

    Science.gov (United States)

    Oshinsky, Michael Lee

    A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system, and specifically sound localization, were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ. The two ears are contained in one air sac, and a cuticular bridge, which has a flexible spring-like structure at its center, connects them. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents. I present this physiology in the context of sound localization. In chapter 3, I describe the direction-dependent physiology of the thoracic local and ascending acoustic interneurons. In chapter 4, I quantify the threshold and detail the kinematics of the phonotactic ...

  16. Learning to Localize Sound with a Lizard Ear Model

    DEFF Research Database (Denmark)

    Shaikh, Danish; Hallam, John; Christensen-Dalsgaard, Jakob

    The peripheral auditory system of a lizard is strongly directional in the azimuth plane due to the acoustical coupling of the animal's two eardrums. This feature by itself is insufficient to accurately localize sound as the extracted directional information cannot be directly mapped to the sound...

  17. Spherical loudspeaker array for local active control of sound.

    Science.gov (United States)

    Rafaely, Boaz

    2009-05-01

    Active control of sound has been employed to reduce noise levels around a listener's head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.

  18. Heart murmurs

    Science.gov (United States)

    Chest sounds - murmurs; Heart sounds - abnormal; Murmur - innocent; Innocent murmur; Systolic heart murmur; Diastolic heart murmur ... The heart has 4 chambers: Two upper chambers (atria) Two lower chambers (ventricles) The heart has valves that close ...

  19. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants.

    Directory of Open Access Journals (Sweden)

    Yi Zheng

    Full Text Available Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

  20. Improvement of directionality and sound-localization by internal ear coupling in barn owls

    DEFF Research Database (Denmark)

    Wagner, Hermann; Christensen-Dalsgaard, Jakob; Kettler, Lutz

    Mark Konishi was one of the first to quantify sound-localization capabilities in barn owls. He showed that frequencies between 3 and 10 kHz underlie precise sound localization in these birds, and that they derive spatial information from processing interaural time and interaural level differences. However, despite intensive research during the last 40 years it is still unclear whether and how internal ear coupling contributes to sound localization in the barn owl. Here we investigated ear directionality in anesthetized birds with the help of laser vibrometry. Care was taken that anesthesia ... time difference in the low-frequency range, barn owls hesitate to approach prey or turn their heads when only low-frequency auditory information is present in a stimulus they receive. Thus, the barn-owl's sound localization system seems to be adapted to work best in frequency ranges where interaural ...

  1. Sound localization with head movement: implications for 3-d audio displays.

    Directory of Open Access Journals (Sweden)

    Ken Ian McAnally

    2014-08-01

    Full Text Available Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows 2°, 4°, 8°, 16°, 32°, or 64° of azimuth in width. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: the utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy.

  2. Hybrid local piezoelectric and conductive functions for high performance airborne sound absorption

    Science.gov (United States)

    Rahimabady, Mojtaba; Statharas, Eleftherios Christos; Yao, Kui; Sharifzadeh Mirshekarloo, Meysam; Chen, Shuting; Tay, Francis Eng Hock

    2017-12-01

    A concept of hybrid local piezoelectric and electrical conductive functions for improving airborne sound absorption is proposed and demonstrated in a composite foam made of porous polar polyvinylidene fluoride (PVDF) mixed with conductive single-walled carbon nanotubes (SWCNT). According to our hybrid material function design, the local piezoelectric effect in the PVDF matrix with the polar structure and the electrical resistive loss of the SWCNT enhanced sound energy conversion to electrical energy and subsequently to thermal energy, respectively, in addition to the other known sound absorption mechanisms in a porous material. It is found that the overall energy conversion, and hence the sound absorption performance, is maximized when the concentration of the SWCNT is around the conductivity percolation threshold. For the optimal composition of PVDF/5 wt. % SWCNT, a sound reduction coefficient larger than 0.58 has been obtained, with a sound absorption coefficient above 50% at 600 Hz, showing great value for passive noise mitigation even at low frequency.

  3. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.

  4. The correlation between the first heart sound and cardiac output as measured by using digital esophageal stethoscope under anaesthesia.

    Science.gov (United States)

    Duck Shin, Young; Hoon Yim, Kyoung; Hi Park, Sang; Wook Jeon, Yong; Ho Bae, Jin; Soo Lee, Tae; Hwan Kim, Myoung; Jin Choi, Young

    2014-03-01

    The use of an esophageal stethoscope is a basic heart-sound monitoring procedure performed in patients under general anesthesia. Because the amplitude of the first heart sound reflects left ventricular function, its correlation with cardiac output warrants investigation. The aim of this study was to investigate the effects of cardiac output (CO) on the first heart sound (S1) amplitude. Methods: Six male beagles were chosen. The S1 was obtained with the newly developed esophageal stethoscope system. CO was measured using NICOM, a non-invasive CO measuring device. Ephedrine and beta blockers were administered to the subjects to compare changes in figures, and the change from using an inhalation anesthetic was also compared. The S1 amplitude displayed a positive correlation with the change rate of CO (r = 0.935, p < 0.001). The heart rate measured using the esophageal stethoscope and ECG agreed closely on the Bland-Altman plot and showed a high positive correlation (r = 0.988, p < 0.001). In beagles, the amplitude of S1 had a significant correlation with changes in CO in a variety of situations.
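The two analyses named in this abstract, correlation between paired measurements and Bland-Altman agreement, can be sketched in a few lines of Python; the heart-rate values below are invented purely to exercise the formulas:

```python
# Minimal correlation + Bland-Altman sketch for comparing two measurement
# methods. The data are hypothetical, not from the study above.
import numpy as np

def bland_altman(a, b):
    """Return bias and 95% limits of agreement between paired series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)     # half-width of the limits of agreement
    return bias, bias - loa, bias + loa

hr_ecg   = np.array([62, 75, 80, 91, 103, 110])   # hypothetical ECG HR (bpm)
hr_steth = np.array([61, 76, 79, 92, 102, 111])   # hypothetical stethoscope HR
r = np.corrcoef(hr_ecg, hr_steth)[0, 1]
bias, lo, hi = bland_altman(hr_ecg, hr_steth)
print(round(r, 3), round(bias, 2))
```

Close agreement shows up as a bias near zero with narrow limits, alongside a correlation coefficient near one, which is the pattern the abstract reports for its two heart-rate measurements.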

  5. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang [Korea Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Dai Jin [Pohang University of Science and Technology, Pohang (Korea, Republic of)

    2011-07-15

    Service robots are equipped with various sensors such as vision camera, sonar sensor, laser scanner, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-vision fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.

  6. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    International Nuclear Information System (INIS)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang; Kim, Dai Jin

    2011-01-01

    Service robots are equipped with various sensors such as vision camera, sonar sensor, laser scanner, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-vision fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.

  7. Variation in effectiveness of a cardiac auscultation training class with a cardiology patient simulator among heart sounds and murmurs.

    Science.gov (United States)

    Kagaya, Yutaka; Tabata, Masao; Arata, Yutaro; Kameoka, Junichi; Ishii, Seiichi

    2017-08-01

    Effectiveness of simulation-based education in cardiac auscultation training is controversial, and may vary among a variety of heart sounds and murmurs. We investigated whether a single auscultation training class using a cardiology patient simulator provides medical students with the competence required for clinical clerkship, and whether students' proficiency after the training differs among heart sounds and murmurs. A total of 324 fourth-year medical students (93-117/year for 3 years) were divided into groups of 6-8 students; each group participated in a three-hour training session using a cardiology patient simulator. After a mini-lecture and facilitated training, each student took two different tests. In the first test, they tried to identify three sounds of Category A (non-split, respiratory split, and abnormally wide split S2s) in random order, after being informed that they were from Category A. They then did the same with sounds of Category B (S3, S4, and S3+S4) and Category C (four heart murmurs). In the second test, they tried to identify only one from each of the three categories in random order without any category information. The overall accuracy rate declined significantly from 80.4% in the first test to 62.0% in the second test, indicating that proficiency after a single auscultation training class varied among heart sounds and murmurs. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  8. Numerical value biases sound localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R

    2017-12-08

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1-9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.

  9. Usefulness of the second heart sound for predicting pulmonary hypertension in patients with interstitial lung disease

    Directory of Open Access Journals (Sweden)

    Sandra de Barros Cobra

    Full Text Available CONTEXT AND OBJECTIVE: P2 hyperphonesis is considered to be a valuable finding in semiological diagnoses of pulmonary hypertension (PH). The aim here was to evaluate the accuracy of the pulmonary component of the second heart sound for predicting PH in patients with interstitial lung disease. DESIGN AND SETTING: Cross-sectional study at the University of Brasilia and Hospital de Base do Distrito Federal. METHODS: Heart sounds were acquired using an electronic stethoscope and were analyzed using phonocardiography. Clinical signs suggestive of PH, such as second heart sound (S2) louder in the pulmonary area than in the aortic area, P2 > A2 in the pulmonary area, and P2 present in the mitral area, were compared with Doppler echocardiographic parameters suggestive of PH. Sensitivity (S), specificity (Sp) and positive (LR+) and negative (LR-) likelihood ratios were evaluated. RESULTS: There was no significant correlation between S2 or P2 amplitude and PASP (pulmonary artery systolic pressure) (P = 0.185 and 0.115; P = 0.13 and 0.34, respectively). Higher S2 in the pulmonary area than in the aortic area, compared with all the criteria suggestive of PH, showed S = 60%, Sp = 22%, LR+ = 0.7, LR- = 1.7; P2 > A2 showed S = 57%, Sp = 39%, LR+ = 0.9, LR- = 1.1; and P2 in the mitral area showed S = 68%, Sp = 41%, LR+ = 1.1, LR- = 0.7. All these signals together showed S = 50%, Sp = 56%. CONCLUSIONS: The semiological signs indicative of PH presented low sensitivity and specificity levels for clinically diagnosing this comorbidity.
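The diagnostic indices quoted above are linked by two standard formulas, LR+ = S / (1 - Sp) and LR- = (1 - S) / Sp. A small Python helper shows the arithmetic, fed with assumed counts chosen to reproduce the abstract's P2 > A2 figures (the 57/100 and 39/100 splits are our invention for illustration):

```python
# Sensitivity, specificity and likelihood ratios from a 2x2 diagnostic table.
# Counts are invented to illustrate the formulas, not taken from the study.

def diagnostic_indices(tp, fn, fp, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)          # LR+ = S / (1 - Sp)
    lr_neg = (1 - sens) / spec          # LR- = (1 - S) / Sp
    return sens, spec, lr_pos, lr_neg

# Assumed counts giving S = 57% and Sp = 39%, as for the P2 > A2 sign.
s, sp, lrp, lrn = diagnostic_indices(tp=57, fn=43, fp=61, tn=39)
print(round(lrp, 1), round(lrn, 1))   # 0.9 and 1.1, matching the abstract
```

Likelihood ratios near 1, as here, mean the sign barely shifts the post-test probability in either direction, which is why the study concludes these signs are of limited clinical value.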

  10. Spatial resolution limits for the localization of noise sources using direct sound mapping

    DEFF Research Database (Denmark)

    Comesana, D. Fernandez; Holland, K. R.; Fernandez Grande, Efren

    2016-01-01

    One of the main challenges arising from noise and vibration problems is how to identify the areas of a device, machine or structure that produce significant acoustic excitation, i.e. the localization of main noise sources. The direct visualization of sound, in particular sound intensity, has extensively been used for many years to locate sound sources. However, it is not yet well defined when two sources should be regarded as resolved by means of direct sound mapping. This paper derives the limits of the direct representation of sound pressure, particle velocity and sound intensity by exploring the relationship between spatial resolution, noise level and geometry. The proposed expressions are validated via simulations and experiments. It is shown that particle velocity mapping yields better results for identifying closely spaced sound sources than sound pressure or sound intensity, especially ...

  11. A functional neuroimaging study of sound localization: visual cortex activity predicts performance in early-blind individuals.

    Directory of Open Access Journals (Sweden)

    Frédéric Gougoux

    2005-02-01

    Full Text Available Blind individuals often demonstrate enhanced nonvisual perceptual abilities. However, the neural substrate that underlies this improved performance remains to be fully understood. An earlier behavioral study demonstrated that some early-blind people localize sounds more accurately than sighted controls using monaural cues. In order to investigate the neural basis of these behavioral differences in humans, we carried out functional imaging studies using positron emission tomography and a speaker array that permitted pseudo-free-field presentations within the scanner. During binaural sound localization, a sighted control group showed decreased cerebral blood flow in the occipital lobe, which was not seen in early-blind individuals. During monaural sound localization (one ear plugged), the subgroup of early-blind subjects who were behaviorally superior at sound localization displayed two activation foci in the occipital cortex. This effect was not seen in blind persons who did not have superior monaural sound localization abilities, nor in sighted individuals. The degree of activation of one of these foci was strongly correlated with sound localization accuracy across the entire group of blind subjects. The results show that those blind persons who perform better than sighted persons recruit occipital areas to carry out auditory localization under monaural conditions. We therefore conclude that computations carried out in the occipital cortex specifically underlie the enhanced capacity to use monaural cues. Our findings shed light not only on intermodal compensatory mechanisms, but also on individual differences in these mechanisms and on inhibitory patterns that differ between sighted individuals and those deprived of vision early in life.

  12. Imaging of heart acoustic based on the sub-space methods using a microphone array.

    Science.gov (United States)

    Moghaddasi, Hanie; Almasganj, Farshad; Zoroufian, Arezoo

    2017-07-01

Heart disease is one of the leading causes of death around the world. The phonocardiogram (PCG) is an important bio-signal which represents the acoustic activity of the heart, typically without any spatiotemporal information about the involved acoustic sources. The aim of this study is to analyze the PCG by employing a microphone array with which the heart's internal sound sources can be localized as well. In this paper, we propose a modality by which the locations of the active sources in the heart can be investigated over a cardiac cycle. To this end, a microphone array with six microphones is employed as the recording setup placed on the human chest. The Group Delay MUSIC algorithm, a subspace-based localization method, is then used to estimate the locations of the heart sources in different phases of the PCG. With the Group Delay MUSIC algorithm, we achieved a mean error of 0.14 cm for the sources of the first heart sound (S1) simulator and 0.21 cm for the sources of the second heart sound (S2) simulator. The acoustical diagrams created for human subjects show distinct patterns in the various phases of the cardiac cycle, such as the first and second heart sounds. Moreover, the evaluated source locations for the heart valves match those obtained via 4-dimensional (4D) echocardiography applied to a real human case. Imaging of the heart's acoustic map presents a new outlook on the acoustic properties of the cardiovascular system and disorders of its valves and could thereby, in the future, be used as a new diagnostic tool.
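The paper's Group Delay MUSIC variant is not reproduced here, but the core subspace scan of standard narrowband MUSIC, on which it builds, can be sketched as follows. Array geometry, frequency, and all names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def music_spectrum(R, mic_pos, grid, freq, c=343.0, n_src=1):
    """Narrowband MUSIC pseudospectrum over candidate source points.

    R: (M, M) spatial covariance of the microphone signals at `freq`.
    mic_pos: (M, 3) microphone coordinates in metres.
    grid: (K, 3) candidate source locations.
    Returns a length-K pseudospectrum; peaks mark likely sources.
    """
    # Eigendecomposition (ascending): smallest M - n_src eigenvectors
    # span the noise subspace.
    w, V = np.linalg.eigh(R)
    En = V[:, : R.shape[0] - n_src]
    P = np.empty(len(grid))
    for k, src in enumerate(grid):
        d = np.linalg.norm(mic_pos - src, axis=1)    # mic-to-source distances
        a = np.exp(-2j * np.pi * freq * d / c)       # near-field steering vector (phase only)
        a /= np.linalg.norm(a)
        # Projection onto the noise subspace is small at a true source location,
        # so the reciprocal peaks there.
        P[k] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return P
```

Peaks of `P` over a grid of chest-volume candidate points would mark the estimated acoustic sources; the paper's group-delay variant changes how the subspace statistic is formed, not this overall scan.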

  13. How to generate a sound-localization map in fish

    Science.gov (United States)

    van Hemmen, J. Leo

    2015-03-01

    How sound localization is represented in the fish brain is a research field largely unbiased by theoretical analysis and computational modeling. Yet, there is experimental evidence that the axes of particle acceleration due to underwater sound are represented through a map in the midbrain of fish, e.g., in the torus semicircularis of the rainbow trout (Wubbels et al. 1997). How does such a map arise? Fish perceive pressure gradients by their three otolithic organs, each of which comprises a dense, calcareous, stone that is bathed in endolymph and attached to a sensory epithelium. In rainbow trout, the sensory epithelia of left and right utricle lie in the horizontal plane and consist of hair cells with equally distributed preferred orientations. We model the neuronal response of this system on the basis of Schuijf's vector detection hypothesis (Schuijf et al. 1975) and introduce a temporal spike code of sound direction, where optimality of hair cell orientation θj with respect to the acceleration direction θs is mapped onto spike phases via a von-Mises distribution. By learning to tune in to the earliest synchronized activity, nerve cells in the midbrain generate a map under the supervision of a locally excitatory, yet globally inhibitory visual teacher. Work done in collaboration with Daniel Begovic. Partially supported by BCCN - Munich.

  14. Physiologic consequences of local heart irradiation in rats

    International Nuclear Information System (INIS)

    Geist, B.J.; Lauk, S.; Bornhausen, M.; Trott, K.R.

    1990-01-01

Noninvasive methods have been used to study the long-term cardiovascular and pulmonary functional changes at rest and after exercise in adult rats following local heart irradiation with single x-ray doses of 15, 17.5 or 20 Gy, and in non-irradiated control animals. Rats that had undergone a chronic exercise program were compared with untrained cohorts. The earliest dysfunction detected was an increased respiratory rate (f) at 10 weeks after irradiation in the highest dose group. In contrast, both telemetric heart-rate (HR) and rhythm and indirect systolic blood pressure measurements performed at rest only revealed changes starting at 43 weeks after irradiation with 20 Gy, up to which point the rats showed no clinical signs of heart failure. However, the number of minutes required for the recovery of the HR to pre-exercise levels following a standardized exercise challenge was elevated in untrained rats compared with their trained cohorts at 18 weeks after irradiation with 20 Gy. Increases in recovery times were also observed in the two lowest dose groups, starting at 26 weeks after irradiation. It was concluded that the reserve capacity of the cardiopulmonary system masks functional decrements at rest for many months following local heart irradiation, necessitating the use of techniques which reveal reductions in reserve capacities. Further, the influence of local irradiation to the heart and lungs deserves closer scrutiny due to mutual interactions.

  15. Sound Localization in Patients With Congenital Unilateral Conductive Hearing Loss With a Transcutaneous Bone Conduction Implant.

    Science.gov (United States)

    Vyskocil, Erich; Liepins, Rudolfs; Kaider, Alexandra; Blineder, Michaela; Hamzavi, Sasan

    2017-03-01

There is no consensus regarding the benefit of implantable hearing aids in congenital unilateral conductive hearing loss (UCHL). This study aimed to measure sound source localization performance in patients with congenital UCHL and contralateral normal hearing who received a new bone conduction implant. Evaluation of within-subject performance differences for sound source localization in a horizontal plane. Tertiary referral center. Five patients with atresia of the external auditory canal and contralateral normal hearing, implanted with a transcutaneous bone conduction implant at the Medical University of Vienna, were tested. Activated/deactivated implant. Sound source localization test; localization performance quantified using the root mean square (RMS) error. Sound source localization ability was highly variable among individual subjects, with RMS errors ranging from 21 to 40 degrees. Horizontal plane localization performance in aided conditions showed statistically significant improvement compared with the unaided conditions, with RMS errors ranging from 17 to 27 degrees. The mean RMS error decreased by a factor of 0.71 with the activated bone conduction implant. Some patients with congenital UCHL might be capable of developing improved horizontal plane localization abilities with the binaural cues provided by this device.
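Localization performance in records like this one is summarized by the root mean square (RMS) error between presented and judged source azimuths. A minimal sketch of that metric (the function name and degree convention are ours):

```python
import numpy as np

def rms_error_deg(targets_deg, responses_deg):
    """Root mean square localization error over a block of trials, in degrees.

    targets_deg: presented source azimuths; responses_deg: judged azimuths.
    No angle wrapping is applied, which suffices for the frontal horizontal
    arc (roughly -90 to +90 degrees) used in loudspeaker-array tests.
    """
    t = np.asarray(targets_deg, dtype=float)
    r = np.asarray(responses_deg, dtype=float)
    return float(np.sqrt(np.mean((r - t) ** 2)))
```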

  16. Sound localization and speech identification in the frontal median plane with a hear-through headset

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Møller, Anders Kalsgaard; Christensen, Flemming

    2014-01-01

    signals can be superimposed via earphone reproduction. An important aspect of the hear-through headset is its transparency, i.e. how close to real life can the electronically amplied sounds be perceived. Here we report experiments conducted to evaluate the auditory transparency of a hear-through headset...... prototype by comparing human performance in natural, hear-through, and fully occluded conditions for two spatial tasks: frontal vertical-plane sound localization and speech-on-speech spatial release from masking. Results showed that localization performance was impaired by the hear-through headset relative...... to the natural condition though not as much as in the fully occluded condition. Localization was affected the least when the sound source was in front of the listeners. Different from the vertical localization performance, results from the speech task suggest that normal speech-on-speech spatial release from...

  17. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    Science.gov (United States)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small, with fine cells near the sounding location and coarse cells far away, in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution.
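The random, dynamic down-sampling idea can be sketched with a toy subset-selection policy. The exact adaptive rule tying subset size to regularization is the authors'; the linear cooling schedule below is our illustrative stand-in:

```python
import random

def sounding_subset(n_soundings, beta, beta0, n_min=50, seed=None):
    """Pick a random subset of soundings for one inversion iteration.

    Illustrative policy (not the paper's exact rule): as the regularization
    parameter `beta` is cooled from `beta0` toward 0, the subset grows, so
    early, heavily regularized iterations use few soundings and late
    iterations, which resolve finer structure, use more. The subset is
    redrawn every iteration, so all soundings contribute over the run.
    """
    frac = 1.0 - beta / beta0            # 0 at the start, -> 1 as beta -> 0
    n = max(n_min, int(round(frac * n_soundings)))
    n = min(n, n_soundings)
    rng = random.Random(seed)
    return rng.sample(range(n_soundings), n)
```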

  18. Localization of self-generated synthetic footstep sounds on different walked-upon materials through headphones

    DEFF Research Database (Denmark)

    Turchet, Luca; Spagnol, Simone; Geronazzo, Michele

    2016-01-01

    typologies of surface materials: solid (e.g., wood) and aggregate (e.g., gravel). Different sound delivery methods (mono, stereo, binaural) as well as several surface materials, in presence or absence of concurrent contextual auditory information provided as soundscapes, were evaluated in a vertical...... localization task. Results showed that solid surfaces were localized significantly farther from the walker's feet than the aggregate ones. This effect was independent of the used rendering technique, of the presence of soundscapes, and of merely temporal or spectral attributes of sound. The effect...

  19. Localization of Simultaneous Moving Sound Sources for Mobile Robot Using a Frequency-Domain Steered Beamformer Approach

    OpenAIRE

    Valin, Jean-Marc; Michaud, François; Hadjou, Brahim; Rouat, Jean

    2016-01-01

    Mobile robots in real-life settings would benefit from being able to localize sound sources. Such a capability can nicely complement vision to help localize a person or an interesting event in the environment, and also to provide enhanced processing for other capabilities such as speech recognition. In this paper we present a robust sound source localization method in three-dimensional space using an array of 8 microphones. The method is based on a frequency-domain implementation of a steered...

  20. Experimental analysis of considering the sound pressure distribution pattern at the ear canal entrance as an unrevealed head-related localization clue

    Institute of Scientific and Technical Information of China (English)

    TONG Xin; QI Na; MENG Zihou

    2018-01-01

By analyzing the differences between binaural recording and real listening, it was deduced that some unrevealed auditory localization clues exist, and that the sound pressure distribution pattern at the entrance of the ear canal is probably one such clue. The existence of these unrevealed localization clues was confirmed through listening tests by reductio ad absurdum, and their effective frequency bands were identified and summarized. Finite-element simulations showed that the sound pressure distribution at the entrance of the ear canal is non-uniform and that its pattern depends on the direction of the sound source. This demonstrated that the sound pressure distribution pattern at the entrance of the ear canal carries sound source direction information and can serve as an unrevealed localization clue. The frequency bands in which the sound pressure distribution patterns differed significantly between front and back source directions roughly matched the effective frequency bands of the unrevealed localization clues obtained from the listening tests. To some extent, this supports the hypothesis that the sound pressure distribution pattern is a kind of unrevealed auditory localization clue.

  1. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed" about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...

  2. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.

    Science.gov (United States)

    Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T

    2013-02-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.

  3. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    Science.gov (United States)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals serves as an important primary approach to diagnose cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction technique has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling such a bottleneck problem, this paper innovatively proposes a novel murmur-based HS feature extraction method since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences of heart valves. Adapting discrete wavelet transform (DWT) and Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS and 5 various abnormal HS signals with extracted features, the proposed method provides an attractive candidate in automatic HS auscultation.
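The Shannon envelope named in this abstract has a standard form: the sample-wise Shannon energy of the normalized signal, smoothed into an envelope. A minimal sketch (window length and the epsilon guard are our choices; the paper combines this with a DWT stage not shown here):

```python
import numpy as np

def shannon_envelope(x, win=40):
    """Shannon-energy envelope commonly used to emphasize heart sound lobes.

    x: 1-D heart sound segment; win: moving-average length in samples.
    The Shannon energy -x^2 * log(x^2) boosts medium-intensity samples
    relative to both low-level noise and dominant peaks, which makes
    S1/S2 lobes and murmurs stand out in the envelope.
    """
    x = np.asarray(x, dtype=float)
    x = x / (np.max(np.abs(x)) + 1e-12)          # normalize to [-1, 1]
    e = -x**2 * np.log(x**2 + 1e-12)             # sample-wise Shannon energy
    kernel = np.ones(win) / win
    return np.convolve(e, kernel, mode="same")   # smooth into an envelope
```

Morphological features (lobe widths, peak counts, relative heights) can then be read off this envelope.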

  4. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    Science.gov (United States)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds which they make. Thus developing a technique to localize sound sources amidst loud noise will support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique which searches for unperceivable sound in loud noise environments. Two speakers simultaneously played a noise of a generator and a voice decreased by 20 dB (= 1/100 of power) from the generator noise at an outdoor space where cicadas were making noise. The sound signal was received by a horizontally set linear microphone array 1.05 m in length and consisting of 15 microphones. The direction and the distance of the voice were computed and the sound of the voice was extracted and played back as an audible sound by array signal processing.
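The array processing described above, which estimates a direction from a linear microphone array, can be sketched as a frequency-domain delay-and-sum scan over candidate angles. This is a far-field simplification (the paper also estimates distance), and all names are ours:

```python
import numpy as np

def delay_and_sum_doa(X, mic_x, fs, angles, c=343.0):
    """Scan candidate far-field angles with a delay-and-sum beamformer.

    X: (M, N) array of simultaneously recorded microphone signals.
    mic_x: (M,) microphone positions along the linear array, in metres.
    angles: candidate angles in degrees (0 = broadside).
    Returns the angle whose steered output carries the most power.
    """
    M, N = X.shape
    Xf = np.fft.rfft(X, axis=1)
    freqs = np.fft.rfftfreq(N, 1.0 / fs)
    powers = []
    for ang in angles:
        tau = mic_x * np.sin(np.deg2rad(ang)) / c    # per-mic plane-wave delays
        # Undo the delays as phase shifts and sum the channels coherently;
        # at the true angle all channels add in phase.
        steered = np.sum(Xf * np.exp(2j * np.pi * freqs[None, :] * tau[:, None]), axis=0)
        powers.append(np.sum(np.abs(steered) ** 2))
    return angles[int(np.argmax(powers))]
```

The same steered sum, evaluated at the winning angle, yields the enhanced signal that can be played back, which is how a buried voice well below the noise floor becomes audible.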

  5. Crowing Sound Analysis of Gaga' Chicken; Local Chicken from South Sulawesi Indonesia

    OpenAIRE

    Aprilita Bugiwati, Sri Rachma; Ashari, Fachri

    2008-01-01

Gaga' chicken is known as a local chicken of South Sulawesi, Indonesia, which has a unique and distinctive crowing sound, especially at the ending of the crow, which resembles the sound of human laughter, compared with the other types of singing chicken in the world. 287 Gaga' chickens from 3 districts at the center of the breed's habitat were separated into 2 groups (163 birds of the Dangdut type and 124 birds of the Slow type) based on the speed...

  6. The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users.

    Science.gov (United States)

    Jones, Heath G; Kan, Alan; Litovsky, Ruth Y

    2016-01-01

    This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.
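The ILDs calculated from the measured acoustic transfer functions reduce, in the simplest wideband form, to an energy ratio between the left- and right-side impulse responses for a given source direction. A simplified sketch (the study's analysis may be per-band; the function name is ours):

```python
import numpy as np

def broadband_ild(h_left, h_right):
    """Wideband interaural level difference from a pair of impulse responses.

    h_left, h_right: measured impulse responses for one source direction at
    the left and right microphone placements. Positive values mean the left
    side receives more energy. Per-band ILDs would band-pass filter first.
    """
    e_l = np.sum(np.asarray(h_left, dtype=float) ** 2)
    e_r = np.sum(np.asarray(h_right, dtype=float) ** 2)
    return 10.0 * np.log10(e_l / e_r)    # energy ratio in dB
```

Evaluating this across source azimuths for ITE, BTE, and SHD measurements is what reveals the compressed ILD range of the non-ITE placements.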

  7. The effect of multimicrophone noise reduction systems on sound source localization by users of binaural hearing aids.

    Science.gov (United States)

    Van den Bogaert, Tim; Doclo, Simon; Wouters, Jan; Moonen, Marc

    2008-07-01

    This paper evaluates the influence of three multimicrophone noise reduction algorithms on the ability to localize sound sources. Two recently developed noise reduction techniques for binaural hearing aids were evaluated, namely, the binaural multichannel Wiener filter (MWF) and the binaural multichannel Wiener filter with partial noise estimate (MWF-N), together with a dual-monaural adaptive directional microphone (ADM), which is a widely used noise reduction approach in commercial hearing aids. The influence of the different algorithms on perceived sound source localization and their noise reduction performance was evaluated. It is shown that noise reduction algorithms can have a large influence on localization and that (a) the ADM only preserves localization in the forward direction over azimuths where limited or no noise reduction is obtained; (b) the MWF preserves localization of the target speech component but may distort localization of the noise component. The latter is dependent on signal-to-noise ratio and masking effects; (c) the MWF-N enables correct localization of both the speech and the noise components; (d) the statistical Wiener filter approach introduces a better combination of sound source localization and noise reduction performance than the ADM approach.

  8. Near-Field Sound Localization Based on the Small Profile Monaural Structure

    Directory of Open Access Journals (Sweden)

    Youngwoong Kim

    2015-11-01

    Full Text Available The acoustic wave around a sound source in the near-field area presents unconventional properties in the temporal, spectral, and spatial domains due to the propagation mechanism. This paper investigates a near-field sound localizer in a small profile structure with a single microphone. The asymmetric structure around the microphone provides a distinctive spectral variation that can be recognized by the dedicated algorithm for directional localization. The physical structure consists of ten pipes of different lengths in a vertical fashion and rectangular wings positioned between the pipes in radial directions. The sound from an individual direction travels through the nearest open pipe, which generates the particular fundamental frequency according to the acoustic resonance. The Cepstral parameter is modified to evaluate the fundamental frequency. Once the system estimates the fundamental frequency of the received signal, the length of arrival and angle of arrival (AoA are derived by the designed model. From an azimuthal distance of 3–15 cm from the outer body of the pipes, the extensive acoustic experiments with a 3D-printed structure show that the direct and side directions deliver average hit rates of 89% and 73%, respectively. The closer positions to the system demonstrate higher accuracy, and the overall hit rate performance is 78% up to 15 cm away from the structure body.
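The abstract's modified Cepstral parameter is not specified in full, but the underlying idea, reading a resonance fundamental off the real cepstrum, can be sketched with a standard estimator. All parameter choices here are our assumptions:

```python
import numpy as np

def cepstral_f0(x, fs, fmin=200.0, fmax=2000.0):
    """Estimate a fundamental frequency via the real cepstrum.

    A harmonic series in the log-magnitude spectrum becomes a peak at
    quefrency 1/f0; we search the quefrency range that corresponds to
    [fmin, fmax] and convert the peak location back to Hz.
    """
    x = np.asarray(x, dtype=float)
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    ceps = np.fft.irfft(np.log(spec + 1e-12))        # real cepstrum
    qmin, qmax = int(fs / fmax), int(fs / fmin)      # quefrency search band, samples
    q = qmin + int(np.argmax(ceps[qmin:qmax]))
    return fs / q
```

Under a quarter-wavelength (open-closed) resonator assumption, a detected fundamental f0 would map to a pipe length of roughly L ≈ c / (4·f0), which is how a resonance frequency could identify which pipe, and hence which direction, the sound entered.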

  9. Sound localization and word discrimination in reverberant environment in children with developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Wendy Castro-Camacho

    2015-04-01

Full Text Available Objective: Compare whether localization of sounds and word discrimination in a reverberant environment differ between children with dyslexia and controls. Method: We studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles across the left-to-right auditory field (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions; correct answers were compared. Results: Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test showed poor performance at left angles in children with dyslexia. In the reverberant test, children with dyslexia exhibited deficiencies at the -45°, -90°, and +45° angles. Conclusion: Children with dyslexia may have problems locating sounds and discriminating words at extreme locations of the horizontal plane in classrooms with reverberation.

  10. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    Science.gov (United States)

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to determine amplitudes the size of an atom and locate the acoustic stimuli with an accuracy of within 13° based on their neuronal anatomy. We present here a prototype sound source localization system, inspired from this impressive performance. The system presented utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization error than those observed in nature.

  11. ICE on the road to auditory sensitivity reduction and sound localization in the frog.

    Science.gov (United States)

    Narins, Peter M

    2016-10-01

Frogs and toads are capable of producing calls at potentially damaging levels that exceed 110 dB SPL at 50 cm. Most frog species have internally coupled ears (ICE) in which the tympanic membranes (TyMs) communicate directly via the large, permanently open Eustachian tubes, resulting in an inherently directional asymmetrical pressure-difference receiver. One active mechanism for auditory sensitivity reduction involves the pressure increase during vocalization that distends the TyM, reducing its low-frequency airborne sound sensitivity. Moreover, if sounds generated by the vocal folds arrive at both surfaces of the TyM with nearly equal amplitudes and phases, the net motion of the eardrum would be greatly attenuated. Both of these processes appear to reduce the motion of the frog's TyM during vocalizations. The implications of ICE in amphibians for sound localization are discussed, and the particularly interesting case of frogs that use ultrasound for communication yet exhibit exquisitely small localization jump errors is brought to light.

  12. Analysis, Design and Implementation of an Embedded Realtime Sound Source Localization System Based on Beamforming Theory

    Directory of Open Access Journals (Sweden)

    Arko Djajadi

    2009-12-01

Full Text Available This project is intended to analyze, design and implement a realtime sound source localization system using a mobile robot as the medium. The implemented system uses 2 microphones as the sensors, an Arduino Duemilanove microcontroller system with an ATMega328p as the microprocessor, two permanent magnet DC motors as the actuators for the mobile robot, a servo motor as the actuator to rotate the webcam toward the location of the sound source, and a laptop/PC as the simulation and display media. In order to achieve the objective of finding the position of a specific sound source, beamforming theory is applied to the system. Once the location of the sound source is detected and determined, either the mobile robot adjusts its position according to the direction of the sound source, or only the webcam rotates in the direction of the incoming sound, simulating the use of this system in a video conference. The integrated system has been tested and the results show that the system can localize, in realtime, a sound source placed randomly on a half-circle area (0° - 180°) with a radius of 0.3 m - 3 m, assuming the system is the center point of the circle. Due to low ADC and processor speed, the achievable angular resolution is still limited to 25°.
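With only two microphones, the bearing reduces to a time-difference-of-arrival problem: cross-correlate the channels and convert the lag with θ = arcsin(c·τ/d). A minimal far-field sketch (sign convention and names are ours, not the project's firmware):

```python
import numpy as np

def two_mic_bearing(x_left, x_right, fs, mic_dist, c=343.0):
    """Bearing of a far-field source from a two-microphone pair, in degrees.

    Cross-correlates the channels to find the inter-channel lag, converts it
    to a time difference of arrival tau, and maps it to an angle with
    theta = arcsin(c * tau / d). With our convention, a positive angle means
    the source lies on the right-microphone side (left channel arrives later).
    """
    n = len(x_left)
    corr = np.correlate(x_left, x_right, mode="full")
    lag = int(np.argmax(corr)) - (n - 1)         # samples left lags behind right
    tau = lag / fs
    s = np.clip(c * tau / mic_dist, -1.0, 1.0)   # guard against |arg| > 1
    return float(np.degrees(np.arcsin(s)))
```

The coarse 25° resolution reported above is consistent with a low sampling rate: at the Arduino's ADC rate, one sample of lag corresponds to a large angular step.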

  13. Noise detection in heart sound recordings.

    Science.gov (United States)

    Zia, Mohammad K; Griffel, Benjamin; Fridman, Vladimir; Saponieri, Cesare; Semmlow, John L

    2011-01-01

    Coronary artery disease (CAD) is the leading cause of death in the United States. Although progression of CAD can be controlled using drugs and diet, it is usually detected in advanced stages when invasive treatment is required. Current methods to detect CAD are invasive and/or costly, hence not suitable as a regular screening tool to detect CAD in early stages. Currently, we are developing a noninvasive and cost-effective system to detect CAD using the acoustic approach. This method identifies sounds generated by turbulent flow through partially narrowed coronary arteries to detect CAD. The limiting factor of this method is sensitivity to noises commonly encountered in the clinical setting. Because the CAD sounds are faint, these noises can easily obscure the CAD sounds and make detection impossible. In this paper, we propose a method to detect and eliminate noise encountered in the clinical setting using a reference channel. We show that our method is effective in detecting noise, which is essential to the success of the acoustic approach.
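The reference-channel idea can be illustrated with a toy energy-based detector: ambient clinical noise reaches a microphone that does not pick up the faint chest sounds, so windows where the reference channel jumps above its own baseline are marked for exclusion. This is our illustrative stand-in, not the paper's actual detector:

```python
import numpy as np

def noisy_segments(ref, fs, win_s=0.1, k=3.0):
    """Flag analysis windows whose reference-channel energy jumps above baseline.

    ref: signal from a reference microphone exposed to room noise only.
    Windows whose RMS exceeds k times the median window RMS are flagged,
    so the corresponding chest-channel windows can be discarded before
    searching for the faint CAD-related sounds.
    """
    n = int(win_s * fs)
    n_win = len(ref) // n
    rms = np.array([np.sqrt(np.mean(ref[i * n:(i + 1) * n] ** 2))
                    for i in range(n_win)])
    return rms > k * np.median(rms)    # boolean mask, one entry per window
```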

  14. Do you hear where I hear?: Isolating the individualized sound localization cues.

    Directory of Open Access Journals (Sweden)

    Griffin David Romigh

    2014-12-01

    Full Text Available It is widely acknowledged that individualized head-related transfer function (HRTF measurements are needed to adequately capture all of the 3D spatial hearing cues. However, many perceptual studies have shown that localization accuracy in the lateral dimension is only minimally decreased by the use of non-individualized head-related transfer functions. This evidence supports the idea that the individualized components of an HRTF could be isolated from those that are more general in nature. In the present study we decomposed the HRTF at each location into average, lateral and intraconic spectral components, along with an ITD in an effort to isolate the sound localization cues that are responsible for the inter-individual differences in localization performance. HRTFs for a given listener were then reconstructed systematically with components that were both individualized and non-individualized in nature, and the effect of each modification was analyzed via a virtual localization test where brief 250-ms noise bursts were rendered with the modified HRTFs. Results indicate that the cues important for individualization of HRTFs are contained almost exclusively in the intraconic portion of the HRTF spectra and localization is only minimally affected by introducing non-individualized cues into the other HRTF components. These results provide new insights into what specific inter-individual differences in head-related acoustical features are most relevant to sound localization, and provide a framework for how future human-machine interfaces might be more effectively generalized and/or individualized.
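A decomposition of this kind can be written as nested means and residuals over log-magnitude spectra. The sketch below is a simplified reading of the abstract, with the axis layout (directions grouped into cones of confusion by lateral angle) and all names being our assumptions:

```python
import numpy as np

def decompose_hrtf(logmag):
    """Split log-magnitude HRTFs into average, lateral, and intraconic parts.

    logmag: (n_lateral, n_intraconic, n_freq) array of dB spectra, with the
    first axis indexing lateral angle (one cone of confusion per index) and
    the second indexing position within each cone.
    By construction the three parts sum back to the original, so any one of
    them can be swapped for another listener's to test its perceptual role.
    """
    avg = logmag.mean(axis=(0, 1))                    # direction-independent part
    lateral = logmag.mean(axis=1) - avg               # per-cone deviation from average
    intraconic = logmag - avg - lateral[:, None, :]   # what varies within each cone
    return avg, lateral, intraconic
```

The study's finding is then that replacing `avg` and `lateral` with non-individualized versions barely hurts localization, while `intraconic` carries the listener-specific cues.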

  15. Sound Source Localization Through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network

    Directory of Open Access Journals (Sweden)

    Christoph Beck

    2016-10-01

    Full Text Available Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They can detect displacement amplitudes on the order of the size of an atom and, based on their neuronal anatomy, locate acoustic stimuli with an accuracy of within 13°. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows localization errors smaller than those observed in nature.

  16. Beat-to-beat systolic time-interval measurement from heart sounds and ECG

    International Nuclear Information System (INIS)

    Paiva, R P; Carvalho, P; Couceiro, R; Henriques, J; Antunes, M; Quintal, I; Muehlsteff, J

    2012-01-01

    Systolic time intervals are highly correlated to fundamental cardiac functions. Several studies have shown that these measurements have significant diagnostic and prognostic value in heart failure condition and are adequate for long-term patient follow-up and disease management. In this paper, we investigate the feasibility of using heart sound (HS) to accurately measure the opening and closing moments of the aortic heart valve. These moments are crucial to define the main systolic timings of the heart cycle, i.e. pre-ejection period (PEP) and left ventricular ejection time (LVET). We introduce an algorithm for automatic extraction of PEP and LVET using HS and electrocardiogram. PEP is estimated with a Bayesian approach using the signal's instantaneous amplitude and patient-specific time intervals between atrio-ventricular valve closure and aortic valve opening. As for LVET, since the aortic valve closure corresponds to the start of the S2 HS component, we base LVET estimation on the detection of the S2 onset. A comparative assessment of the main systolic time intervals is performed using synchronous signal acquisitions of the current gold standard in cardiac time-interval measurement, i.e. echocardiography, and HS. The algorithms were evaluated on a healthy population, as well as on a group of subjects with different cardiovascular diseases (CVD). In the healthy group, from a set of 942 heartbeats, the proposed algorithm achieved 7.66 ± 5.92 ms absolute PEP estimation error. For LVET, the absolute estimation error was 11.39 ± 8.98 ms. For the CVD population, 404 beats were used, leading to 11.86 ± 8.30 and 17.51 ± 17.21 ms absolute PEP and LVET errors, respectively. The results achieved in this study suggest that HS can be used to accurately estimate LVET and PEP. (paper)
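As a toy illustration of the interval definitions above (not the paper's Bayesian estimator), one can pair each ECG R peak with the next detected S1 and the S2 that follows it and read off per-beat timings; the function name and the use of the R peak as the electrical reference are assumptions made for this sketch:

```python
import numpy as np

def beat_intervals(r_peaks, s1_times, s2_times):
    """For each ECG R peak, find the next S1 and the S2 that follows it;
    return arrays (r_to_s1, s1_to_s2) in the input time unit."""
    r_to_s1, s1_to_s2 = [], []
    for r in r_peaks:
        i = np.searchsorted(s1_times, r)
        if i >= len(s1_times):
            break
        s1 = s1_times[i]
        j = np.searchsorted(s2_times, s1)
        if j >= len(s2_times):
            break
        r_to_s1.append(s1 - r)
        s1_to_s2.append(s2_times[j] - s1)
    return np.array(r_to_s1), np.array(s1_to_s2)

# one synthetic beat: R at 0.00 s, S1 at 0.04 s, S2 at 0.34 s
r2s1, s1s2 = beat_intervals(np.array([0.0]), np.array([0.04]), np.array([0.34]))
```

In the paper's terms, the R-to-S1 interval is only a crude surrogate for PEP and S1-to-S2 for LVET, since the true endpoints are aortic valve opening and closure rather than the S1/S2 envelope peaks.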

  17. Second Sound for Heat Source Localization

    CERN Document Server

    Vennekate, Hannes; Uhrmacher, Michael; Quadt, Arnulf; Grosse-Knetter, Joern

    2011-01-01

    Defects on the surface of superconducting cavities can limit their accelerating gradient by localized heating, which results in a phase transition to the normal-conducting state: a quench. A new application, involving Oscillating Superleak Transducers (OSTs) to locate such quench-inducing heat spots on the surface of the cavities, was developed by D. Hartill et al. at Cornell University in 2008. The OSTs enable the detection of heat transfer via second sound in superfluid helium. This thesis presents new results on the analysis of their signal. Its behavior has been studied under different conditions in setups at the University of Göttingen and at CERN. New approaches for automated signal processing have been developed. Furthermore, a first test setup for a single-cell Superconducting Proton Linac (SPL) cavity has been prepared. Recommendations for better signal retrieval during its operation are presented.

  18. Sound localization in common vampire bats: Acuity and use of the binaural time cue by a small mammal

    Science.gov (United States)

    Heffner, Rickye S.; Koay, Gimseong; Heffner, Henry E.

    2015-01-01

    Passive sound-localization acuity and the ability to use binaural time and intensity cues were determined for the common vampire bat (Desmodus rotundus). The bats were tested using a conditioned suppression/avoidance procedure in which they drank defibrinated blood from a spout in the presence of sounds from their right, but stopped drinking (i.e., broke contact with the spout) whenever a sound came from their left, thereby avoiding a mild shock. The mean minimum audible angle for three bats for a 100-ms noise burst was 13.1°—within the range of thresholds for other bats and near the mean for mammals. Common vampire bats readily localized pure tones of 20 kHz and higher, indicating they could use interaural intensity-differences. They could also localize pure tones of 5 kHz and lower, thereby demonstrating the use of interaural time-differences, despite their very small maximum interaural distance of 60 μs. A comparison of the use of locus cues among mammals suggests several implications for the evolution of sound localization and its underlying anatomical and physiological mechanisms. PMID:25618037
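The roughly 60 μs maximum interaural travel time quoted above can be related to head size with the standard spherical-head (Woodworth) approximation; the formula and the ~8 mm effective head radius below are textbook assumptions for illustration, not values taken from the study:

```python
import math

def woodworth_itd(head_radius_m, azimuth_deg, c=343.0):
    """Spherical-head (Woodworth) ITD approximation, in seconds."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (math.sin(theta) + theta)

# an effective head radius of ~8 mm puts the maximum ITD (source at 90
# degrees azimuth) near the 60-microsecond figure quoted for the bats
itd_max = woodworth_itd(0.008, 90.0)
```

The smallness of this number is what makes the bats' use of interaural time differences notable: the neural comparison must resolve sub-hundred-microsecond delays.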

  19. Contribution of monaural and binaural cues to sound localization in listeners with acquired unilateral conductive hearing loss: improved directional hearing with a bone-conduction device.

    Science.gov (United States)

    Agterberg, Martijn J H; Snik, Ad F M; Hol, Myrthe K S; Van Wanrooij, Marc M; Van Opstal, A John

    2012-04-01

    Sound localization in the horizontal (azimuth) plane relies mainly on interaural time differences (ITDs) and interaural level differences (ILDs). Both are distorted in listeners with acquired unilateral conductive hearing loss (UCHL), reducing their ability to localize sound. Several studies demonstrated that UCHL listeners had some ability to localize sound in azimuth. To test whether listeners with acquired UCHL use strongly perturbed binaural difference cues, we measured localization while they listened with a sound-attenuating earmuff over their impaired ear. We also tested the potential use of monaural pinna-induced spectral-shape cues for localization in azimuth and elevation, by filling the cavities of the pinna of their better-hearing ear with a mould. These conditions were tested while a bone-conduction device (BCD), fitted to all UCHL listeners in order to provide hearing from the impaired side, was turned off. We varied stimulus presentation levels to investigate whether UCHL listeners were using sound level as an azimuth cue. Furthermore, we examined whether horizontal sound-localization abilities improved when listeners used their BCD. Ten control listeners without hearing loss demonstrated a significant decrease in their localization abilities when they listened with a monaural plug and muff. In 4/13 UCHL listeners we observed good horizontal localization of 65 dB SPL broadband noises with their BCD turned off. Localization was strongly impaired when the impaired ear was covered with the muff. The mould in the good ear of listeners with UCHL deteriorated the localization of broadband sounds presented at 45 dB SPL. This demonstrates that they used pinna cues to localize sounds presented at low levels. Our data demonstrate that UCHL listeners have learned to adapt their localization strategies under a wide variety of hearing conditions and that sound-localization abilities improved with their BCD turned on.

  20. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    .... This thesis presents an integrated auditory system for a humanoid robot, currently under development, that will, among other things, learn to localize normal, everyday sounds in a realistic environment...

  1. Development of the sound localization cues in cats

    Science.gov (United States)

    Tollin, Daniel J.

    2004-05-01

    Cats are a common model for developmental studies of the psychophysical and physiological mechanisms of sound localization. Yet, there are few studies on the development of the acoustical cues to location in cats. The magnitude of the three main cues, interaural differences in time (ITDs) and level (ILDs), and monaural spectral shape cues, vary with location in adults. However, the increasing interaural distance associated with a growing head and pinnae during development will result in cues that change continuously until maturation is complete. Here, we report measurements, in cats aged 1 week to adulthood, of the physical dimensions of the head and pinnae and the localization cues, computed from measurements of directional transfer functions. At 1 week, ILD depended little on azimuth for frequencies 10 dB) shift to lower frequencies, and the maximum ITD increases to nearly 370 μs. Changes in the cues are correlated with the increasing size of the head and pinnae. [Work supported by NIDCD DC05122.

  2. Evolution of Sound Source Localization Circuits in the Nonmammalian Vertebrate Brainstem

    DEFF Research Database (Denmark)

    Walton, Peggy L; Christensen-Dalsgaard, Jakob; Carr, Catherine E

    2017-01-01

    The earliest vertebrate ears likely subserved a gravistatic function for orientation in the aquatic environment. However, in addition to detecting acceleration created by the animal's own movements, the otolithic end organs that detect linear acceleration would have responded to particle movement...... to increased sensitivity to a broader frequency range and to modification of the preexisting circuitry for sound source localization....

  3. Feasibility of heart sounds measurements from an accelerometer within an ICD pulse generator.

    Science.gov (United States)

    Siejko, Krzysztof Z; Thakur, Pramodsingh H; Maile, Keith; Patangay, Abhilash; Olivari, Maria-Teresa

    2013-03-01

    The feasibility of detecting heart sounds (HS) from an accelerometer sensor enclosed within an implantable cardioverter defibrillator (ICD) pulse generator (PG) was explored in a noninvasive pilot study on heart failure (HF) patients with audible third HS (S3). Accelerometer circuitry enhanced for HS was incorporated into non-functional ICDs. A study was conducted on 30 HF patients and 10 normal subjects without history of cardiac disease. The devices were taped to the skin surface over both left and right pectoral regions to simulate subcutaneous implants. A lightweight reference accelerometer was taped over the cardiac apex. Waveforms were recorded simultaneously with a surface electrocardiogram for 2 minutes. Algorithms were developed to perform off-line automatic detection of HS and HS time intervals (HSTIs). S1, S2, and S3 vibrations were detected in all accelerometer locations for all 40 subjects, including 16 subjects without an audible S3. A substantial proportion of S3 energy was infrasonic (remote ambulatory monitoring of HF progression and the detection of the onset of HF decompensation. ©2012, The Authors. Journal compilation ©2012 Wiley Periodicals, Inc.

  4. Detection of Heart Sounds in Children with and without Pulmonary Arterial Hypertension--Daubechies Wavelets Approach.

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    Full Text Available Automatic detection of the 1st (S1) and 2nd (S2) heart sounds is difficult, and existing algorithms are imprecise. We sought to develop a wavelet-based algorithm for the detection of S1 and S2 in children with and without pulmonary arterial hypertension (PAH). Heart sounds were recorded at the second left intercostal space and the cardiac apex with a digital stethoscope simultaneously with pulmonary arterial pressure (PAP). We developed a Daubechies wavelet algorithm for the automatic detection of S1 and S2 using the wavelet coefficient 'D6' based on power spectral analysis. We compared our algorithm with four other Daubechies wavelet-based algorithms published by Liang, Kumar, Wang, and Zhong. We annotated S1 and S2 from an audiovisual examination of the phonocardiographic tracing by two trained cardiologists and the observation that in all subjects systole was shorter than diastole. We studied 22 subjects (9 males and 13 females, median age 6 years, range 0.25-19). Eleven subjects had a mean PAP < 25 mmHg. Eleven subjects had PAH with a mean PAP ≥ 25 mmHg. All subjects had a pulmonary artery wedge pressure ≤ 15 mmHg. The sensitivity (SE) and positive predictivity (+P) of our algorithm were 70% and 68%, respectively. In comparison, the SE and +P of Liang were 59% and 42%, Kumar 19% and 12%, Wang 50% and 45%, and Zhong 43% and 53%, respectively. Our algorithm demonstrated robustness and outperformed the other methods up to a signal-to-noise ratio (SNR) of 10 dB. For all algorithms, detection errors arose from low-amplitude peaks, fast heart rates, low signal-to-noise ratio, and fixed thresholds. Our algorithm for the detection of S1 and S2 improves on the performance of existing Daubechies-based algorithms and justifies the use of the wavelet coefficient 'D6' through power spectral analysis. Its robustness to ambient noise may also improve real-world clinical performance.
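A common baseline for this kind of S1/S2 detection, simpler than the wavelet-coefficient approach the record describes and shown here only as an illustrative sketch, is peak picking on a normalized Shannon-energy envelope of the phonocardiogram:

```python
import numpy as np

def shannon_envelope(x, fs, win_s=0.02):
    """Smoothed Shannon-energy envelope, a common PCG pre-processing step."""
    x = x / (np.max(np.abs(x)) + 1e-12)
    e = -x**2 * np.log(x**2 + 1e-12)
    w = max(1, int(win_s * fs))
    return np.convolve(e, np.ones(w) / w, mode="same")

def pick_peaks(env, fs, min_gap_s=0.15, thresh_ratio=0.5):
    """Local envelope maxima above a relative threshold, at least min_gap apart."""
    thresh = thresh_ratio * env.max()
    gap = int(min_gap_s * fs)
    peaks, last = [], -gap
    for i in range(1, len(env) - 1):
        if env[i] > thresh and env[i] >= env[i - 1] and env[i] > env[i + 1]:
            if i - last >= gap:
                peaks.append(i)
                last = i
    return np.array(peaks)

# synthetic PCG: two 60 Hz bursts ("S1" at 0.1 s, "S2" at 0.4 s) in mild noise
fs = 2000
t = np.arange(int(0.6 * fs)) / fs
rng = np.random.default_rng(0)
x = 0.02 * rng.standard_normal(t.size)
for t0 in (0.1, 0.4):
    x += np.exp(-((t - t0) / 0.01) ** 2) * np.sin(2 * np.pi * 60 * (t - t0))
peaks = pick_peaks(shannon_envelope(x, fs), fs)
```

The fixed relative threshold here is exactly the kind of design choice the record identifies as an error source at low SNR and fast heart rates.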

  5. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    Localizing sounds with different frequency and time domain characteristics in a dynamic listening environment is a challenging task that has not been explored in the field of robotics as much as other perceptual tasks...

  6. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Kotaro Hoshiba

    2017-11-01

    Full Text Available In search and rescue activities, unmanned aerial vehicles (UAVs) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  7. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments.

    Science.gov (United States)

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G

    2017-11-03

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.
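Microphone-array localization systems like the one in this record typically build on pairwise time-difference-of-arrival estimates between channels. A standard building block for that step is GCC-PHAT, sketched here generically (this is not the SMAS implementation, whose internals the abstract does not describe):

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Delay of `sig` relative to `ref` via GCC-PHAT, in seconds."""
    n = sig.size + ref.size
    # phase transform: whiten the cross-spectrum, keep only phase
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(R / (np.abs(R) + 1e-15), n=n)
    max_shift = n // 2 if max_tau is None else min(n // 2, int(max_tau * fs))
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# a 5-sample delay at fs = 16 kHz should be recovered as 312.5 microseconds
rng = np.random.default_rng(1)
ref = rng.standard_normal(1024)
sig = np.concatenate((np.zeros(5), ref[:-5]))  # delayed copy of ref
tau = gcc_phat(sig, ref, fs=16000)
```

The PHAT weighting discards magnitude information, which makes the correlation peak sharp and comparatively robust to the broadband ego-noise a UAV's rotors produce.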

  8. Mice Lacking the Alpha9 Subunit of the Nicotinic Acetylcholine Receptor Exhibit Deficits in Frequency Difference Limens and Sound Localization

    Directory of Open Access Journals (Sweden)

    Amanda Clause

    2017-06-01

    Full Text Available Sound processing in the cochlea is modulated by cholinergic efferent axons arising from medial olivocochlear neurons in the brainstem. These axons contact outer hair cells in the mature cochlea and inner hair cells during development and activate nicotinic acetylcholine receptors composed of α9 and α10 subunits. The α9 subunit is necessary for mediating the effects of acetylcholine on hair cells, as genetic deletion of the α9 subunit results in functional cholinergic de-efferentation of the cochlea. Cholinergic modulation of spontaneous cochlear activity before hearing onset is important for the maturation of central auditory circuits. In α9KO mice, the developmental refinement of inhibitory afferents to the lateral superior olive is disturbed, resulting in decreased tonotopic organization of this sound localization nucleus. In this study, we used behavioral tests to investigate whether the circuit anomalies in α9KO mice correlate with sound localization or sound frequency processing. Using a conditioned lick suppression task to measure sound localization, we found that three out of four α9KO mice showed impaired minimum audible angles. Using a prepulse inhibition of the acoustic startle response paradigm, we found that the ability of α9KO mice to detect sound frequency changes was impaired, whereas their ability to detect sound intensity changes was not. These results demonstrate that cholinergic, nicotinic α9-subunit-mediated transmission in the developing cochlea plays an important role in the maturation of hearing.

  9. Towards a Synesthesia Laboratory: Real-time Localization and Visualization of a Sound Source for Virtual Reality Applications

    OpenAIRE

    Kose, Ahmet; Tepljakov, Aleksei; Astapov, Sergei; Draheim, Dirk; Petlenkov, Eduard; Vassiljeva, Kristina

    2018-01-01

    In this paper, we present our findings related to the problem of localization and visualization of a sound source placed in the same room as the listener. The particular effect that we aim to investigate is called synesthesia—the act of experiencing one sense modality as another, e.g., a person may vividly experience flashes of colors when listening to a series of sounds. Towards that end, we apply a series of recently developed methods for detecting sound source in a three-dimensional space ...

  10. Path length entropy analysis of diastolic heart sounds.

    Science.gov (United States)

    Griffel, Benjamin; Zia, Mohammad K; Fridman, Vladamir; Saponieri, Cesare; Semmlow, John L

    2013-09-01

    Early detection of coronary artery disease (CAD) using the acoustic approach, a noninvasive and cost-effective method, would greatly improve the outcome of CAD patients. To detect CAD, we analyze diastolic sounds for possible CAD murmurs. We observed diastolic sounds to exhibit 1/f structure and developed a new method, path length entropy (PLE) and a scaled version (SPLE), to characterize this structure to improve CAD detection. We compare SPLE results to Hurst exponent, Sample entropy and Multiscale entropy for distinguishing between normal and CAD patients. SPLE achieved a sensitivity-specificity of 80%-81%, the best of the tested methods. However, PLE and SPLE are not sufficient to prove nonlinearity, and evaluation using surrogate data suggests that our cardiovascular sound recordings do not contain significant nonlinear properties. Copyright © 2013 Elsevier Ltd. All rights reserved.
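Path length entropy is the authors' own measure, but one of the comparison methods they name, sample entropy, is standard and easy to sketch; the parameters m = 2 and r = 0.2·SD below are the conventional defaults, assumed here rather than taken from the paper:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): negative log ratio of (m+1)- to m-length template matches."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def match_count(mm):
        # same number of templates for both lengths, per the standard definition
        t = np.array([x[i:i + mm] for i in range(n - m)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)  # Chebyshev
        return (d <= r).sum() - len(t)  # exclude self-matches

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if min(a, b) > 0 else float("inf")

# a periodic signal is more regular (lower SampEn) than white noise
rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 8 * np.pi, 200))
noise = rng.standard_normal(200)
```

Lower values indicate more self-similar, regular signals, which is why entropy-style measures are candidates for separating normal diastolic sounds from turbulence-related CAD murmurs.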

  11. Mutation in the kv3.3 voltage-gated potassium channel causing spinocerebellar ataxia 13 disrupts sound-localization mechanisms.

    Directory of Open Access Journals (Sweden)

    John C Middlebrooks

    Full Text Available Normal sound localization requires precise comparisons of sound timing and pressure levels between the two ears. The primary localization cues are interaural time differences (ITDs) and interaural level differences (ILDs). Voltage-gated potassium channels, including Kv3.3, are highly expressed in the auditory brainstem and are thought to underlie the exquisite temporal precision and rapid spike rates that characterize brainstem binaural pathways. An autosomal dominant mutation in the gene encoding Kv3.3 has been demonstrated in a large Filipino kindred manifesting as spinocerebellar ataxia type 13 (SCA13). This kindred provides a rare opportunity to test in vivo the importance of a specific channel subunit for human hearing. Here, we demonstrate psychophysically that individuals with the mutant allele exhibit profound deficits in both ITD and ILD sensitivity, despite showing no obvious impairment in pure-tone sensitivity with either ear. Surprisingly, several individuals exhibited the auditory deficits even though they were pre-symptomatic for SCA13. We would expect that impairments of binaural processing as great as those observed in this family would result in prominent deficits in localization of sound sources and in loss of the "spatial release from masking" that aids in understanding speech in the presence of competing sounds.

  12. Lung sound analysis helps localize airway inflammation in patients with bronchial asthma

    Directory of Open Access Journals (Sweden)

    Shimoda T

    2017-03-01

    sound recordings could be used to identify sites of local airway inflammation. Keywords: airway obstruction, expiration sound pressure level, inspiration sound pressure level, expiration-to-inspiration sound pressure ratio, 7-point analysis

  13. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments †

    Science.gov (United States)

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.

    2017-01-01

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790

  14. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.

  15. The natural history of sound localization in mammals--a story of neuronal inhibition.

    Science.gov (United States)

    Grothe, Benedikt; Pecka, Michael

    2014-01-01

    Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.

  16. Acoustic heart. Interpretation of Phonocardiograms by computer

    International Nuclear Information System (INIS)

    Granados, J; Tavera, F; Velázquez, J M; Hernández, R T; Morales, A; López, G

    2015-01-01

    In the field of cardiology, several heart pathologies have been identified that are associated with valve problems and narrowing of veins. Each case is associated with a specific sound emitted by the heart, detected in cardiac auscultation. On the phonocardiogram, the sound is visualized as a peak in the wave. In the Optics Laboratory of the Universidad Autonoma Metropolitana – Azcapotzalco, we have developed a simulation of the phonocardiograms of heart sounds associated with the main pathologies and an image-recognition program that quickly identifies the respective diseases. This is a novel way to analyze phonocardiograms and the foundation for building a portable, non-invasive, computerized cardiac diagnostic analyzer system.

  17. A novel method for direct localized sound speed measurement using the virtual source paradigm

    DEFF Research Database (Denmark)

    Byram, Brett; Trahey, Gregg E.; Jensen, Jørgen Arendt

    2007-01-01

    … registered virtual detector. Between a pair of registered virtual detectors a spherical wave is propagated. By beamforming the received data the time of flight between the two virtual sources can be calculated. From this information the local sound speed can be estimated. Validation of the estimator used both phantom and simulation results. The phantom consisted of two wire targets located near the transducer's axis at depths of 17 and 28 mm. Using this phantom the sound speed between the wires was measured for a homogeneous (water) medium and for two inhomogeneous (DB-grade castor oil and water) mediums. The inhomogeneous mediums were arranged as an oil layer, one 6 mm thick and the other 11 mm thick, on top of a water layer. To complement the phantom studies, sources of error for spatial registration of virtual detectors were simulated. The sources of error presented here are multiple sound …
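The last step of such an estimator, recovering an average sound speed from the time of flight between two axial positions, reduces to distance over time. The sketch below re-uses the record's 17 mm and 28 mm wire depths with an assumed water-like flight time; the paper's virtual-source beamforming that produces that flight time is far more involved:

```python
def local_sound_speed(depth1_m, depth2_m, tof_s):
    """Average sound speed (m/s) between two axial points from one-way time of flight."""
    return abs(depth2_m - depth1_m) / tof_s

# 11 mm separation traversed in an assumed ~7.43 us implies a water-like speed
c = local_sound_speed(0.017, 0.028, 7.43e-6)
```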

  18. Screening Tests for Women Who Have Heart Disease

    Science.gov (United States)

    ... taken up by the heart muscle. Echocardiography changes sound waves into pictures that show the heart's size, ...

  19. Evaluation of heart rhythm variability and arrhythmia in children with systemic and localized scleroderma.

    Science.gov (United States)

    Wozniak, Jacek; Dabrowski, Rafal; Luczak, Dariusz; Kwiatkowska, Malgorzata; Musiej-Nowakowska, Elzbieta; Kowalik, Ilona; Szwed, Hanna

    2009-01-01

    To evaluate possible disturbances in autonomic regulation and cardiac arrhythmias in children with localized and systemic scleroderma. There were 40 children included in the study: 20 with systemic and 20 with localized scleroderma. The control group comprised 20 healthy children. In 24-hour Holter recording, the average rate of sinus rhythm was significantly higher in the groups with systemic and localized scleroderma than in the control group, but there was no significant difference between them. The variability of heart rhythm in both groups was significantly decreased. In the group with systemic scleroderma, single supraventricular ectopic beats were observed in 20% and runs were seen in 40% of patients. In the group with localized scleroderma, supraventricular single ectopic beats occurred in 35% of patients and runs in 45% of those studied. Ventricular arrhythmia occurred in 2 children with systemic scleroderma, but in 1 child, it was complex. The most frequent cardiac arrhythmias in both types of scleroderma in children were of supraventricular origin, whereas ventricular arrhythmias did not occur very often. There were no significant differences in autonomic disturbances manifesting as a higher heart rate and decreased heart rate variability between localized and systemic scleroderma.
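Heart-rate-variability findings like those reported here are conventionally derived from time-domain indices over the RR-interval series; a minimal sketch of the standard SDNN and RMSSD computations (illustrative, not the study's exact pipeline) is:

```python
import numpy as np

def hrv_metrics(rr_ms):
    """Time-domain HRV from RR intervals (ms): mean HR (bpm), SDNN (ms), RMSSD (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    mean_hr = 60000.0 / rr.mean()
    sdnn = rr.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # beat-to-beat variability
    return mean_hr, sdnn, rmssd

# a perfectly regular 75 bpm rhythm (RR = 800 ms) has zero variability
hr, sdnn, rmssd = hrv_metrics([800, 800, 800, 800])
```

A higher mean sinus rate with lower SDNN/RMSSD, as in the scleroderma groups, is the numeric signature of the reduced autonomic variability the abstract describes.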

  20. Local Delivery of Fluorescent Dye For Fiber-Optics Confocal Microscopy of the Living Heart

    Directory of Open Access Journals (Sweden)

    Chao Huang

    2014-09-01

    Full Text Available Fiber-optics confocal microscopy (FCM) is an emerging imaging technology with various applications in basic research and clinical diagnosis. FCM allows for real-time in situ microscopy of tissue at sub-cellular scale. Recently FCM has been investigated for cardiac imaging, in particular, for discrimination of cardiac tissue during pediatric open-heart surgery. FCM relies on fluorescent dyes. The current clinical approach of dye delivery is based on systemic injection, which is associated with high dye consumption and adverse clinical events. In this study, we investigated approaches for local dye delivery during FCM imaging based on dye carriers attached to the imaging probe. Using three-dimensional confocal microscopy, automated bench tests, and FCM imaging we quantitatively characterized dye release of carriers composed of open-pore foam only and foam loaded with agarose hydrogel. In addition, we compared local dye delivery with a model of systemic dye delivery in the isolated perfused rodent heart. We measured the signal-to-noise ratio (SNR) of images acquired in various regions of the heart. Our evaluations showed that foam-agarose dye carriers exhibited a prolonged dye release versus foam-only carriers. Foam-agarose dye carriers allowed reliable imaging of 5-9 lines, which is comparable to 4-8 min of continuous dye release. Our study in the living heart revealed that the SNR of FCM images using local and systemic dye delivery is not different. However, we observed differences in the imaged tissue microstructure with the two approaches. Structural features characteristic of microvasculature were solely observed for systemic dye delivery. Our findings suggest that the local dye delivery approach for FCM imaging constitutes an important alternative to systemic dye delivery. We suggest that the approach for local dye delivery will facilitate clinical translation of FCM, for instance, for FCM imaging during pediatric heart surgery.

  1. Local delivery of fluorescent dye for fiber-optics confocal microscopy of the living heart.

    Science.gov (United States)

    Huang, Chao; Kaza, Aditya K; Hitchcock, Robert W; Sachse, Frank B

    2014-01-01

    Fiber-optics confocal microscopy (FCM) is an emerging imaging technology with various applications in basic research and clinical diagnosis. FCM allows for real-time in situ microscopy of tissue at sub-cellular scale. Recently FCM has been investigated for cardiac imaging, in particular, for discrimination of cardiac tissue during pediatric open-heart surgery. FCM relies on fluorescent dyes. The current clinical approach of dye delivery is based on systemic injection, which is associated with high dye consumption and adverse clinical events. In this study, we investigated approaches for local dye delivery during FCM imaging based on dye carriers attached to the imaging probe. Using three-dimensional confocal microscopy, automated bench tests, and FCM imaging we quantitatively characterized dye release of carriers composed of open-pore foam only and foam loaded with agarose hydrogel. In addition, we compared local dye delivery with a model of systemic dye delivery in the isolated perfused rodent heart. We measured the signal-to-noise ratio (SNR) of images acquired in various regions of the heart. Our evaluations showed that foam-agarose dye carriers exhibited a prolonged dye release vs. foam-only carriers. Foam-agarose dye carriers allowed reliable imaging of 5-9 lines, which is comparable to 4-8 min of continuous dye release. Our study in the living heart revealed that the SNR of FCM images using local and systemic dye delivery is not different. However, we observed differences in the imaged tissue microstructure with the two approaches. Structural features characteristic of microvasculature were solely observed for systemic dye delivery. Our findings suggest that the local dye delivery approach for FCM imaging constitutes an important alternative to systemic dye delivery. We suggest that the approach for local dye delivery will facilitate clinical translation of FCM, for instance, for FCM imaging during pediatric heart surgery.

  2. Ormia ochracea as a Model Organism in Sound Localization Experiments and in Inventing Hearing Aids.

    Directory of Open Access Journals (Sweden)

    - -

    1998-09-01

    Full Text Available Hearing aid prescription for patients with hearing loss has always been one of the main concerns of audiologists. Technology has equipped hearing aids with digital and computerized systems that have improved the quality of the sound they deliver. Yet we can also learn from nature when inventing such instruments, as in the current article, which turns to a particular fly. Ormia ochracea is a small yellow nocturnal fly, a parasitoid of crickets. It is notable for its exceptionally acute directional hearing. In the current article we discuss how it has become a model organism in sound localization experiments and in the invention of hearing aids.

  3. Software development for the analysis of heartbeat sounds with LabVIEW in diagnosis of cardiovascular disease.

    Science.gov (United States)

    Topal, Taner; Polat, Hüseyin; Güler, Inan

    2008-10-01

    In this paper, a time-frequency spectral analysis software package (Heart Sound Analyzer) for the computer-aided analysis of cardiac sounds has been developed with LabVIEW. The software modules reveal important information on cardiovascular disorders and can also assist general physicians in reaching more accurate and reliable diagnoses at early stages. The Heart Sound Analyzer (HSA) software can compensate for the shortage of expert doctors in rural as well as urban clinics and hospitals. HSA has two main blocks: data acquisition and preprocessing, and time-frequency spectral analysis. The heart sounds are first acquired using a modified stethoscope which has an electret microphone in it. Then, the signals are analysed using time-frequency/scale spectral analysis techniques such as the STFT, the Wigner-Ville distribution, and wavelet transforms. HSA modules have been tested with real heart sounds from 35 volunteers and proved to be quite efficient and robust while dealing with a large variety of pathological conditions.
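
The record above describes a pipeline of acquisition followed by time-frequency analysis (STFT, Wigner-Ville distribution, wavelets). A minimal illustration of the STFT stage follows, in Python with NumPy; the synthetic phonocardiogram, sampling rate, burst frequencies, and window settings are placeholders for illustration, not values taken from the HSA software.

```python
import numpy as np

fs = 2000                       # sampling rate (Hz), placeholder
t = np.arange(0, 1.0, 1 / fs)

def burst(center, freq, width=0.05):
    """A short Gaussian-windowed tone, standing in for a heart sound."""
    env = np.exp(-0.5 * ((t - center) / width) ** 2)
    return env * np.sin(2 * np.pi * freq * t)

# Synthetic phonocardiogram: an "S1-like" and an "S2-like" burst
pcg = burst(0.20, 50.0) + burst(0.55, 80.0)

# STFT by explicit framing: Hann window, 50% overlap
win_len, hop = 256, 128
window = np.hanning(win_len)
starts = range(0, len(pcg) - win_len + 1, hop)
stft = np.array([np.fft.rfft(window * pcg[s:s + win_len]) for s in starts])
mag = np.abs(stft)                          # frames x frequency bins
freqs = np.fft.rfftfreq(win_len, 1 / fs)

# The dominant frequency of the loudest frame sits near one of the bursts
peak_frame = int(mag.sum(axis=1).argmax())
dominant_hz = float(freqs[mag[peak_frame].argmax()])
print(f"dominant frequency in loudest frame: {dominant_hz:.1f} Hz")
```

On a real recording the matrix `mag` would be rendered as a spectrogram, so that murmurs and split sounds appear as time-localized frequency content.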

  4. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  5. Automated signal quality assessment of mobile phone-recorded heart sound signals.

    Science.gov (United States)

    Springer, David B; Brennan, Thomas; Ntusi, Ntobeko; Abdelrahman, Hassan Y; Zühlke, Liesl J; Mayosi, Bongani M; Tarassenko, Lionel; Clifford, Gari D

    Mobile phones, due to their audio processing capabilities, have the potential to facilitate the diagnosis of heart disease through automated auscultation. However, such a platform is likely to be used by non-experts, and hence, it is essential that such a device is able to automatically differentiate poor quality from diagnostically useful recordings since non-experts are more likely to make poor-quality recordings. This paper investigates the automated signal quality assessment of heart sound recordings performed using both mobile phone-based and commercial medical-grade electronic stethoscopes. The recordings, each 60 s long, were taken from 151 random adult individuals with varying diagnoses referred to a cardiac clinic and were professionally annotated by five experts. A mean voting procedure was used to compute a final quality label for each recording. Nine signal quality indices were defined and calculated for each recording. A logistic regression model for classifying binary quality was then trained and tested. The inter-rater agreement level for the stethoscope and mobile phone recordings was measured using Conger's kappa for multiclass sets and found to be 0.24 and 0.54, respectively. One-third of all the mobile phone-recorded phonocardiogram (PCG) signals were found to be of sufficient quality for analysis. The classifier was able to distinguish good- and poor-quality mobile phone recordings with 82.2% accuracy, and those made with the electronic stethoscope with an accuracy of 86.5%. We conclude that our classification approach provides a mechanism for substantially improving auscultation recordings by non-experts. This work is the first systematic evaluation of a PCG signal quality classification algorithm (using a separate test dataset) and assessment of the quality of PCG recordings captured by non-experts, using both a medical-grade digital stethoscope and a mobile phone.
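
The classification step in the record above (signal quality indices fed to a logistic regression) can be sketched generically. In the sketch below the single clipping-ratio index, the toy "recordings", and the hand-rolled gradient-descent fit are illustrative assumptions; the paper's nine indices and its annotated dataset are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def clipping_ratio(x):
    """Fraction of samples at (or near) full scale -- one illustrative
    quality index, not one of the paper's nine."""
    return float(np.mean(np.abs(x) >= 0.99 * np.max(np.abs(x))))

# Toy "recordings": clean sinusoids vs. heavily clipped noise
good = [np.sin(2 * np.pi * np.linspace(0, 1, 1000)) for _ in range(20)]
bad = [np.clip(3 * rng.standard_normal(1000), -1, 1) for _ in range(20)]
X = np.array([[clipping_ratio(x)] for x in good + bad])
y = np.array([1] * 20 + [0] * 20)       # 1 = good quality

# Plain logistic regression fitted by gradient descent (no ML library)
w, b = np.zeros(1), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 5.0 * (X.T @ (p - y)) / len(y)
    b -= 5.0 * float(np.mean(p - y))

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = float(np.mean(pred == y))
print(f"training accuracy: {accuracy:.2f}")
```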

  6. The natural history of sound localization in mammals – a story of neuronal inhibition

    Directory of Open Access Journals (Sweden)

    Benedikt Grothe

    2014-10-01

    Full Text Available Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present-day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.

  7. Intercepting a sound without vision

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and little bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  8. Effects of Active and Passive Hearing Protection Devices on Sound Source Localization, Speech Recognition, and Tone Detection.

    Directory of Open Access Journals (Sweden)

    Andrew D Brown

    Full Text Available Hearing protection devices (HPDs) such as earplugs offer to mitigate noise exposure and reduce the incidence of hearing loss among persons frequently exposed to intense sound. However, distortions of spatial acoustic information and reduced audibility of low-intensity sounds caused by many existing HPDs can make their use untenable in high-risk (e.g., military or law enforcement) environments where auditory situational awareness is imperative. Here we assessed (1) sound source localization accuracy using a head-turning paradigm, (2) speech-in-noise recognition using a modified version of the QuickSIN test, and (3) tone detection thresholds using a two-alternative forced-choice task. Subjects were 10 young normal-hearing males. Four different HPDs were tested (two active, two passive), including two new and previously untested devices. Relative to unoccluded (control) performance, all tested HPDs significantly degraded performance across tasks, although one active HPD slightly improved high-frequency tone detection thresholds and did not degrade speech recognition. Behavioral data were examined with respect to head-related transfer functions measured using a binaural manikin with and without tested HPDs in place. Data reinforce previous reports that HPDs significantly compromise a variety of auditory perceptual facilities, particularly sound localization due to distortions of high-frequency spectral cues that are important for the avoidance of front-back confusions.

  9. On the influence of microphone array geometry on HRTF-based Sound Source Localization

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    The direction dependence of Head Related Transfer Functions (HRTFs) forms the basis for HRTF-based Sound Source Localization (SSL) algorithms. In this paper, we show how spectral similarities of the HRTFs of different directions in the horizontal plane influence performance of HRTF-based SSL algorithms; the more similar the HRTFs of different angles to the HRTF of the target angle, the worse the performance. However, we also show how the microphone array geometry can assist in differentiating between the HRTFs of the different angles, thereby improving performance of HRTF-based SSL algorithms. Furthermore, to demonstrate the analysis results, we show the impact of HRTF similarities and microphone array geometry on an exemplary HRTF-based SSL algorithm, called MLSSL. This algorithm is well-suited for this purpose as it allows to estimate the Direction-of-Arrival (DoA) of the target sound using any…

  10. The influence of ski helmets on sound perception and sound localisation on the ski slope

    Directory of Open Access Journals (Sweden)

    Lana Ružić

    2015-04-01

    Full Text Available Objectives: The aim of the study was to investigate whether a ski helmet interferes with sound localization and the time of sound perception in the frontal plane. Material and Methods: Twenty-three participants (age 30.7±10.2) were tested on the slope in 2 conditions, with and without a ski helmet, using 6 different spatially distributed sound stimuli per condition. Each subject had to react as soon as possible upon hearing the sound and to indicate the side from which it arrived. Results: The results showed a significant difference in the ability to localize the specific ski sounds: 72.5±15.6% of correct answers without a helmet vs. 61.3±16.2% with a helmet (p < 0.01). However, the performance on this test did not depend on whether the participants were used to wearing a helmet (p = 0.89). In identifying the timing at which the sound was first perceived, the results were also in favor of the subjects not wearing a helmet. The subjects reported hearing the ski sound cues at 73.4±5.56 m without a helmet vs. 60.29±6.34 m with a helmet (p < 0.001). In that case the results did depend on previously used helmets (p < 0.05), meaning that regular usage of helmets might help to diminish the attenuation of sound identification that occurs because of the helmet. Conclusions: Ski helmets might limit the ability of a skier to localize the direction of the sounds of danger and might interfere with the moment at which a sound is first heard.

  11. Frequency shifting approach towards textual transcription of heartbeat sounds.

    Science.gov (United States)

    Arvin, Farshad; Doraisamy, Shyamala; Safar Khorasani, Ehsan

    2011-10-04

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format that can be stored in very little memory in comparison with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied to access critical events. Hence, transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over a long duration of time. This paper proposes a frequency shifting method in order to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.
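
One generic way to realize the kind of upward frequency shift described above is to modulate the analytic signal of the recording, which moves every spectral component up by a fixed offset. The sketch below is an assumption-laden illustration (a 40 Hz stand-in tone and a 200 Hz shift, with the analytic signal built directly from the FFT), not the paper's transcription method.

```python
import numpy as np

fs = 4000
t = np.arange(0, 1.0, 1 / fs)
shift_hz = 200.0                       # upward shift, placeholder value

x = np.sin(2 * np.pi * 40.0 * t)       # low-frequency "heartbeat-like" tone

# Analytic signal via the FFT (zero out negative frequencies), then modulate:
# multiplying by exp(j*2*pi*f0*t) moves every component up by f0
X = np.fft.fft(x)
X[len(X) // 2 + 1:] = 0
X[1:len(X) // 2] *= 2
analytic = np.fft.ifft(X)
shifted = np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

def dominant_freq(sig):
    spec = np.abs(np.fft.rfft(sig))
    return float(np.fft.rfftfreq(len(sig), 1 / fs)[spec.argmax()])

print(dominant_freq(x), dominant_freq(shifted))   # ~40 Hz -> ~240 Hz
```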

  12. Frequency shifting approach towards textual transcription of heartbeat sounds

    Directory of Open Access Journals (Sweden)

    Safar Khorasani Ehsan

    2011-10-01

    Full Text Available Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format that can be stored in very little memory in comparison with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied to access critical events. Hence, transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over a long duration of time. This paper proposes a frequency shifting method in order to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.

  13. A "looming bias" in spatial hearing? Effects of acoustic intensity and spectrum on categorical sound source localization.

    Science.gov (United States)

    McCarthy, Lisa; Olsen, Kirk N

    2017-01-01

    Continuous increases of acoustic intensity (up-ramps) can indicate a looming (approaching) sound source in the environment, whereas continuous decreases of intensity (down-ramps) can indicate a receding sound source. From psychoacoustic experiments, an "adaptive perceptual bias" for up-ramp looming tonal stimuli has been proposed (Neuhoff, 1998). This theory postulates that (1) up-ramps are perceptually salient because of their association with looming and potentially threatening stimuli in the environment; (2) tonal stimuli are perceptually salient because of an association with single and potentially threatening biological sound sources in the environment, relative to white noise, which is more likely to arise from dispersed signals and nonthreatening/nonbiological sources (wind/ocean). In the present study, we extrapolated the "adaptive perceptual bias" theory and investigated its assumptions by measuring sound source localization in response to acoustic stimuli presented in azimuth to imply looming, stationary, and receding motion in depth. Participants (N = 26) heard three directions of intensity change (up-ramps, down-ramps, and steady state, associated with looming, receding, and stationary motion, respectively) and three levels of acoustic spectrum (a 1-kHz pure tone, the tonal vowel /ә/, and white noise) in a within-subjects design. We first hypothesized that if up-ramps are "perceptually salient" and capable of eliciting adaptive responses, then they would be localized faster and more accurately than down-ramps. This hypothesis was supported. However, the results did not support the second hypothesis. Rather, the white-noise and vowel conditions were localized faster and more accurately than the pure-tone conditions. These results are discussed in the context of auditory and visual theories of motion perception, auditory attentional capture, and the spectral causes of spatial ambiguity.
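
Stimuli of the kind used above can be generated by applying a linear-in-dB intensity trajectory to a carrier. A sketch, assuming an illustrative 20 dB dynamic range and a 1 kHz pure-tone carrier (the study's exact levels and durations are not given here):

```python
import numpy as np

fs = 44100
t = np.arange(0, 1.5, 1 / fs)
carrier = np.sin(2 * np.pi * 1000.0 * t)     # 1 kHz pure tone

def intensity_profile(kind, span_db=20.0):
    """Linear-in-dB trajectory: 'up' = looming, 'down' = receding."""
    if kind == "up":
        db = np.linspace(-span_db, 0.0, len(t))
    elif kind == "down":
        db = np.linspace(0.0, -span_db, len(t))
    else:                                    # steady state
        db = np.full(len(t), -span_db / 2)
    return 10.0 ** (db / 20.0)

stimuli = {k: carrier * intensity_profile(k) for k in ("up", "down", "steady")}

# RMS over the first vs. last 100 ms confirms the direction of change
def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

head = {k: rms(v[: fs // 10]) for k, v in stimuli.items()}
tail = {k: rms(v[-fs // 10:]) for k, v in stimuli.items()}
print(head, tail)
```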

  14. Classification of heart valve condition using acoustic measurements

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Prosthetic heart valves and the many great strides in valve design have been responsible for extending the life spans of many people with serious heart conditions. Even though the prosthetic valves are extremely reliable, they are eventually susceptible to the long-term fatigue and structural failure effects expected from mechanical devices operating over long periods of time. The purpose of our work is to classify the condition of in vivo Bjork-Shiley Convexo-Concave (BSCC) heart valves by processing acoustic measurements of heart valve sounds. The structural failure of interest for BSCC valves is called single leg separation (SLS). SLS can occur if the outlet strut cracks and separates from the main structure of the valve. We measure acoustic opening and closing sounds (waveforms) using high sensitivity contact microphones on the patient's thorax. For our analysis, we focus our processing and classification efforts on the opening sounds because they yield direct information about outlet strut condition with minimal distortion caused by energy radiated from the valve disc.

  15. Estimation of the second heart sound split using windowed sinusoidal models

    DEFF Research Database (Denmark)

    Sæderup, Rasmus Gundorf; Hoang, Poul; Winther, Simon

    2018-01-01

    to the potential overlap between A2 and P2. In this paper, a model-based approach is proposed where both A2 and P2 are modeled as windowed sinusoids with their sum forming the S2 signal. Estimation of the model parameters and the S2 split form a non-convex optimization problem, where a local minimum is obtained using a sequential optimization procedure. First, the window parameters are found as the solution to a regularized least squares problem. Then, the frequencies and phases of the sinusoids are found by locating the maximal peaks of the heart signals’ frequency magnitudes, and using the corresponding…
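
The signal model above (S2 as the sum of two windowed sinusoids, A2 and P2, separated by the split) can be synthesized directly. The sketch below builds such a signal and recovers the split with a crude envelope-peak heuristic; the paper instead fits the model via regularized least squares and sequential optimization, so the component frequencies, durations, and the heuristic here are illustrative assumptions only.

```python
import numpy as np

fs = 2000
t = np.arange(0, 0.12, 1 / fs)           # 120 ms around S2

def component(onset, freq, dur=0.03):
    """One windowed sinusoid: a Hann window of length dur starting at onset."""
    y = np.zeros_like(t)
    n0, n1 = int(onset * fs), int((onset + dur) * fs)
    y[n0:n1] = np.hanning(n1 - n0) * np.sin(2 * np.pi * freq * t[n0:n1])
    return y

true_split = 0.040                       # 40 ms A2-P2 split (assumed)
s2 = component(0.010, 60.0) + component(0.010 + true_split, 90.0)

# Crude split estimate: distance between the two strongest peaks of the
# smoothed envelope (a sanity-check heuristic, not the paper's estimator)
env = np.convolve(np.abs(s2), np.ones(15) / 15, mode="same")
peaks = [i for i in range(1, len(env) - 1) if env[i - 1] < env[i] >= env[i + 1]]
peaks.sort(key=lambda i: env[i], reverse=True)
p1 = peaks[0]
p2 = next(i for i in peaks[1:] if abs(i - p1) > 0.020 * fs)
est_split = abs(p2 - p1) / fs
print(f"estimated split: {est_split * 1000:.1f} ms")
```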

  16. Numerical value biases sound localization

    OpenAIRE

    Golob, Edward J.; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R.

    2017-01-01

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perce...

  17. Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System.

    Science.gov (United States)

    Cazettes, Fanny; Fischer, Brian J; Peña, Jose L

    2016-02-17

    Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of a spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics. In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior.

  18. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

    Few studies have dealt with students’ preconceptions of sound. The current research employs Reiner et al.’s (2000) substance schema to reveal new insights about students’ difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students’ thinking, but also to examine how students use the schema’s properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more globally coherent consistencies among the properties that the students use to explain sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students’ conceptions of sound do not seem to fit Reiner et al.’s schema in all respects. Our results also indicate that students’ conceptualization of sound lacks internal consistency. Analyzing our results with respect to local and global coherence, we found students’ conception of sound is close to diSessa’s “loosely connected, fragmented collection of ideas.” The notion that sound is perceived only as a “sort of a material,” we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion concerning the implications of the results for instruction.

  19. Phantom Tumor of the Lung: Localized Interlobar Effusion in Congestive Heart Failure

    Directory of Open Access Journals (Sweden)

    Mislav Lozo

    2014-01-01

    Full Text Available Localized interlobar effusions in congestive heart failure (phantom or vanishing lung tumors) are uncommon but well-known entities. An 83-year-old man presented with shortness of breath, swollen legs, and a dry cough lasting five days. Chest X-ray (CXR) revealed a massive, sharply demarcated, round/oval homogeneous dense shadow, 10 × 7 cm in size, in the right inferior lobe. Treatment with loop diuretics and fluid intake reduction resulted in complete resolution of the observed round/oval tumor-like image on the control CXR three days later. The radiologic appearance of such a mass-like configuration in patients with congestive heart failure demands correction of the underlying heart condition before further diagnostic investigation is performed, to avoid unnecessary, expensive, and possibly harmful diagnostic and treatment errors.

  20. Determination of heart rate variability with an electronic stethoscope.

    Science.gov (United States)

    Kamran, Haroon; Naggar, Isaac; Oniyuke, Francisca; Palomeque, Mercy; Chokshi, Priya; Salciccioli, Louis; Stewart, Mark; Lazar, Jason M

    2013-02-01

    Heart rate variability (HRV) is widely used to characterize cardiac autonomic function by measuring beat-to-beat alterations in heart rate. Decreased HRV has been found predictive of worse cardiovascular (CV) outcomes. HRV is determined from time intervals between QRS complexes recorded by electrocardiography (ECG) for several minutes to 24 h. Although cardiac auscultation with a stethoscope is performed routinely on patients, the human ear cannot detect heart sound time intervals. The electronic stethoscope digitally processes heart sounds, from which cardiac time intervals can be obtained. Accordingly, the objective of this study was to determine the feasibility of obtaining HRV from electronically recorded heart sounds. We prospectively studied 50 subjects with and without CV risk factors/disease and simultaneously recorded single lead ECG and heart sounds for 2 min. Time and frequency measures of HRV were calculated from R-R and S1-S1 intervals and were compared using intra-class correlation coefficients (ICC). The majority of the indices were strongly correlated (ICC 0.73-1.0), while the remaining indices were moderately correlated (ICC 0.56-0.63). In conclusion, we found HRV measures determined from S1-S1 are in agreement with those determined by single lead ECG, and we demonstrate and discuss differences in the measures in detail. In addition to characterizing cardiac murmurs and time intervals, the electronic stethoscope holds promise as a convenient low-cost tool to determine HRV in the hospital and outpatient settings as a practical extension of the physical examination.
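
Once S1 events are timed, the HRV computation itself is the standard one used for R-R intervals. A sketch on simulated S1 onset times (the beat statistics are invented for illustration; S1 detection from the audio is not shown):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated S1 onset times: ~60 bpm with mild beat-to-beat variability
ibi = 1.0 + 0.03 * rng.standard_normal(120)        # inter-beat intervals (s)
s1_times = np.cumsum(ibi)

# Classic time-domain HRV indices from the S1-S1 intervals (in ms)
nn = np.diff(s1_times) * 1000.0
sdnn = float(np.std(nn, ddof=1))                   # overall variability
rmssd = float(np.sqrt(np.mean(np.diff(nn) ** 2)))  # short-term variability
mean_hr = 60000.0 / float(np.mean(nn))
print(f"mean HR {mean_hr:.1f} bpm, SDNN {sdnn:.1f} ms, RMSSD {rmssd:.1f} ms")
```

Substituting S1-S1 intervals for R-R intervals in formulas like these is exactly the agreement the study evaluates against single-lead ECG.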

  1. Female listeners’ autonomic responses to dramatic shifts between loud and soft music/sound passages: a study of heavy metal songs

    Directory of Open Access Journals (Sweden)

    Tzu-Han Cheng

    2016-02-01

    Full Text Available Although music and the emotion it conveys unfold over time, little is known about how listeners respond to shifts in musical emotions. A special technique in heavy metal music utilizes dramatic shifts between loud and soft passages. Loud passages are penetrated by distorted sounds conveying aggression, whereas soft passages are often characterized by a clean, calm singing voice and light accompaniment. The present study used heavy metal songs and soft sea sounds to examine how female listeners’ respiration rates and heart rates responded to the arousal changes associated with auditory stimuli. The high-frequency power of heart rate variability (HF-HRV) was used to assess cardiac parasympathetic activity. The results showed that the soft passages of heavy metal songs and soft sea sounds expressed lower arousal and induced significantly higher HF-HRV than the loud passages of heavy metal songs. Listeners’ respiration rate was determined by the arousal level of the present music passage, whereas the heart rate was dependent on both the present and preceding passages. Compared with soft sea sounds, the loud music passage led to greater deceleration of the heart rate at the beginning of the following soft music passage. The sea sounds delayed the heart rate acceleration evoked by the following loud music passage. The data provide evidence that sound-induced parasympathetic activity affects the listener’s heart rate in response to the following music passage. These findings have potential implications for future research on the temporal dynamics of musical emotions.
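
HF-HRV, as used above, is the power of the inter-beat-interval series in the 0.15-0.4 Hz band. A sketch, assuming a simulated 0.25 Hz respiratory modulation of the beat intervals and a plain periodogram rather than whatever spectral estimator the study used:

```python
import numpy as np

fs_resample = 4.0     # Hz, uniform resampling rate for the interval series

# Simulated beats: intervals modulated at a respiratory 0.25 Hz, the band
# where parasympathetically mediated (HF) variability lives
beat_times = [0.0]
while beat_times[-1] < 300.0:                 # five minutes of beats
    t = beat_times[-1]
    beat_times.append(t + 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t))
beat_times = np.array(beat_times)
ibi = np.diff(beat_times)

# Resample the irregularly spaced interval series onto a uniform grid
grid = np.arange(beat_times[1], beat_times[-1], 1 / fs_resample)
ibi_uniform = np.interp(grid, beat_times[1:], ibi)
ibi_uniform = ibi_uniform - ibi_uniform.mean()

# Band powers from a plain periodogram
spec = np.abs(np.fft.rfft(ibi_uniform)) ** 2 / len(ibi_uniform)
freqs = np.fft.rfftfreq(len(ibi_uniform), 1 / fs_resample)
hf = float(spec[(freqs >= 0.15) & (freqs < 0.40)].sum())
lf = float(spec[(freqs >= 0.04) & (freqs < 0.15)].sum())
print(f"HF power {hf:.4f}, LF power {lf:.4f}")
```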

  2. A Survey of Sound Source Localization Methods in Wireless Acoustic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Maximo Cobos

    2017-01-01

    Full Text Available Wireless acoustic sensor networks (WASNs) are formed by a distributed group of acoustic-sensing devices featuring audio playing and recording capabilities. Current mobile computing platforms offer great possibilities for the design of audio-related applications involving acoustic-sensing nodes. In this context, acoustic source localization is one of the application domains that have attracted the most attention of the research community over the last decades. In general terms, the localization of acoustic sources can be achieved by studying energy and temporal and/or directional features from the incoming sound at different microphones and using a suitable model that relates those features with the spatial location of the source (or sources) of interest. This paper reviews common approaches for source localization in WASNs that are focused on different types of acoustic features, namely, the energy of the incoming signals, their time of arrival (TOA) or time difference of arrival (TDOA), the direction of arrival (DOA), and the steered response power (SRP) resulting from combining multiple microphone signals. Additionally, we discuss methods not only aimed at localizing acoustic sources but also designed to locate the nodes themselves in the network. Finally, we discuss current challenges and frontiers in this field.
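
Of the features listed above, the TDOA between two microphones is the easiest to illustrate: with a broadband source, the lag that maximizes the cross-correlation of the two signals estimates the delay. A sketch under idealized conditions (a noise-like source and an exact integer-sample delay; real WASN deployments typically add GCC-PHAT weighting and must handle noise, reverberation, and synchronization):

```python
import numpy as np

fs = 16000
c = 343.0                     # speed of sound (m/s)
rng = np.random.default_rng(2)

# A broadband source and two microphone copies, the second delayed
source = rng.standard_normal(fs)               # 1 s of noise-like sound
true_delay = 25                                # samples (~1.6 ms)
mic1 = source
mic2 = np.concatenate([np.zeros(true_delay), source[:-true_delay]])

# TDOA by cross-correlation, computed via zero-padded FFTs
n = len(mic1)
corr = np.fft.irfft(np.fft.rfft(mic2, 2 * n) * np.conj(np.fft.rfft(mic1, 2 * n)))
lags = np.concatenate([np.arange(n), np.arange(-n, 0)])   # lag per bin
est_delay = int(lags[np.argmax(corr)])
print(f"estimated delay: {est_delay} samples "
      f"({est_delay / fs * c:.2f} m path difference)")
```

With delays from several microphone pairs, the source position is then found by intersecting the hyperbolae each delay defines, which is the multilateration step the survey covers.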

  3. Comparison between bilateral cochlear implants and Neurelec Digisonic(®) SP Binaural cochlear implant: speech perception, sound localization and patient self-assessment.

    Science.gov (United States)

    Bonnard, Damien; Lautissier, Sylvie; Bosset-Audoit, Amélie; Coriat, Géraldine; Beraha, Max; Maunoury, Antoine; Martel, Jacques; Darrouzet, Vincent; Bébéar, Jean-Pierre; Dauman, René

    2013-01-01

    An alternative to bilateral cochlear implantation is offered by the Neurelec Digisonic(®) SP Binaural cochlear implant, which allows stimulation of both cochleae within a single device. The purpose of this prospective study was to compare a group of Neurelec Digisonic(®) SP Binaural implant users (denoted BINAURAL group, n = 7) with a group of bilateral adult cochlear implant users (denoted BILATERAL group, n = 6) in terms of speech perception, sound localization, and self-assessment of health status and hearing disability. Speech perception was assessed using word recognition at 60 dB SPL in quiet and in a 'cocktail party' noise delivered through five loudspeakers in the hemi-sound field facing the patient (signal-to-noise ratio = +10 dB). The sound localization task was to determine the source of a sound stimulus among five speakers positioned between -90° and +90° from midline. Change in health status was assessed using the Glasgow Benefit Inventory and hearing disability was evaluated with the Abbreviated Profile of Hearing Aid Benefit. Speech perception was not statistically different between the two groups, even though there was a trend in favor of the BINAURAL group (mean percent word recognition in the BINAURAL and BILATERAL groups: 70 vs. 56.7% in quiet, 55.7 vs. 43.3% in noise). There was also no significant difference with regard to performance in sound localization and self-assessment of health status and hearing disability. On the basis of the BINAURAL group's performance in hearing tasks involving the detection of interaural differences, implantation with the Neurelec Digisonic(®) SP Binaural implant may be considered to restore effective binaural hearing. Based on these first comparative results, this device seems to provide benefits similar to those of traditional bilateral cochlear implantation, with a new approach to stimulate both auditory nerves. Copyright © 2013 S. Karger AG, Basel.

  4. Training auscultatory skills: computer simulated heart sounds or additional bedside training? A randomized trial on third-year medical students

    Science.gov (United States)

    2010-01-01

    Background The present study compares the value of additional use of computer simulated heart sounds, to conventional bedside auscultation training, on the cardiac auscultation skills of 3rd year medical students at Oslo University Medical School. Methods In addition to their usual curriculum courses, groups of seven students each were randomized to receive four hours of additional auscultation training either employing a computer simulator system or adding on more conventional bedside training. Cardiac auscultation skills were afterwards tested using live patients. Each student gave a written description of the auscultation findings in four selected patients, and was awarded 0-10 points for each patient. Differences between the two study groups were evaluated using Student's t-test. Results At the auscultation test no significant difference in mean score was found between the students who had used additional computer based sound simulation compared to additional bedside training. Conclusions Students at an early stage of their cardiology training demonstrated equal performance of cardiac auscultation whether they had received an additional short auscultation course based on computer simulated training, or had had additional bedside training. PMID:20082701

  5. Auditory disorders and acquisition of the ability to localize sound in children born to HIV-positive mothers

    Directory of Open Access Journals (Sweden)

    Carla Gentile Matas

    Full Text Available The objective of the present study was to evaluate children born to HIV-infected mothers and to determine whether such children present auditory disorders or poor acquisition of the ability to localize sound. The population studied included 143 children (82 males and 61 females), ranging in age from one month to 30 months. The children were divided into three groups according to the classification system devised in 1994 by the Centers for Disease Control and Prevention: infected; seroreverted; and exposed. The children were then submitted to audiological evaluation, including behavioral audiometry, visual reinforcement audiometry and measurement of acoustic immittance. Statistical analysis showed that the incidence of auditory disorders was significantly higher in the infected group. In the seroreverted and exposed groups, there was a marked absence of auditory disorders. In the infected group as a whole, the findings were suggestive of central auditory disorders. Evolution of the ability to localize sound was found to be poorer among the children in the infected group than among those in the seroreverted and exposed groups.

  6. Prognostic value of the physical examination in patients with heart failure and atrial fibrillation: insights from the AF-CHF trial (atrial fibrillation and chronic heart failure).

    Science.gov (United States)

    Caldentey, Guillem; Khairy, Paul; Roy, Denis; Leduc, Hugues; Talajic, Mario; Racine, Normand; White, Michel; O'Meara, Eileen; Guertin, Marie-Claude; Rouleau, Jean L; Ducharme, Anique

    2014-02-01

    This study sought to assess the prognostic value of physical examination in a modern treated heart failure population. The physical examination is the cornerstone of the evaluation and monitoring of patients with heart failure. Yet, the prognostic value of congestive signs (i.e., peripheral edema, jugular venous distension, a third heart sound, and pulmonary rales) has not been assessed in the current era. A post-hoc analysis was conducted on all 1,376 patients, 81% male, mean age 67 ± 11 years, with symptomatic left ventricular systolic dysfunction enrolled in the AF-CHF (Atrial Fibrillation and Congestive Heart Failure) trial. The prognostic value of baseline physical examination findings was assessed in univariate and multivariate Cox regression analyses. Peripheral edema was observed in 425 (30.9%), jugular venous distension in 297 (21.6%), a third heart sound in 207 (15.0%), and pulmonary rales in 178 (12.9%) patients. Death from cardiovascular causes occurred in 357 (25.9%) patients over a mean follow-up of 37 ± 19 months. All 4 physical examination findings were associated with cardiovascular mortality in univariate analyses. The classic congestive signs of the physical examination (i.e., peripheral edema, jugular venous distension, a third heart sound, and pulmonary rales) continue to provide important prognostic information in patients with congestive heart failure. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  7. Improvements of sound localization abilities by the facial ruff of the barn owl (Tyto alba) as demonstrated by virtual ruff removal.

    Directory of Open Access Journals (Sweden)

    Laura Hausmann

    Full Text Available BACKGROUND: When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic for the spatial position of a sound source and for the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural intensity differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests. METHODOLOGY/PRINCIPAL FINDINGS: HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were slightly smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes. CONCLUSIONS/SIGNIFICANCE: The facial ruff (a) improves azimuthal sound localization by increasing the ITD range and (b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the

  8. The localization of focal heart activity via body surface potential measurements: tests in a heterogeneous torso phantom

    International Nuclear Information System (INIS)

    Wetterling, F; Liehr, M; Haueisen, J; Schimpf, P; Liu, H

    2009-01-01

    The non-invasive localization of focal heart activity via body surface potential measurements (BSPM) could greatly benefit the understanding and treatment of arrhythmic heart diseases. However, the in vivo validation of source localization algorithms is rather difficult with currently available measurement techniques. In this study, we used a physical torso phantom composed of different conductive compartments and seven dipoles, which were placed in the anatomical position of the human heart in order to assess the performance of the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) algorithm. Electric potentials were measured on the torso surface for single dipoles with and without further uncorrelated or correlated dipole activity. The localization error averaged 11 ± 5 mm over 22 dipoles, which shows the ability of RAP-MUSIC to distinguish an uncorrelated dipole from surrounding source activity. For the first time, real computational modelling errors could be included within the validation procedure due to the physically modelled heterogeneities. In conclusion, the introduced heterogeneous torso phantom can be used to validate state-of-the-art algorithms under nearly realistic measurement conditions.

  9. The localization of focal heart activity via body surface potential measurements: tests in a heterogeneous torso phantom

    Science.gov (United States)

    Wetterling, F.; Liehr, M.; Schimpf, P.; Liu, H.; Haueisen, J.

    2009-09-01

    The non-invasive localization of focal heart activity via body surface potential measurements (BSPM) could greatly benefit the understanding and treatment of arrhythmic heart diseases. However, the in vivo validation of source localization algorithms is rather difficult with currently available measurement techniques. In this study, we used a physical torso phantom composed of different conductive compartments and seven dipoles, which were placed in the anatomical position of the human heart in order to assess the performance of the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) algorithm. Electric potentials were measured on the torso surface for single dipoles with and without further uncorrelated or correlated dipole activity. The localization error averaged 11 ± 5 mm over 22 dipoles, which shows the ability of RAP-MUSIC to distinguish an uncorrelated dipole from surrounding source activity. For the first time, real computational modelling errors could be included within the validation procedure due to the physically modelled heterogeneities. In conclusion, the introduced heterogeneous torso phantom can be used to validate state-of-the-art algorithms under nearly realistic measurement conditions.

  10. Enhanced Soundings for Local Coupling Studies Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, Craig R [University at Albany, State University of New York; Santanello, Joseph A [NASA Goddard Space Flight Center (GSFC), Greenbelt, MD (United States); Gentine, Pierre [Columbia Univ., New York, NY (United States)

    2016-04-01

    This document presents initial analyses of the enhanced radiosonde observations obtained during the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility Enhanced Soundings for Local Coupling Studies Field Campaign (ESLCS), which took place at the ARM Southern Great Plains (SGP) Central Facility (CF) from June 15 to August 31, 2015. During ESLCS, routine 4-times-daily radiosonde measurements at the ARM-SGP CF were augmented on 12 days (June 18 and 29; July 11, 14, 19, and 26; August 15, 16, 21, 25, 26, and 27) with daytime 1-hourly radiosondes and 10-minute ‘trailer’ radiosondes every 3 hours. These 12 intensive operational period (IOP) days were selected on the basis of prior-day qualitative forecasts of potential land-atmosphere coupling strength. The campaign captured 2 dry soil convection advantage days (June 29 and July 14) and 10 atmospherically controlled days. Other noteworthy IOP events include: 2 soil dry-down sequences (July 11-14-19 and August 21-25-26), a 2-day clear-sky case (August 15-16), and the passing of Tropical Storm Bill (June 18). To date, the ESLCS data set constitutes the highest-temporal-resolution sampling of the evolution of the daytime planetary boundary layer (PBL) using radiosondes at the ARM-SGP. The data set is expected to contribute to: 1) improved understanding and modeling of the diurnal evolution of the PBL, particularly with regard to the role of local soil wetness, and (2) new insights into the appropriateness of current ARM-SGP CF thermodynamic sampling strategies.

  11. Pain Perception: Computerized versus Traditional Local Anesthesia in Pediatric Patients.

    Science.gov (United States)

    Mittal, M; Kumar, A; Srivastava, D; Sharma, P; Sharma, S

    2015-01-01

    Local anesthetic injection is one of the most anxiety-provoking procedures for both children and adult patients in dentistry. A computerized system for slow delivery of local anesthetic has been developed as a possible solution to reduce the pain related to the local anesthetic injection. The present study was conducted to evaluate and compare pain perception rates in pediatric patients with computerized system and traditional methods, both objectively and subjectively. It was a randomized controlled study in one hundred children aged 8-12 years in healthy physical and mental state, assessed as being cooperative, requiring extraction of maxillary primary molars. Children were divided into two groups by random sampling - Group A received buccal and palatal infiltration injection using Wand, while Group B received buccal and palatal infiltration using traditional syringe. Visual Analog Scale (VAS) was used for subjective evaluation of pain perception by the patient. The Sound, Eye, Motor (SEM) scale was used as an objective method, observing the sound, eye and motor reactions of the patient, and heart rate measurement using a pulse oximeter was used as the physiological parameter for objective evaluation. Patients experienced significantly less injection pain with the computerized method during palatal infiltration, whereas the difference during buccal infiltration was not statistically significant. Heart rate increased during both buccal and palatal infiltration in traditional and computerized local anesthesia, but the difference between the traditional and computerized methods was not statistically significant. It was concluded that pain perception was significantly greater during traditional palatal infiltration injection as compared to computerized palatal infiltration, while there was no difference in pain perception during buccal infiltration between the two groups.

  12. Heart murmur detection based on wavelet transformation and a synergy between artificial neural network and modified neighbor annealing methods.

    Science.gov (United States)

    Eslamizadeh, Gholamhossein; Barati, Ramin

    2017-05-01

    Early recognition of heart disease plays a vital role in saving lives. Heart murmurs are one of the common heart problems. In this study, an Artificial Neural Network (ANN) is trained with Modified Neighbor Annealing (MNA) to classify heart cycles into normal and murmur classes. Heart cycles are separated from heart sounds using a wavelet transformer. The network's inputs are features extracted from individual heart cycles, and its two outputs give the classification. Classification accuracy of the proposed model is compared with five multilayer perceptrons trained with Levenberg-Marquardt, Extreme-learning-machine, back-propagation, simulated-annealing, and neighbor-annealing algorithms. It is also compared with a Self-Organizing Map (SOM) ANN. The proposed model is trained and tested using real heart sounds available in the Pascal database to show the applicability of the proposed scheme. A device to record real heart sounds has also been developed and used for comparison. Based on the results of this study, MNA can be used to produce considerable results as a heart cycle classifier. Copyright © 2017 Elsevier B.V. All rights reserved.
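The abstract does not specify the Modified Neighbor Annealing update itself, so the sketch below shows only the generic idea it builds on: training a small network by proposing random neighbor weight vectors and accepting worse ones with probability exp(-ΔE/T) while the temperature cools. The toy XOR data, the 2-4-1 network size, and the cooling schedule are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for per-cycle features: a two-class XOR problem.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def mlp(w, X):
    """Tiny 2-4-1 perceptron; w is a flat vector of 17 parameters."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16].reshape(4, 1), w[16]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2).ravel() - b2))

def loss(w):
    return np.mean((mlp(w, X) - y) ** 2)

def neighbor_anneal(w, T=1.0, cooling=0.995, steps=5000):
    """Accept any better neighbor; accept worse ones with prob exp(-dE/T)."""
    best, best_e = w.copy(), loss(w)
    cur, cur_e = w.copy(), best_e
    for _ in range(steps):
        # Step size shrinks along with the temperature
        cand = cur + rng.normal(scale=0.3 * T + 0.01, size=w.shape)
        e = loss(cand)
        if e < cur_e or rng.random() < np.exp(-(e - cur_e) / T):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand.copy(), e
        T *= cooling
    return best, best_e

w0 = rng.normal(scale=0.5, size=17)
w_opt, e_opt = neighbor_anneal(w0)
```

Unlike gradient training, this only needs loss evaluations, which is why annealing-style methods are attractive when the error surface is multimodal.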

  13. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    Full Text Available Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions however, binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much weaker across frequency channels and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that the real-world sound localization must rely on mechanisms more complex than a mere cue extraction.

  14. Statistics of natural binaural sounds.

    Science.gov (United States)

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions however, binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much weaker across frequency channels and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that the real-world sound localization must rely on mechanisms more complex than a mere cue extraction.
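The binaural cues analyzed in these records can be estimated per frequency channel from a two-ear recording. The following is a minimal numpy sketch of one common definition (the band-selection scheme and function name are my assumptions, not the authors' pipeline): ILD as the between-ear power ratio in dB, and IPD as the phase of the summed cross-spectrum:

```python
import numpy as np

def binaural_cues(left, right, fs, f0, bw=100.0):
    """ILD (dB) and IPD (rad) in a narrow band around f0 Hz,
    from the auto- and cross-spectra of the two ear signals."""
    n = len(left)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    L, R = np.fft.rfft(left), np.fft.rfft(right)
    band = (freqs > f0 - bw / 2) & (freqs < f0 + bw / 2)
    # Level difference: power ratio between the ears in this band
    ild = 10.0 * np.log10(np.sum(np.abs(L[band]) ** 2) /
                          np.sum(np.abs(R[band]) ** 2))
    # Phase difference: angle of the summed cross-spectrum
    ipd = np.angle(np.sum(L[band] * np.conj(R[band])))
    return ild, ipd
```

For a tone that is half the amplitude and 0.5 rad behind in the right ear, this returns an ILD of about 6 dB and an IPD of 0.5 rad.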

  15. Local Mechanisms for Loud Sound-Enhanced Aminoglycoside Entry into Outer Hair Cells

    Directory of Open Access Journals (Sweden)

    Hongzhe eLi

    2015-04-01

    Full Text Available Loud sound exposure exacerbates aminoglycoside ototoxicity, increasing the risk of permanent hearing loss and degrading the quality of life in affected individuals. We previously reported that loud sound exposure induces temporary threshold shifts (TTS) and enhances uptake of aminoglycosides, like gentamicin, by cochlear outer hair cells (OHCs). Here, we explore mechanisms by which loud sound exposure and TTS could increase aminoglycoside uptake by OHCs that may underlie this form of ototoxic synergy. Mice were exposed to loud sound levels to induce TTS, and received fluorescently-tagged gentamicin (GTTR) for 30 minutes prior to fixation. The degree of TTS was assessed by comparing auditory brainstem responses before and after loud sound exposure. The number of tip links, which gate the GTTR-permeant mechanoelectrical transducer (MET) channels, was determined in OHC bundles, with or without exposure to loud sound, using scanning electron microscopy. We found wide-band noise (WBN) levels that induce TTS also enhance OHC uptake of GTTR compared to OHCs in control cochleae. In cochlear regions with TTS, the increase in OHC uptake of GTTR was significantly greater than in adjacent pillar cells. In control mice, we identified stereociliary tip links at ~50% of potential positions in OHC bundles. However, the number of OHC tip links was significantly reduced in mice that received WBN at levels capable of inducing TTS. These data suggest that GTTR uptake by OHCs during TTS occurs by increased permeation of surviving, mechanically-gated MET channels, and/or non-MET aminoglycoside-permeant channels activated following loud sound exposure. Loss of tip links would hyperpolarize hair cells and potentially increase drug uptake via aminoglycoside-permeant channels expressed by hair cells. The effect of TTS on aminoglycoside-permeant channel kinetics will shed new light on the mechanisms of loud sound-enhanced aminoglycoside uptake, and consequently on ototoxic

  16. Source Separation of Heartbeat Sounds for Effective E-Auscultation

    Science.gov (United States)

    Geethu, R. S.; Krishnakumar, M.; Pramod, K. V.; George, Sudhish N.

    2016-03-01

    This paper proposes a cost effective solution for improving the effectiveness of e-auscultation. Auscultation is the most difficult skill for a doctor, since it can be acquired only through experience. The heart sound mixtures are captured by placing four sensors at the appropriate auscultation areas on the body. These sound mixtures are separated into their component sounds by a statistical method, independent component analysis (ICA). The separated heartbeat sounds can be further processed or stored for future reference. This idea can be used for making a low cost, easy-to-use portable instrument which will be beneficial to people living in remote areas who are unable to take advantage of advanced diagnosis methods.
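ICA, the separation method named in the abstract, can be sketched with a minimal FastICA iteration: whiten the mixtures, apply a tanh-based fixed-point update, and re-orthonormalize the unmixing rows. This is a generic textbook version, not the authors' implementation; in practice a library routine such as scikit-learn's `FastICA` would normally be used:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity).
    X: (n_mixtures, n_samples) array of linearly mixed signals."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: rotate and scale so the mixtures have unit covariance
    d, E = np.linalg.eigh(np.cov(X))
    Xw = (E / np.sqrt(d)).T @ X
    n = X.shape[0]
    W = rng.normal(size=(n, n))
    for _ in range(n_iter):
        WX = W @ Xw
        g = np.tanh(WX)
        g_prime = 1.0 - g ** 2
        # Fixed-point update per row: E{x g(w.x)} - E{g'(w.x)} w
        W = (g @ Xw.T) / Xw.shape[1] - g_prime.mean(axis=1)[:, None] * W
        # Symmetric decorrelation keeps the unmixing rows orthonormal
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt
    return W @ Xw   # estimated sources (up to order, sign, and scale)
```

As in any ICA application, the recovered components come back in arbitrary order and scale, so a matching step against known templates (or clinical inspection) is still needed afterward.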

  17. The natural horn as an efficient sound radiating system ...

    African Journals Online (AJOL)

    Results obtained showed that the locally made horns are efficient sound-radiating systems and are therefore excellent for sound production in local musical renditions. These findings, in addition to the portability and low cost of the horns, qualify them to be highly recommended for use in music making and for other purposes ...

  18. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals) and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...

  19. Strut fracture and disc embolization of a Björk-Shiley mitral valve prosthesis: localization of embolized disc by computerized axial tomography.

    Science.gov (United States)

    Larrieu, A J; Puglia, E; Allen, P

    1982-08-01

    The case of a patient who survived strut fracture and embolization of a Björk-Shiley mitral prosthetic disc is presented. Prompt surgical treatment was directly responsible for survival. In addition, computerized axial tomography of the abdomen aided in localizing and retrieving the embolized disc, which was lodged at the origin of the superior mesenteric artery. A review of similar case reports from the literature supports our conclusions that the development of acute heart failure and absent or muffled prosthetic heart sounds in a patient with a Björk-Shiley prosthetic heart valve inserted prior to 1978 should raise the possibility of valve dysfunction and lead to early reoperation.

  20. Million Hearts: Key to Collaboration to Reduce Heart Disease

    Science.gov (United States)

    Brinkman, Patricia

    2016-01-01

    Extension has taught successful classes to address heart disease, yet heart disease remains the number one killer in the United States. The U.S. government's Million Hearts initiative seeks collaboration among colleges, local and state health departments, Extension and other organizations, and medical providers in imparting a consistent message…

  1. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    Science.gov (United States)

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

    In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., BARKING dog) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., DRUMMING drum). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, both effects were additive-that is, the semantic interference effects were of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.

  2. Electromagnetic sounding of the Earth's interior

    CERN Document Server

    Spichak, Viacheslav V

    2015-01-01

    Electromagnetic Sounding of the Earth's Interior 2nd edition provides a comprehensive up-to-date collection of contributions, covering methodological, computational and practical aspects of electromagnetic sounding of the Earth by different techniques at global, regional and local scales. Moreover, it contains new developments such as the concept of self-consistent tasks of geophysics and 3-D interpretation of the TEM sounding which, so far, have not all been covered by one book. Electromagnetic Sounding of the Earth's Interior 2nd edition consists of three parts: I - EM sounding methods, II - Forward modelling and inversion techniques, and III - Data processing, analysis, modelling and interpretation. The new edition includes brand new chapters on pulse and frequency electromagnetic sounding for hydrocarbon offshore exploration. Additionally all other chapters have been extensively updated to include new developments. Presents recently developed methodological findings of the earth's study, including seism...

  3. Developmental Changes in Locating Voice and Sound in Space

    Science.gov (United States)

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7 month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  4. Sparse representation of Gravitational Sound

    Science.gov (United States)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

    Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local sparsity measure is shown to render meaningful information about the variation of a signal along time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to divulge Gravitational Sound in the form of a ring tone.
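    A local sparsity profile of the kind described above can be illustrated with a windowed (l1/l2)² "effective sparsity" measure. This is a generic stand-in rather than the paper's actual definition, and the window length and test signal below are hypothetical:

```python
import numpy as np

def local_sparsity(signal, win=256):
    """Effective sparsity (||w||_1 / ||w||_2)^2 per non-overlapping window.

    For a window w this ratio lies between 1 (a single spike) and
    len(w) (energy spread uniformly), so it acts as an 'effective
    number of active components' -- a hypothetical stand-in for the
    paper's local sparsity measure.
    """
    out = []
    for i in range(0, len(signal) - win + 1, win):
        w = signal[i:i + win]
        l2 = np.linalg.norm(w)
        out.append((np.abs(w).sum() / l2) ** 2 if l2 > 0 else 0.0)
    return np.array(out)

# A signal with an isolated spike, a silent stretch, and a dense
# oscillatory segment yields very different local sparsity values,
# while the profile itself has only 4 entries for 1024 samples.
x = np.zeros(1024)
x[100] = 1.0                          # isolated spike: sparsity ~ 1
x[512:768] = np.sin(np.arange(256))   # dense segment: sparsity >> 1
s = local_sparsity(x, win=256)
```

The profile `s` is far shorter than the signal, which is the data-reduction point the abstract makes.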

  5. Propagation of Sound in a Bose-Einstein Condensate

    International Nuclear Information System (INIS)

    Andrews, M.R.; Kurn, D.M.; Miesner, H.; Durfee, D.S.; Townsend, C.G.; Inouye, S.; Ketterle, W.

    1997-01-01

    Sound propagation has been studied in a magnetically trapped dilute Bose-Einstein condensate. Localized excitations were induced by suddenly modifying the trapping potential using the optical dipole force of a focused laser beam. The resulting propagation of sound was observed using a novel technique, rapid sequencing of nondestructive phase-contrast images. The speed of sound was determined as a function of density and found to be consistent with Bogoliubov theory. This method may generally be used to observe high-lying modes and perhaps second sound. copyright 1997 The American Physical Society
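    For context, the Bogoliubov prediction against which the measured speed of sound is checked has the standard textbook form (quoted from general BEC theory, not from the paper itself):

```latex
c = \sqrt{\frac{g\,n}{m}}, \qquad g = \frac{4\pi\hbar^{2}a}{m}
\quad\Longrightarrow\quad
c = \frac{\hbar}{m}\sqrt{4\pi a\,n}
```

    where $n$ is the condensate density, $a$ the s-wave scattering length and $m$ the atomic mass, so the speed of sound scales as the square root of the density.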

  6. Applying cybernetic technology to diagnose human pulmonary sounds.

    Science.gov (United States)

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) lie below 120 Hz and the human ear is not sensitive to low frequencies, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, and the PS signals were decomposed into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vectors of a neural network. We proposed a 2-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy over a single (haploid) neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To expand traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds; various PS waves, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
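    The subband-feature stage described above can be sketched in a few lines. This is a generic illustration using a multilevel Haar transform and a reduced feature set (mean, standard deviation and energy per subband), not the article's actual mother wavelet or its 17-feature vector:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail halves."""
    x = x[: len(x) // 2 * 2]              # force even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def subband_features(x, levels=4):
    """Decompose into `levels` detail subbands plus a final
    approximation, then summarize each subband with mean, std and
    energy -- a stand-in for the paper's 17-feature vector."""
    feats, a = [], np.asarray(x, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats += [d.mean(), d.std(), (d ** 2).sum()]
    feats += [a.mean(), a.std(), (a ** 2).sum()]
    return np.array(feats)

# Feature vector for one simulated sound frame: 5 subbands x 3 stats.
rng = np.random.default_rng(0)
frame = rng.standard_normal(1024)
fv = subband_features(frame, levels=4)    # 15 values
```

A vector like `fv`, computed per frame, would then feed the neural-network classifier; because the Haar transform is orthonormal, the subband energies sum to the frame energy.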

  7. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    Science.gov (United States)

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  8. What and Where in auditory sensory processing: A high-density electrical mapping study of distinct neural processes underlying sound object recognition and sound localization

    Directory of Open Access Journals (Sweden)

    Victoria M Leavitt

    2011-06-01

    Full Text Available Functionally distinct dorsal and ventral auditory pathways for sound localization (where) and sound object recognition (what) have been described in non-human primates. A handful of studies have explored differential processing within these streams in humans, with highly inconsistent findings. Stimuli employed have included simple tones, noise bursts and speech sounds, with simulated left-right spatial manipulations, and in some cases participants were not required to actively discriminate the stimuli. Our contention is that these paradigms were not well suited to dissociating processing within the two streams. Our aim here was to determine how early in processing we could find evidence for dissociable pathways using better titrated what and where task conditions. The use of more compelling tasks should allow us to amplify differential processing within the dorsal and ventral pathways. We employed high-density electrical mapping using a relatively large and environmentally realistic stimulus set (seven animal calls delivered from seven free-field spatial locations), with stimulus configuration identical across the where and what tasks. Topographic analysis revealed distinct dorsal and ventral auditory processing networks during the where and what tasks, with the earliest point of divergence seen during the N1 component of the auditory evoked response, beginning at approximately 100 ms. While this difference occurred during the N1 timeframe, it was not a simple modulation of N1 amplitude, as it displayed a wholly different topographic distribution to that of the N1. Global dissimilarity measures using topographic modulation analysis confirmed that this difference between tasks was driven by a shift in the underlying generator configuration. Minimum norm source reconstruction revealed distinct activations that corresponded well with activity within putative dorsal and ventral auditory structures.

  9. Suppressive competition: how sounds may cheat sight.

    Science.gov (United States)

    Kayser, Christoph; Remedios, Ryan

    2012-02-23

    In this issue of Neuron, Iurilli et al. (2012) demonstrate that auditory cortex activation directly engages local GABAergic circuits in V1 to induce sound-driven hyperpolarizations in layer 2/3 and layer 6 pyramidal neurons. Thereby, sounds can directly suppress V1 activity and visually driven behavior. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Analyzing the electrophysiological effects of local epicardial temperature in experimental studies with isolated hearts

    International Nuclear Information System (INIS)

    Tormos, Alvaro; Millet, José; Guill, Antonio; Chorro, Francisco J; Cánoves, Joaquín; Mainar, Luis; Such, Luis; Alberola, Antonio; Trapero, Isabel; Such-Miquel, Luis

    2008-01-01

    As a result of their modulating effects upon myocardial electrophysiology, both hypo- and hyperthermia can be used to study the mechanisms that generate or sustain cardiac arrhythmias. The present study describes an original electrode developed with thick-film technology and capable of controlling regional temperature variations in the epicardium while simultaneously registering its electrical activity. In this way, it is possible to measure electrophysiological parameters of the heart at different temperatures. The results obtained with this device in a study with isolated and perfused rabbit hearts are reported. An exploration has been made of the effects of local temperature changes upon the electrophysiological parameters implicated in myocardial conduction. Likewise, an analysis has been made of the influence of local temperature upon ventricular fibrillation activation frequency. It is concluded that both regional hypo- and hyperthermia exert reversible and opposite effects upon myocardial refractoriness and conduction velocity in the altered zone. The ventricular activation wavelength determined during constant pacing at 250 ms cycles is not significantly modified, however. During ventricular fibrillation, the changes in the fibrillatory frequency do not seem to be transmitted to normal temperature zones

  11. The Use of an Open Field Model to Assess Sound-Induced Fear and Anxiety Associated Behaviors in Labrador Retrievers.

    Science.gov (United States)

    Gruen, Margaret E; Case, Beth C; Foster, Melanie L; Lazarowski, Lucia; Fish, Richard E; Landsberg, Gary; DePuy, Venita; Dorman, David C; Sherman, Barbara L

    2015-01-01

    Previous studies have shown that the playing of thunderstorm recordings during an open-field task elicits fearful or anxious responses in adult beagles. The goal of our study was to apply this open field test to assess sound-induced behaviors in Labrador retrievers drawn from a pool of candidate improvised explosive device (IED)-detection dogs. Being robust to fear-inducing sounds and recovering quickly is a critical requirement of these military working dogs. This study presented male and female dogs with 3 minutes of either ambient noise (Days 1, 3 and 5), recorded thunderstorm (Day 2), or gunfire (Day 4) sounds in an open field arena. Behavioral and physiological responses were assessed and compared to control (ambient noise) periods. An observer blinded to sound treatment analyzed video records of the 9-minute daily test sessions. Additional assessments included measurement of distance traveled (activity), heart rate, body temperature, and salivary cortisol concentrations. Overall, there was a decline in distance traveled and heart rate within each day and over the five-day test period, suggesting that dogs habituated to the open field arena. Behavioral postures and expressions were assessed using a standardized rubric to score behaviors linked to canine fear and anxiety. These fear/anxiety scores were used to evaluate changes in behaviors following exposure to a sound stressor. Compared to control periods, there was an overall increase in fear/anxiety scores during thunderstorm and gunfire sound stimuli treatment periods. Fear/anxiety scores were correlated with distance traveled and heart rate. Fear/anxiety scores in response to thunderstorm and gunfire were correlated. Dogs showed higher fear/anxiety scores during periods after the sound stimuli compared to control periods. In general, candidate IED-detection Labrador retrievers responded to sound stimuli and recovered quickly, although dogs stratified in their response to sound stimuli. Some dogs were

  12. Perceiving blocks of emotional pictures and sounds: Effects on physiological variables

    Directory of Open Access Journals (Sweden)

    Anne-Marie eBrouwer

    2013-06-01

    Full Text Available Most studies on physiological effects of emotion-inducing images and sounds examine stimulus-locked variables reflecting a state of at most a few seconds. We here aimed to induce longer-lasting emotional states using blocks of repetitive visual, auditory and bimodal stimuli corresponding to specific valence and arousal levels. The duration of these blocks enabled us to reliably measure heart rate variability as a possible indicator of arousal. In addition, heart rate and skin conductance were determined without taking stimulus timing into account. Heart rate was higher for pleasant and low arousal stimuli compared to unpleasant and high arousal stimuli. Heart rate variability and skin conductance increased with arousal. Effects of valence and arousal on cardiovascular measures habituated or remained the same over 2-minute intervals, whereas the arousal effect on skin conductance increased. We did not find any effect of stimulus modality. Our results indicate that blocks of images and sounds of specific valence and arousal levels consistently influence different physiological parameters. These parameters need not be stimulus locked. We found no evidence for differences in emotion induction between visual and auditory stimuli, nor did we find bimodal stimuli to be more potent than unimodal stimuli. The latter could be (partly) due to the fact that our bimodal stimuli were not optimally congruent.
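    The heart rate variability measure mentioned above is conventionally computed from successive RR (inter-beat) intervals. A minimal time-domain sketch using the standard SDNN and RMSSD definitions follows; the abstract does not say which index was used, and the interval values below are made up:

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """SDNN, RMSSD and mean heart rate from RR intervals in ms.

    SDNN  = standard deviation of all RR intervals.
    RMSSD = root mean square of successive RR differences, a common
    short-term HRV index. Both are illustrative choices here, since
    the abstract does not specify which HRV index was computed.
    """
    rr = np.asarray(rr_ms, float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    hr = 60000.0 / rr.mean()          # mean heart rate, beats/min
    return sdnn, rmssd, hr

rr = [812, 790, 830, 805, 818, 795, 822]   # hypothetical RR excerpt
sdnn, rmssd, hr = hrv_time_domain(rr)
```

Over a 2-minute block one would feed in the full RR series for that block rather than a seven-beat excerpt.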

  13. A RARE CASE OF SINUS OF VALSALVA ANEURYSM PRESENTING WITH TRICUSPID STENOSIS AND RIGHT HEART FAILURE

    Directory of Open Access Journals (Sweden)

    P. V. R. S. Subrahmanya Sarma

    2017-12-01

    Full Text Available PRESENTATION OF CASE A female patient aged 48 years came with complaints of dyspnoea on exertion, with no history of orthopnoea or PND attacks. There was a history of easy fatigability and mild abdominal distension over the past 3 months. On clinical examination, she was moderately built and nourished. There was no pallor, cyanosis, clubbing, lymphadenopathy, oedema or icterus. Family history was not significant. She was conscious and coherent. Vitals were within normal limits, her BP being 120/76 mmHg. She was found to have a JVP elevated up to the angle of the mandible with a prominent "A" wave; on palpation there were no thrills or palpable sounds, and on auscultation a normal first heart sound and a normally split second heart sound were heard, with no added sounds or murmurs. The presence of free fluid in the abdomen was confirmed, and hepatomegaly was also noticed. Clinically, she was thought to have right heart failure. Her ECG showed that she was in atrial fibrillation with a controlled ventricular rate.

  14. Variations in Local Calcium Signaling in Adjacent Cardiac Myocytes of the Intact Mouse Heart Detected with Two-Dimensional Confocal Microscopy

    Directory of Open Access Journals (Sweden)

    Karin P Hammer

    2015-01-01

    Full Text Available Dyssynchronous local Ca release within individual cardiac myocytes has been linked to cellular contractile dysfunction. Differences in Ca kinetics in adjacent cells may also provide a substrate for inefficient contraction and arrhythmias. In a new approach we quantify variation in local Ca transients between adjacent myocytes in the whole heart. Langendorff-perfused mouse hearts were loaded with Fluo-8 AM to detect Ca and Di-4-ANEPPS to visualize cell membranes. A spinning disc confocal microscope with a fast camera allowed us to record Ca signals within an area of 465 µm by 315 µm with an acquisition speed of 55 fps. Images from multiple transients recorded at steady state were registered to their time point in the cardiac cycle to restore averaged local Ca transients with a higher temporal resolution. Local Ca transients within and between adjacent myocytes were compared with regard to amplitude, time to peak and decay at steady state stimulation (250 ms cycle length). Image registration from multiple sequential Ca transients allowed reconstruction of high temporal resolution (2.4 ± 1.3 ms) local CaT in 2D image sets (N = 4 hearts, n = 8 regions). During steady state stimulation, spatial Ca gradients were homogeneous within cells in both directions and independent of distance between measured points. Variation in CaT amplitudes was similar across the short and the long side of neighboring cells. Variations in TAU and TTP were similar in both directions. Isoproterenol enhanced the CaT but not the overall pattern of spatial heterogeneities. Here we detected and analyzed local Ca signals in intact mouse hearts with high temporal and spatial resolution, taking into account the 2D arrangement of the cells. We observed significant differences in the variation of CaT amplitude along the long and short axis of cardiac myocytes. Variations of Ca signals between neighboring cells may contribute to the substrate of cardiac remodeling.

  15. Localização sonora em usuários de aparelhos de amplificação sonora individual Sound localization by hearing aid users

    Directory of Open Access Journals (Sweden)

    Paula Cristina Rodrigues

    2010-06-01

    Full Text Available PURPOSE: to compare the performance of users of behind-the-ear and in-the-canal hearing aids on a sound-source localization test with that of normal-hearing listeners, in the horizontal and median sagittal planes, for frequencies of 500, 2,000 and 4,500 Hz; and to correlate the number of correct localizations with duration of hearing aid use. METHODS: eight normal-hearing listeners and 20 hearing aid users were tested, the latter divided into two groups: one of 10 users of in-the-canal hearing aids and the other of 10 users of behind-the-ear hearing aids. All were submitted to a sound-source localization test in which three types of square waves, with fundamental frequencies at 0.5 kHz, 2 kHz and 4.5 kHz, were presented randomly at an intensity of 70 dBA. RESULTS: mean correct-localization rates were 78.4%, 72.2% and 72.9% for the normal-hearing listeners at 0.5 kHz, 2 kHz and 4.5 kHz respectively, against 40.1%, 39.4% and 41.7% for the hearing aid users. As for hearing aid type, users of the in-the-canal model identified the origin of the sound source in 47.2% of trials, and users of the behind-the-ear model in 37.4%. No correlation was observed between the percentage of correct localizations and duration of hearing aid use. CONCLUSION: normal-hearing listeners localize sound sources more efficiently than hearing aid users, and among the latter, users of the in-the-canal model performed better. Moreover, duration of use did not affect performance in localizing the origin of the sound sources.

  16. Late radiation-induced heart disease after radiotherapy. Clinical importance, radiobiological mechanisms and strategies of prevention

    International Nuclear Information System (INIS)

    Andratschke, Nicolaus; Maurer, Jean; Molls, Michael; Trott, Klaus-Ruediger

    2011-01-01

    The clinical importance of radiation-induced heart disease, in particular in post-operative radiotherapy of breast cancer patients, has been recognised only recently. There is general agreement, that a co-ordinated research effort would be needed to explore all the potential strategies of how to reduce the late risk of radiation-induced heart disease in radiotherapy. This approach would be based, on one hand, on a comprehensive understanding of the radiobiological mechanisms of radiation-induced heart disease after radiotherapy which would require large-scale long-term animal experiments with high precision local heart irradiation. On the other hand - in close co-operation with mechanistic in vivo research studies - clinical studies in patients need to determine the influence of dose distribution in the heart on the risk of radiation-induced heart disease. The aim of these clinical studies would be to identify the critical structures within the organ which need to be spared and their radiation sensitivity as well as a potential volume and dose effect. The results of the mechanistic studies might also provide concepts of how to modify the gradual progression of radiation damage in the heart by drugs or biological molecules. The results of the studies in patients would need to also incorporate detailed dosimetric and imaging studies in order to develop early indicators of impending radiation-induced heart disease which would be a pre-condition to develop sound criteria for treatment plan optimisation.

  17. L-type calcium channels refine the neural population code of sound level

    Science.gov (United States)

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  18. Binaural Processing of Multiple Sound Sources

    Science.gov (United States)

    2016-08-18

    Final performance report (AFRL-AFOSR-VA-TR-2016-0298) by William Yost, Arizona State University, Tempe, AZ, covering 15 Jul 2012 to 14 Jul 2016. Subject terms: binaural hearing, sound localization, interaural signal.

  19. Outcome of patients undergoing open heart surgery at the Uganda heart institute, Mulago hospital complex.

    Science.gov (United States)

    Aliku, Twalib O; Lubega, Sulaiman; Lwabi, Peter; Oketcho, Michael; Omagino, John O; Mwambu, Tom

    2014-12-01

    Heart disease is a disabling condition and the necessary surgical intervention is often lacking in many developing countries. Training in the superspecialties abroad is largely limited to observation, with little or no opportunity for hands-on experience. An approach in which open heart surgeries are conducted locally by visiting teams, enabling skills transfer to the local team and helping to build capacity, has been adopted at the Uganda Heart Institute (UHI). We reviewed the progress of open heart surgery at the UHI and evaluated the postoperative outcomes and challenges faced in conducting open heart surgery in a developing country. Medical records of patients undergoing open heart surgery at the UHI from October 2007 to June 2012 were reviewed. A total of 124 patients underwent open heart surgery during the study period. The commonest conditions were: ventricular septal defects (VSDs), 34.7% (43/124); atrial septal defects (ASDs), 34.7% (43/124); and tetralogy of Fallot (TOF), 10.5% (13/124). Non-governmental organizations (NGOs) funded 96.8% (120/124) of the operations, and in only 4 patients (3.2%) did families pay for the surgeries. There was increasing complexity in the cases operated upon, from predominantly ASDs and VSDs at the beginning to more complex cases like TOFs and TAPVR. The local team independently operated on 19 patients (15.3%). Postoperative morbidity was low, with arrhythmias, left ventricular dysfunction and re-operations being the commonest complications seen. Postoperative sepsis occurred in only 2 cases (1.6%). The overall mortality rate was 3.2%. Open heart surgery, though expensive, is feasible in a developing country. With increased direct funding from governments and local charities to support open heart surgeries, more cardiac patients could access surgical treatment locally.

  20. Heart rate, salivary α-amylase activity, and cooperative behavior in previously naïve children receiving dental local anesthesia.

    Science.gov (United States)

    Arhakis, Aristidis; Menexes, George; Coolidge, Trilby; Kalfas, Sotirios

    2012-01-01

    Psychosomatic indicators, such as heart rate (HR), salivary alpha amylase (sAA) activity, and behavior, can be used to determine stress. This study's aim was to assess the pattern of changes of salivary alpha amylase, heart rate, and cooperative behavior in previously naïve children receiving dental treatment under local anesthesia. Included were 30 children with no prior dental experience who needed 4 or more sessions of dental treatment involving local anesthesia. In each session, sAA, HR, and behavior were assessed before and during the application of local anesthesia and at the end of the treatment. The highest sAA value was always observed at the end of each session; overall, the value was lower in the fourth session. HR always increased during the local anesthesia, and did not vary across sessions. No significant relationship was found between child cooperation and either sAA or HR. In this sample, child cooperation may not be an accurate indicator of stress. Based on salivary alpha amylase activity changes, dental treatment involving local anesthesia in naïve children appeared to be less stressful after 3 sessions.

  1. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Through several audio and audio-video tests we have compared Foley and real sounds originating from an identical action. The main purpose was to evaluate whether sound effects...

  2. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  3. Subcellular localization of the delayed rectifier K(+) channels KCNQ1 and ERG1 in the rat heart

    DEFF Research Database (Denmark)

    Rasmussen, Hanne Borger; Møller, Morten; Knaus, Hans-Günther

    2003-01-01

    In the heart, several K(+) channels are responsible for the repolarization of the cardiac action potential, including transient outward and delayed rectifier K(+) currents. In the present study, the cellular and subcellular localization of the two delayed rectifier K(+) channels, KCNQ1 and ether...

  4. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    Science.gov (United States)

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method and a blind source separation method is proposed. Because a diesel engine has many complex noise sources, a lead covering method was applied to the engine during the noise and vibration test to isolate interference noise from cylinders No. 1-5; only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were used to measure the radiated noise signals 1 m away from the diesel engine. First, the binaural sound localization method was adopted to separate the noise sources located at different positions. Then, for noise sources at the same position, a blind source separation method was used to separate and identify them further. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine were combined to verify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine, whose energy is concentrated at 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification performance, and the separation results contain fewer interference components from other noise sources.
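    The binaural localization step rests on estimating the interaural (inter-microphone) time difference between the two channels. A minimal sketch using plain cross-correlation follows; practical localizers often use a weighted variant such as GCC-PHAT, and the tone frequency and delay below are hypothetical (1988 Hz chosen only to echo the piston-slap band reported above):

```python
import numpy as np

def itd_samples(left, right):
    """Interaural delay estimate (in samples): the lag that maximizes
    the cross-correlation of the two ear signals. Plain correlation
    is used here; robust systems typically apply GCC-PHAT weighting."""
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

# Hypothetical setup: a 1988 Hz tone reaching the left microphone
# 7 samples (~0.15 ms at 48 kHz) later than the right one.
fs = 48000
t = np.arange(1024) / fs
src = np.sin(2 * np.pi * 1988 * t)
delay = 7
left = np.pad(src, (delay, 0))[: len(src)]   # delayed copy
right = src
est = itd_samples(left, right)   # positive lag: source nearer right ear
```

Mapping the estimated delay to an azimuth angle then requires the microphone spacing and the speed of sound.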

  5. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  6. A novel protein involved in heart development in Ambystoma mexicanum is localized in endoplasmic reticulum.

    Science.gov (United States)

    Jia, P; Zhang, C; Huang, X P; Poda, M; Akbas, F; Lemanski, S L; Erginel-Unaltuna, N; Lemanski, L F

    2008-11-01

The discovery of the naturally occurring cardiac non-function (c) animal strain in Ambystoma mexicanum (axolotl) provides a valuable animal model to study cardiomyocyte differentiation. In homozygous mutant animals (c/c), rhythmic contractions of the embryonic heart are absent due to a lack of organized myofibrils. We have previously cloned a partial sequence of a peptide cDNA (N1) from an anterior-endoderm-conditioned-medium RNA library that had been shown to be able to rescue the mutant phenotype. In the current studies we have cloned the full-length N1 cDNA sequence from the library. N1 protein has been detected in both adult heart and skeletal muscle but not in any other adult tissues. GFP-tagged expression has revealed localization of the N1 protein in the endoplasmic reticulum (ER). Results from in situ hybridization experiments have confirmed the dramatic decrease of expression of N1 mRNA in mutant (c/c) embryos, indicating that the N1 gene is involved in heart development.

  7. A stethoscope with wavelet separation of cardiac and respiratory sounds for real time telemedicine implemented on field-programmable gate array

    Science.gov (United States)

    Castro, Víctor M.; Muñoz, Nestor A.; Salazar, Antonio J.

    2015-01-01

Auscultation is one of the most utilized physical examination procedures for listening to lung, heart and intestinal sounds during routine consults and emergencies. Heart and lung sounds overlap in the thorax. An algorithm was used to separate them based on the discrete wavelet transform with multi-resolution analysis, which decomposes the signal into approximations and details. The algorithm was implemented in software and in hardware to achieve real-time signal separation. The heart signal was found in detail eight and the lung signal in approximation six. The hardware separated the signals with a delay of 256 ms. Sending wavelet decomposition data - instead of the separated full signal - allows telemedicine applications to function in real time over low-bandwidth communication channels.
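The approximation/detail decomposition described above can be sketched with a minimal, numpy-only discrete wavelet transform. This is not the authors' FPGA implementation: it uses the Haar wavelet (the paper does not state its mother wavelet) purely to show how each level splits the band in two, and that keeping all coefficients allows perfect reconstruction.

```python
import numpy as np

def haar_step(x):
    """One DWT analysis step: approximation (lowpass) and detail (highpass)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_inverse(a, d):
    """Invert one analysis step exactly (orthogonal transform)."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def decompose(x, levels):
    """Multi-resolution analysis: `levels` detail bands plus one approximation."""
    details, approx = [], x
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

def reconstruct(approx, details):
    for d in reversed(details):
        approx = haar_inverse(approx, d)
    return approx

# toy mixture: a low-frequency "heart-like" tone plus a higher "lung-like" tone
fs = 4096
t = np.arange(fs) / fs
mix = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 600 * t)
approx, details = decompose(mix, levels=8)
# detail level k covers roughly fs/2**(k+1) .. fs/2**k Hz, so here the 600 Hz
# component lands near detail level 2 and the 40 Hz component near level 6
```

Zeroing all bands except the ones of interest before reconstruction is the separation step; transmitting only the retained coefficients is what reduces the bandwidth requirement.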

  8. Phono-spectrographic analysis of heart murmur in children

    Directory of Open Access Journals (Sweden)

    Angerla Anna

    2007-06-01

Full Text Available Abstract Background More than 90% of heart murmurs in children are innocent. Frequently the skills of the first examiner are not adequate to differentiate between innocent and pathological murmurs. Our goal was to evaluate the value of a simple and low-cost phonocardiographic recording and analysis system in determining the characteristic features of heart murmurs in children and in distinguishing innocent systolic murmurs from pathological ones. Methods The system, consisting of an electronic stethoscope and a multimedia laptop computer, was used for the recording, monitoring and analysis of auscultation findings. The recorded sounds were examined graphically and numerically using combined phono-spectrograms. The data consisted of heart sound recordings from 807 pediatric patients, including 88 normal cases without any murmur, 447 innocent murmurs and 272 pathological murmurs. The phono-spectrographic features of heart murmurs were examined visually and numerically. From this database, 50 innocent vibratory murmurs, 25 innocent ejection murmurs and 50 easily confusable, mildly pathological systolic murmurs were selected to test whether quantitative phono-spectrographic analysis could be used as an accurate screening tool for systolic heart murmurs in children. Results The phono-spectrograms of the most common innocent and pathological murmurs were presented as examples of the whole data set. Typically, innocent murmurs had lower frequencies (below 200 Hz) and a frequency spectrum with a more harmonic structure than pathological cases. Quantitative analysis revealed no significant differences in the duration of S1 and S2 or loudness of systolic murmurs between the pathological and physiological systolic murmurs. However, the pathological murmurs included both lower and higher frequencies than the physiological ones. Conclusion Phono-spectrographic analysis improves the accuracy of primary heart murmur evaluation and educates the inexperienced listener.

  9. International perception of lung sounds: a comparison of classification across some European borders

    OpenAIRE

    Aviles Solis, Juan Carlos; Vanbelle, Sophie; Halvorsen, Peder Andreas; Francis, Nick; Cals, Jochem W L; Andreeva, Elena A; Marques, Alda; Piirila, Paivi; Pasterkamp, Hans; Melbye, Hasse

    2017-01-01

    Source at http://dx.doi.org/10.1136/bmjresp-2017-000250 Introduction: Lung auscultation is helpful in the diagnosis of lung and heart diseases; however, the diagnostic value of lung sounds may be questioned due to interobserver variation. This situation may also impair clinical research in this area to generate evidence-based knowledge about the role that chest auscultation has in a modern clinical setting. The recording and visual display of lung sounds is a method that is both repeatab...

  10. Music therapy, emotions and the heart: a pilot study.

    Science.gov (United States)

    Raglio, Alfredo; Oasi, Osmano; Gianotti, Marta; Bellandi, Daniele; Manzoni, Veronica; Goulene, Karine; Imbriani, Chiara; Badiale, Marco Stramba

    2012-01-01

The autonomic nervous system plays an important role in the control of cardiac function. It has been suggested that sound and music may have effects on the autonomic control of the heart by inducing emotions, concomitantly with the activation of specific brain areas, i.e. the limbic area, and they may exert potential beneficial effects. This study is a prerequisite and defines a methodology to assess the relation between changes in cardiac physiological parameters such as heart rate, QT interval and their variability and the psychological responses to music therapy sessions. We assessed the cardiac physiological parameters and psychological responses to a music therapy session. ECG Holter recordings were performed before, during and after a music therapy session in 8 healthy individuals. The different behaviors of the music therapist and of the subjects were analyzed with a specific music therapy assessment (Music Therapy Checklist). After the session, mean heart rate decreased (p = 0.05), high-frequency heart rate variability tended to be higher and QTc variability tended to be lower. During the music therapy session, "affect attunements" were found in all subjects but one. A significant emotional activation was associated with higher dynamicity and variations of sound-music interactions. Our results may represent the rational basis for larger studies in different clinical conditions.

  11. Physiological and psychological assessment of sound

    Science.gov (United States)

    Yanagihashi, R.; Ohira, Masayoshi; Kimura, Teiji; Fujiwara, Takayuki

The psycho-physiological effects of several sound stimulations were investigated to evaluate the relationship between a psychological parameter, such as subjective perception, and a physiological parameter, such as the heart rate variability (HRV). Eight female students aged 21-22 years were tested. An electrocardiogram (ECG) and the movement of the chest wall, for estimating respiratory rate, were recorded during three different sound stimulations: (1) music provided by a synthesizer (condition A); (2) bird twitters (condition B); and (3) mechanical sounds (condition C). The percentage power of the low-frequency (LF; 0.05-0.15 Hz) and high-frequency (HF; 0.15-0.40 Hz) components of the HRV (LF%, HF%) was assessed by frequency analysis of 5-min time series of R-R intervals obtained from the ECG. Quantitative assessment of subjective perception was also obtained with a visual analog scale (VAS). The HF% and the VAS value for comfort in C were significantly lower than in A and/or B. The respiratory rate and the VAS value for awakening in C were significantly higher than in A and/or B. There was a significant correlation between the HF% and the VAS value, and between the respiratory rate and the VAS value. These results indicate that mechanical sounds similar to C inhibit the parasympathetic nervous system and promote a feeling that is unpleasant but alert, and also suggest that the HRV reflects subjective perception.
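The LF%/HF% computation used above can be sketched as follows. This is a generic numpy implementation of R-R interval spectral analysis, not the authors' exact pipeline; the resampling rate and the synthetic tachogram are illustrative assumptions.

```python
import numpy as np

def lf_hf_percent(rr_ms, fs=4.0):
    """LF% and HF% from a beat-to-beat series of R-R intervals (in ms)."""
    beat_times = np.cumsum(rr_ms) / 1000.0            # beat instants in seconds
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    tach = np.interp(grid, beat_times, rr_ms)         # evenly resampled tachogram
    tach = tach - tach.mean()
    power = np.abs(np.fft.rfft(tach)) ** 2
    freqs = np.fft.rfftfreq(tach.size, 1.0 / fs)
    lf = power[(freqs >= 0.05) & (freqs < 0.15)].sum()
    hf = power[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return 100 * lf / (lf + hf), 100 * hf / (lf + hf)

# synthetic tachogram: mean R-R of 800 ms with a 0.1 Hz (LF) and a 0.25 Hz
# (HF, respiratory) modulation; beat k occurs at roughly 0.8 * k seconds
k = np.arange(400)
rr = 800 + 30 * np.sin(2 * np.pi * 0.10 * 0.8 * k) \
         + 20 * np.sin(2 * np.pi * 0.25 * 0.8 * k)
lf_pct, hf_pct = lf_hf_percent(rr)
# the 30:20 amplitude ratio puts more power in the LF band than the HF band
```

The uniform resampling step matters because R-R intervals are inherently unevenly sampled; applying an FFT to the raw beat series would distort the band powers.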

  12. Universal design of a microcontroller and IoT system to detect the heart rate

    Science.gov (United States)

    Uwamahoro, Raphael; Mushikiwabeza, Alexie; Minani, Gerard; Mohan Murari, Bhaskar

    2017-11-01

Heart rate analysis provides vital information about the present condition of the human body and helps medical professionals diagnose various malfunctions of the body. The limited ability of vision-impaired and blind people to access medical devices causes a considerable loss of life. In this paper, we intended to develop a heart rate detection system that is usable by people with normal and abnormal vision. The system is based on a non-invasive method of measuring the variation of the tissue blood flow rate at the fingertip by means of a photo transmitter and detector, known as photoplethysmography (PPG). The detected signal is first passed through an active low-pass filter and then amplified by a two-stage high-gain amplifier. The amplified signal is fed into the microcontroller, which calculates the heart rate and presents the heart beat via sound and a Liquid Crystal Display (LCD). To distinguish arrhythmia, normal heart rate and abnormal working conditions of the system, recognition is provided through different sounds, LCD readings and Light Emitting Diodes (LEDs).
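The heart-rate calculation stage can be sketched with simple threshold-based peak picking on a clean PPG trace. The real firmware, analog filter, and waveform differ; the pulse model and parameters below are synthetic illustrations only.

```python
import numpy as np

def heart_rate_bpm(ppg, fs, refractory_s=0.4):
    """Heart rate from a PPG trace via simple threshold peak-picking."""
    ppg = ppg - ppg.mean()
    thresh = 0.5 * ppg.max()
    peaks, last = [], -refractory_s * fs
    for i in range(1, ppg.size - 1):
        is_peak = ppg[i] > thresh and ppg[i] >= ppg[i - 1] and ppg[i] > ppg[i + 1]
        if is_peak and i - last >= refractory_s * fs:   # refractory period avoids
            peaks.append(i)                              # double-counting one beat
            last = i
    return 60.0 * fs / np.diff(peaks).mean()

# synthetic pulse train at 1.2 Hz, i.e. 72 beats per minute
fs = 200
t = np.arange(10 * fs) / fs
ppg = ((1 + np.sin(2 * np.pi * 1.2 * t)) / 2) ** 20
bpm = heart_rate_bpm(ppg, fs)
```

On a microcontroller the same logic runs sample-by-sample with a fixed-point threshold; averaging the inter-beat intervals rather than counting peaks in a window gives a smoother reading.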

  13. Cardiac Auscultation for Noncardiologists: Application in Cardiac Rehabilitation Programs: PART I: PATIENTS AFTER ACUTE CORONARY SYNDROMES AND HEART FAILURE.

    Science.gov (United States)

    Compostella, Leonida; Compostella, Caterina; Russo, Nicola; Setzu, Tiziana; Iliceto, Sabino; Bellotto, Fabio

    2017-09-01

During outpatient cardiac rehabilitation after an acute coronary syndrome or after an episode of congestive heart failure, a careful, periodic evaluation of patients' clinical and hemodynamic status is essential. Simple and traditional cardiac auscultation could play a role in providing useful prognostic information. Reduced intensity of the first heart sound (S1), especially when associated with prolonged apical impulse and the appearance of added sounds, may help identify left ventricular (LV) dysfunction or conduction disturbances, sometimes associated with transient myocardial ischemia. If both S1 and the second heart sound (S2) are reduced in intensity, a pericardial effusion may be suspected, whereas an increased intensity of S2 may indicate increased pulmonary artery pressure. The persistence of a protodiastolic sound (S3) after an acute coronary syndrome is an indicator of severe LV dysfunction and a poor prognosis. In patients with congestive heart failure, the association of an S3 and elevated heart rate may indicate impending decompensation. A presystolic sound (S4) is often associated with S3 in patients with LV failure, although it could also be present in hypertensive patients and in patients with an LV aneurysm. Careful evaluation of apical systolic murmurs could help identify possible LV dysfunction or mitral valve pathology, and differentiate them from a ruptured papillary muscle or ventricular septal rupture. Friction rubs after an acute myocardial infarction, due to reactive pericarditis or Dressler syndrome, are often associated with a complicated clinical course. During cardiac rehabilitation, periodic cardiac auscultation may provide useful information about the clinical-hemodynamic status of patients and allow timely detection of signs heralding possible complications, in an efficient and low-cost manner.

  14. Spatial aspects of sound quality - subjective assessment of sound reproduced by stereo and multichannel systems

    DEFF Research Database (Denmark)

    Choisel, Sylvain

    the fidelity with which sound reproduction systems can re-create the desired stereo image, a laser pointing technique was developed to accurately collect subjects' responses in a localization task. This method is subsequently applied in an investigation of the effects of loudspeaker directivity...... on the perceived direction of panned sources. The second part of the thesis addresses the identification of auditory attributes which play a role in the perception of sound reproduced by multichannel systems. Short musical excerpts were presented in mono, stereo and several multichannel formats to evoke various...

  15. Improving Robustness against Environmental Sounds for Directing Attention of Social Robots

    DEFF Research Database (Denmark)

    Thomsen, Nicolai Bæk; Tan, Zheng-Hua; Lindberg, Børge

    2015-01-01

This paper presents a multi-modal system for finding out where to direct the attention of a social robot in a dialog scenario, which is robust against environmental sounds (door slamming, phone ringing etc.) and short speech segments. The method is based on combining voice activity detection (VAD) and sound source localization (SSL), and furthermore applies post-processing to SSL to filter out short sounds. The system is tested against a baseline system in four different real-world experiments, where different sounds are used as interfering sounds. The results are promising and show a clear improvement.

  16. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    Science.gov (United States)

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  17. International perception of lung sounds : a comparison of classification across some European borders

    NARCIS (Netherlands)

    Aviles-Solis, Juan Carlos; Vanbelle, Sophie; Halvorsen, Peder A; Francis, Nick; Cals, Jochen W L; Andreeva, Elena A; Marques, Alda; Piirilä, Päivi; Pasterkamp, Hans; Melbye, Hasse

    2017-01-01

    Introduction: Lung auscultation is helpful in the diagnosis of lung and heart diseases; however, the diagnostic value of lung sounds may be questioned due to interobserver variation. This situation may also impair clinical research in this area to generate evidence-based knowledge about the role

  18. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! Series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  19. Locating and classification of structure-borne sound occurrence using wavelet transformation

    International Nuclear Information System (INIS)

    Winterstein, Martin; Thurnreiter, Martina

    2011-01-01

For the surveillance of nuclear facilities with respect to detached or loose parts within the pressure boundary, structure-borne sound detector systems are used. The impact of a loose part on the wall causes an energy transfer to the wall that is measured as a so-called singular sound event. The run-time differences of the sound signals allow a rough localization of the loose part. The authors performed a finite-element-based simulation of structure-borne sound measurements using real geometries. New knowledge on sound wave propagation, signal analysis and processing, neural networks, and hidden Markov models was considered. Using the wavelet transformation it is possible to improve the localization of structure-borne sound events.
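The run-time-difference idea can be sketched independently of the wavelet stage. Under the simplifying (illustrative) assumption that an impact transient reaches two sensors as a pure delayed copy, the cross-correlation peak recovers the time difference of arrival, which for a known wave speed constrains the impact location to a hyperbola between the sensors.

```python
import numpy as np

def tdoa(sig_a, sig_b, fs):
    """Time by which sig_a lags sig_b, from the cross-correlation peak."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (sig_b.size - 1)
    return lag / fs

fs = 50_000
n = 4096
rng = np.random.default_rng(0)
burst = rng.standard_normal(256) * np.hanning(256)   # impact-like transient
delay = 40                                            # samples, i.e. 0.8 ms
sig_a = np.zeros(n)
sig_b = np.zeros(n)
sig_a[500:756] = burst
sig_b[500 + delay:756 + delay] = burst                # sensor B hears it later
dt = tdoa(sig_b, sig_a, fs)
# with structure-borne wave speed c, the source lies where the path-length
# difference to the two sensors equals c * dt
```

In practice the signals are noisy and dispersive, which is exactly where a wavelet pre-decomposition helps: correlating individual frequency bands gives sharper, less ambiguous peaks than correlating the raw broadband signals.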

  20. Objective function analysis for electric soundings (VES), transient electromagnetic soundings (TEM) and joint inversion VES/TEM

    Science.gov (United States)

    Bortolozo, Cassiano Antonio; Bokhonok, Oleg; Porsani, Jorge Luís; Monteiro dos Santos, Fernando Acácio; Diogo, Liliana Alcazar; Slob, Evert

    2017-11-01

Ambiguities in geophysical inversion results are always present. How these ambiguities appear is in most cases open to interpretation. It is interesting to investigate ambiguities with regard to the parameters of the models under study. The Residual Function Dispersion Map (RFDM) can be used to differentiate between global ambiguities and local minima in the objective function. We apply RFDM to Vertical Electrical Sounding (VES) and TEM sounding inversion results. Through topographic analysis of the objective function we evaluate the advantages and limitations of electrical sounding data compared with TEM sounding data, and the benefits of joint inversion in comparison with the individual methods. The RFDM analysis proved to be a very interesting tool for understanding the joint VES/TEM inversion method. The applicability of RFDM analysis to real data is also explored, to demonstrate not only how the objective function of real data behaves but also how the approach performs in real cases. With the analysis of the results, it is possible to understand how the joint inversion can reduce the ambiguity of the methods.

  1. Otite média recorrente e habilidade de localização sonora em pré-escolares Otitis media and sound localization ability in preschool children

    Directory of Open Access Journals (Sweden)

    Aveliny Mantovan Lima-Gregio

    2010-12-01

Full Text Available PURPOSE: to compare the performance of 40 preschool children on a sound localization test with their parents' answers to a questionnaire investigating the occurrence of otitis media (OM) episodes and symptoms indicative of audiological and auditory processing disorders. METHODS: after applying and analyzing the questionnaire answers, two groups were formed: OG, with a history of OM, and CG, a control group without this history. Each group, with 20 preschool children of both genders, was submitted to the sound localization test in five directions (Pereira, 1993). RESULTS: the comparison between OG and CG showed no statistically significant difference (p=1.0000). CONCLUSION: recurrent otitis media during early childhood did not influence the sound localization ability of the preschool children in this study. Although both instruments (the questionnaire and the localization test) are inexpensive and easy to apply, they were not sufficient to differentiate the two tested groups.

  2. Audibility of individual reflections in a complete sound field, III

    DEFF Research Database (Denmark)

    Bech, Søren

    1996-01-01

    This paper reports on the influence of individual reflections on the auditory localization of a loudspeaker in a small room. The sound field produced by a single loudspeaker positioned in a normal listening room has been simulated using an electroacoustic setup. The setup models the direct sound......-independent absorption coefficients of the room surfaces, and (2) a loudspeaker with directivity according to a standard two-way system and absorption coefficients according to real materials. The results have shown that subjects can distinguish reliably between timbre and localization, that the spectrum level above 2 k...

  3. Sound Exposure During Outdoor Music Festivals

    Science.gov (United States)

    Tronstad, Tron V.; Gelderblom, Femke B.

    2016-01-01

Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals, of which only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure. PMID:27569410
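The dose-meter comparison rests on energy-equivalent averaging. A short sketch of the standard Leq computation (the generic formula, not the study's instrumentation): the equivalent continuous level is the level of a steady sound carrying the same acoustic energy as the measured, time-varying one.

```python
import numpy as np

def leq(levels_db):
    """Energy-equivalent continuous level of equal-duration SPL readings."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10.0)))

# one hour at 100 dB followed by one hour at 80 dB: the quieter hour adds
# almost nothing, because the 100 dB hour carries 100x the energy
combined = leq([100.0, 80.0])
```

This is why short loud passages dominate a festival-goer's dose: on the decibel scale, averaging happens in energy, not in level.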

  4. Sound exposure during outdoor music festivals

    Directory of Open Access Journals (Sweden)

    Tron V Tronstad

    2016-01-01

Full Text Available Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals, of which only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure.

  5. High frequency source localization in a shallow ocean sound channel using frequency difference matched field processing.

    Science.gov (United States)

    Worthmann, Brian M; Song, H C; Dowling, David R

    2015-12-01

    Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
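The core trick above, the frequency-difference autoproduct, can be demonstrated in isolation. The toy numpy sketch below is not the Bartlett MFP of the paper, and all parameters (sampling rate, delay, tone grid) are illustrative: multiplying the spectrum at f + Δf by the conjugate spectrum at f yields, for a single propagation path, a quantity whose phase is that of a field at the much lower difference frequency Δf, which is what makes the subsequent processing more mismatch-tolerant.

```python
import numpy as np

fs, n = 102_400, 4096                 # 25 Hz bin spacing; tones land on exact bins
t = np.arange(n) / fs
tau = 0.6e-3                          # single-path propagation delay (toy value)
tones = np.arange(11_200.0, 32_800.0, 500.0)
sig = sum(np.cos(2 * np.pi * f * (t - tau)) for f in tones)

spec = np.fft.rfft(sig)
freqs = np.fft.rfftfreq(n, 1.0 / fs)
df = 500.0                            # difference frequency, far below the band
shift = int(round(df * n / fs))       # Δf expressed in FFT bins (= 20 here)
band = (freqs >= tones[0]) & (freqs <= tones[-1] - df)
# bandwidth-averaged autoproduct: every pair P(f+Δf)P*(f) shares the phase
# exp(-i 2*pi*Δf*tau), so the average retains it
ap = np.mean(spec[np.nonzero(band)[0] + shift] * np.conj(spec[band]))
tau_est = -np.angle(ap) / (2 * np.pi * df)   # recovers tau from a 500 Hz phase
```

The per-tone phases 2π f τ at 11-33 kHz wrap many times and are hypersensitive to model error; the autoproduct phase 2π Δf τ at 500 Hz wraps slowly, which is the robustness the paper exploits.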

  6. Immunohistochemical localization of cardio-active neuropeptides in the heart of a living fossil, Nautilus pompilius L. (Cephalopoda, Tetrabranchiata).

    Science.gov (United States)

    Springer, J; Ruth, P; Beuerlein, K; Westermann, B; Schipp, R

    2004-01-01

Neuropeptides play an important role in modulating the effects of neurotransmitters such as acetylcholine and noradrenaline in the heart and the vascular system of vertebrates and invertebrates. Various neuropeptides, including substance P (SP), vasoactive intestinal polypeptide (VIP) and FMRFamide, have been localized in the brain of cephalopods and in the neurosecretory system of the vena cava. Previous studies involving cephalopods have mainly focussed on the modern, coleoid cephalopods, whereas little attention was paid to the living fossil Nautilus. In this study, the distributions of peptides related to tachykinins (TKs), the high-affinity receptor for the best-characterized TK substance P (tachykinin NK-1), VIP, and FMRFamide were investigated in the heart of Nautilus pompilius L. by immunohistochemistry. TK-like immunoreactivity (TK-LI) was associated with a sub-population of hemocytes, and VIP-LI with glial cells in larger nerves entering the heart, whereas FMRFamide immunoreactivity was distributed throughout the entire heart, including the semilunar atrioventricular valves. The pattern of FMRFamide immunoreactivity matched that of Bodian silver staining for nervous tissue. The NK-1-LI receptor was located on endothelial cells, which were also positive for endothelial nitric oxide synthase-LI (eNOS). The results indicate that neuropeptides may be involved in the regulation of the Nautilus heart via different mechanisms: (1) by direct interaction with myocardial receptors (FMRFamide), (2) by interacting with the nervus cardiacus (VIP-related peptides) and (3) indirectly by stimulating eNOS in the endothelium throughout the heart (TK-related peptides).

  7. Three-year experience with the Sophono in children with congenital conductive unilateral hearing loss: tolerability, audiometry, and sound localization compared to a bone-anchored hearing aid.

    Science.gov (United States)

    Nelissen, Rik C; Agterberg, Martijn J H; Hol, Myrthe K S; Snik, Ad F M

    2016-10-01

Bone conduction devices (BCDs) are advocated as an amplification option for patients with congenital conductive unilateral hearing loss (UHL), while other treatment options could also be considered. The current study compared a transcutaneous BCD (Sophono) with a percutaneous BCD (bone-anchored hearing aid, BAHA) in 12 children with congenital conductive UHL. Tolerability, audiometry, and sound localization abilities with both types of BCD were studied retrospectively. The mean follow-up was 3.6 years for the Sophono users (n = 6) and 4.7 years for the BAHA users (n = 6). In each group, two patients had stopped using their BCD. Tolerability was favorable for the Sophono. Aided thresholds with the Sophono were unsatisfactory, as they did not fall below a mean pure-tone average of 30 dB HL. Sound localization generally improved with both the Sophono and the BAHA, although localization abilities did not reach the level of normal-hearing children. These findings, together with previously reported outcomes, are important to take into account when counseling patients and their caretakers. The selection of a suitable amplification option should always be made deliberately and on an individual basis for each patient in this diverse group of children with congenital conductive UHL.

  8. Local heart irradiation of ApoE−/− mice induces microvascular and endocardial damage and accelerates coronary atherosclerosis

    International Nuclear Information System (INIS)

    Gabriels, Karen; Hoving, Saske; Seemann, Ingar; Visser, Nils L.; Gijbels, Marion J.; Pol, Jeffrey F.; Daemen, Mat J.; Stewart, Fiona A.; Heeneman, Sylvia

    2012-01-01

    Background and purpose: Radiotherapy of thoracic and chest-wall tumors increases the long-term risk of radiation-induced heart disease, like a myocardial infarct. Cancer patients commonly have additional risk factors for cardiovascular disease, such as hypercholesterolemia. The goal of this study is to define the interaction of irradiation with such cardiovascular risk factors in radiation-induced damage to the heart and coronary arteries. Material and methods: Hypercholesterolemic and atherosclerosis-prone ApoE −/− mice received local heart irradiation with a single dose of 0, 2, 8 or 16 Gy. Histopathological changes, microvascular damage and functional alterations were assessed after 20 and 40 weeks. Results: Inflammatory cells were significantly increased in the left ventricular myocardium at 20 and 40 weeks after 8 and 16 Gy. Microvascular density decreased at both follow-up time-points after 8 and 16 Gy. Remaining vessels had decreased alkaline phosphatase activity (2–16 Gy) and increased von Willebrand Factor expression (16 Gy), indicative of endothelial cell damage. The endocardium was extensively damaged after 16 Gy, with foam cell accumulations at 20 weeks, and fibrosis and protein leakage at 40 weeks. Despite an accelerated coronary atherosclerotic lesion development at 20 weeks after 16 Gy, gated SPECT and ultrasound measurements showed only minor changes in functional cardiac parameters at 20 weeks. Conclusions: The combination of hypercholesterolemia and local cardiac irradiation induced an inflammatory response, microvascular and endocardial damage, and accelerated the development of coronary atherosclerosis. Despite these pronounced effects, cardiac function of ApoE −/− mice was maintained.

  9. Effects of interaural level differences on the externalization of sound

    DEFF Research Database (Denmark)

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2012-01-01

Distant sound sources in our environment are perceived as externalized and are thus properly localized in both direction and distance. This is due to the acoustic filtering by the head, torso, and external ears, which provides frequency-dependent shaping of binaural cues such as interaural level differences (ILDs) and interaural time differences (ITDs). In rooms, the sound reaching the two ears is further modified by reverberant energy, which leads to increased fluctuations in short-term ILDs and ITDs. In the present study, the effect of ILD fluctuations on the externalization of sound was investigated […]. For sounds that contain frequencies above about 1 kHz, the ILD fluctuations were found to be an essential cue for externalization.

  10. Assessment of the health effects of low-frequency sounds and infra-sounds from wind farms. ANSES Opinion. Collective expertise report

    International Nuclear Information System (INIS)

    Lepoutre, Philippe; Avan, Paul; Cheveigne, Alain de; Ecotiere, David; Evrard, Anne-Sophie; Hours, Martine; Lelong, Joel; Moati, Frederique; Michaud, David; Toppila, Esko; Beugnet, Laurent; Bounouh, Alexandre; Feltin, Nicolas; Campo, Pierre; Dore, Jean-Francois; Ducimetiere, Pierre; Douki, Thierry; Flahaut, Emmanuel; Gaffet, Eric; Lafaye, Murielle; Martinsons, Christophe; Mouneyrac, Catherine; Ndagijimana, Fabien; Soyez, Alain; Yardin, Catherine; Cadene, Anthony; Merckel, Olivier; Niaudet, Aurelie; Cadene, Anthony; Saddoki, Sophia; Debuire, Brigitte; Genet, Roger

    2017-03-01

a health effect has not been documented. In this context, ANSES recommends the following. Concerning studies and research:
- verifying whether or not there is a possible mechanism modulating the perception of audible sound at intensities of infra-sound similar to those measured from local residents;
- studying the effects of the amplitude modulation of the acoustic signal on the noise-related disturbance felt;
- studying the assumption that cochlea-vestibular effects may be responsible for pathophysiological effects;
- undertaking a survey of residents living near wind farms enabling the identification of an objective signature of a physiological effect.
Concerning information for local residents and the monitoring of noise levels:
- enhancing information for local residents during the construction of wind farms and participation in public inquiries undertaken in rural areas;
- systematically measuring the noise emissions of wind turbines before and after they are brought into service;
- setting up, especially in the event of controversy, continuous noise measurement systems around wind farms (based on experience at airports, for example).
Lastly, the Agency reiterates that the current regulations state that the distance between a wind turbine and the first home should be evaluated on a case-by-case basis, taking the conditions of wind farms into account. This distance, of at least 500 metres, may be increased further to the results of an impact assessment, in order to comply with the limit values for noise exposure. Current knowledge of the potential health effects of exposure to infra-sounds and low-frequency noise provides no justification for changing the current limit values or for extending the spectrum of noise currently taken into consideration.

  11. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

International audience; We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  12. Heart rate variability in healthy population

    International Nuclear Information System (INIS)

    Alamgir, M.; Hussain, M.M.

    2010-01-01

Background: Heart rate variability has been considered an indicator of autonomic status. Little work has been done on heart rate variability in normal healthy volunteers. We aimed at evolving reference values of heart rate variability in our healthy population. Methods: Twenty-four hour Holter monitoring of 37 healthy individuals was done using the Holter ECG recorder 'Life card CF' from 'Reynolds Medical'. Heart rate variability in both time and frequency domains was analysed with the 'Reynolds Medical Pathfinder Digital/700'. Results: Heart rate variability in normal healthy volunteers of our population was assessed in the time domain using the standard deviation of R-R intervals (SDNN), the standard deviation of average NN intervals (SDANN), and the square root of the mean squared differences of successive NN intervals (RMSSD). Variation in heart rate variability indices was observed between local and foreign volunteers, and RMSSD was found to be significantly increased (p<0.05) in the local population. Conclusions: The values of heart rate variability (RMSSD) in healthy Pakistani volunteers were increased compared with foreign data, reflecting parasympathetic dominance in our population. (author)
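The time-domain indices named above (SDNN, RMSSD) are straightforward to compute from a sequence of R-R intervals. A minimal sketch; the function names and the synthetic R-R series are illustrative, not data from the study:

```python
# Time-domain HRV indices from R-R intervals in milliseconds.
# SDNN: standard deviation of all NN intervals.
# RMSSD: root mean square of successive NN-interval differences.
import math
import statistics

def sdnn(rr_ms):
    return statistics.pstdev(rr_ms)  # population SD; some tools use sample SD

def rmssd(rr_ms):
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812.0, 800.0, 790.0, 805.0, 820.0, 798.0, 810.0]  # synthetic series
print(f"SDNN={sdnn(rr):.1f} ms, RMSSD={rmssd(rr):.1f} ms")
```

RMSSD captures beat-to-beat (short-term, largely parasympathetically mediated) variability, which is why an elevated RMSSD is read as parasympathetic dominance in the abstract.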

  13. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  14. Congenital Heart Disease: Causes, Diagnosis, Symptoms, and Treatments.

    Science.gov (United States)

    Sun, RongRong; Liu, Min; Lu, Lei; Zheng, Yi; Zhang, Peiying

    2015-07-01

Congenital heart disease comprises abnormalities in heart structure that occur before birth; such defects arise in the fetus while it is developing in the uterus during pregnancy. About 500,000 adults have congenital heart disease in the USA (WebMD, Congenital heart defects medications, www.WebMD.com/heart-disease/tc/congenital-heart-defects-medications , 2014). One in every 100 children has heart defects due to genetic or chromosomal abnormalities, such as Down syndrome. Excessive alcohol consumption during pregnancy, use of medications, and maternal viral infection, such as Rubella virus (German measles), in the first trimester of pregnancy are all risk factors for congenital heart disease in children, and the risk increases if a parent or sibling has a congenital heart defect. These defects include heart valve defects, atrial and ventricular septal defects, stenosis, heart muscle abnormalities, and a hole in the wall of the heart, which cause defective blood circulation, heart failure, and eventual death. There are no particular symptoms of congenital heart disease, but shortness of breath, limited ability to exercise, fatigue, and an abnormal heart sound (heart murmur) may be present; the murmur is detected by a physician while listening to the heart beats. Echocardiography or transesophageal echocardiography, electrocardiography, chest X-ray, cardiac catheterization, and MRI are used to detect congenital heart disease. Several medications are given depending on the severity of the disease, and catheter-based methods or surgery are required in serious cases to repair heart valves, or heart transplantation as in endocarditis. For genetic study, DNA is first extracted from blood, followed by DNA sequence analysis to determine any defect in the nucleotide sequence. In congenital heart disease, genes on chromosome 1 show some defects in nucleotide sequence. In this review the causes, diagnosis, symptoms, and treatments of congenital heart disease are described.

  15. SOUND-SPEED INVERSION OF THE SUN USING A NONLOCAL STATISTICAL CONVECTION THEORY

    International Nuclear Information System (INIS)

    Zhang Chunguang; Deng Licai; Xiong Darun; Christensen-Dalsgaard, Jørgen

    2012-01-01

    Helioseismic inversions reveal a major discrepancy in sound speed between the Sun and the standard solar model just below the base of the solar convection zone. We demonstrate that this discrepancy is caused by the inherent shortcomings of the local mixing-length theory adopted in the standard solar model. Using a self-consistent nonlocal convection theory, we construct an envelope model of the Sun for sound-speed inversion. Our solar model has a very smooth transition from the convective envelope to the radiative interior, and the convective energy flux changes sign crossing the boundaries of the convection zone. It shows evident improvement over the standard solar model, with a significant reduction in the discrepancy in sound speed between the Sun and local convection models.

  16. Rolling ball sifting algorithm for the augmented visual inspection of carotid bruit auscultation

    Science.gov (United States)

    Huang, Adam; Lee, Chung-Wei; Liu, Hon-Man

    2016-07-01

Carotid bruits are systolic sounds associated with turbulent blood flow through atherosclerotic stenosis in the neck. They are audible intermittent high-frequency (above 200 Hz) sounds mixed with background noise and transmitted low-frequency (below 100 Hz) heart sounds that wax and wane periodically. It is a nontrivial task to extract both bruits and heart sounds with high fidelity for further computer-aided auscultation and diagnosis. In this paper we propose a rolling ball sifting algorithm that is capable of filtering signals with a sharper frequency-selectivity mechanism in the time domain. Two balls of a suitable radius (one above and one below the signal) are rolled along the waveform; the balls are large enough to roll over bruits and yet small enough to ride on heart sound waveforms. The high-frequency bruits can then be extracted according to a tangibility criterion by using the local extrema touched by the balls. Similarly, the low-frequency heart sounds can be acquired with a larger radius. By visualizing the periodicity information of both the extracted heart sounds and bruits, the proposed visual inspection method can potentially improve carotid bruit diagnosis accuracy.
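The rolling-ball idea can be illustrated with a generic 1-D morphological sketch (not the authors' implementation; the function name and the test signal are hypothetical): a ball rolled beneath the samples follows slow, wide waveforms but cannot enter narrow high-frequency events, so subtracting the traced surface isolates them.

```python
# Trace the surface touched by a ball of integer radius `radius`
# rolled beneath a sampled signal (a 1-D morphological opening).
def rolling_ball_floor(signal, radius):
    n = len(signal)
    # Ball profile: height of a circle of radius `radius` at offsets -r..r.
    ball = [(radius ** 2 - k ** 2) ** 0.5 for k in range(-radius, radius + 1)]
    # Erosion: lowest admissible ball-centre height under each sample.
    centers = [min(signal[i + k] - ball[k + radius]
                   for k in range(-radius, radius + 1) if 0 <= i + k < n)
               for i in range(n)]
    # Dilation: surface swept by the ball's upper edge.
    return [max(centers[i + k] + ball[k + radius]
                for k in range(-radius, radius + 1) if 0 <= i + k < n)
            for i in range(n)]

sig = [0.0] * 21
sig[10] = 5.0                                # a narrow spike the ball cannot enter
floor = rolling_ball_floor(sig, 3)           # stays near zero at the spike
spike = [s - f for s, f in zip(sig, floor)]  # high-frequency residue
```

A larger radius rolls over progressively wider waveforms as well, which is the mechanism the paper exploits to separate bruits from the slower heart sounds.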

  17. March 1964 Prince William Sound, USA Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Prince William Sound magnitude 9.2 Mw earthquake on March 28, 1964 at 03:36 GMT (March 27 at 5:36 pm local time), was the largest U.S. earthquake ever recorded...

  18. Expression and subcellular localization of p70S6 kinase under heart failure

    Directory of Open Access Journals (Sweden)

    Usenko V. S.

    2010-11-01

Full Text Available The PI3K/PDK/Akt/mTOR/p70S6K signaling pathway is primarily associated with the activation of insulin receptors and is important for cardiomyocyte survival. p70S6K is a key regulator of the speed and efficiency of protein biosynthesis within the cell. Recently the pro-apoptotic protein BAD has been identified as a new target of p70S6K1. BAD is inactivated in normal cardiomyocytes by p70S6K1 phosphorylation, which prevents cardiomyocyte apoptosis. Aim. To study possible changes in p70S6K1 expression and/or cellular localization during heart failure progression, in DCM-affected human myocardium and in murine hearts with an experimental DCM-like pathology. Methods. Western blot analysis and immunohistochemistry. Results. A substantial decrease in p70S6K1 level was observed at the final stage of pathology progression, as well as in the dynamics of DCM pathogenesis. For the first time, relocalization of the protein to the connective tissue was shown, consistent with the Western blot results. Conclusions. The data obtained allow us to understand a possible role of p70S6K1 in the regulation of stress-induced apoptotic signaling in cardiomyocytes.

  19. Effects of user training with electronically-modulated sound transmission hearing protectors and the open ear on horizontal localization ability.

    Science.gov (United States)

    Casali, John G; Robinette, Martin B

    2015-02-01

    To determine if training with electronically-modulated hearing protection (EMHP) and the open ear results in auditory learning on a horizontal localization task. Baseline localization testing was conducted in three listening conditions (open-ear, in-the-ear (ITE) EMHP, and over-the-ear (OTE) EMHP). Participants then wore either an ITE or OTE EMHP for 12, almost daily, one-hour training sessions. After training was complete, participants again underwent localization testing in all three listening conditions. A computer with a custom software and hardware interface presented localization sounds and collected participant responses. Twelve participants were recruited from the student population at Virginia Tech. Audiometric requirements were 35 dBHL at 500, 1000, and 2000 Hz bilaterally, and 55 dBHL at 4000 Hz in at least one ear. Pre-training localization performance with an ITE or OTE EMHP was worse than open-ear performance. After training with any given listening condition, including open-ear, performance in that listening condition improved, in part from a practice effect. However, post-training localization performance showed near equal performance between the open-ear and training EMHP. Auditory learning occurred for the training EMHP, but not for the non-training EMHP; that is, there was no significant training crossover effect between the ITE and the OTE devices. It is evident from this study that auditory learning (improved horizontal localization performance) occurred with the EMHP for which training was performed. However, performance improvements found with the training EMHP were not realized in the non-training EMHP. Furthermore, localization performance in the open-ear condition also benefitted from training on the task.

  20. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners with Bilateral and with Hearing-Preservation Cochlear Implants

    Science.gov (United States)

    Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.

    2016-01-01

    Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…

  1. Sound lateralization test in adolescent blind individuals.

    Science.gov (United States)

    Yabe, Takao; Kaga, Kimitaka

    2005-06-21

    Blind individuals require to compensate for the lack of visual information by other sensory inputs. In particular, auditory inputs are crucial to such individuals. To investigate whether blind individuals localize sound in space better than sighted individuals, we tested the auditory ability of adolescent blind individuals using a sound lateralization method. The interaural time difference discrimination thresholds of blind individuals were statistically significantly shorter than those of blind individuals with residual vision and controls. These findings suggest that blind individuals have better auditory spatial ability than individuals with visual cues; therefore, some perceptual compensation occurred in the former.
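Interaural time differences of the kind probed in such lateralization tests can be estimated as the lag that maximizes the cross-correlation of the two ear signals. A minimal sketch with an illustrative function name and a synthetic click pair; nothing here is taken from the study:

```python
# Estimate the interaural time difference (in samples) as the lag of
# `right` relative to `left` that maximizes their cross-correlation.
def estimate_itd(left, right, max_lag):
    n = len(left)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(left[i] * right[i + lag]
                  for i in range(n) if 0 <= i + lag < n)
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

left = [0.0] * 32;  left[10] = 1.0    # click arrives at the left ear first
right = [0.0] * 32; right[13] = 1.0   # ...and three samples later at the right
```

At a 44.1 kHz sampling rate a 3-sample lag corresponds to roughly 68 µs, well within the microsecond-scale ITD range that listeners can discriminate.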

  2. Broadband sound blocking in phononic crystals with rotationally symmetric inclusions.

    Science.gov (United States)

    Lee, Joong Seok; Yoo, Sungmin; Ahn, Young Kwan; Kim, Yoon Young

    2015-09-01

    This paper investigates the feasibility of broadband sound blocking with rotationally symmetric extensible inclusions introduced in phononic crystals. By varying the size of four equally shaped inclusions gradually, the phononic crystal experiences remarkable changes in its band-stop properties, such as shifting/widening of multiple Bragg bandgaps and evolution to resonance gaps. Necessary extensions of the inclusions to block sound effectively can be determined for given incident frequencies by evaluating power transmission characteristics. By arraying finite dissimilar unit cells, the resulting phononic crystal exhibits broadband sound blocking from combinational effects of multiple Bragg scattering and local resonances even with small-numbered cells.

  3. On the relevance of source effects in geomagnetic pulsations for induction soundings

    Science.gov (United States)

    Neska, Anne; Tadeusz Reda, Jan; Leszek Neska, Mariusz; Petrovich Sumaruk, Yuri

    2018-03-01

    This study is an attempt to close a gap between recent research on geomagnetic pulsations and their usage as source signals in electromagnetic induction soundings (i.e., magnetotellurics, geomagnetic depth sounding, and magnetovariational sounding). The plane-wave assumption as a precondition for the proper performance of these methods is partly violated by the local nature of field line resonances which cause a considerable portion of pulsations at mid latitudes. It is demonstrated that and explained why in spite of this, the application of remote reference stations in quasi-global distances for the suppression of local correlated-noise effects in induction arrows is possible in the geomagnetic pulsation range. The important role of upstream waves and of the magnetic equatorial region for such applications is emphasized. Furthermore, the principal difference between application of reference stations for local transfer functions (which result in sounding curves and induction arrows) and for inter-station transfer functions is considered. The preconditions for the latter are much stricter than for the former. Hence a failure to estimate an inter-station transfer function to be interpreted in terms of electromagnetic induction, e.g., because of field line resonances, does not necessarily prohibit use of the station pair for a remote reference estimation of the impedance tensor.

  4. On the relevance of source effects in geomagnetic pulsations for induction soundings

    Directory of Open Access Journals (Sweden)

    A. Neska

    2018-03-01

Full Text Available This study is an attempt to close a gap between recent research on geomagnetic pulsations and their usage as source signals in electromagnetic induction soundings (i.e., magnetotellurics, geomagnetic depth sounding, and magnetovariational sounding). The plane-wave assumption as a precondition for the proper performance of these methods is partly violated by the local nature of field line resonances which cause a considerable portion of pulsations at mid latitudes. It is demonstrated that and explained why in spite of this, the application of remote reference stations in quasi-global distances for the suppression of local correlated-noise effects in induction arrows is possible in the geomagnetic pulsation range. The important role of upstream waves and of the magnetic equatorial region for such applications is emphasized. Furthermore, the principal difference between application of reference stations for local transfer functions (which result in sounding curves and induction arrows) and for inter-station transfer functions is considered. The preconditions for the latter are much stricter than for the former. Hence a failure to estimate an inter-station transfer function to be interpreted in terms of electromagnetic induction, e.g., because of field line resonances, does not necessarily prohibit use of the station pair for a remote reference estimation of the impedance tensor.

  5. Design and Calibration Tests of an Active Sound Intensity Probe

    Directory of Open Access Journals (Sweden)

    Thomas Kletschkowski

    2008-01-01

    Full Text Available The paper presents an active sound intensity probe that can be used for sound source localization in standing wave fields. The probe consists of a sound hard tube that is terminated by a loudspeaker and an integrated pair of microphones. The microphones are used to decompose the standing wave field inside the tube into its incident and reflected part. The latter is cancelled by an adaptive controller that calculates proper driving signals for the loudspeaker. If the open end of the actively controlled tube is placed close to a vibrating surface, the radiated sound intensity can be determined by measuring the cross spectral density between the two microphones. A one-dimensional free field can be realized effectively, as first experiments performed on a simplified test bed have shown. Further tests proved that a prototype of the novel sound intensity probe can be calibrated.
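The intensity measurement principle referred to here is the classic two-microphone (p-p) method: the active intensity at a given frequency follows from the imaginary part of the cross spectrum of the two pressure signals. A single-frequency sketch; the microphone spacing `dr`, air density `rho`, and the synthetic signals are illustrative assumptions, and amplitude normalization is omitted (the sign still indicates the direction of energy flow):

```python
import cmath
import math

def cross_spectrum_at(p1, p2, freq, fs):
    """Unnormalized single-frequency cross spectrum G12 = conj(P1) * P2."""
    w = 2 * math.pi * freq / fs
    P1 = sum(x * cmath.exp(-1j * w * n) for n, x in enumerate(p1))
    P2 = sum(x * cmath.exp(-1j * w * n) for n, x in enumerate(p2))
    return P1.conjugate() * P2

def intensity_sign(p1, p2, freq, fs, rho=1.21, dr=0.012):
    """p-p estimate: I ~ -Im{G12} / (rho * omega * dr); a positive value
    means energy flowing from microphone 1 toward microphone 2."""
    G12 = cross_spectrum_at(p1, p2, freq, fs)
    return -G12.imag / (rho * 2 * math.pi * freq * dr)
```

For a tone reaching microphone 2 with a small phase delay relative to microphone 1 (a wave traveling from 1 to 2), the returned value is positive; swapping the inputs flips the sign.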

  6. Epicardium-Derived Heart Repair

    Directory of Open Access Journals (Sweden)

    Anke M. Smits

    2014-04-01

Full Text Available In the last decade, cell replacement therapy has emerged as a potential approach to treat patients suffering from myocardial infarction (MI). The transplantation or local stimulation of progenitor cells with the ability to form new cardiac tissue provides a novel strategy to overcome the massive loss of myocardium after MI. In this regard the epicardium, the outer layer of the heart, is a tractable local progenitor cell population for therapeutic pursuit. The epicardium has a crucial role in formation of the embryonic heart. After activation and migration into the developing myocardium, epicardial cells differentiate into several cardiac cell types. Additionally, the epicardium provides instructive signals for the growth of the myocardium and coronary angiogenesis. In the adult heart, the epicardium is quiescent, but recent evidence suggests that it becomes reactivated upon damage and recapitulates at least part of its embryonic functions. In this review we provide an update on the current knowledge regarding the contribution of epicardial cells to the adult mammalian heart during the injury response.

  7. Outcome of patients undergoing open heart surgery at the Uganda ...

    African Journals Online (AJOL)

An approach in which open heart surgeries are conducted locally by visiting teams, enabling skills transfer to the local team and helping to build capacity, has been adopted at the Uganda Heart Institute (UHI). Objectives: We reviewed the progress of open heart surgery at the UHI and evaluated the postoperative ...

  8. Sound speeds, cracking and the stability of self-gravitating anisotropic compact objects

    International Nuclear Information System (INIS)

    Abreu, H; Hernandez, H; Nunez, L A

    2007-01-01

Using the concept of cracking we explore the influence that density fluctuations and local anisotropy have on the stability of local and non-local anisotropic matter configurations in general relativity. This concept, conceived to describe the behavior of a fluid distribution just after its departure from equilibrium, provides an alternative approach for considering the stability of self-gravitating compact objects. We show that potentially unstable regions within a configuration can be identified as a function of the difference between the tangential and radial speeds of sound. In fact, it is found that these regions can occur when, at a particular point within the distribution, the tangential speed of sound is greater than the radial one.

  9. Temporal Organization of Sound Information in Auditory Memory

    Directory of Open Access Journals (Sweden)

    Kun Song

    2017-06-01

    Full Text Available Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporal reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation.

  10. Temporal Organization of Sound Information in Auditory Memory.

    Science.gov (United States)

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporal reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation.
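The "locally temporal reversed" manipulation described above can be sketched as chunk-wise reversal of the waveform; the fixed, non-overlapping windows below are an illustrative assumption about the chunking:

```python
# Reverse a signal within consecutive chunks of `chunk_len` samples,
# leaving the global chunk order intact.
def local_reverse(samples, chunk_len):
    out = []
    for start in range(0, len(samples), chunk_len):
        out.extend(reversed(samples[start:start + chunk_len]))
    return out

# At a 44.1 kHz sampling rate, the study's 200 ms temporal scale
# corresponds to chunk_len = 8820 samples.
```

Applying the manipulation twice recovers the original signal, so the two versions share all local content and differ only in within-chunk temporal order.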

  11. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time domain solution of the KZK equation.

  12. PREFACE: Aerodynamic sound Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated to write the papers by the jet noise emitted by the newly commercialized jet-engined airplanes at that time. The technology of aerodynamic sound is ultimately aimed at environmental problems, and so the theory should always be applied to newly emerging public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University organized it. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays and is expected to reform the theoretical model of generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research: they apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise.
The paper by T Sueki et al also reports new technology for the

  13. Interactive Sound Propagation using Precomputation and Statistical Approximations

    Science.gov (United States)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance, and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at different ends of a space of interactive sound propagation techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.

  14. 76 FR 39292 - Special Local Regulations & Safety Zones; Marine Events in Captain of the Port Long Island Sound...

    Science.gov (United States)

    2011-07-06

    ... Port Long Island Sound Zone AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast... and fireworks displays within the Captain of the Port (COTP) Long Island Sound Zone. This action is... Island Sound. DATES: This rule is effective in the CFR on July 6, 2011 through 6 p.m. on October 2, 2011...

  15. International perception of lung sounds: a comparison of classification across some European borders.

    Science.gov (United States)

    Aviles-Solis, Juan Carlos; Vanbelle, Sophie; Halvorsen, Peder A; Francis, Nick; Cals, Jochen W L; Andreeva, Elena A; Marques, Alda; Piirilä, Päivi; Pasterkamp, Hans; Melbye, Hasse

    2017-01-01

Lung auscultation is helpful in the diagnosis of lung and heart diseases; however, the diagnostic value of lung sounds may be questioned due to interobserver variation. This situation may also impair clinical research aimed at generating evidence-based knowledge about the role of chest auscultation in a modern clinical setting. Recording lung sounds and displaying them visually is a method that is both repeatable and feasible to use in large samples, and the aim of this study was to evaluate interobserver agreement using this method. With a microphone in a stethoscope tube, we collected digital recordings of lung sounds from six sites on the chest surface in 20 subjects aged 40 years or older, with and without lung and heart diseases. A total of 120 recordings and their spectrograms were independently classified by 28 observers from seven different countries. We employed absolute agreement and kappa coefficients to explore interobserver agreement in classifying crackles and wheezes within and between subgroups of four observers. When evaluating agreement on crackles (inspiratory or expiratory) in each subgroup, observers agreed in between 65% and 87% of the cases. Conger's kappa ranged from 0.20 to 0.58, and four out of seven groups reached a kappa of ≥0.49. In the classification of wheezes, we observed a probability of agreement between 69% and 99.6% and kappa values from 0.09 to 0.97. Four out of seven groups reached a kappa of ≥0.62. The kappa values we observed ranged widely but, allowing for the method's limitations, we find recording and presenting lung sounds with spectrograms sufficient for both clinical and research use. Standardisation of terminology across countries would improve international communication on lung auscultation findings.
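The agreement statistics reported above can be illustrated with a short sketch. It computes Cohen's kappa for two observers making a binary crackles-present judgement; the study itself used Conger's kappa, a multi-rater generalisation, and the labels below are invented for illustration.

```python
# Cohen's kappa: observed agreement corrected for chance agreement
# estimated from each observer's marginal label rates.
def cohens_kappa(a, b):
    """a, b: equal-length lists of 0/1 labels from two observers."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    pa1 = sum(a) / n          # observer A's rate of "crackles present"
    pb1 = sum(b) / n          # observer B's rate
    p_chance = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical labels for 10 recordings (1 = crackles heard)
obs1 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
obs2 = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohens_kappa(obs1, obs2), 3))  # 0.583
```

A kappa of 0 means agreement no better than chance; values around 0.5, as in several subgroups above, are conventionally read as moderate agreement.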

  16. Cell migration during heart regeneration in zebrafish.

    Science.gov (United States)

    Tahara, Naoyuki; Brush, Michael; Kawakami, Yasuhiko

    2016-07-01

Zebrafish possess the remarkable ability to regenerate injured hearts as adults, which contrasts with the very limited ability in mammals. Although very limited, mammalian hearts do in fact have measurable levels of cardiomyocyte regeneration. Therefore, elucidating mechanisms of zebrafish heart regeneration would provide information on naturally occurring regeneration to potentially apply to mammalian studies, in addition to addressing this biologically interesting phenomenon in itself. Studies over the past 13 years have identified processes and mechanisms of heart regeneration in zebrafish. After heart injury, pre-existing cardiomyocytes dedifferentiate, enter the cell cycle, and repair the injured myocardium. This process requires interaction with epicardial cells, endocardial cells, and vascular endothelial cells. Epicardial cells envelope the heart, while endocardial cells make up the inner lining of the heart. They provide paracrine signals to cardiomyocytes to regenerate the injured myocardium, which is vascularized during heart regeneration. In addition, accumulating results suggest that local migration of these major cardiac cell types has roles in heart regeneration. In this review, we summarize the characteristics of various heart injury methods used in the research community and regeneration of the major cardiac cell types. Then, we discuss local migration of these cardiac cell types and immune cells during heart regeneration. Developmental Dynamics 245:774-787, 2016. © 2016 Wiley Periodicals, Inc.

  17. Using sound to unmask losses disguised as wins in multiline slot machines.

    Science.gov (United States)

    Dixon, Mike J; Collins, Karen; Harrigan, Kevin A; Graydon, Candice; Fugelsang, Jonathan A

    2015-03-01

    Losses disguised as wins (LDWs) are slot machine outcomes where participants bet on multiple lines and win back less than their wager. Despite losing money, the machine celebrates these outcomes with reinforcing sights and sounds. Here, we sought to show that psychophysically and psychologically, participants treat LDWs as wins, but that we could expose LDWs as losses by using negative sounds as feedback. 157 participants were allocated into one of three conditions: a standard sound condition where LDWs, despite being losses, are paired with winning sights and sounds; a silent condition, where LDWs are paired with silence; and a negative sound condition where LDWs and regular losses are both followed by a negative sound. After viewing a paytable, participants conducted 300 spins on a slot machine simulator while heart rate deceleration (HRD) and skin conductance responses (SCRs) were monitored. Participants were then shown 20 different spin outcomes including LDWs and asked whether they had won or lost on that outcome. Participants then estimated on how many spins (out of 300) they won more than they wagered. SCRs were similar for losses and LDWs (both smaller than actual wins). HRD, however, was steeper for both wins and LDWs, compared to losses. In the standard condition, a majority of participants (mis)categorized LDWs as wins, and significantly overestimated the number of times they actually won. In the negative sound condition, this pattern was reversed; most participants correctly categorized LDWs as losses, and they gave high-fidelity win estimates. We conclude that participants both think and physiologically react to LDWs as though they are wins, a miscategorization that misleads them to think that they are winning more often than they actually are. Sound can be used to effectively prevent this misconception and unmask the disguise of LDWs.

  18. Application of electromagnetic and sound waves in nutritional assessment

    International Nuclear Information System (INIS)

    Heymsfield, S.B.; Rolandelli, R.; Casper, K.; Settle, R.G.; Koruda, M.

    1987-01-01

Four relatively new techniques that apply electromagnetic or sound waves promise to play a major role in the study of human body composition and in clinical nutritional assessment. Computerized axial tomography, nuclear magnetic resonance, infrared interactance, and ultrasonography provide capabilities for measuring the following: total body and regional fat volume; regional skeletal muscle volume; brain, liver, kidney, heart, spleen, and tumor volume; lean tissue content of triglyceride, iron, and high-energy intermediates; bone density; and cardiac function. Each method is reviewed with regard to basic principles, research and clinical applications, strengths, and limitations. 33 references.

  19. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology.

  20. Sound localization in noise in hearing-impaired listeners.

    Science.gov (United States)

    Lorenzi, C; Gatehouse, S; Lever, C

    1999-06-01

    The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

  1. Resonant modal group theory of membrane-type acoustical metamaterials for low-frequency sound attenuation

    Science.gov (United States)

    Ma, Fuyin; Wu, Jiu Hui; Huang, Meng

    2015-09-01

To overcome the influence of structural resonance in continuous structures and obtain a lightweight thin-layer structure that effectively isolates low-frequency noise, an elastic membrane structure was proposed. In the low-frequency range below 500 Hz, the sound transmission loss (STL) of this membrane-type structure is considerably higher than that of EVA (ethylene-vinyl acetate copolymer), the sound insulation material currently used in vehicles, so the membrane-type metamaterial structure could replace EVA in practical engineering. Based on the band structure, the modal shapes, and sound transmission simulations, the sound insulation mechanism of the designed membrane-type acoustic metamaterial was analyzed from a new perspective and validated experimentally. The results suggest that, in the frequency range above 200 Hz, the insulation effect of this membrane-mass structure is due principally not to the low-order locally resonant mode of the mass block but to the continuous vertical resonant modes of the localized membrane. Based on this physical property, a resonant modal group theory is proposed in this paper. In addition, the sound insulation mechanisms of the membrane-type structure and the thin-plate structure are combined through a membrane/plate resonance theory.
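For context on the STL comparison above, the classic mass-law estimate for a limp panel gives the baseline that membrane-type metamaterials are designed to beat at low frequencies. A minimal sketch, with an assumed surface density; the values are illustrative and not taken from the paper.

```python
import math

def mass_law_tl(surface_density_kg_m2, freq_hz):
    """Normal-incidence mass-law transmission loss in dB:
    TL ~ 20*log10(m*f) - 47, with m in kg/m^2 and f in Hz."""
    return 20 * math.log10(surface_density_kg_m2 * freq_hz) - 47

# A 2 kg/m^2 limp panel across the low-frequency band discussed above
for f in (125, 250, 500):
    print(f, "Hz:", round(mass_law_tl(2.0, f), 1), "dB")
```

The mass law shows why low frequencies are hard for conventional panels: halving the frequency costs about 6 dB of insulation, which is the shortfall that locally resonant membrane structures sidestep.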

  2. Measurement of sound velocity profiles in fluids for process monitoring

    International Nuclear Information System (INIS)

    Wolf, M; Kühnicke, E; Lenz, M; Bock, M

    2012-01-01

In ultrasonic measurements, the time of flight to the object interface is often the only information that is analysed. Conventionally, it is only possible to determine the distance or the sound velocity if the other value is known. The current paper deals with a novel method to measure the sound propagation path length and the sound velocity simultaneously in media with moving scattering particles. Since the focal position also depends on sound velocity, it can be used as a second parameter. Via calibration curves it is possible to determine the focal position and sound velocity from the measured time of flight to the focus, which is correlated with the maximum of the averaged echo signal amplitude. To move the focal position along the acoustic axis, an annular array is used. This allows measuring sound velocity with local resolution, without any prior knowledge of the acoustic medium and without a reference reflector. In previous publications the functional efficiency of this method was shown for media with constant velocities. In this work the accuracy of these measurements is improved, and first measurements and simulations are introduced for non-homogeneous media. For this, an experimental set-up was created to generate a linear temperature gradient, which also causes a gradient in sound velocity.
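The relation underlying the method is plain pulse-echo geometry: round-trip time of flight t, path length d, and sound velocity c satisfy t = 2d/c, so either unknown follows from the other. The sketch below shows only this basic relation with assumed water-like values; the paper's actual contribution, estimating both quantities at once by tracking the focus of an annular array through calibration curves, is not reproduced here.

```python
def velocity_from_tof(distance_m, round_trip_s):
    """c = 2d/t for a pulse-echo measurement with known path length."""
    return 2 * distance_m / round_trip_s

def distance_from_tof(velocity_m_s, round_trip_s):
    """d = c*t/2 with known sound velocity."""
    return velocity_m_s * round_trip_s / 2

# Water at ~25 degC: c ~ 1497 m/s; reflector 5 cm from the transducer
t = 2 * 0.05 / 1497.0                          # expected round-trip time
print(round(velocity_from_tof(0.05, t), 1))    # recovers 1497.0
print(round(distance_from_tof(1497.0, t), 3))  # recovers 0.05
```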

  3. Cues for localization in the horizontal plane

    DEFF Research Database (Denmark)

    Jeppesen, Jakob; Møller, Henrik

    2005-01-01

Spatial localization of sound is often described as unconscious evaluation of cues given by the interaural time difference (ITD) and the spectral information of the sound that reaches the two ears. Our present knowledge suggests the hypothesis that the ITD roughly determines the cone of the perce... independently in HRTFs used for binaural synthesis. The ITD seems to be dominant for localization in the horizontal plane even when the spectral information is severely degraded.
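The ITD cue named above is commonly approximated by Woodworth's spherical-head model. A sketch under assumed values (head radius of about 8.75 cm, c of about 343 m/s); this is a textbook approximation, not necessarily the model used in the study.

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c_m_s=343.0):
    """ITD ~ (a/c)*(theta + sin(theta)) for a far-field source at
    azimuth theta (0 deg = straight ahead, 90 deg = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c_m_s) * (theta + math.sin(theta))

for az in (0, 45, 90):
    print(az, "deg:", round(itd_woodworth(az) * 1e6), "us")
```

The cue tops out around 650 microseconds at 90 degrees, which is why the ITD roughly pins down a cone of confusion but not the position on it; spectral cues are needed to resolve the rest.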

  4. Diversity of fish sound types in the Pearl River Estuary, China

    Directory of Open Access Journals (Sweden)

    Zhi-Tao Wang

    2017-10-01

Background: Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Methods: Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. Results: We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse-train structure. The pulses were characterized by an approximately 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse-peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were exclusive from each other, suggesting that they might be produced by different species. According to the literature, the two-section signal types 1 + 1 and 1 + N10 might belong to the big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger's croaker (J. belangerii). Discussion: Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. Additionally, prey and predator
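The IPPI feature used above to separate call types is just the median spacing of successive pulse peaks. A minimal sketch with invented peak times, not data from the study:

```python
from statistics import median

def ippi_ms(peak_times_ms):
    """Median inter-pulse-peak interval of a pulse train, in ms."""
    intervals = [b - a for a, b in zip(peak_times_ms, peak_times_ms[1:])]
    return round(median(intervals), 1)

# Hypothetical 5-pulse train (peak times in ms since call onset)
peaks = [0.0, 9.1, 18.0, 27.2, 36.1]
print(ippi_ms(peaks))  # 9.0, i.e. the 9 ms call-type group
```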

  5. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

The method presented in this thesis combines ultrasound techniques with magnetic-resonance tomography (MRT). An ultrasonic wave generates a static force in the sound-propagation direction in absorbing media. At sound intensities of a few W/cm² and sound frequencies in the lower MHz range, this force leads to a tissue shift in the micrometer range, which depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence of the Siemens Healthcare AG was modified so that it measures the tissue shift (indirectly), encodes it as grey values, and presents it as a 2D image. By means of the grey values, the slope of the sound beam in the tissue can be visualized, so that sound obstacles (changes of the sound impedance) can additionally be detected. From the acquired MRT images, spatial changes of the tissue parameters sound absorption and elasticity can be detected. Measurements are presented which show the feasibility and future potential of this method, especially for mammary-cancer diagnostics. [de]

  6. Gefinex 400S (Sampo) EM-Soundings at Olkiluoto 2006

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2006-08-01

In the beginning of summer 2006, the Geological Survey of Finland carried out electromagnetic frequency soundings with the Gefinex 400S equipment (also called Sampo) at ONKALO, situated in the Olkiluoto nuclear power plant area. The same sounding sites were first measured and marked in 2004 and re-measured in 2005. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electrical conductivity of the ground at ONKALO and the repository area. The measurements form two 1400 m long broadside profiles with 200 m mutual distance and 200 m station separation. The profiles have been measured using 200, 500, and 800 m coil separations. The total number of soundings was 48, but at 8 stations the measurement did not succeed because of strong electromagnetic noise. The numerous power lines and cables in the area generate local 3-D effects on the sounding curves, but the repeatability of the results is good. However, the sites without strong 3-D effects are the most suitable for monitoring purposes. Comparison of the 2004-2006 results shows small differences at some sounding sites. (orig.)

  7. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

High frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For a more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of a laminate structure and handy ...

  8. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  9. Seismic and Biological Sources of Ambient Ocean Sound

    Science.gov (United States)

    Freeman, Simon Eric

Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional `image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that more seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single-hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional `map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed.
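The azimuthal estimation with hydrophone arrays mentioned above is conventionally done by delay-and-sum beamforming. Below is a generic toy version for a linear array (synthetic impulse signals; sensor spacing, sampling rate, and sound speed are assumptions for illustration), not the processing used in the dissertation.

```python
import math

def steer_delays(n_sensors, spacing_m, angle_deg, c_m_s=1500.0):
    """Arrival-time delays (s) across a line array for a plane wave
    arriving from angle_deg off broadside, in water (c ~ 1500 m/s)."""
    s = math.sin(math.radians(angle_deg))
    return [i * spacing_m * s / c_m_s for i in range(n_sensors)]

def delay_and_sum(signals, delays, fs_hz):
    """Advance each sensor by its arrival delay (rounded to samples),
    then average: coherent for the steered angle, incoherent otherwise."""
    n = len(signals[0])
    out = []
    for t in range(n):
        acc = 0.0
        for sig, d in zip(signals, delays):
            k = t + round(d * fs_hz)
            if 0 <= k < n:
                acc += sig[k]
        out.append(acc / len(signals))
    return out

# Plane impulse from 30 deg hits each of 4 sensors one sample later
fs = 2000.0
delays = steer_delays(4, 1.5, 30.0)
sigs = [[0.0] * 12 for _ in range(4)]
for i in range(4):
    sigs[i][5 + i] = 1.0
print(delay_and_sum(sigs, delays, fs)[5])  # 1.0: coherent sum at t = 5
```

Sweeping the steering angle and taking the angle with the largest output power gives the azimuth estimate.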

  10. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  12. Digital sound de-localisation as a game mechanic for novel bodily play

    DEFF Research Database (Denmark)

    Tiab, John; Rantakari, Juho; Halse, Mads Laurberg

    2016-01-01

This paper describes an exertion gameplay mechanic involving players' partial control of their opponent's sound localization abilities. We developed this concept through designing and testing "The Boy and The Wolf" game. In this game, we combined deprivation of sight with a positional disparity between player bodily movement and sound. This facilitated intense gameplay supporting player creativity and spectator engagement. We use our observations and analysis of our game to offer a set of lessons learnt for designing engaging bodily play using disparity between sound and movement. Moreover, we describe our intended future explorations of this area.

  13. Gefinex 400S (Sampo) EM-Soundings at Olkiluoto 2007

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2007-09-01

In the beginning of June 2007, the Geological Survey of Finland carried out electromagnetic frequency soundings with the Gefinex 400S equipment (Sampo) at ONKALO, situated in the Olkiluoto nuclear power plant area. The same sounding sites were first measured and marked in 2004 and have been re-measured yearly since. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electrical conductivity of the ground at ONKALO and the repository area. The measurements form two 1400 m long broadside profiles with 200 m mutual distance and 200 m station separation. The profiles have been measured using 200, 500, and 800 m coil separations. The total number of sounding stations is 48. In 2007, the transmitter and/or receiver sites were changed at 8 sounding stations, and line L11.400 was substituted by line L11.500. Some of these changes helped, but 6 stations still could not be measured because of the strong electromagnetic noise. The numerous power lines and cables in the area generate local 3-D effects on the sounding curves, but the repeatability of the results is good. However, the sites without strong 3-D effects are the most suitable for monitoring purposes. Comparison of the 2004-2007 results shows small differences at some sounding sites. (orig.)

  14. The influence of signal parameters on the sound source localization ability of a harbor porpoise (Phocoena phocoena)

    NARCIS (Netherlands)

    Kastelein, R.A.; Haan, D.de; Verboom, W.C.

    2007-01-01

It is unclear how well harbor porpoises can locate sound sources, and thus can locate acoustic alarms on gillnets. Therefore the ability of a porpoise to determine the location of a sound source was determined. The animal was trained to indicate the active one of 16 transducers in a 16-m-diam

  15. Myostatin from the heart: local and systemic actions in cardiac failure and muscle wasting

    Science.gov (United States)

    Breitbart, Astrid; Auger-Messier, Mannix; Molkentin, Jeffery D.

    2011-01-01

    A significant proportion of heart failure patients develop skeletal muscle wasting and cardiac cachexia, which is associated with a very poor prognosis. Recently, myostatin, a cytokine from the transforming growth factor-β (TGF-β) family and a known strong inhibitor of skeletal muscle growth, has been identified as a direct mediator of skeletal muscle atrophy in mice with heart failure. Myostatin is mainly expressed in skeletal muscle, although basal expression is also detectable in heart and adipose tissue. During pathological loading of the heart, the myocardium produces and secretes myostatin into the circulation where it inhibits skeletal muscle growth. Thus, genetic elimination of myostatin from the heart reduces skeletal muscle atrophy in mice with heart failure, whereas transgenic overexpression of myostatin in the heart is capable of inducing muscle wasting. In addition to its endocrine action on skeletal muscle, cardiac myostatin production also modestly inhibits cardiomyocyte growth under certain circumstances, as well as induces cardiac fibrosis and alterations in ventricular function. Interestingly, heart failure patients show elevated myostatin levels in their serum. To therapeutically influence skeletal muscle wasting, direct inhibition of myostatin was shown to positively impact skeletal muscle mass in heart failure, suggesting a promising strategy for the treatment of cardiac cachexia in the future. PMID:21421824

  16. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound, we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results of this posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?". So a new area of research was born from this collaboration, highlighting the value of these interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with existing techniques to assess whether there was any improvement to the detection capability of the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation via satellite more difficult, there have been examples of local monitoring networks and telemetry being destroyed early in the eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  17. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption indicates the transformation of sound energy into heat. It is, for instance, employed to design the acoustics in rooms. The noise emitted by machinery and plants shall be reduced before arriving at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components at the walls with well-defined absorption characteristics, which are adjusted for corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise intensive environments into the neighbourhood.
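The reverberation-time design goal mentioned above is classically estimated with Sabine's formula, RT60 ≈ 0.161·V/A. A sketch with an invented lecture room and typical absorption coefficients; the numbers are illustrative only.

```python
def rt60_sabine(volume_m3, absorption_m2_sabins):
    """Sabine reverberation time: RT60 ~ 0.161*V / A, where A is the
    sum of surface_area * absorption_coefficient over all surfaces."""
    return 0.161 * volume_m3 / absorption_m2_sabins

# 10 m x 8 m x 3 m room: absorptive ceiling, hard floor, plastered walls
V = 10 * 8 * 3
A = 80 * 0.30 + 80 * 0.05 + 2 * (10 * 3 + 8 * 3) * 0.10
print(round(rt60_sabine(V, A), 2), "s")  # about 1.0 s
```

Installing absorbing components at the walls raises A and shortens RT60; the formula makes that design trade-off explicit.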

  18. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  19. Local Control of Audio Environment: A Review of Methods and Applications

    Directory of Open Access Journals (Sweden)

    Jussi Kuutti

    2014-02-01

The concept of a local audio environment is to have sound playback locally restricted such that, ideally, adjacent regions of an indoor or outdoor space could exhibit their own individual audio content without interfering with each other. This would enable people to listen to their content of choice without disturbing others next to them, yet, without any headphones to block conversation. In practice, perfect sound containment in free air cannot be attained, but a local audio environment can still be satisfactorily approximated using directional speakers. Directional speakers may be based on regular audible frequencies or they may employ modulated ultrasound. Planar, parabolic, and array form factors are commonly used. The directivity of a speaker improves as its surface area and sound frequency increases, making these the main design factors for directional audio systems. Even directional speakers radiate some sound outside the main beam, and sound can also reflect from objects. Therefore, directional speaker systems perform best when there is enough ambient noise to mask the leaking sound. Possible areas of application for local audio include information and advertisement audio feed in commercial facilities, guiding and narration in museums and exhibitions, office space personalization, control room messaging, rehabilitation environments, and entertainment audio systems.

  20. Nonlocal nonlinear coupling of kinetic sound waves

    Directory of Open Access Journals (Sweden)

    O. Lyubchyk

    2014-11-01

Full Text Available We study three-wave resonant interactions among kinetic-scale oblique sound waves in the low-frequency range below the ion cyclotron frequency. The nonlinear eigenmode equation is derived in the framework of a two-fluid plasma model. Because of dispersive modifications at small wavelengths perpendicular to the background magnetic field, these waves become a decay-type mode. We found two decay channels, one into co-propagating product waves (forward decay), and another into counter-propagating product waves (reverse decay). All wavenumbers in the forward decay are similar and hence this decay is local in wavenumber space. On the contrary, the reverse decay generates waves with wavenumbers that are much larger than in the original pump waves and is therefore intrinsically nonlocal. In general, the reverse decay is significantly faster than the forward one, suggesting a nonlocal spectral transport induced by oblique sound waves. Even with low-amplitude sound waves the nonlinear interaction rate is larger than the collisionless dissipation rate. Possible applications regarding acoustic waves observed in the solar corona, solar wind, and topside ionosphere are briefly discussed.

  1. Quantification of total mercury in liver and heart tissue of Harbor Seals (Phoca vitulina) from Alaska USA

    International Nuclear Information System (INIS)

    Marino, Kady B.; Hoover-Miller, Anne; Conlon, Suzanne; Prewitt, Jill; O'Shea, Stephen K.

    2011-01-01

This study quantified the Hg levels in the liver (n=98) and heart (n=43) tissues of Harbor Seals (Phoca vitulina) (n=102) harvested from Prince William Sound and Kodiak Island Alaska. Mercury tissue dry weight (dw) concentrations in the liver ranged from 1.7 to 393 ppm dw, and in the heart from 0.19 to 4.99 ppm dw. Results of this study indicate liver and heart tissues' Hg ppm dw concentrations significantly increase with age. Male Harbor Seals bioaccumulated Hg in both their liver and heart tissues at a significantly faster rate than females. The liver Hg bioaccumulation rates between the harvest locations Kodiak Island and Prince William Sound were not found to be significantly different. On absorption, Hg is transported throughout the Harbor Seal's body, with the partition coefficient higher for the liver than the heart. No significant differences in the bio-distribution (liver:heart Hg ppm dw ratios (n=38)) values were found with respect to either age, sex or geographic harvest location. In this study the ages at which Hg liver and heart bioaccumulation levels become significantly distinct in male and female Harbor Seals were identified through a Tukey's analysis. Of notable concern to human health was a male Harbor Seal's liver tissue harvested from the Kodiak Island region. Mercury accumulation in this sample tissue was determined through a Q-test to be an outlier, having a far higher Hg concentration (liver 392 Hg ppm dw) than the general population sampled. - Highlights: ► Mercury accumulation in the liver and heart of seals exceeds food safety guidelines. ► Accumulation rate is greater in males than females with age. ► Liver mercury accumulation is greater than in the heart tissues. ► Mercury determination by USA EPA Method 7473 using thermal decomposition.
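The outlier screening mentioned in this abstract (a Q-test flagging the 392 ppm liver sample) can be sketched in a few lines. The concentrations below are hypothetical illustration values, not the study's data, and `dixon_q` is an assumed helper name; in practice the statistic is compared against a tabulated critical value for the sample size and confidence level.

```python
def dixon_q(values):
    """Dixon's Q statistic for the largest observation:
    Q = (suspect value - its nearest neighbour) / (max - min).
    The suspect is flagged as an outlier when Q exceeds a tabulated
    critical value for the given sample size and confidence level."""
    xs = sorted(values)
    gap = xs[-1] - xs[-2]      # distance from the suspect to its nearest neighbour
    spread = xs[-1] - xs[0]    # full range of the sample
    return gap / spread

# Hypothetical liver Hg concentrations (ppm dw); the last value dominates.
hg = [12.0, 25.0, 31.0, 44.0, 58.0, 392.0]
q = dixon_q(hg)                # (392 - 58) / (392 - 12) ≈ 0.879
```

A Q this close to 1 exceeds any commonly tabulated critical value, which is the sense in which the 392 ppm sample stands apart from the rest of the population.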

  2. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

Only components synchronized with the rotation of pumps are sampled from detected acoustic sounds, to judge the presence or absence of abnormality based on the magnitude of the synchronized components. A synchronized component sampling means can remove resonance sounds and other acoustic sounds generated asynchronously with the rotation, based on the knowledge that acoustic components generated in a normal state are a sort of resonance sound and are not precisely synchronized with the rotation rate. On the other hand, abnormal sounds of a rotating body are often caused by compulsory forces accompanying the rotation as a generation source, and the abnormal sounds can be detected by extracting only the rotation-synchronized components. Since components of normal acoustic sounds generated at present are discriminated from the detected sounds, reduction of the abnormal sounds due to signal processing can be avoided and, as a result, abnormal sound detection sensitivity can be improved. Further, since it is adapted to discriminate the occurrence of the abnormal sound from the actually detected sounds, other frequency components which are forecast but not actually generated are not removed, so that it is further effective for the improvement of detection sensitivity. (N.H.)

  3. Do top predators cue on sound production by mesopelagic prey?

    Science.gov (United States)

    Baumann-Pickering, S.; Checkley, D. M., Jr.; Demer, D. A.

    2016-02-01

    Deep-scattering layer (DSL) organisms, comprising a variety of mesopelagic fishes, and squids, siphonophores, crustaceans, and other invertebrates, are preferred prey for numerous large marine predators, e.g. cetaceans, seabirds, and fishes. Some of the DSL species migrate from depth during daylight to feed near the surface at night, transitioning during dusk and dawn. We investigated if any DSL organisms create sound, particularly during the crepuscular periods. Over several nights in summer 2015, underwater sound was recorded in the San Diego Trough using a high-frequency acoustic recording package (HARP, 10 Hz to 100 kHz), suspended from a drifting surface float. Acoustic backscatter from the DSL was monitored nearby using a calibrated multiple-frequency (38, 70, 120, and 200 kHz) split-beam echosounder (Simrad EK60) on a small boat. DSL organisms produced sound, between 300 and 1000 Hz, and the received levels were highest when the animals migrated past the recorder during ascent and descent. The DSL are globally present, so the observed acoustic phenomenon, if also ubiquitous, has wide-reaching implications. Sound travels farther than light or chemicals and thus can be sensed at greater distances by predators, prey, and mates. If sound is a characteristic feature of pelagic ecosystems, it likely plays a role in predator-prey relationships and overall ecosystem dynamics. Our new finding inspires numerous questions such as: Which, how, and why have DSL organisms evolved to create sound, for what do they use it and under what circumstances? Is sound production by DSL organisms truly ubiquitous, or does it depend on the local environment and species composition? How may sound production and perception be adapted to a changing environment? Do predators react to changes in sound? Can sound be used to quantify the composition of mixed-species assemblages, component densities and abundances, and hence be used in stock assessment or predictive modeling?

  4. In Search of the Golden Age Hip-Hop Sound (1986–1996

    Directory of Open Access Journals (Sweden)

    Ben Duinker

    2017-09-01

Full Text Available The notion of a musical repertoire's "sound" is frequently evoked in journalism and scholarship, but what parameters comprise such a sound? This question is addressed through a statistically-driven corpus analysis of hip-hop music released during the genre's Golden Age era. The first part of the paper presents a methodology for developing, transcribing, and analyzing a corpus of 100 hip-hop tracks released during the Golden Age. Eight categories of aurally salient musical and production parameters are analyzed: tempo, orchestration and texture, harmony, form, vocal and lyric profiles, global and local production effects, vocal doubling and backing, and loudness and compression. The second part of the paper organizes the analysis data into three trend categories: trends of change (parameters that change over time), trends of prevalence (parameters that remain generally constant across the corpus), and trends of similarity (parameters that are similar from song to song). These trends form a generalized model of the Golden Age hip-hop sound which considers both global (the whole corpus) and local (unique songs within the corpus) contexts. By operationalizing "sound" as the sum of musical and production parameters, aspects of popular music that are resistant to traditional music-analytical methods can be considered.

  5. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...

  6. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  7. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  8. Cardiovascular cast model fabrication and casting effectiveness evaluation in fetus with severe congenital heart disease or normal heart.

    Science.gov (United States)

    Wang, Yu; Cao, Hai-yan; Xie, Ming-xing; He, Lin; Han, Wei; Hong, Liu; Peng, Yuan; Hu, Yun-fei; Song, Ben-cai; Wang, Jing; Wang, Bin; Deng, Cheng

    2016-04-01

    To investigate the application and effectiveness of vascular corrosion technique in preparing fetal cardiovascular cast models, 10 normal fetal heart specimens with other congenital disease (control group) and 18 specimens with severe congenital heart disease (case group) from induced abortions were enrolled in this study from March 2013 to June 2015 in our hospital. Cast models were prepared by injecting casting material into vascular lumen to demonstrate real geometries of fetal cardiovascular system. Casting effectiveness was analyzed in terms of local anatomic structures and different anatomical levels (including overall level, atrioventricular and great vascular system, left-sided and right-sided heart), as well as different trimesters of pregnancy. In our study, all specimens were successfully casted. Casting effectiveness analysis of local anatomic structures showed a mean score from 1.90±1.45 to 3.60±0.52, without significant differences between case and control groups in most local anatomic structures except left ventricle, which had a higher score in control group (P=0.027). Inter-group comparison of casting effectiveness in different anatomical levels showed no significant differences between the two groups. Intra-group comparison also revealed undifferentiated casting effectiveness between atrioventricular and great vascular system, or left-sided and right-sided heart in corresponding group. Third-trimester group had a significantly higher perfusion score in great vascular system than second-trimester group (P=0.046), while the other anatomical levels displayed no such difference. Vascular corrosion technique can be successfully used in fabrication of fetal cardiovascular cast model. It is also a reliable method to demonstrate three-dimensional anatomy of severe congenital heart disease and normal heart in fetus.

  9. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

Full Text Available Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  10. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situated-ness of sound. Existing discourses on “spatial sound” privilege...

  11. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  12. Boundary stabilization of memory-type thermoelasticity with second sound

    Science.gov (United States)

    Mustafa, Muhammad I.

    2012-08-01

    In this paper, we consider an n-dimensional thermoelastic system of second sound with a viscoelastic damping localized on a part of the boundary. We establish an explicit and general decay rate result that allows a wider class of relaxation functions and generalizes previous results existing in the literature.

  13. Slow-wave metamaterial open panels for efficient reduction of low-frequency sound transmission

    Science.gov (United States)

    Yang, Jieun; Lee, Joong Seok; Lee, Hyeong Rae; Kang, Yeon June; Kim, Yoon Young

    2018-02-01

    Sound transmission reduction is typically governed by the mass law, requiring thicker panels to handle lower frequencies. When open holes must be inserted in panels for heat transfer, ventilation, or other purposes, the efficient reduction of sound transmission through holey panels becomes difficult, especially in the low-frequency ranges. Here, we propose slow-wave metamaterial open panels that can dramatically lower the working frequencies of sound transmission loss. Global resonances originating from slow waves realized by multiply inserted, elaborately designed subwavelength rigid partitions between two thin holey plates contribute to sound transmission reductions at lower frequencies. Owing to the dispersive characteristics of the present metamaterial panels, local resonances that trap sound in the partitions also occur at higher frequencies, exhibiting negative effective bulk moduli and zero effective velocities. As a result, low-frequency broadened sound transmission reduction is realized efficiently in the present metamaterial panels. The theoretical model of the proposed metamaterial open panels is derived using an effective medium approach and verified by numerical and experimental investigations.
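The mass law this abstract refers to is commonly approximated, for field incidence, as TL ≈ 20·log10(m·f) − 47 dB with surface density m in kg/m² and frequency f in Hz. The sketch below encodes that textbook rule of thumb, not this paper's metamaterial model, so treat the constant and the formula's range of validity as standard acoustics approximations.

```python
import math

def mass_law_tl(surface_density_kg_m2, frequency_hz):
    """Approximate field-incidence mass-law transmission loss in dB.
    TL ≈ 20*log10(m * f) - 47, with m in kg/m^2 and f in Hz: doubling
    either the panel's surface density or the frequency adds ~6 dB."""
    return 20.0 * math.log10(surface_density_kg_m2 * frequency_hz) - 47.0

# A 10 kg/m^2 panel: ~33 dB at 1 kHz but only ~15 dB at 125 Hz,
# which is why low frequencies are the hard case for thin (or holey) panels.
```

The steep loss of performance toward low frequencies is exactly the regime where the slow-wave resonances described above aim to beat the mass law.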

  14. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

What is sound? This question is posed in contradiction to the every-day understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  15. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  16. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    properties can be modified by sound absorption, refraction, and interference from multi paths caused by reflections.The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms to influence...... communication sounds for airborne acoustics and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals....

  17. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related...... to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy...... of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences....

  18. Effects of Interaural Level and Time Differences on the Externalization of Sound

    DEFF Research Database (Denmark)

    Dau, Torsten; Catic, Jasmina; Santurette, Sébastien

    Distant sound sources in our environment are perceived as externalized and are thus properly localized in both direction and distance. This is due to the acoustic filtering by the head, torso, and external ears, which provides frequency dependent shaping of binaural cues, such as interaural level...... differences (ILDs) and interaural time differences (ITDs). Further, the binaural cues provided by reverberation in an enclosed space may also contribute to externalization. While these spatial cues are available in their natural form when listening to real-world sound sources, hearing-aid signal processing...... is consistent with the physical analysis that showed that a decreased distance to the sound source also reduced the fluctuations in ILDs....

  19. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

    , one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor......Previous works in the literature about one tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases......, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings together with microphone positions and source directivity cues allows to get information about source position and bearing. Moreover, sound sources have been included into sensor systems together...

  20. Mind-wandering and alterations to default mode network connectivity when listening to naturalistic versus artificial sounds.

    Science.gov (United States)

    Gould van Praag, Cassandra D; Garfinkel, Sarah N; Sparasci, Oliver; Mees, Alex; Philippides, Andrew O; Ware, Mark; Ottaviani, Cristina; Critchley, Hugo D

    2017-03-27

    Naturalistic environments have been demonstrated to promote relaxation and wellbeing. We assess opposing theoretical accounts for these effects through investigation of autonomic arousal and alterations of activation and functional connectivity within the default mode network (DMN) of the brain while participants listened to sounds from artificial and natural environments. We found no evidence for increased DMN activity in the naturalistic compared to artificial or control condition, however, seed based functional connectivity showed a shift from anterior to posterior midline functional coupling in the naturalistic condition. These changes were accompanied by an increase in peak high frequency heart rate variability, indicating an increase in parasympathetic activity in the naturalistic condition in line with the Stress Recovery Theory of nature exposure. Changes in heart rate and the peak high frequency were correlated with baseline functional connectivity within the DMN and baseline parasympathetic tone respectively, highlighting the importance of individual neural and autonomic differences in the response to nature exposure. Our findings may help explain reported health benefits of exposure to natural environments, through identification of alterations to autonomic activity and functional coupling within the DMN when listening to naturalistic sounds.
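The "peak high frequency heart rate variability" measure reported above can be approximated from a series of RR intervals. The following is a minimal pure-Python sketch (a naive DFT over a linearly resampled tachogram) under assumed parameters (4 Hz resampling, 0.15–0.40 Hz HF band); it is not the authors' analysis pipeline.

```python
import math

def hf_peak_frequency(rr_ms, fs=4.0, band=(0.15, 0.40)):
    """Return the frequency (Hz) of maximum spectral power in the HF band
    of an RR-interval series (ms) — the 'peak high frequency' often used
    as a parasympathetic index."""
    # Beat times (s) from cumulative RR intervals.
    t = [0.0]
    for rr in rr_ms:
        t.append(t[-1] + rr / 1000.0)
    beats = t[1:]                      # time of each beat
    # Evenly resample RR(t) at fs Hz by linear interpolation.
    duration = beats[-1] - beats[0]
    n = int(duration * fs)
    xs, j = [], 0
    for i in range(n):
        ti = beats[0] + i / fs
        while j < len(beats) - 2 and beats[j + 1] < ti:
            j += 1
        w = (ti - beats[j]) / (beats[j + 1] - beats[j])
        xs.append(rr_ms[j] * (1 - w) + rr_ms[j + 1] * w)
    mean = sum(xs) / len(xs)
    xs = [x - mean for x in xs]        # remove the DC component
    # Naive DFT periodogram; search only bins inside the HF band.
    best_f, best_p = None, -1.0
    for k in range(1, len(xs) // 2):
        f = k * fs / len(xs)
        if band[0] <= f <= band[1]:
            re = sum(x * math.cos(2 * math.pi * k * i / len(xs)) for i, x in enumerate(xs))
            im = sum(x * math.sin(2 * math.pi * k * i / len(xs)) for i, x in enumerate(xs))
            p = re * re + im * im
            if p > best_p:
                best_f, best_p = f, p
    return best_f
```

Feeding it two minutes of RR intervals whose duration is modulated at a respiratory rate of 0.3 Hz recovers a peak near 0.3 Hz, the kind of HF peak the study tracks across listening conditions.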

  1. Comparison of sound propagation and perception of three types of backup alarms with regards to worker safety

    Directory of Open Access Journals (Sweden)

    Véronique Vaillancourt

    2013-01-01

Full Text Available A technology of backup alarms based on the use of a broadband signal has recently gained popularity in many countries. In this study, the performance of this broadband technology is compared to that of a conventional tonal alarm and a multi-tone alarm from a worker-safety standpoint. Field measurements of sound pressure level patterns behind heavy vehicles were performed in real work environments and psychoacoustic measurements (sound detection thresholds, equal loudness, perceived urgency and sound localization) were carried out in the laboratory with human subjects. Compared with the conventional tonal alarm, the broadband alarm generates a much more uniform sound field behind vehicles, is easier to localize in space and is judged slightly louder at representative alarm levels. Slight advantages were found with the tonal alarm for sound detection and for perceived urgency at low levels, but these benefits observed in laboratory conditions would not overcome the detrimental effects associated with the large and abrupt variations in sound pressure levels (up to 15-20 dB within short distances) observed in the field behind vehicles for this alarm, which are significantly higher than those obtained with the broadband alarm. Performance with the multi-tone alarm generally fell between that of the tonal and broadband alarms on most measures.

  2. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  3. Reirradiation tolerance of the rat heart

    International Nuclear Information System (INIS)

    Wondergem, Jan; Ravels, Frank J.M. van; Reijnart, Ivonne W.C.; Strootman, Erwin G.

    1996-01-01

Purpose: To investigate the influence of reirradiation on the tolerance of the heart after a previous irradiation treatment. Methods and Materials: Female Wistar rats were locally irradiated to the thorax. Development of cardiac function loss was studied with the ex vivo working rat heart preparation. To compare the retreatment experiments, initial and reirradiation doses were expressed as the percentage of the extrapolated tolerance dose (ETD). Results: Local heart irradiation with a single dose led to a dose-dependent and progressive decrease in cardiac function. The progressive nature of irradiation-induced heart disease is shown to affect the outcome of the retreatment, depending on both the time interval between subsequent doses and the size of the initial dose. The present data demonstrate that hearts are capable of repairing a large part of the initial dose of 10 Gy within the first 24 h. However, once biological damage as a result of the first treatment is fixed, the heart does not show any long-term recovery. At intervals up to 6 months between an initial treatment with 10 Gy and subsequent reirradiation, the reirradiation tolerance dose slightly decreased from 74% of the ETD ref (at a 24-h interval) to 68% of the ETD ref (at a 6-month interval). Between 6 and 9 months, the reirradiation tolerance dose dropped even more, to 43% of the ETD ref. Treatment of the heart with an initial dose of 17.5 Gy, instead of 10 Gy, 6 months prior to reirradiation, also led to a further decrease of the reirradiation tolerance dose (ETD ref). Conclusions: The outcome of the present study shows a decreased tolerance of the heart to reirradiation at long time intervals (interval > 6 months). This has clinical implications for the estimation of reirradiation tolerance in patients whose mediastinum has to be reirradiated a long time after a first irradiation course

  4. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    Science.gov (United States)

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401

  5. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2011-02-01

Full Text Available Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability.
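The time delay estimation (TDE) step used by the hearing robots in the record above can be illustrated with a brute-force cross-correlation between two microphone signals, followed by a far-field bearing estimate. The sampling rate, microphone spacing, speed of sound, and function names below are illustrative assumptions, not details taken from the paper.

```python
import math

def tde_delay(x, y, fs):
    """Estimate the delay of signal y relative to x (in seconds) by
    locating the peak of their brute-force cross-correlation."""
    n = len(x)
    best_lag, best_c = 0, -math.inf
    for lag in range(-(n - 1), n):
        c = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                c += x[i] * y[j]
        if c > best_c:
            best_lag, best_c = lag, c
    return best_lag / fs

def bearing_from_delay(delay, mic_spacing, c=343.0):
    """Far-field bearing (radians) from the inter-microphone delay:
    sin(theta) = c * delay / d, clamped to the valid domain of asin."""
    s = max(-1.0, min(1.0, c * delay / mic_spacing))
    return math.asin(s)

# An impulse arriving 20 samples later at the second microphone:
x = [0.0] * 256; x[100] = 1.0
y = [0.0] * 256; y[120] = 1.0
delay = tde_delay(x, y, 8000.0)        # 20 / 8000 = 2.5 ms
```

Real systems typically compute the correlation in the frequency domain (e.g., generalized cross-correlation) for speed and noise robustness; the O(n²) loop here just makes the peak-picking idea explicit.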

  6. Memory for product sounds: the effect of sound and label type.

    Science.gov (United States)

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between the auditory, visual, and semantic systems were investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for sound type suggest that the amount of spectral-temporal structure in a sound can be indicative of memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and recognition of sounds that were self-labeled; the density and complexity of the visual information (i.e., pictograms) hinder memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that memory performance for product sounds is task-dependent.

  7. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false In Prince William Sound: Prince William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST....1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  8. A randomized trial of nature scenery and sounds versus urban scenery and sounds to reduce pain in adults undergoing bone marrow aspirate and biopsy.

    Science.gov (United States)

    Lechtzin, Noah; Busse, Anne M; Smith, Michael T; Grossman, Stuart; Nesbit, Suzanne; Diette, Gregory B

    2010-09-01

    Bone marrow aspiration and biopsy (BMAB) is painful when performed with only local anesthetic. Our objective was to determine whether viewing nature scenes and listening to nature sounds can reduce pain during BMAB. This was a randomized, controlled clinical trial. Adult patients undergoing outpatient BMAB with only local anesthetic were assigned to use either a nature scene with accompanying nature sounds, city scene with city sounds, or standard care. The primary outcome was a visual analog scale (0-10) of pain. Prespecified secondary analyses included categorizing pain as mild and moderate to severe and using multiple logistic regression to adjust for potential confounding variables. One hundred and twenty (120) subjects were enrolled: 44 in the Nature arm, 39 in the City arm, and 37 in the Standard Care arm. The mean pain scores, which were the primary outcome, were not significantly different between the three arms. A higher proportion in the Standard Care arm had moderate-to-severe pain (pain rating ≥4) than in the Nature arm (78.4% versus 60.5%), though this was not statistically significant (p = 0.097). This difference was statistically significant after adjusting for differences in the operators who performed the procedures (odds ratio = 3.71, p = 0.02). We confirmed earlier findings showing that BMAB is poorly tolerated. While mean pain scores were not significantly different between the study arms, secondary analyses suggest that viewing a nature scene while listening to nature sounds is a safe, inexpensive method that may reduce pain during BMAB. This approach should be considered to alleviate pain during invasive procedures.

  9. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  10. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: First, the filtered responses should generate...... an acoustic separation between the control regions. Secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...

  11. [Music, pulse, heart and sport].

    Science.gov (United States)

    Gasenzer, E R; Leischik, R

    2018-02-01

    Music, with its various elements, such as rhythm, sound and melody had the unique ability even in prehistoric, ancient and medieval times to have a special fascination for humans. Nowadays, it is impossible to eliminate music from our daily lives. We are accompanied by music in shopping arcades, on the radio, during sport or leisure time activities and in wellness therapy. Ritualized drumming was used in the medical sense to drive away evil spirits or to undergo holy enlightenment. Today we experience the varied effects of music on all sensory organs and we utilize its impact on cardiovascular and neurological rehabilitation, during invasive cardiovascular procedures or during physical activities, such as training or work. The results of recent studies showed positive effects of music on heart rate and in therapeutic treatment (e. g. music therapy). This article pursues the impact of music on the body and the heart and takes sports medical aspects from the past and the present into consideration; however, not all forms of music and not all types of musical activity are equally suitable and are dependent on the type of intervention, the sports activity or form of movement and also on the underlying disease. This article discusses the influence of music on the body, pulse, on the heart and soul in the past and the present day.

  12. Can road traffic mask sound from wind turbines? Response to wind turbine sound at different levels of road traffic sound

    International Nuclear Information System (INIS)

    Pedersen, Eja; Berg, Frits van den; Bakker, Roel; Bouma, Jelte

    2010-01-01

    Wind turbines are favoured in the switch-over to renewable energy. Suitable sites for further developments could be difficult to find as the sound emitted from the rotor blades calls for a sufficient distance to residents to avoid negative effects. The aim of this study was to explore if road traffic sound could mask wind turbine sound or, in contrast, increases annoyance due to wind turbine noise. Annoyance of road traffic and wind turbine noise was measured in the WINDFARMperception survey in the Netherlands in 2007 (n=725) and related to calculated levels of sound. The presence of road traffic sound did not in general decrease annoyance with wind turbine noise, except when levels of wind turbine sound were moderate (35-40 dB(A) Lden) and road traffic sound level exceeded that level with at least 20 dB(A). Annoyance with both noises was intercorrelated but this correlation was probably due to the influence of individual factors. Furthermore, visibility and attitude towards wind turbines were significantly related to noise annoyance of modern wind turbines. The results can be used for the selection of suitable sites, possibly favouring already noise exposed areas if wind turbine sound levels are sufficiently low.

  13. Heart valve cardiomyocytes of mouse embryos express the serotonin transporter SERT

    International Nuclear Information System (INIS)

    Pavone, Luigi Michele; Spina, Anna; Lo Muto, Roberta; Santoro, Dionea; Mastellone, Vincenzo; Avallone, Luigi

    2008-01-01

    Multiple lines of evidence demonstrate a role for serotonin and its transporter SERT in heart valve development and disease. By utilizing a Cre/loxP system driven by SERT gene expression, we recently demonstrated a regionally restricted distribution of SERT-expressing cells in the developing mouse heart. In order to characterize the cell types exhibiting SERT expression within the mouse heart valves at early developmental stages, in this study we performed immunohistochemistry for Islet1 (Isl1) and connexin-43 (Cx-43) on heart sections from SERT Cre/+;ROSA26R embryos previously stained with X-gal. We observed the co-localization of LacZ staining with Isl1 labelling in the outflow tract, the right ventricle and the conal region of the E11.5 mouse heart. Cx-43 labelled cells co-localized with LacZ stained cells in the forming atrioventricular valves. These results demonstrate the cardiomyocyte phenotype of SERT-expressing cells in heart valves of the developing mouse heart, thus suggesting an active role of SERT in early heart valve development.

  14. Hypoplastic left heart syndrome

    Directory of Open Access Journals (Sweden)

    Thiagarajan Ravi

    2007-05-01

    Full Text Available Hypoplastic left heart syndrome (HLHS) refers to the abnormal development of the left-sided cardiac structures, resulting in obstruction to blood flow from the left ventricular outflow tract. In addition, the syndrome includes underdevelopment of the left ventricle, aorta, and aortic arch, as well as mitral atresia or stenosis. HLHS has been reported to occur in approximately 0.016 to 0.036% of all live births. Newborn infants with the condition generally are born at full term and initially appear healthy. As the arterial duct closes, the systemic perfusion becomes decreased, resulting in hypoxemia, acidosis, and shock. Usually, no heart murmur, or a non-specific heart murmur, may be detected. The second heart sound is loud and single because of aortic atresia. Often the liver is enlarged secondary to congestive heart failure. The embryologic cause of the disease, as in the case of most congenital cardiac defects, is not fully known. The most useful diagnostic modality is the echocardiogram. The syndrome can be diagnosed by fetal echocardiography between 18 and 22 weeks of gestation. Differential diagnosis includes other left-sided obstructive lesions where the systemic circulation is dependent on ductal flow (critical aortic stenosis, coarctation of the aorta, interrupted aortic arch). Children with the syndrome require surgery as neonates, as they have duct-dependent systemic circulation. Currently, there are two major modalities, primary cardiac transplantation or a series of staged functionally univentricular palliations. The treatment chosen depends on the preference and experience of the institution. Although survival following initial surgical intervention has improved significantly over the last 20 years, significant mortality and morbidity are present for both surgical strategies. As a result pediatric cardiologists continue to be challenged by discussions with families regarding initial decision

  15. Structure-borne sound structural vibrations and sound radiation at audio frequencies

    CERN Document Server

    Cremer, L; Petersson, Björn AT

    2005-01-01

    "Structure-Borne Sound" is a thorough introduction to structural vibrations with emphasis on audio frequencies and the associated radiation of sound. The book presents in-depth discussions of fundamental principles and basic problems, in order to enable the reader to understand and solve his own problems. It includes chapters dealing with measurement and generation of vibrations and sound, various types of structural wave motion, structural damping and its effects, impedances and vibration responses of the important types of structures, as well as with attenuation of vibrations, and sound radi

  16. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  17. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  18. Computer-assisted instruction; MR imaging of congenital heart disease

    International Nuclear Information System (INIS)

    Choi, Young Hi; Yu, Pil Mun; Lee, Sang Hoon; Choe, Yeon Hyeon; Kim, Yang Min

    1996-01-01

    To develop a software program for computer-assisted instruction on MR imaging of congenital heart disease for medical students and residents to achieve repetitive and effective self-learning. We used a film scanner (Scan Maker 35t) and an IBM-PC (486 DX-2, 60 MHz) for acquisition and storage of image data. The accessories attached to the main processor were a CD-ROM drive (Sony), sound card (Soundblaster-Pro), and speaker. We used Adobe Photoshop (v 3.0) and Paint Shop Pro (v 3.0) for preprocessing image data, and Paintbrush from Microsoft Windows 3.1 for labelling. The language used for programming was Visual Basic (v 3.0) from Microsoft Corporation. We developed a software program for computer-assisted instruction on MR imaging of congenital heart disease as an effective educational tool

  19. Mercury in Long Island Sound sediments

    Science.gov (United States)

    Varekamp, J.C.; Buchholtz ten Brink, Marilyn R.; Mecray, E.I.; Kreulen, B.

    2000-01-01

    Mercury (Hg) concentrations were measured in 394 surface and core samples from Long Island Sound (LIS). The surface sediment Hg concentration data show a wide spread, with the highest values (around 600 ppb Hg) in westernmost LIS. Part of the observed range is related to variations in the bottom sedimentary environments, with higher Hg concentrations in the muddy depositional areas of central and western LIS. A strong residual trend of higher Hg values to the west remains when the data are normalized to grain size. Relationships between a tracer for sewage effluents (C. perfringens) and Hg concentrations indicate that between 0 and 50% of the Hg is derived from sewage sources for most samples from the western and central basins. A higher percentage of sewage-derived Hg is found in samples from the westernmost section of LIS and in some local spots near urban centers. The remainder of the Hg is carried into the Sound with contaminated sediments from the watersheds, and a small fraction enters the Sound as in situ atmospheric deposition. The Hg-depth profiles of several cores have well-defined contamination profiles that extend to pre-industrial background values. These data indicate that Hg levels in the Sound have increased by a factor of 5-6 over the last few centuries, but Hg levels in LIS sediments have declined in modern times by up to 30%. The concentrations of C. perfringens increased exponentially in the top core sections, which had declining Hg concentrations, suggesting a recent decline in Hg fluxes that is unrelated to sewage effluents. The observed spatial and historical trends show Hg fluxes to LIS from sewage effluents, contaminated sediment input from the Connecticut River, point source inputs of strongly contaminated sediment from the Housatonic River, variations in the abundance of Hg carrier phases such as TOC and Fe, and focusing of sediment-bound Hg in association with westward sediment transport within the Sound.

  20. ERDA artificial heart program workshop. Final report, September 1, 1975--August 31, 1976

    International Nuclear Information System (INIS)

    Kantrowitz, A.; Altieri, F.; Beall, A.

    1976-08-01

    The major conclusions of the ERDA Artificial Heart Program Workshop are that the concept of a biologically compatible mechanical device which can totally replace the heart is sound, that such a device is needed as an alternative to cardiac transplantation and that its development is a realistic goal. The major recommendation of the committee is that an ERDA program with primary orientation toward development of a total heart replacement should continue, with assured funding about 50 percent higher than at present, for a minimum of 3 additional years at which time another major review should take place. To achieve better management of the program it is recommended that the present contract effort be reorganized under one prime contractor with responsibility for development and demonstration of the ERDA artificial heart system. The formation of a joint artificial heart advisory committee to improve coordination between ERDA and NHLI is also recommended. The committee suggests future policies and directions which it believes will lead to more effective use of funds available for specific aspects of the program. These include the nuclear heart source, engine, blood pump, biomaterials and overall system reliability. Possible future goals for the program are also proposed

  1. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear-but our uses for light and sound go far beyond simply seeing a photo or hearing a song. A concentrated beam of light, lasers are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  2. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with the sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material reveal the form...... geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be caught in a textile form....

  3. Three-dimensional interpretation of TEM soundings

    Science.gov (United States)

    Barsukov, P. O.; Fainberg, E. B.

    2013-07-01

    We describe the approach to the interpretation of electromagnetic (EM) sounding data which iteratively adjusts the three-dimensional (3D) model of the environment by local one-dimensional (1D) transformations and inversions and reconstructs the geometrical skeleton of the model. The final 3D inversion is carried out with the minimal number of the sought parameters. At each step of the interpretation, the model of the medium is corrected according to the geological information. The practical examples of the suggested method are presented.

  4. A Measure Based on Beamforming Power for Evaluation of Sound Field Reproduction Performance

    Directory of Open Access Journals (Sweden)

    Ji-Ho Chang

    2017-03-01

    Full Text Available This paper proposes a measure to evaluate sound field reproduction systems with an array of loudspeakers. The spatially-averaged squared error of the sound pressure between the desired and the reproduced field, namely the spatial error, has been widely used, which has considerable problems in two conditions. First, in non-anechoic conditions, room reflections substantially deteriorate the spatial error, although these room reflections affect human localization to a lesser degree. Second, for 2.5-dimensional reproduction of spherical waves, the spatial error increases consistently due to the difference in the amplitude decay rate, whereas the degradation of human localization performance is limited. The measure proposed in this study is based on the beamforming powers of the desired and the reproduced fields. Simulation and experimental results show that the proposed measure is less sensitive to room reflections and the amplitude decay than the spatial error, which is likely to agree better with the human perception of source localization.
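    The beamforming power that the proposed measure builds on can be illustrated with a plain delay-and-sum beamformer for a linear microphone array. The array geometry, frequency, and source angle below are made-up values for the sketch, not parameters from the paper:

```python
import numpy as np

def beam_power(p, mic_x, freq, angles_deg, c=343.0):
    """Delay-and-sum beamforming power of a single-frequency sound field,
    given one complex pressure per microphone of a linear array."""
    powers = []
    for a in np.deg2rad(angles_deg):
        tau = mic_x * np.sin(a) / c                # per-mic delays for look angle a
        w = np.exp(-2j * np.pi * freq * tau) / len(mic_x)
        powers.append(np.abs(np.vdot(w, p)) ** 2)  # |w^H p|^2
    return np.array(powers)

# Plane wave arriving from 30 degrees at an 8-mic array with 10 cm spacing.
mic_x = np.arange(8) * 0.1
freq, c = 1000.0, 343.0
p = np.exp(-2j * np.pi * freq * mic_x * np.sin(np.deg2rad(30.0)) / c)
angles = np.linspace(-90.0, 90.0, 181)
powers = beam_power(p, mic_x, freq, angles)
peak_angle = angles[np.argmax(powers)]  # peaks at the source direction
```

    Comparing such beam-power patterns for the desired and the reproduced fields emphasizes where the energy appears to come from, rather than the sample-by-sample pressure match that the spatial error penalizes.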

  5. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  6. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  7. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  8. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  9. Cortical processing of dynamic sound envelope transitions.

    Science.gov (United States)

    Zhou, Yi; Wang, Xiaoqin

    2010-12-08

    Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.
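    The two envelope descriptors used here, the local average of sound level and its rate of change, can be sketched for an amplitude-modulated test tone. The smoothing window and signal parameters below are arbitrary illustrative choices:

```python
import numpy as np

def envelope_features(x, fs, win_s=0.05):
    """Extract a slow amplitude envelope by rectification and moving-average
    smoothing, then return its local level and rate of change."""
    win = int(fs * win_s)
    kernel = np.ones(win) / win
    env = np.convolve(np.abs(x), kernel, mode="same")  # local average level
    rate = np.gradient(env) * fs                       # level change per second
    return env, rate

# 4 Hz amplitude modulation on a 1 kHz carrier, 1 s at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
x = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
env, rate = envelope_features(x, fs)
```

    The 4 Hz modulator falls in the 2-20 Hz range discussed above; `env` tracks its rises and falls while the fast 1 kHz carrier is averaged out.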

  10. Malignant Tumors Of The Heart

    International Nuclear Information System (INIS)

    Dubrava, J.

    2007-01-01

    Autoptic prevalence of heart tumors is 0.01-0.3%. Of these, 12-25% are malignant and 75-88% are benign. Malignancies are more frequently found in the right heart. Metastatic tumors occur 20-40 times more frequently than primary neoplasms. As many as 94% of primary malignant tumors are sarcomas, the most frequent of which are angiosarcomas. Heart metastases are only found in extensive dissemination. The highest prevalence of heart metastases is observed in melanoma, followed by malignant germ cell tumors, leukemia, lymphoma, and lung cancer. The clinical presentation is due to the combination of heart failure, embolism, arrhythmias, and pericardial effusion or tamponade. The symptoms depend on the anatomical localization and the tumor size but not on the histological type. Prognosis of heart malignancies is poor. Untreated patients die within several weeks to 2 years after the diagnosis is made. Whenever possible the heart tumor should be resected, although surgery is usually neither a definitive nor a sufficiently effective therapy. Patients with completely resectable sarcomas have a better prognosis (median survival 12-24 months) than patients with incomplete resection (3-10 months). Complete excision is possible in less than half of the patients. In some patients chemotherapy, radiotherapy, heart transplantation or a combination of them prolonged survival up to 2 years. Despite this treatment the median survival is only 1 year. (author)

  11. A review of intelligent systems for heart sound signal analysis.

    Science.gov (United States)

    Nabih-Ali, Mohammed; El-Dahshan, El-Sayed A; Yahia, Ashraf S

    2017-10-01

    Intelligent computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of physicians and reduce the time required for accurate diagnosis. CAD systems could provide physicians with a suggestion about the diagnosis of heart diseases. The objective of this paper is to review recently published preprocessing, feature extraction and classification techniques and the state of the art of phonocardiogram (PCG) signal analysis. Published literature reviewed in this paper shows the potential of machine learning techniques as a design tool in PCG CAD systems and reveals that CAD systems for PCG signal analysis are still an open problem. Related studies are compared with respect to their datasets, feature extraction techniques and the classifiers they used. Current achievements and limitations in developing CAD systems for PCG signal analysis using machine learning techniques are presented and discussed. In the light of this review, a number of future research directions for PCG signal analysis are provided.
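    The pipeline these CAD systems share, preprocessing, feature extraction, then classification, can be caricatured in a few lines. The features, the nearest-centroid classifier, and the synthetic "normal"/"murmur" tones below are illustrative stand-ins, not taken from any system in the review:

```python
import numpy as np

def pcg_features(x, fs):
    """Toy feature vector for a PCG segment: energy, zero-crossing rate,
    and dominant frequency (real systems use richer wavelet/MFCC features)."""
    energy = float(np.mean(x ** 2))
    zcr = float(np.mean(np.diff(np.signbit(x).astype(int)) != 0))
    spectrum = np.abs(np.fft.rfft(x))
    dom_freq = float(np.fft.rfftfreq(len(x), 1.0 / fs)[np.argmax(spectrum)])
    return np.array([energy, zcr, dom_freq])

def nearest_centroid(train_X, train_y, test_x):
    """Classify by distance to the per-class mean feature vector."""
    labels = sorted(set(train_y))
    centroids = {c: train_X[np.array(train_y) == c].mean(axis=0) for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(test_x - centroids[c]))

# Synthetic 1 s "recordings": low-frequency "normal" vs high-frequency "murmur".
fs = 2000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
tone = lambda f: np.sin(2 * np.pi * f * t) + 0.05 * rng.standard_normal(fs)
train_X = np.array([pcg_features(tone(f), fs) for f in (40, 50, 300, 400)])
train_y = ["normal", "normal", "murmur", "murmur"]
label = nearest_centroid(train_X, train_y, pcg_features(tone(45), fs))
```

    The reviewed systems differ mainly in which box of this pipeline they refine: denoising and segmentation up front, wavelet or spectral features in the middle, and neural networks or support vector machines at the end.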

  12. Driving the SID chip: Assembly language, composition, and sound design for the C64

    Directory of Open Access Journals (Sweden)

    James Newman

    2017-12-01

    Full Text Available The MOS6581, more commonly known as the Sound Interface Device, or SID chip, was the sonic heart of the Commodore 64 home computer. By considering the chip’s development, specification, uses and creative abuses by composers and programmers, alongside its continuing legacy, this paper argues that, more than any other device, the SID chip is responsible for shaping the sound of videogame music. Compared with the brutal atonality of chips such as Atari’s TIA, the SID chip offers a complex 3-channel synthesizer with dynamic waveform selection, per-channel ADSR envelopes, multi-mode filter, ring and cross modulation. However, while the specification is sophisticated, the exploitation of the vagaries and imperfections of the chip are just as significant to its sonic character. As such, the compositional, sound design and programming techniques developed by 1980s composer-coders like Rob Hubbard and Martin Galway are central in defining the distinctive sound of C64 gameplay. Exploring the affordances of the chip and the distinctive ways they were harnessed, the argument of this paper centers on the inexorable link between the technological and the musical. Crucially, composers like Hubbard et al. developed their own bespoke low-level drivers to interface with the SID chip to create pseudo-polyphony through rapid arpeggiation and channel sharing, drum synthesis through waveform manipulation, portamento, and even sample playback. This paper analyses the indivisibility of sound design, synthesis and composition in the birth of these musical forms and aesthetics, and assesses their impact on what would go on to be defined as chiptunes.

  13. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity...... and initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. A European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs...... 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24...

  14. Remembering that big things sound big: Sound symbolism and associative memory.

    Science.gov (United States)

    Preziosi, Melissa A; Coane, Jennifer H

    2017-01-01

    According to sound symbolism theory, individual sounds or clusters of sounds can convey meaning. To examine the role of sound symbolic effects on processing and memory for nonwords, we developed a novel set of 100 nonwords to convey largeness (nonwords containing plosive consonants and back vowels) and smallness (nonwords containing fricative consonants and front vowels). In Experiments 1A and 1B, participants rated the size of the 100 nonwords and provided definitions to them as if they were products. Nonwords composed of fricative/front vowels were rated as smaller than those composed of plosive/back vowels. In Experiment 2, participants studied sound symbolic congruent and incongruent nonword and participant-generated definition pairings. Definitions paired with nonwords that matched the size and participant-generated meanings were recalled better than those that did not match. When the participant-generated definitions were re-paired with other nonwords, this mnemonic advantage was reduced, although still reliable. In a final free association study, the possibility that plosive/back vowel and fricative/front vowel nonwords elicit sound symbolic size effects due to mediation from word neighbors was ruled out. Together, these results suggest that definitions that are sound symbolically congruent with a nonword are more memorable than incongruent definition-nonword pairings. This work has implications for the creation of brand names and how to create brand names that not only convey desired product characteristics, but also are memorable for consumers.

  15. An Antropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, to a talk about anthropology of sound, sound studies, musical canons and ideology.

  16. Interaction of Number Magnitude and Auditory Localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Jungilligens, Johannes; Getzmann, Stephan

    2016-01-01

    The interplay of perception and memory is very evident when we perceive and then recognize familiar stimuli. Conversely, information in long-term memory may also influence how a stimulus is perceived. Prior work on number cognition in the visual modality has shown that in Western number systems long-term memory for the magnitude of smaller numbers can influence performance involving the left side of space, while larger numbers have an influence toward the right. Here, we investigated in the auditory modality whether a related effect may bias the perception of sound location. Subjects (n = 28) used a swivel pointer to localize noise bursts presented from various azimuth positions. The noise bursts were preceded by a spoken number (1-9) or, as a nonsemantic control condition, numbers that were played in reverse. The relative constant error in noise localization (forward minus reversed speech) indicated a systematic shift in localization toward more central locations when the number was smaller and toward more peripheral positions when the preceding number magnitude was larger. These findings do not support the traditional left-right number mapping. Instead, the results may reflect an overlap between codes for number magnitude and codes for sound location as implemented by two channel models of sound localization, or possibly a categorical mapping stage of small versus large magnitudes. © The Author(s) 2015.

  17. An Algorithm for the Accurate Localization of Sounds

    National Research Council Canada - National Science Library

    MacDonald, Justin A

    2005-01-01

    .... The algorithm requires no a priori knowledge of the stimuli to be localized. The accuracy of the algorithm was tested using binaural recordings from a pair of microphones mounted in the ear canals of an acoustic mannequin...

  18. A Measure Based on Beamforming Power for Evaluation of Sound Field Reproduction Performance

    DEFF Research Database (Denmark)

    Chang, Ji-ho; Jeong, Cheol-Ho

    2017-01-01

    This paper proposes a measure to evaluate sound field reproduction systems with an array of loudspeakers. The spatially-averaged squared error of the sound pressure between the desired and the reproduced field, namely the spatial error, has been widely used, but it has considerable problems in two conditions. First, in non-anechoic conditions, room reflections substantially deteriorate the spatial error, although these room reflections affect human localization to a lesser degree. Second, for 2.5-dimensional reproduction of spherical waves, the spatial error increases consistently due to the difference in the amplitude decay rate, whereas the degradation of human localization performance is limited. The measure proposed in this study is based on the beamforming powers of the desired and the reproduced fields. Simulation and experimental results show that the proposed measure is less sensitive...

  19. Propagation of Finite Amplitude Sound in Multiple Waveguide Modes.

    Science.gov (United States)

    van Doren, Thomas Walter

    1993-01-01

    This dissertation describes a theoretical and experimental investigation of the propagation of finite amplitude sound in multiple waveguide modes. Quasilinear analytical solutions of the full second order nonlinear wave equation, the Westervelt equation, and the KZK parabolic wave equation are obtained for the fundamental and second harmonic sound fields in a rectangular rigid-wall waveguide. It is shown that the Westervelt equation is an acceptable approximation of the full nonlinear wave equation for describing guided sound waves of finite amplitude. A system of first order equations based on both a modal and harmonic expansion of the Westervelt equation is developed for waveguides with locally reactive wall impedances. Fully nonlinear numerical solutions of the system of coupled equations are presented for waveguides formed by two parallel planes which are either both rigid, or one rigid and one pressure release. These numerical solutions are compared to finite -difference solutions of the KZK equation, and it is shown that solutions of the KZK equation are valid only at frequencies which are high compared to the cutoff frequencies of the most important modes of propagation (i.e., for which sound propagates at small grazing angles). Numerical solutions of both the Westervelt and KZK equations are compared to experiments performed in an air-filled, rigid-wall, rectangular waveguide. Solutions of the Westervelt equation are in good agreement with experiment for low source frequencies, at which sound propagates at large grazing angles, whereas solutions of the KZK equation are not valid for these cases. At higher frequencies, at which sound propagates at small grazing angles, agreement between numerical solutions of the Westervelt and KZK equations and experiment is only fair, because of problems in specifying the experimental source condition with sufficient accuracy.
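
    For reference, the Westervelt equation discussed above is commonly written in its standard textbook form (with $p$ the acoustic pressure, $c_0$ the small-signal sound speed, $\delta$ the sound diffusivity, $\rho_0$ the ambient density and $\beta$ the coefficient of nonlinearity); the lossless case used in many of the comparisons corresponds to $\delta = 0$:

```latex
\nabla^{2} p
\;-\; \frac{1}{c_0^{2}}\,\frac{\partial^{2} p}{\partial t^{2}}
\;+\; \frac{\delta}{c_0^{4}}\,\frac{\partial^{3} p}{\partial t^{3}}
\;=\; -\,\frac{\beta}{\rho_0 c_0^{4}}\,\frac{\partial^{2} p^{2}}{\partial t^{2}}
```

The KZK equation mentioned in the abstract is a parabolic (one-way, small-grazing-angle) approximation of this equation, which is consistent with the abstract's finding that KZK solutions fail when modes propagate at large grazing angles.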

  20. Subband Approach to Bandlimited Crosstalk Cancellation System in Spatial Sound Reproduction

    Science.gov (United States)

    Bai, Mingsian R.; Lee, Chih-Chung

    2006-12-01

    Crosstalk cancellation system (CCS) plays a vital role in spatial sound reproduction using multichannel loudspeakers. However, the technique is still not in widespread practical use due to its heavy computational load. To reduce this load, a bandlimited CCS based on a subband filtering approach is presented in this paper. A pseudoquadrature mirror filter (QMF) bank is employed in the implementation of the CCS filters, which are bandlimited to 6 kHz, where human localization is most sensitive. In addition, a frequency-dependent regularization scheme is adopted in designing the CCS inverse filters. To justify the proposed system, subjective listening experiments were undertaken in an anechoic room. The experiments include two parts: the source localization test and the sound quality test. Analysis of variance (ANOVA) is applied to process the data and assess the statistical significance of the subjective experiments. The results indicate that the bandlimited CCS performed comparably to the fullband CCS, whereas the computational load was reduced by approximately eighty percent.
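
    The frequency-dependent regularization mentioned above can be sketched, per frequency bin, as a Tikhonov-regularized matrix inversion. The following is a minimal numpy illustration, not the paper's implementation; the 2x2 plant matrix and the regularization weight are invented toy values:

```python
import numpy as np

def ccs_filters(C, beta):
    """Regularized CCS inverse filters for one frequency bin.

    C    : (ears x loudspeakers) acoustic plant matrix at this frequency
    beta : regularization weight for this frequency band
    """
    CH = C.conj().T
    # Tikhonov-regularized least-squares inverse: (C^H C + beta I)^-1 C^H
    return np.linalg.solve(CH @ C + beta * np.eye(C.shape[1]), CH)

# Toy 2x2 plant: direct paths of gain 1.0, crosstalk paths of gain 0.4
C = np.array([[1.0, 0.4],
              [0.4, 1.0]], dtype=complex)
H = ccs_filters(C, beta=1e-3)
# H @ C approximates the identity matrix: crosstalk is cancelled at
# this bin. A larger beta trades cancellation depth for filter effort.
```

In a frequency-dependent scheme, beta is chosen per band, e.g. larger outside the 6 kHz passband where cancellation is not attempted.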

  1. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. … from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  2. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound

  3. Fourier analysis of heart SPECT slices: from remodelation to function?

    International Nuclear Information System (INIS)

    Zigman, M.; Prpic, H.; Lokner, V.

    1994-01-01

    The aim of this study was to determine the character of the spatial distribution of labelled erythrocytes in the heart chambers, lungs and great blood vessels in relation to the function of the left and right heart. The investigation included a total of 142 subjects, 28 of whom were without subjective and clinical signs of heart disease, as well as 56 after myocardial infarction (30 of anterior localization, 26 of inferior infarction), 35 with predominant left heart disease (aortic valve disease, dilatative myocardiopathy, etc.) and 23 with predominant right heart disease (atrial septal defect, mitral valve disease). Radionuclide ventriculography (RNV) at rest and thorax SPECT were performed in all subjects with 740 MBq Tc-99m after in vivo erythrocyte labelling with pyrophosphate. Ultrasound investigation was performed on all the subjects with heart disease, and 87 of them underwent invasive cardiac investigation. RNV analysis revealed scintigraphic data on the left and right ventricle: global ejection fraction (GEF), end-systolic volume (ESV), end-diastolic volume (EDV), fast filling rate (FFR), fast emptying rate (FER), as well as regional wall motion shortening. Reconstruction of 64x64x8 SPECT images resulted in 3x64 slices (transversal, coronal and sagittal). Fourier analysis of 20-32 reconstructed slices in all three dimensions gave an amplitude image of the intensity distribution of labelled erythrocytes in the heart chambers, lungs and great blood vessels, as well as a phase display of the spatial localization of regional amplitude values. Results of joint ROC curves constructed for detection, localization and character of heart disease in all subjects revealed significant clinical information content of the SPECT data. Evaluation of RI retention using amplitude images in 3D provides insight into regional changes of volume, particularly for atrial and lung involvement. (author)

  4. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    Full Text Available The relationship between meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages we find commonalities among sound shapes for words referring to same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  5. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  6. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    Full Text Available In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  7. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

    The question whether there is a natural connection between sound and meaning or if they are related only by convention has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming more and more clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects such as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441. © 2017 Wiley Periodicals, Inc.

  8. The local expression of adult chicken heart myosins during development. I. The three days embryonic chicken heart

    NARCIS (Netherlands)

    Sanders, E.; Moorman, A. F.; Los, J. A.

    1984-01-01

    Immunofluorescence studies were performed on serial sections of three days embryonic chicken hearts using antibodies specific for adult atrial and ventricular myosin heavy chains respectively. The anti-ventricular myosin serum reacted with the entire myocardium showing a decreasing intensity going

  9. Applications of Hilbert Spectral Analysis for Speech and Sound Signals

    Science.gov (United States)

    Huang, Norden E.

    2003-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed, and its natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same number of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identifications of embedded structures. This method can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound signal enhancement and filtering. Additionally, the acoustic signals from machinery are essentially the way the machines talk to us: whether transmitted as sound through air or as vibration on the machines, they can tell us the operating conditions of the machines. Thus, we can use the acoustic signal to diagnose the problems of machines.
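
    The instantaneous frequency mentioned above is, for a single IMF, the time derivative of the phase of its analytic signal. A minimal sketch using scipy's Hilbert transform (a pure 50 Hz tone stands in for an IMF here; real data would first be decomposed by EMD):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                        # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 50.0 * t)   # stand-in for a single IMF

analytic = hilbert(x)                           # x + j * H{x}
phase = np.unwrap(np.angle(analytic))           # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz, one sample shorter
```

Away from the record edges, inst_freq stays near 50 Hz; for a chirp or a real IMF it tracks the frequency sample by sample, which is what gives the Hilbert spectrum its sharp time-frequency localization.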

  10. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted … The modulation caused by sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  11. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  12. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

    Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking with several examples shown. Various aspects of musical sound spectra such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis are given: "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment" are covered. Methods for timbre morphing are given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.

  13. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to discuss analytically spatial aspects of recorded sound William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  14. Sound source location in cavitating tip vortices

    International Nuclear Information System (INIS)

    Higuchi, H.; Taghavi, R.; Arndt, R.E.A.

    1985-01-01

    Utilizing an array of three hydrophones, individual cavitation bursts in a tip vortex could be located. Theoretically, four hydrophones are necessary. Hence the data from three hydrophones are supplemented with photographic observation of the cavitating tip vortex. The cavitation sound sources are found to be localized to within one base chord length from the hydrofoil tip. This appears to correspond to the region of initial tip vortex roll-up. A more extensive study with a four sensor array is now in progress
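
    The hydrophone-count claim follows from time-difference-of-arrival (TDOA) geometry: each sensor pair constrains the source to one sheet of a hyperboloid, so a 3-D fix generically needs four sensors, while three suffice in a plane. A toy 2-D grid-search sketch (sensor layout, source position, grid and sound speed are invented for illustration, not taken from the experiment):

```python
import numpy as np

c = 1500.0  # nominal speed of sound in water, m/s
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # planar array
source = np.array([0.3, 0.8])   # "unknown" position to recover

# Simulated measured TDOAs, referenced to sensor 0
dist = np.linalg.norm(sensors - source, axis=1)
tdoa = (dist[1:] - dist[0]) / c

# Grid search: pick the point whose predicted TDOAs best match
xs = np.linspace(-1.0, 2.0, 301)
ys = np.linspace(-1.0, 2.0, 301)
X, Y = np.meshgrid(xs, ys)
pts = np.stack([X.ravel(), Y.ravel()], axis=1)
d = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)
err = np.sum(((d[:, 1:] - d[:, :1]) / c - tdoa) ** 2, axis=1)
estimate = pts[np.argmin(err)]
```

With only three sensors the hyperbolae can intersect ambiguously, which is why the study supplements the acoustic fix with photographic observation of the cavitating vortex.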

  15. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology of game audio studies and identifies significant aesthetic differences between film sound and sound in video game projects. The article attempts to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds, and offers a range of research methods that treat video game scoring as a contemporary creative practice.

  16. Initial uncertainty impacts statistical learning in sound sequence processing.

    Science.gov (United States)

    Todd, Juanita; Provost, Alexander; Whitson, Lisa; Mullens, Daniel

    2016-11-01

    This paper features two studies confirming a lasting impact of first learning on how subsequent experience is weighted in early relevance-filtering processes. In both studies participants were exposed to sequences of sound that contained a regular pattern on two different timescales. Regular patterning in sound is readily detected by the auditory system and used to form "prediction models" that define the most likely properties of sound to be encountered in a given context. The presence and strength of these prediction models is inferred from changes in automatically elicited components of auditory evoked potentials. Both studies employed sound sequences that contained both a local and longer-term pattern. The local pattern was defined by a regular repeating pure tone occasionally interrupted by a rare deviating tone (p=0.125) that was physically different (a 30 ms vs. 60 ms duration difference in one condition and a 1000 Hz vs. 1500 Hz frequency difference in the other). The longer-term pattern was defined by the rate at which the two tones alternated probabilities (i.e., the tone that was first rare became common and the tone that was first common became rare). There was no task related to the tones and participants were asked to ignore them while focussing attention on a movie with subtitles. Auditory-evoked potentials revealed long lasting modulatory influences based on whether the tone was initially encountered as rare and unpredictable or common and predictable. The results are interpreted as evidence that probability (or indeed predictability) assigns a differential information-value to the two tones that in turn affects the extent to which prediction models are updated and imposed. These effects are exposed for both common and rare occurrences of the tones. The studies contribute to a body of work that reveals that probabilistic information is not faithfully represented in these early evoked potentials and instead exposes that predictability (or conversely

  17. Hoeren unter Wasser: Absolute Reizschwellen und Richtungswahrnehnumg (Underwater Hearing: Absolute Thresholds and Sound Localization),

    Science.gov (United States)

    The article deals first with the theoretical foundations of underwater hearing and the effects of the acoustical characteristics of water on hearing. … lead to the conclusion that, in water, man can locate the direction of sound at low and at very high tonal frequencies of the audio range, but this ability probably vanishes in the middle range of frequencies. (Author)

  18. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade an increased attention has been paid to, for instance, a category such as 'sound art', together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term as musical sound – a recurring example being 'noise'.

  19. Pericardial effusion and pericardial compartments after open heart surgery

    International Nuclear Information System (INIS)

    Duvernoy, O.; Larsson, S.G.; Persson, K.; Thuren, J.; Wikstroem, G.; Akademiska Sjukhuset, Uppsala; Akademiska Sjukhuset, Uppsala

    1990-01-01

    Thirty-three patients with pericardial effusion after open heart surgery were investigated with computed tomography (CT). Twelve of the 33 patients also underwent echocardiography prior to pericardiocentesis. The effusions were typed according to the results of the CT investigation. Because of postoperative adhesions, typical patterns of localized pericardial effusions were found in 16 patients. The localized compartments were seen on the right and left side of the heart and around the aorta and the pulmonary artery. CT was therefore shown to be of value for selecting the approach for drainage with catheter pericardiocentesis. (orig.)

  20. Beneath sci-fi sound: primer, science fiction sound design, and American independent cinema

    OpenAIRE

    Johnston, Nessa

    2012-01-01

    Primer is a very low budget science-fiction film that deals with the subject of time travel; however, it looks and sounds quite distinctly different from other films associated with the genre. While Hollywood blockbuster sci-fi relies on "sound spectacle" as a key attraction, Primer by contrast sounds "lo-fi" and screen-centred, mixed in two-channel stereo rather than the now industry-standard 5.1 surround sound. Although this is partly a consequence of the economics of its production, the...

  1. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity. … is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions" has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  2. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  3. The diagnostics of a nuclear reactor by the analysis of boiling sound

    International Nuclear Information System (INIS)

    Kudo, Kazuhiko; Tanaka, Yoshihisa; Ohsawa, Takaaki; Ohta, Masao

    1980-01-01

    This paper describes basic research on a method of detecting abnormality by analyzing the boiling sound produced when the heat transfer to the coolant becomes locally abnormal in a pressurized-water reactor. In this study, the power spectra of the sound were treated as a kind of pattern, and the aim was to diagnose the state inside the reactor exactly by analyzing changes in this pattern with an electronic computer. As the calculation method, linear discriminant function theory was applied. A subcritical experimental apparatus was used as a simulated reactor core vessel, and the boiling sound was received with a hydrophone, amplified, digitized and processed with a computer. The power spectra of the boiling sound were displayed on an oscilloscope, and the digital values were stored in a microcomputer. The method of processing the power spectra stored in the microcomputer is explained. The magnitude of the power spectra was large in the low-frequency region and decreased as the frequency became higher. The experimental conditions and the results are described. According to the results, considerably good discrimination capability was obtained. By utilizing the power spectra in the relatively low-frequency region, boiling sound can be detected with considerably high accuracy. (Kako, I.)
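
    The linear discriminant approach described above can be illustrated with a two-class Fisher discriminant applied to spectral feature vectors. This is a synthetic sketch only: the "spectra" are random stand-ins, not reactor measurements, with a boiling-like energy shift injected into the low-frequency bins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 8-bin "power spectra": boiling adds low-frequency energy
normal = rng.normal(0.0, 1.0, (50, 8))
boiling = rng.normal(0.0, 1.0, (50, 8))
boiling[:, :3] += 3.0   # extra energy in the three lowest bins

def fisher_lda(a, b):
    """Two-class Fisher discriminant: weight vector and midpoint threshold."""
    Sw = np.cov(a.T) + np.cov(b.T)                 # within-class scatter
    w = np.linalg.solve(Sw, a.mean(0) - b.mean(0))  # discriminant direction
    thr = w @ (a.mean(0) + b.mean(0)) / 2.0         # midpoint threshold
    return w, thr

w, thr = fisher_lda(normal, boiling)

# Projected scores above the threshold classify as "normal", below as "boiling"
normal_ok = (normal @ w > thr).mean()
boiling_ok = (boiling @ w < thr).mean()
```

Because the injected shift sits in the low-frequency bins, the discriminant weights concentrate there, mirroring the paper's observation that the low-frequency part of the spectrum carries most of the boiling signature.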

  4. The improvement of PWR(OPR-1000) Local Control Pannel

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joo-Youl; Kim, Min-Soo; Kim, Kyung-Min; Lee, Jun-Kou [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    A malfunction of a feature in an NPP can be caused by physical aging, spurious electrical signals, or natural disaster. A malfunction is usually first recognized through the alarm system; because of this importance, the design basis of the alarm system is described in FSAR 18.1.4.20 (alarm system design review). Operators at short distance can recognize a malfunction and the importance of an alarm, and the alarm sound also changes with frequency, which further aids recognition. This system, however, does not help field operators recognize alarms. In this study, a way of applying the FSAR provisions (alarm priority and color indication) to local control panels is suggested: an alarm sound suited to the field situation, alarm names, and status indication on circuit breakers are proposed to improve the local control panel overall, based on Hanul Unit 6, and thereby contribute to safe operation. The paper is drawn from improvement items for local control panels from the field operator's point of view; applying these improvements will require further research on local panels and collaboration with the related departments. If the improvements are applied, the qualitative effect on safe operation will increase and fatigue from work stress will decrease.

  5. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

    This article is an analysis of two sound art performances that took place in June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors of this article suggest exploring the events through approaching them as ‘situations’ (Doherty 2009). With this approach it becomes possible to engage ... and combine theories from several fields. Aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces: visual, auditory, performative, social, spatial and durational dimensions become ...

  6. Determining the speed of sound in the air by sound wave interference

    Science.gov (United States)

    Silva, Abel A.

    2017-07-01

    Mechanical waves propagate through material media. Sound is an example of a mechanical wave. In fluids like air, sound waves propagate through successive longitudinal perturbations of compression and decompression. Audible sound frequencies for human ears range from 20 to 20 000 Hz. In this study, the speed of sound v in the air is determined using the identification of maxima of interference from two synchronous waves at frequency f. The values of v were corrected to 0 °C. An experimental average value of v̄_exp = 336 ± 4 m s⁻¹ was found, which is 1.5% larger than the reference value. The standard deviation of 4 m s⁻¹ (1.2% of v̄_exp) is a value improved by use of the central limit theorem. The proposed procedure to determine the speed of sound in the air is intended as an academic activity for physics classes of scientific and technological courses in college.
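The procedure can be sketched numerically. All numbers below (frequency, temperature, maxima positions) are made-up illustration values, not the paper's data: the spacing between successive interference maxima gives the wavelength, v = f·λ, and the result is scaled to 0 °C using v ∝ √T (in kelvin).

```python
# Illustrative sketch of the interference procedure (all numbers are
# made-up example values, not the paper's data): two synchronous sources
# at frequency f produce maxima wherever the path difference is a whole
# number of wavelengths, so the spacing between successive maxima equals
# one wavelength, and v = f * wavelength.
f = 3450.0      # driving frequency, Hz (assumed)
t_room = 22.0   # room temperature, deg C (assumed)

# hypothetical path differences at successive interference maxima, metres
maxima = [0.101, 0.201, 0.299, 0.401, 0.500]

spacings = [b - a for a, b in zip(maxima, maxima[1:])]
wavelength = sum(spacings) / len(spacings)

v_room = f * wavelength                       # speed at room temperature
v_0 = v_room / (1 + t_room / 273.15) ** 0.5   # corrected to 0 deg C

print(f"wavelength = {wavelength:.4f} m")
print(f"v({t_room} C) = {v_room:.1f} m/s, corrected to 0 C: {v_0:.1f} m/s")
```

Averaging many independent spacing measurements is also where the central limit theorem mentioned in the abstract enters: the mean's standard deviation shrinks with the square root of the number of measurements.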

  7. Fluid Sounds

    DEFF Research Database (Denmark)

    Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970's, developed into various established research and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation...

  8. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.

  9. Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration

    OpenAIRE

    Park, Saebyul; Ban, Seonghoon; Hong, Dae Ryong; Yeo, Woon Seung

    2013-01-01

    SSN (Sound Surfing Network) is a performance system that provides a new musical experience by incorporating mobile phone-based spatial sound control into collaborative music performance. SSN enables both the performer and the audience to manipulate the spatial distribution of sound using the smartphones of the audience as a distributed speaker system. Proposing a new perspective on the social aspect of music appreciation, SSN will provide a new possibility for mobile music performances in the context of in...

  10. Hearing in alpacas (Vicugna pacos): audiogram, localization acuity, and use of binaural locus cues.

    Science.gov (United States)

    Heffner, Rickye S; Koay, Gimseong; Heffner, Henry E

    2014-02-01

    Behavioral audiograms and sound localization abilities were determined for three alpacas (Vicugna pacos). Their hearing at a level of 60 dB sound pressure level (SPL) (re 20 μPa) extended from 40 Hz to 32.8 kHz, a range of 9.7 octaves. They were most sensitive at 8 kHz, with an average threshold of -0.5 dB SPL. The minimum audible angle around the midline for 100-ms broadband noise was 23°, indicating relatively poor localization acuity and potentially supporting the finding that animals with broad areas of best vision have poorer sound localization acuity. The alpacas were able to localize low-frequency pure tones, indicating that they can use the binaural phase cue, but they were unable to localize pure tones above the frequency of phase ambiguity, thus indicating complete inability to use the binaural intensity-difference cue. In contrast, the alpacas relied on their high-frequency hearing for pinna cues; they could discriminate front-back sound sources using 3-kHz high-pass noise, but not 3-kHz low-pass noise. These results are compared to those of other hoofed mammals and to mammals more generally.

  11. Sound Exposure of Symphony Orchestra Musicians

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Pedersen, Ellen Raben; Juhl, Peter Møller

    2011-01-01

    Background: Assessment of sound exposure by noise dosimetry can be challenging, especially when measuring the exposure of classical orchestra musicians, where sound originates from many different instruments. A new measurement method of bilateral sound exposure of classical musicians was developed and used to characterize sound exposure of the left and right ear simultaneously in two different symphony orchestras. Objectives: To measure binaural sound exposure of professional classical musicians and to identify possible exposure risk factors of specific musicians. Methods: Sound exposure was measured ... Results: ... dBA, and their left ear was exposed 4.6 dB more than the right ear. Percussionists were exposed to high sound peaks >115 dBC, but less continuous sound exposure was observed in this group. Musicians were exposed up to an LAeq8h of 92 dB, and a majority of musicians were exposed to sound levels exceeding ...

  12. Initial validation of a healthcare needs scale for young people with congenital heart disease.

    Science.gov (United States)

    Chen, Chi-Wen; Ho, Ciao-Lin; Su, Wen-Jen; Wang, Jou-Kou; Chung, Hung-Tao; Lee, Pi-Chang; Lu, Chun-Wei; Hwang, Be-Tau

    2018-01-01

    To validate the initial psychometric properties of a Healthcare Needs Scale for Youth with Congenital Heart Disease. As the number of patients with congenital heart disease surviving to adulthood increases, the transitional healthcare needs for adolescents and young adults with congenital heart disease require investigation. However, few tools comprehensively identify the healthcare needs of youth with congenital heart disease. A cross-sectional study was employed to examine the psychometric properties of the Healthcare Needs Scale for Youth with Congenital Heart Disease. The sample consisted of 500 patients with congenital heart disease, aged 15-24 years, from paediatric cardiology departments and covered the period from March-August 2015. The patients completed the 25-item Healthcare Needs Scale for Youth with Congenital Heart Disease, the questionnaire on health needs for adolescents and the WHO Quality of Life-BREF. Reliability and construct, concurrent, predictive and known-group validity were examined. The Healthcare Needs Scale for Youth with Congenital Heart Disease includes three dimensions, namely health management, health policy and individual and interpersonal relationships, which consist of 25 items. It demonstrated excellent internal consistency and sound construct, concurrent, predictive and known-group validity. The Healthcare Needs Scale for Youth with Congenital Heart Disease is a psychometrically robust measure of the healthcare needs of youth with congenital heart disease. It has the potential to provide nurses with a means to assess and identify the concerns of youth with congenital heart disease and to help them achieve a successful transition to adult care. © 2017 John Wiley & Sons Ltd.

  13. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    Science.gov (United States)

    Wolf, Gail Marie

    2016-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…

  14. Ionospheric Electron Densities at Mars: Comparison of Mars Express Ionospheric Sounding and MAVEN Local Measurement

    Czech Academy of Sciences Publication Activity Database

    Němec, F.; Morgan, D. D.; Fowler, C.M.; Kopf, A.J.; Andersson, L.; Gurnett, D. A.; Andrews, D.J.; Truhlík, Vladimír

    2017-01-01

    Roč. 122, č. 12 (2017), s. 12393-12405 E-ISSN 2169-9402 Institutional support: RVO:68378289 Keywords: Mars * ionosphere * MARSIS * Mars Express * MAVEN * radar sounding Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics OBOR OECD: Astronomy (including astrophysics, space science) http://onlinelibrary.wiley.com/doi/10.1002/2017JA024629/full

  15. Analysis of radiation fields in tomography on diffusion gaseous sound

    International Nuclear Information System (INIS)

    Bekman, I.N.

    1999-01-01

    Prospects for applying equilibrium and stationary variants of diffusion tomography with radioactive gaseous sounds to the spatial reconstruction of heterogeneous media in materials technology were considered. Attention was devoted mainly to creating simple algorithms for detecting sound accumulation against the background of a monotonically varying concentration field. Algorithms for transforming a two-dimensional radiation field into a three-dimensional distribution of radiation sources were suggested. Methods of analytical continuation of the concentration field, permitting separation of regional anomalies from local ones and vice versa, were discussed. It was shown that both the equilibrium and stationary variants of diffusion tomography detect heterogeneity in the tested material, provide reconstruction of the spatial distribution of elements of its structure, and give an estimate of the relative degree of defectiveness.

  16. Migration patterns of post-spawning Pacific herring in a subarctic sound

    Science.gov (United States)

    Bishop, Mary Anne; Eiler, John H.

    2018-01-01

    Understanding the distribution of Pacific herring (Clupea pallasii) can be challenging because spawning, feeding and overwintering may take place in different areas separated by 1000s of kilometers. Along the northern Gulf of Alaska, Pacific herring movements after spring spawning are largely unknown. During the fall and spring, herring have been seen moving from the Gulf of Alaska into Prince William Sound, a large embayment, suggesting that fish spawning in the Sound migrate out into the Gulf of Alaska. We acoustic-tagged 69 adult herring on spawning grounds in Prince William Sound during April 2013 to determine seasonal migratory patterns. We monitored departures from the spawning grounds as well as herring arrivals and movements between the major entrances connecting Prince William Sound and the Gulf of Alaska. Departures of herring from the spawning grounds coincided with cessation of major spawning events in the immediate area. After spawning, 43 of 69 tagged herring (62%) moved to the entrances of Prince William Sound over a span of 104 d, although most fish arrived within 10 d of their departure from the spawning grounds. A large proportion remained in these areas until mid-June, most likely foraging on the seasonal bloom of large, Neocalanus copepods. Pulses of tagged herring detected during September and October at Montague Strait suggest that some herring returned from the Gulf of Alaska. Intermittent detections at Montague Strait and the Port Bainbridge passages from September through early January (when the transmitters expired) indicate that herring schools are highly mobile and are overwintering in this area. The pattern of detections at the entrances to Prince William Sound suggest that some herring remain in the Gulf of Alaska until late winter. The results of this study confirm the connectivity between local herring stocks in Prince William Sound and the Gulf of Alaska.

  17. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

    The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using both physical and digital models at three distinct scales. The results suggest hyperboloid geometry, while difficult to fabricate, facilitates sound scattering.

  18. The idea of corporeity analyzed from heart transplanted patients

    Directory of Open Access Journals (Sweden)

    Ana Mª Palmar Santos

    2008-09-01

    Full Text Available Introduction: Heart transplantation in Spain is a frequent and growing technique with a big personal, social and financial impact on those involved. However, the analysis of corporeity, although it is a key constituent element in an integral definition of the human being, is poorly addressed in the process. Objective: This work seeks to analyse heart-transplanted patients' perception of their own corporeity, through the patients themselves as well as through their closest relatives. Method: We approach the study from the phenomenological paradigm, which consists of fully describing lived experiences and the consequent perceptions in order to obtain a holistic and deep knowledge of reality; accordingly, we carry out descriptive research with a qualitative approach. Sound-recorded open interviews will be conducted with subjects who received a heart transplant in the Transplant Unit of Puerta de Hierro Hospital in Madrid. The initial informant selection criteria are: 1. Patients older than eighteen years who received a heart transplant within the last two years. 2. Interviews with relatives who normally live together with the transplanted patients. The individual and social perception of corporeity of each subject will be analysed, as well as that perception as a function of gender. Data will be obtained from the information generated in the interviews and will be analysed with the qualitative technique called 'discourse analysis'.

  19. Three integrated photovoltaic/sound barrier power plants. Construction and operational experience

    International Nuclear Information System (INIS)

    Nordmann, T.; Froelich, A.; Clavadetscher, L.

    2002-01-01

    After an international ideas competition by TNC Switzerland and Germany in 1996, six companies were given the opportunity to construct a prototype of their newly developed integrated PV sound-barrier concepts. The main goal was to develop highly integrated concepts allowing a reduction of PV sound-barrier system costs, as well as the demonstration of specific concepts for different noise situations. This project is closely linked with a German project: three of the competition concepts were demonstrated along a highway near Munich, constructed in 1997. The three Swiss installations had to be constructed at different locations, reflecting three typical situations for sound barriers. The first Swiss installation was the world's first bifacial PV sound barrier, built on a highway bridge at Wallisellen-Aubrugg in 1997. Operational experience with the installation is positive, but owing to the different efficiencies of the two cell sides, its specific yield lies somewhat behind that of a conventional PV installation. The second Swiss plant was finished in autumn 1998. The 'zig-zag' construction is situated along the railway line at Wallisellen in a densely inhabited area with some local shadowing. Its performance and specific yield are comparatively low due to a combination of several factors (geometry of the concept, inverter, high module temperature, local shadows). The third installation was constructed along motorway A1 at Bruettisellen in 1999. Its vertical panels are equipped with amorphous modules. The report shows that the performance of the system is reasonable, but the mechanical construction has to be improved. A small trial field with cells laminated directly onto the steel panel, also installed at Bruettisellen, could be the key development for this concept. This final report includes the evaluation and comparison of the monitored data from the past 24 months of operation. (author)

  20. In Conversation: David Brooks on Water Scarcity and Local-level ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2010-11-26

    Nov 26, 2010 ... While sound water management requires action from all levels, ... Local management is certainly an essential component in managing the world's water crisis. ... case studies that show the promise of local water management.

  1. 77 FR 37318 - Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort...

    Science.gov (United States)

    2012-06-21

    ...-AA00 Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort... Coast Guard will enforce a Safety Zone for the Sound of Independence event in the Santa Rosa Sound, Fort... during the Sound of Independence. During the enforcement period, entry into, transiting or anchoring in...

  2. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  3. [Considerations on local-regional anesthesia for ambulatory tooth extractions in patients with heart disease].

    Science.gov (United States)

    Debernardi, G; Borgogna, E

    1975-01-01

    Ambulatory dental extraction was performed on 150 patients with various forms of heart disease. No serious complications were noted with an anaesthetic without vasoconstriction (plain 3% carbocaine). The prior history was carefully studied and pressure values were determined. It is felt that heart disease does not form an absolute contraindication to ambulatory dental extraction.

  4. Gefinex 400S (Sampo) EM-soundings at Olkiluoto 2008

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2008-09-01

    In the beginning of June 2008 the Geological Survey of Finland (GTK) carried out electromagnetic frequency soundings with Gefinex 400S equipment (Sampo) in the vicinity of ONKALO at the Olkiluoto site investigation area. The sounding sites were first measured and marked in 2004, and the measurements have been repeated yearly in the same season. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electrical conductivity of the earth at ONKALO and the repository area. The measurements form two 1400 m long broadside profiles, which have a 200 m mutual distance and 200 m station separation. The profiles have been measured using 200, 500, and 800 m coil separations. Because of the strong electromagnetic noise, not all planned sites (48) could be measured. In 2008 the measurements were performed at the sites that were successful in 2007 (43 soundings). The numerous power lines and cables in the area generate local disturbances on the sounding curves, but the signal/noise ratio, even with long coil separations, and the repeatability of the results are reasonably good. However, the most suitable sites for monitoring purposes are those without strong surficial 3D effects. Comparison of the results of the 2004 to 2008 surveys shows differences on some ARD (apparent resistivity-depth) curves. These are mainly the result of modified man-made structures. The effects of changes in groundwater conditions are evidently slight. (orig.)

  5. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  6. Sounds of silence: How to animate virtual worlds with sound

    Science.gov (United States)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  7. How Pleasant Sounds Promote and Annoying Sounds Impede Health: A Cognitive Approach

    Directory of Open Access Journals (Sweden)

    Tjeerd C. Andringa

    2013-04-01

    Full Text Available This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of the perceiver can be understood in terms of core affect and motivation. This conceptual basis allows the formulation of a detailed cognitive model describing how sonic content, related to indicators of safety and danger, either allows full freedom over mind-states or forces the activation of a vigilance function with associated arousal. The model leads to a number of detailed predictions that can be used to provide existing soundscape approaches with a solid cognitive science foundation that may lead to novel approaches to soundscape design. These will take into account that louder sounds typically contribute to distal situational awareness while subtle environmental sounds provide proximal situational awareness. The role of safety indicators, mediated by proximal situational awareness and subtle sounds, should become more important in future soundscape research.

  8. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    Science.gov (United States)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.

  9. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...

  10. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.
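The frequency relations behind the musical scales that the book reviews can be illustrated with twelve-tone equal temperament (one common scale construction, used here as an example): each semitone step multiplies the frequency by 2^(1/12), so twelve steps double it, giving the octave.

```python
# Equal-temperament frequency relations: each semitone multiplies the
# frequency by 2**(1/12); an octave (12 semitones) doubles it.
A4 = 440.0  # concert pitch, Hz

def note_freq(semitones_from_a4):
    """Frequency of the note n semitones above (or below) A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

for name, n in [("A4", 0), ("C5", 3), ("E5", 7), ("A5", 12)]:
    print(f"{name}: {note_freq(n):.2f} Hz")
```

The same relation underlies the frequency-to-pitch mapping discussed in the book: equal ratios of frequency are perceived as equal pitch intervals.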

  11. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is studied experimentally using the time-difference method. It is found that the sound velocity increases with increasing bubble diameter and asymptotically approaches the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam, in which the attenuation of a sound wave due to scattering by the bubble walls is described equivalently as the effect of an additional length. This simple model reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that increasing the frequency markedly slows down the sound velocity, whereas the latter does not display a strong dependence on the solution concentration.
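The time-difference method named in the abstract can be sketched as follows: the velocity follows from the spacing between two receivers and the arrival-time difference of the same wavefront. The sampling rate, receiver spacing, delay and waveform below are assumed illustration values, not the paper's setup.

```python
# Minimal sketch of the time-difference method: v = spacing / dt, where
# dt is estimated from the cross-correlation peak of the two received
# signals. Sampling rate, spacing and delay are assumed illustration
# values, not the paper's data.
import numpy as np

fs = 1_000_000        # sampling rate, Hz (assumed)
spacing = 0.05        # receiver spacing, m (assumed)
true_delay = 280e-6   # simulated propagation delay, s

t = np.arange(0, 2e-3, 1 / fs)

def burst(t0):
    """Gaussian-windowed 20 kHz tone burst delayed by t0."""
    return np.exp(-((t - 5e-4 - t0) ** 2) / (2 * (5e-5) ** 2)) * np.sin(
        2 * np.pi * 20e3 * (t - t0)
    )

near, far = burst(0.0), burst(true_delay)  # signals at the two receivers

# estimate the delay from the peak of the cross-correlation
lag = int(np.argmax(np.correlate(far, near, mode="full"))) - (len(t) - 1)
dt = lag / fs
v = spacing / dt
print(f"estimated delay {dt * 1e6:.0f} us -> velocity {v:.0f} m/s")
```

Cross-correlation makes the delay estimate robust to noise; with a clean simulated burst, as here, the peak falls exactly at the true sample offset.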

  12. Investigation of fourth sound propagation in HeII in the presence of superflow

    International Nuclear Information System (INIS)

    Andrei, Y.E.

    1980-01-01

The temperature dependence of a superflow-induced downshift of the fourth sound velocity in HeII confined in various restrictive media was measured. We found that the magnitude of the downshift strongly depends on the restrictive medium, whereas the temperature dependence is universal. The results are interpreted in terms of local superflow velocities approaching the Landau critical velocity. This model provides an understanding of the nature of the downshift and correctly predicts the temperature dependence. The results show that the Landau excitation model, even when used at high velocities, where interactions between elementary excitations are substantial, yields good agreement with experiment when a first order correction is introduced to account for these interactions. In a separate series of experiments, fourth sound-like propagation in HeII in a grafoil-filled resonator was observed. The sound velocity was found to be more than an order of magnitude smaller than that of ordinary fourth sound. This significant reduction is explained in terms of a model in which the pore structure in grafoil is pictured as an ensemble of coupled Helmholtz resonators.

  13. OMNIDIRECTIONAL SOUND SOURCE

    DEFF Research Database (Denmark)

    1996-01-01

    A sound source comprising a loudspeaker (6) and a hollow coupler (4) with an open inlet which communicates with and is closed by the loudspeaker (6) and an open outlet, said coupler (4) comprising rigid walls which cannot respond to the sound pressures produced by the loudspeaker (6). According...

  14. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods to determine the velocity measurements are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  15. Product sounds : Fundamentals and application

    NARCIS (Netherlands)

    Ozcan-Vieira, E.

    2008-01-01

    Products are ubiquitous, so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about

  16. "The Heart Truth:" Using the Power of Branding and Social Marketing to Increase Awareness of Heart Disease in Women.

    Science.gov (United States)

    Long, Terry; Taubenheim, Ann; Wayman, Jennifer; Temple, Sarah; Ruoff, Beth

    2008-03-01

    In September 2002, the National Heart, Lung, and Blood Institute launched The Heart Truth, the first federally-sponsored national campaign aimed at increasing awareness among women about their risk of heart disease. A traditional social marketing approach, including an extensive formative research phase, was used to plan, implement, and evaluate the campaign. With the creation of the Red Dress as the national symbol for women and heart disease awareness, the campaign integrated a branding strategy into its social marketing framework. The aim was to develop and promote a women's heart disease brand that would create a strong emotional connection with women. The Red Dress brand has had a powerful appeal to a wide diversity of women and has given momentum to the campaign's three-part implementation strategy of partnership development, media relations, and community action. In addition to generating its own substantial programming, The Heart Truth became a catalyst for a host of other national and local educational initiatives, both large and small. By the campaign's fifth anniversary, surveys showed that women were increasingly aware of heart disease as their leading cause of death and that the rise in awareness was associated with increased action to reduce heart disease risk.

  17. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    Science.gov (United States)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated from an infinite vibrating plate attenuates exponentially with increasing a distance from the surface of the vibrating plate. However, a discontinuity of the sound field exists at the edge of the finite vibrating plate in practice, which broadens the wavenumber spectrum. A sound wave radiates over the evanescent sound field because of broadening of the wavenumber spectrum. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on a window function that is utilized in the field of signal analysis for reducing the broadening of the frequency spectrum. The optimization calculation is necessary for the design of window function suitable for suppressing sound radiation and securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. Therefore, we investigated a suitable method for calculating the sound pressure level at the far field to confirm the variation of the distribution of sound pressure level determined on the basis of the window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinite far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by the method used to calculate the distribution of the sound pressure level at an infinite far field using the wavenumber spectrum on the vibrating plate. According to the result of comparing the distributions of the sound pressure level in the cases with and without the window function, it was confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was

  18. Validation of PC-based Sound Card with Biopac for Digitalization of ECG Recording in Short-term HRV Analysis.

    Science.gov (United States)

    Maheshkumar, K; Dilara, K; Maruthy, K N; Sundareswaren, L

    2016-07-01

Heart rate variability (HRV) analysis is a simple and noninvasive technique capable of assessing autonomic nervous system modulation of heart rate (HR) in healthy as well as disease conditions. The aim of the present study was to compare (validate) the HRV using a temporal series of electrocardiograms (ECG) obtained by a simple analog amplifier with a PC-based sound card (Audacity) and a Biopac MP36 module. Based on the inclusion criteria, 120 healthy participants, including 72 males and 48 females, participated in the present study. Following the standard protocol, a 5-min ECG was recorded after 10 min of supine rest by the portable simple analog amplifier with PC-based sound card as well as by the Biopac module, with surface electrodes in the Lead II position simultaneously. All the ECG data were visually screened and found to be free of ectopic beats and noise. RR intervals from both ECG recordings were analyzed separately in Kubios software. Short-term HRV indexes in both the time and frequency domains were used. The unpaired Student's t-test and Pearson correlation coefficient test were used for the analysis using the R statistical software. No statistically significant differences were observed when comparing the values analyzed by means of the two devices for HRV. Correlation analysis revealed perfect positive correlation (r = 0.99, P < 0.001) between the values in the time and frequency domains obtained by the devices. On the basis of the results of the present study, we suggest that the calculation of HRV values in the time and frequency domains from the RR series obtained by the PC-based sound card is probably as reliable as that obtained by the gold standard Biopac MP36.
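The comparison described above rests on standard time-domain HRV indices computed from RR-interval series, plus a Pearson correlation between the two devices. A minimal sketch of those quantities follows; the RR values are hypothetical, and the study itself used Kubios and R rather than this code.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals (ms), a time-domain HRV index."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical RR series (ms) from the two recording chains
rr_soundcard = [812, 790, 805, 821, 798, 810, 795, 818]
rr_biopac = [811, 791, 806, 820, 799, 809, 796, 817]
r = pearson_r(rr_soundcard, rr_biopac)  # near-perfect agreement expected
```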

  19. 33 CFR 334.410 - Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Albemarle Sound, Pamlico Sound... AND RESTRICTED AREA REGULATIONS § 334.410 Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations. (a) Target areas—(1) North Landing River (Currituck Sound...

  20. Gefinex 400S (SAMPO) EM-soundings at Olkiluoto 2009

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.; Korhonen, K.

    2009-09-01

    In the beginning of June 2009 Geological Survey of Finland (GTK) carried out electromagnetic (EM) frequency soundings with Gefinex 400S equipment (Sampo) in the vicinity of ONKALO at the Olkiluoto site investigation area. The EM-monitoring sounding program started in 2004 and has been repeated since yearly in the same season. The aim of the study is to monitor the variations of the groundwater properties down to 500 m depth by the changes of the electric conductivity of the earth at ONKALO and repository area. The original measurement grid was based on two 1400 m long broadside profiles, which have 200 m mutual distance and 200 m station separation. The receiver and transmitter sites are marked with stakes and the profiles were measured using 200, 500, and 800 m coil separations. The measurement program was revised in 2007 and then again in 2009. Now 15 noisy soundings were removed from the program and 3 new points were selected from the area to the east from ONKALO. The new receiver/transmitter sites, called ABC-points were marked with stakes and the points were measured using transmitter-receiver separations 200, 400 and 800 meters. In 2009 the new EM-Sampo monitoring program included 28+9 soundings. The numerous power lines and cables in the area generate local disturbances on the sounding curves, but the SN (signal to noise) ratio and the repeatability of the results is reasonably good even with long coil separations. However, most suitable for monitoring purposes are the sites without strong shallow 3D effects. Comparison of the new results to old 2004-2008 surveys shows differences on some ARD (apparent resistivity-depth) curves. Those are mainly results of the modified shallow structures. The changes in groundwater conditions based on the monitoring results seem insignificant. (orig.)

  1. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
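The excess sound level proposed above is simply the difference between the total sound pressure level and the free-field level at the same point, so LBM dissipation largely cancels out of it. A minimal sketch of that bookkeeping follows; the pressure values are illustrative assumptions, not LBM output.

```python
import math

def spl_db(p_rms: float, p_ref: float = 2e-5) -> float:
    """Sound pressure level in dB re 20 micropascals."""
    return 20.0 * math.log10(p_rms / p_ref)

def excess_level_db(p_total: float, p_free_field: float) -> float:
    """Excess sound level: total SPL minus free-field SPL at the same point."""
    return spl_db(p_total) - spl_db(p_free_field)

# With pressure doubling (e.g. an in-phase ground reflection), the excess
# level is 20*log10(2), about +6 dB, regardless of the absolute levels.
excess = excess_level_db(2 * 0.02, 0.02)
```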

  2. Sounding out the logo shot

    OpenAIRE

    Nicolai Jørgensgaard Graakjær

    2013-01-01

    This article focuses on how sound in combination with visuals (i.e. ‘branding by’) may possibly affect the signifying potentials (i.e. ‘branding effect’) of products and corporate brands (i.e. ‘branding of’) during logo shots in television commercials (i.e. ‘branding through’). This particular focus adds both to the understanding of sound in television commercials and to the understanding of sound brands. The article firstly presents a typology of sounds. Secondly, this typology is applied...

  3. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  4. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  5. SoleSound

    DEFF Research Database (Denmark)

    Zanotto, Damiano; Turchet, Luca; Boggs, Emily Marie

    2014-01-01

    This paper introduces the design of SoleSound, a wearable system designed to deliver ecological, audio-tactile, underfoot feedback. The device, which primarily targets clinical applications, uses an audio-tactile footstep synthesis engine informed by the readings of pressure and inertial sensors...... embedded in the footwear to integrate enhanced feedback modalities into the authors' previously developed instrumented footwear. The synthesis models currently implemented in the SoleSound simulate different ground surface interactions. Unlike similar devices, the system presented here is fully portable...

  6. Sound engineering for diesel engines; Sound Engineering an Dieselmotoren

    Energy Technology Data Exchange (ETDEWEB)

    Enderich, A.; Fischer, R. [MAHLE Filtersysteme GmbH, Stuttgart (Germany)

    2006-07-01

The strong acceptance of vehicles powered by turbo-charged diesel engines encourages several manufacturers to think about sportive diesel concepts. Suppressing unpleasant noise through distinctive insulation steps alone is not adequate to satisfy sportive needs: the acoustics cannot follow the engine's performance. This report documents that it is possible to give diesel-powered vehicles a sportive sound characteristic by using an advanced MAHLE motor-sound-system with a pressure-resistant membrane and an integrated load-controlled flap. With this, the specific acoustic disadvantages of the diesel engine, like the ''diesel knock'' or rough engine running, can be masked. However, a motor-sound-system must not negate the original character of the diesel engine concept, but should accentuate its strong torque characteristic in the middle engine speed range. (orig.)

  7. Sonic mediations: body, sound, technology

    NARCIS (Netherlands)

    Birdsall, C.; Enns, A.

    2008-01-01

    Sonic Mediations: Body, Sound, Technology is a collection of original essays that represents an invaluable contribution to the burgeoning field of sound studies. While sound is often posited as having a bridging function, as a passive in-between, this volume invites readers to rethink the concept of

  8. System for actively reducing sound

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2005-01-01

    A system for actively reducing sound from a primary noise source, such as traffic noise, comprising: a loudspeaker connector for connecting to at least one loudspeaker for generating anti-sound for reducing said noisy sound; a microphone connector for connecting to at least a first microphone placed

  9. Very low sound velocities in iron-rich (Mg,Fe)O: Implications for the core-mantle boundary region

    International Nuclear Information System (INIS)

    Wicks, J.K.; Jackson, J.M.; Sturhahn, W.

    2010-01-01

The sound velocities of (Mg0.16Fe0.84)O have been measured to 121 GPa at ambient temperature using nuclear resonant inelastic x-ray scattering. The effect of the electronic environment of the iron sites on the sound velocities was tracked in situ using synchrotron Moessbauer spectroscopy. We found the sound velocities of (Mg0.16Fe0.84)O to be much lower than those of other presumed mantle phases at similar conditions, most notably at very high pressures. Conservative estimates of the effect of temperature and dilution on aggregate sound velocities show that only a small amount of iron-rich (Mg,Fe)O can greatly reduce the average sound velocity of an assemblage. We propose that iron-rich (Mg,Fe)O may be a source of ultra-low velocity zones. Other properties of this phase, such as enhanced density and dynamic stability, strongly support the presence of iron-rich (Mg,Fe)O in localized patches above the core-mantle boundary.

  10. Valvular Heart Disease in Heart Failure

    Directory of Open Access Journals (Sweden)

    Giuseppe MC Rosano

    2017-01-01

Structural valvular heart disease may be the cause of heart failure or may worsen the clinical status of patients with heart failure. Heart failure may also develop in patients treated with valve surgery. Patients with heart failure and valvular heart disease are at increased risk of events including sudden cardiac death. Before considering intervention (surgical or percutaneous), all patients should receive appropriate medical and device therapy, taking into account that vasodilators must be used with caution in patients with severe aortic stenosis. Numerous percutaneous and/or hybrid procedures have been introduced in the past few years and they are changing the management of valvular heart disease. In patients with heart failure and valvular heart disease, either primary or functional, the whole process of decision-making should be staged through a comprehensive evaluation of the risk–benefit ratio of different treatment strategies and should be made by a multidisciplinary ‘heart team’ with a particular expertise in valvular heart disease. The heart team should include heart failure cardiologists, cardiac surgeons/structural valve interventionists, imaging specialists, anaesthetists, geriatricians and intensive care specialists. This article will review recent developments and distill practical guidance in the management of this important heart failure co-morbidity.

  11. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

    consumer videos in conjunction with user studies. We model the soundtrack of each video, regardless of its original duration, as a fixed-sized clip-level summary feature. For each concept, an SVM-based classifier is trained according to three distance measures (Kullback-Leibler, Bhattacharyya, and Mahalanobis distance). Detecting the time of occurrence of a local object (for instance, a cheering sound) embedded in a longer soundtrack is useful and important for applications such as search and retrieval in consumer video archives. We finally present a Markov-model based clustering algorithm able to identify and segment consistent sets of temporal frames into regions associated with different ground-truth labels, and at the same time to exclude a set of uninformative frames shared in common from all clips. The labels are provided at the clip level, so this refinement of the time axis represents a variant of Multiple-Instance Learning (MIL). Quantitative evaluation shows that the performance of our proposed approaches tested on the 60h personal audio archives or 1900 YouTube video clips is significantly better than existing algorithms for detecting these useful concepts in real-world personal audio recordings.

  12. Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.

    Science.gov (United States)

    Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang

    2007-01-01

    Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.

  13. Measuring the 'complexity'of sound

    Indian Academy of Sciences (India)

    Sounds in the natural environment form an important class of biologically relevant nonstationary signals. We propose a dynamic spectral measure to characterize the spectral dynamics of such non-stationary sound signals and classify them based on rate of change of spectral dynamics. We categorize sounds with slowly ...

  14. Say what? Coral reef sounds as indicators of community assemblages and reef conditions

    Science.gov (United States)

    Mooney, T. A.; Kaplan, M. B.

    2016-02-01

Coral reefs host some of the highest diversity of life on the planet. Unfortunately, reef health and biodiversity are declining or threatened as a result of climate change and human influences. Tracking these changes is necessary for effective resource management, yet estimating marine biodiversity and tracking trends in ecosystem health is a challenging and expensive task, especially on many pristine reefs which are remote and difficult to access. Many fishes, mammals and invertebrates make sound. These sounds are reflective of a number of vital biological processes and are a cue for settling reef larvae. Biological sounds may be a means to quantify ecosystem health and biodiversity; however, the relationship between coral reef soundscapes and the actual taxa present remains largely unknown. This study presents a comparative evaluation of the soundscape of multiple reefs, naturally differing in benthic cover and fish diversity, in the U.S. Virgin Islands National Park. Using multiple recorders per reef we characterized spatio-temporal variation in biological sound production within and among reefs. Analyses of sounds recorded over 4 summer months indicated diel trends in both fish and snapping shrimp acoustic frequency bands with crepuscular peaks at all reefs. There were small but statistically significant acoustic differences among sites on a given reef, raising the possibility of potentially localized acoustic habitats. The strength of diel trends in lower, fish-frequency bands was correlated with coral cover and fish density, yet no such relationship was found with shrimp sounds, suggesting that fish sounds may be of higher relevance for tracking certain coral reef conditions. These findings indicate that, in spite of considerable variability within reef soundscapes, diel trends in low-frequency sound production reflect reef community assemblages. Further, monitoring soundscapes may be an efficient means of establishing and monitoring reef conditions.
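The band-level analysis described above separates a low "fish" band from a higher "snapping shrimp" band and tracks their levels over time. A minimal sketch of one such band-level measurement follows, on a synthetic signal; the band edges and the test tone are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def band_level_db(signal, fs, f_lo, f_hi):
    """Relative level (dB, arbitrary reference) of energy in [f_lo, f_hi] Hz."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    # Relative band power from the selected spectral bins (arbitrary scaling)
    power = np.sum(np.abs(spec[mask]) ** 2) / len(signal) ** 2
    return 10.0 * np.log10(power + 1e-20)

# Synthetic example: a 500 Hz "fish chorus" tone plus a weak broadband floor
fs = 48_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 500 * t) + 0.01 * np.random.default_rng(0).normal(size=fs)
fish_band = band_level_db(x, fs, 100, 1000)      # dominated by the tone
shrimp_band = band_level_db(x, fs, 2000, 20000)  # only the weak noise floor
```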

  15. Controlling sound with acoustic metamaterials

    DEFF Research Database (Denmark)

    Cummer, Steven A. ; Christensen, Johan; Alù, Andrea

    2016-01-01

    Acoustic metamaterials can manipulate and control sound waves in ways that are not possible in conventional materials. Metamaterials with zero, or even negative, refractive index for sound offer new possibilities for acoustic imaging and for the control of sound at subwavelength scales....... The combination of transformation acoustics theory and highly anisotropic acoustic metamaterials enables precise control over the deformation of sound fields, which can be used, for example, to hide or cloak objects from incident acoustic energy. Active acoustic metamaterials use external control to create......-scale metamaterial structures and converting laboratory experiments into useful devices. In this Review, we outline the designs and properties of materials with unusual acoustic parameters (for example, negative refractive index), discuss examples of extreme manipulation of sound and, finally, provide an overview...

  16. Laminar differences in response to simple and spectro-temporally complex sounds in the primary auditory cortex of ketamine-anesthetized gerbils.

    Directory of Open Access Journals (Sweden)

    Markus K Schaefer

In mammals, acoustic communication plays an important role during social behaviors. Despite their ethological relevance, the mechanisms by which the auditory cortex represents different communication call properties remain elusive. Recent studies have pointed out that communication-sound encoding could be based on discharge patterns of neuronal populations. Following this idea, we investigated whether the activity of local neuronal networks, such as those occurring within individual cortical columns, is sufficient for distinguishing between sounds that differed in their spectro-temporal properties. To accomplish this aim, we analyzed simple pure-tone and complex communication call elicited multi-unit activity (MUA) as well as local field potentials (LFP) and current source density (CSD) waveforms at the single-layer and columnar level from the primary auditory cortex of anesthetized Mongolian gerbils. Multi-dimensional scaling analysis was used to evaluate the degree of "call-specificity" in the evoked activity. The results showed that whole laminar profiles segregated 1.8-2.6 times better across calls than single-layer activity. Also, laminar LFP and CSD profiles segregated better than MUA profiles. Significant differences between CSD profiles evoked by different sounds were more pronounced at mid and late latencies in the granular and infragranular layers, and these differences were based on the absence and/or presence of current sinks and on sink timing. The stimulus-specific activity patterns observed within cortical columns suggest that the joint activity of local cortical populations (as local as single columns) could indeed be important for encoding sounds that differ in their acoustic attributes.

  17. Exercise Benefits Coronary Heart Disease.

    Science.gov (United States)

    Wang, Lei; Ai, Dongmei; Zhang, Ning

    2017-01-01

Coronary heart disease (CHD) is a group of diseases that includes asymptomatic disease, angina, myocardial infarction, ischemic cardiomyopathy and sudden cardiac death. It results from multiple risk factors, consisting of invariable factors (e.g. age, gender) and variable factors (e.g. dyslipidemia, hypertension, diabetes, smoking). Meanwhile, the impact of CHD is not localized to the heart: it also affects pulmonary function, whole-body skeletal muscle function, activity ability, psychological status, etc. Nowadays, CHD is the leading cause of death in the world. However, many clinical studies have shown that exercise training plays an important role in cardiac rehabilitation and can bring many benefits to CHD patients.

  18. A telescopic cinema sound camera for observing high altitude aerospace vehicles

    Science.gov (United States)

    Slater, Dan

    2014-09-01

Rockets and other high altitude aerospace vehicles produce interesting visual and aural phenomena that can be remotely observed from long distances. This paper describes a compact, passive and covert remote sensing system that can produce high resolution sound movies at >100 km viewing distances. The telescopic high resolution camera is capable of resolving and quantifying space launch vehicle dynamics including plume formation, staging events and payload fairing jettison. Flight vehicles produce sounds and vibrations that modulate the local electromagnetic environment. These audio frequency modulations can be remotely sensed by passive optical and radio wave detectors. Acousto-optic sensing methods were primarily used, but an experimental radioacoustic sensor using passive micro-Doppler radar techniques was also tested. The synchronized combination of high resolution flight vehicle imagery with the associated vehicle sounds produces a cinema-like experience that is useful in both an aerospace engineering and a Hollywood film production context. Examples of visual, aural and radar observations of the first SpaceX Falcon 9 v1.1 rocket launch are shown and discussed.

  19. Sound intensity as a function of sound insulation partition

    OpenAIRE

    Cvetkovic , S.; Prascevic , R.

    1994-01-01

In modern engineering practice, the sound insulation of partitions is a synthesis of theory and of experience acquired through field and laboratory measurement. The scientific and research community treats sound insulation in the context of the emission and propagation of acoustic energy in media with different acoustic impedances. In this paper, starting from the essence of the physical concept of intensity as an energy vector, the authors g...

  20. Interactive physically-based sound simulation

    Science.gov (United States)

    Raghuvanshi, Nikunj

The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual, properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously-intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation

  1. 27 CFR 9.151 - Puget Sound.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Puget Sound. 9.151 Section... Sound. (a) Name. The name of the viticultural area described in this section is “Puget Sound.” (b) Approved maps. The appropriate maps for determining the boundary of the Puget Sound viticultural area are...

  2. How Pleasant Sounds Promote and Annoying Sounds Impede Health : A Cognitive Approach

    NARCIS (Netherlands)

    Andringa, Tjeerd C.; Lanser, J. Jolie L.

    2013-01-01

    This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of

  3. Of Sound Mind: Mental Distress and Sound in Twentieth-Century Media Culture

    NARCIS (Netherlands)

    Birdsall, C.; Siewert, S.

    2013-01-01

    This article seeks to specify the representation of mental disturbance in sound media during the twentieth century. It engages perspectives on societal and technological change across the twentieth century as crucial for aesthetic strategies developed in radio and sound film production. The analysis

  4. Sounds scary? Lack of habituation following the presentation of novel sounds.

    Directory of Open Access Journals (Sweden)

    Tine A Biedenweg

    Full Text Available BACKGROUND: Animals typically show less habituation to biologically meaningful sounds than to novel signals. We might therefore expect that acoustic deterrents should be based on natural sounds. METHODOLOGY: We investigated responses by western grey kangaroos (Macropus fuliginosus) towards playback of natural sounds (alarm foot stomps and Australian raven (Corvus coronoides) calls) and artificial sounds (faux snake hiss and bull whip crack). We then increased rate of presentation to examine whether animals would habituate. Finally, we varied frequency of playback to investigate optimal rates of delivery. PRINCIPAL FINDINGS: Nine behaviors clustered into five Principal Components. PC factors 1 and 2 (animals alert or looking, or hopping and moving out of area) accounted for 36% of variance. PC factor 3 (eating cessation, taking flight, movement out of area) accounted for 13% of variance. Factors 4 and 5 (relaxing, grooming and walking; 12 and 11% of variation, respectively) discontinued upon playback. The whip crack was most evocative; eating was reduced from 75% of time spent prior to playback to 6% following playback (post alarm stomp: 32%, raven call: 49%, hiss: 75%). Additionally, 24% of individuals took flight and moved out of area (50 m radius) in response to the whip crack (foot stomp: 0%, raven call: 8% and 4%, hiss: 6%). Increasing rate of presentation (12x/min × 2 min) caused 71% of animals to move out of the area. CONCLUSIONS/SIGNIFICANCE: The bull whip crack, an artificial sound, was as effective as the alarm stomp at eliciting aversive behaviors. Kangaroos did not fully habituate despite hearing the signal up to 20x/min. Highest rates of playback did not elicit the greatest responses, suggesting that 'more is not always better'. Ultimately, by utilizing both artificial and biological sounds, predictability may be masked or offset, so that habituation is delayed and more effective deterrents may be produced.

  5. Identification, purification, and localization of tissue kallikrein in rat heart.

    OpenAIRE

    Xiong, W; Chen, L M; Woodley-Miller, C; Simson, J A; Chao, J

    1990-01-01

    A tissue kallikrein has been isolated from rat heart extracts by DEAE-Sepharose and aprotinin-affinity column chromatography. The purified cardiac enzyme has both N-tosyl-L-arginine methyl ester esterolytic and kinin-releasing activities, and displays parallelism with standard curves in a kallikrein radioimmunoassay, indicating it to have immunological identity with tissue kallikrein. The enzyme is inhibited by aprotinin, antipain, leupeptin and by high concentrations of soybean trypsin inhib...

  6. Heart Health - Brave Heart

    Science.gov (United States)

    ... Heart Health Brave Heart Past Issues / Winter 2009 Table of Contents For ... you can have a good life after a heart attack." Lifestyle Changes Surviving—and thriving—after such ...

  7. Efficient ECG Signal Compression Using Adaptive Heart Model

    National Research Council Canada - National Science Library

    Szilagyi, S

    2001-01-01

    This paper presents an adaptive, heart-model-based electrocardiography (ECG) compression method. After conventional pre-filtering, the waves from the signal are localized and the model's parameters are determined...

  8. Frequency of Congenital Heart Diseases in Prelingual Sensory-Neural Deaf Children

    Directory of Open Access Journals (Sweden)

    Masoud Motasaddi Zarandy

    2016-03-01

    Full Text Available Introduction: Hearing impairment is the most frequent sensorial congenital defect in newborns and has increased to 2–4 cases per 1,000 live births. Sensory-neural hearing loss (SNHL) accounts for more than 90% of all hearing loss. This disorder is associated with other congenital disorders such as renal, skeletal, ocular, and cardiac disorders. Given that congenital heart diseases are life-threatening, we decided to study the frequency of congenital heart diseases in children with congenital sensory-neural deafness. Materials and Methods: All children who had undergone cochlear implantation surgery due to SNHL and who had attended our hospital for speech therapy during 2008–2011 were evaluated by Doppler echocardiography. Results: Thirty-one children (15 boys and 16 girls) with a mean age of 55.70 months were examined, and underwent electrocardiography (ECG) and echocardiography. None of the children had any signs of heart problems in their medical records. Most of their heart examinations were normal: one patient had expiratory wheeze, four (12%) had a mid-systolic click, and four (12%) had an intensified S1 sound. In echocardiography, 15 children (46%) had mitral valve prolapse (MVP) and two (6%) had minimal mitral regurgitation (MR). Mean ejection fraction (EF) was 69% and mean fractional shortening (FS) was 38%. Conclusion: This study indicates the need for echocardiography and heart examinations in children with SNHL.

  9. A Comparative Study on Fetal Heart Rates Estimated from Fetal Phonography and Cardiotocography

    Directory of Open Access Journals (Sweden)

    Emad A. Ibrahim

    2017-10-01

    Full Text Available The aim of this study is to investigate whether fetal heart rates (fHR) extracted from fetal phonocardiography (fPCG) could convey similar information to fHR from cardiotocography (CTG). Four-channel fPCG sensors made of low-cost (<$1) ceramic piezo vibration sensors within 3D-printed casings were used to collect abdominal phonogram signals from 20 pregnant mothers (>34 weeks of gestation). A novel multi-lag covariance matrix-based eigenvalue decomposition technique was used to separate maternal breathing, fetal heart sounds (fHS) and maternal heart sounds (mHS) from the abdominal phonogram signals. Prior to the fHR estimation, the fPCG signals were denoised using a multi-resolution wavelet-based filter. The proposed source separation technique was first tested in separating sources from synthetically mixed signals and then on raw abdominal phonogram signals. fHR signals extracted from fPCG signals were validated using simultaneously recorded CTG-based fHR recordings. The experimental results have shown that the fHR derived from the acquired fPCG can be used to detect periods of acceleration and deceleration, which are critical indications of the fetus' well-being. Moreover, a comparative analysis demonstrated that fHRs from CTG and fPCG signals were in good agreement (Bland-Altman plot has mean = −0.21 BPM and ±2 SD = ±3) with statistical significance (p < 0.001) and Spearman correlation coefficient ρ = 0.95. The study findings show that fHR estimated from fPCG could be a reliable substitute for fHR from the CTG, opening up the possibility of a low-cost monitoring tool for fetal well-being.
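
The agreement statistics used in this record (Bland-Altman mean difference with ±2 SD limits, plus Spearman rank correlation) can be sketched as follows. The fHR values below are made up for illustration; they are not data from the study:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference and ±2 SD limits of agreement for paired measurements."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    md, sd = d.mean(), d.std(ddof=1)
    return md, md - 2 * sd, md + 2 * sd

def spearman_rho(a, b):
    """Spearman rank correlation (assumes no ties): Pearson correlation of ranks."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

# Hypothetical paired fetal heart rate estimates (BPM) from CTG and fPCG
ctg  = [140, 142, 138, 145, 150, 148, 136, 141]
fpcg = [139, 143, 138, 144, 151, 147, 137, 140]

md, lower, upper = bland_altman(ctg, fpcg)   # bias and limits of agreement
rho = spearman_rho(ctg, fpcg)                # monotonic agreement of the two series
```

Agreement is judged by how close `md` is to zero and how tight the limits are; a high `rho` alone does not establish agreement, which is why the study reports both.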

  10. The effect of sound speed profile on shallow water shipping sound maps

    NARCIS (Netherlands)

    Sertlek, H.Ö.; Binnerts, B.; Ainslie, M.A.

    2016-01-01

    Sound mapping over large areas can be computationally expensive because of the large number of sources and large source-receiver separations involved. In order to facilitate computation, a simplifying assumption sometimes made is to neglect the sound speed gradient in shallow water. The accuracy of

  11. Sound wave transmission (image)

    Science.gov (United States)

    When sound waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain, where they are interpreted as sound. The hearing mechanisms within the inner ear, can ...

  12. Seafloor environments in the Long Island Sound estuarine system

    Science.gov (United States)

    Knebel, H.J.; Signell, R.P.; Rendigs, R. R.; Poppe, L.J.; List, J.H.

    1999-01-01

    broad areas of the basin floor in the western part of the Sound. The regional distribution of seafloor environments reflects fundamental differences in marine-geologic conditions between the eastern and western parts of the Sound. In the funnel-shaped eastern part, a gradient of strong tidal currents coupled with the net nontidal (estuarine) bottom drift produce a westward progression of environments ranging from erosion or nondeposition at the narrow entrance to the Sound, through an extensive area of bedload transport, to a peripheral zone of sediment sorting. In the generally broader western part of the Sound, a weak tidal-current regime combined with the production of particle aggregates by biologic or chemical processes, cause large areas of deposition that are locally interrupted by a patchy distribution of various other environments where the bottom currents are enhanced by and interact with the seafloor topography.

  13. Sound & The Society

    DEFF Research Database (Denmark)

    Schulze, Holger

    2014-01-01

    How are those sounds you hear right now socially constructed and evaluated, how are they architecturally conceptualized, and how dependent on urban planning, industrial developments and political decisions are they really? How is your ability to hear intertwined with social interactions and their professional design? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Nina Backmann, Jochen Bonz, Stefan Krebs, Esther Schelander & Holger Schulze

  14. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    1. Introduction
    2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium
    3. Predicting the Acoustical Properties of Outdoor Ground Surfaces
    4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models
    5. Predicting Effects of Source Characteristics on Outdoor Sound
    6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects
    7. Influence of Source Motion on Ground Effect and Diffraction
    8. Predicting Effects of Mixed Impedance Ground
    9. Predicting the Performance of Outdoor Noise Barriers
    10. Predicting Effects of Vegetation, Trees and Turbulence
    11. Analytical Approximations including Ground Effect, Refraction and Turbulence
    12. Prediction Schemes
    13. Predicting Sound in an Urban Environment.

  15. Early visual deprivation prompts the use of body-centered frames of reference for auditory localization.

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2018-01-01

    The effects of early visual deprivation on auditory spatial processing are controversial. Results from recent psychophysical studies show that people who were born blind have a spatial impairment in localizing sound sources within specific auditory settings, while previous psychophysical studies revealed enhanced auditory spatial abilities in early blind compared to sighted individuals. An explanation of why an auditory spatial deficit is sometimes observed within blind populations and its task-dependency remains to be clarified. We investigated auditory spatial perception in early blind adults and demonstrated that the deficit derives from blind individuals' reduced ability to remap sound locations using an external frame of reference. We found that performance in the blind population was severely impaired when they were required to localize brief auditory stimuli with respect to external acoustic landmarks (external reference frame) or when they had to reproduce the spatial distance between two sounds. However, they performed similarly to sighted controls when they had to localize sounds with respect to their own hand (body-centered reference frame), or to judge the distances of sounds from their finger. These results suggest that early visual deprivation and the lack of visual contextual cues during the critical period induce a preference for body-centered over external spatial auditory representations. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Technetium SPECT agents for imaging heart and brain

    International Nuclear Information System (INIS)

    Linder, K.E.

    1990-01-01

    One major goal of radiopharmaceutical research has been the development of technetium-based perfusion tracers for SPECT imaging of the heart and brain. The recent clinical introduction of the technetium complexes HM-PAO, ECD and DMG-2MP for brain imaging, and of CDO-MEB and MIBI for heart imaging promises to revolutionize the field of nuclear medicine. All of these agents appear to localize in the target tissue in proportion to blood flow, but their mechanisms of localization and/or retention may differ quite widely. In this talk, a survey of the new technetium SPECT agents will be presented. The inorganic and biological chemistry of these complexes, mechanisms of uptake and retention, QSAR studies, and potential clinical applications are discussed

  17. Speed of sound in hadronic matter using non-extensive statistics

    International Nuclear Information System (INIS)

    Khuntia, Arvind; Sahoo, Pragati; Garg, Prakhar; Sahoo, Raghunath; Jean Cleymans

    2015-01-01

    The evolution of the dense matter formed in high energy hadronic and nuclear collisions is controlled by the initial energy density and temperature. The expansion of the system is due to the very high initial pressure, with lowering of temperature and energy density. The pressure (P) and energy density (ϵ) are related through the speed of sound (c_s^2) under the condition of local thermal equilibrium. The speed of sound plays a crucial role in the hydrodynamical expansion of the dense matter created and the critical behaviour of the system evolving from the deconfined Quark Gluon Plasma (QGP) phase to the confined hadronic phase. There have been several experimental and theoretical studies in this direction. The non-extensive Tsallis statistics gives a better description of the transverse momentum spectra of the produced particles created in high energy p + p(p̄) and e⁺ + e⁻ collisions
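
The pressure–energy-density relation mentioned above is the standard thermodynamic definition of the squared speed of sound (this is the textbook form, not a formula quoted from the record):

```latex
c_s^2 \;=\; \left(\frac{\partial P}{\partial \epsilon}\right)_{\!s}
      \;=\; \frac{\partial P/\partial T}{\partial \epsilon/\partial T}
```

where the derivative is taken at constant entropy density s; the second form is convenient in practice when P and ϵ are known as functions of the temperature T, as in statistical-mechanics calculations of the hadronic equation of state.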

  18. Overlapping and differential localization of Bmp-2, Bmp-4, Msx-2 and apoptosis in the endocardial cushion and adjacent tissues of the developing mouse heart.

    Science.gov (United States)

    Abdelwahid, E; Rice, D; Pelliniemi, L J; Jokinen, E

    2001-07-01

    The bone morphogenetic proteins BMP-2 and BMP-4 and the homeobox gene MSX-2 are required for normal development of many embryonic tissues. To elucidate their possible roles during the remodeling of the tubular heart into a fully septated four-chambered heart, we have localized the mRNA of Bmp-2, Bmp-4, Msx-2 and apoptotic cells in the developing mouse heart from embryonic day (E)11 to E17. mRNA was localized by in situ hybridization, and apoptotic cells by TUNEL (TdT-mediated dUTP-biotin nick end-labeling) as well as by transmission electron microscopy. By analyzing adjacent serial sections, we demonstrated that the expression of Msx-2 and Bmp-2 strikingly overlapped in the atrioventricular canal myocardium, in the atrioventricular junctional myocardium, and in the maturing myocardium of the atrioventricular valves. Bmp-4 was expressed in the outflow tract myocardium and in the endocardial cushion of the outflow tract ridges from E12 to E14. Msx-2 appeared in the mesenchyme of the atrioventricular endocardial cushion from E11 to E14, while Bmp-2 and Bmp-4 were detected between E11 and E14. Apoptotic cells were also detected in the mesenchyme of the endocardial cushion between E12 and E14. Our results suggest that BMP-2 and MSX-2 are tightly linked to the formation of the atrioventricular junction and valves and that BMP-4 is involved in the development of the outflow tract myocardium and of the endocardial cushion. In addition, BMP-2, BMP-4 and MSX-2 and apoptosis seem to be associated with differentiation of the endocardial cushion.

  19. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as visual phenomena–commercial messages, as for instance banner ads that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content.

  20. The Aesthetic Experience of Sound

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2005-01-01

    The use of sound in (3D) computer games basically falls in two. Sound is used as an element in the design of the set and as a narrative. As set design sound stages the nature of the environment, it brings it to life. As a narrative it brings us information that we can choose to or perhaps need to react on. In an ecological understanding of hearing our detection of audible information affords us ways of responding to our environment. In my paper I will address both these ways of using sound in relation to computer games. Since a game player is responsible for the unfolding of the game, his exploration of the virtual space laid out before him is pertinent. In this mood of exploration sound is important and heavily contributing to the aesthetic of the experience.

  1. Principles of underwater sound

    National Research Council Canada - National Science Library

    Urick, Robert J

    1983-01-01

    ... the immediately useful help they need for sonar problem solving. Its coverage is broad, ranging from the basic concepts of sound in the sea to making performance predictions in such applications as depth sounding, fish finding, and submarine detection...

  2. Sounding the field: recent works in sound studies.

    Science.gov (United States)

    Boon, Tim

    2015-09-01

    For sound studies, the publication of a 593-page handbook, not to mention the establishment of at least one society - the European Sound Studies Association - might seem to signify the emergence of a new academic discipline. Certainly, the books under consideration here, alongside many others, testify to an intensification of concern with the aural dimensions of culture. Some of this work comes from HPS and STS, some from musicology and cultural studies. But all of it should concern members of our disciplines, as it represents a long-overdue foregrounding of the aural in how we think about the intersections of science, technology and culture.

  3. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which 'acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ, with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
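
The dilation factor quoted in this record has the same form as the special-relativistic Lorentz factor, with the speed of sound in place of the speed of light. A minimal sketch (the 343 m/s air value below is illustrative; each condensed-matter system has its own c):

```python
import math

def lorentz_factor(v, c):
    """Kinematic gamma for a chain of sound clocks moving at speed v,
    where c is the speed of sound in the medium (not the speed of light)."""
    if abs(v) >= c:
        raise ValueError("v must be below the medium's sound speed")
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A clock chain moving at 60% of the sound speed is dilated by a factor of 1.25:
gamma = lorentz_factor(0.6 * 343.0, 343.0)  # c ≈ 343 m/s in air (illustrative)
```

As v approaches c the factor diverges, which is why the acoustic observers in the thought experiment cannot detect their own motion relative to the laboratory's preferred frame using sound clocks alone.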

  4. Non-Wovens as Sound Reducers

    Science.gov (United States)

    Belakova, D.; Seile, A.; Kukle, S.; Plamus, T.

    2018-04-01

    Within the present study, the effect of hemp (40 wt%) and polylactide (60 wt%) non-woven surface density, thickness and number of fibre web layers on the sound absorption coefficient and the sound transmission loss in the frequency range from 50 to 5000 Hz is analysed. The sound insulation properties of the experimental samples have been determined, compared to the ones in practical use, and the possible use of the material has been defined. Non-woven materials are ideally suited for use in acoustic insulation products because the arrangement of fibres produces a porous material structure, which leads to a greater interaction between sound waves and the fibre structure. Of all the tested samples (A, B and D), the non-woven variant B exceeded the surface density of sample A by 1.22 times and that of sample D by 1.15 times. By placing non-wovens one above the other in 2 layers, it is possible to increase the absorption coefficient of the material, which depending on the frequency corresponds to C, D, and E sound absorption classes. Sample A demonstrates the best sound absorption of all the three samples in the frequency range from 250 to 2000 Hz. In the test frequency range from 50 to 5000 Hz, the sound transmission loss varies from 0.76 (Sample D at 63 Hz) to 3.90 (Sample B at 5000 Hz).

  5. Sound Synthesis and Evaluation of Interactive Footsteps and Environmental Sounds Rendering for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-01-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based ...... a soundscape significantly improves the recognition of the simulated environment....

  6. “The Heart Truth:” Using the Power of Branding and Social Marketing to Increase Awareness of Heart Disease in Women

    Science.gov (United States)

    Long, Terry; Taubenheim, Ann; Wayman, Jennifer; Temple, Sarah; Ruoff, Beth

    2008-01-01

    In September 2002, the National Heart, Lung, and Blood Institute launched The Heart Truth, the first federally-sponsored national campaign aimed at increasing awareness among women about their risk of heart disease. A traditional social marketing approach, including an extensive formative research phase, was used to plan, implement, and evaluate the campaign. With the creation of the Red Dress as the national symbol for women and heart disease awareness, the campaign integrated a branding strategy into its social marketing framework. The aim was to develop and promote a women's heart disease brand that would create a strong emotional connection with women. The Red Dress brand has had a powerful appeal to a wide diversity of women and has given momentum to the campaign's three-part implementation strategy of partnership development, media relations, and community action. In addition to generating its own substantial programming, The Heart Truth became a catalyst for a host of other national and local educational initiatives, both large and small. By the campaign's fifth anniversary, surveys showed that women were increasingly aware of heart disease as their leading cause of death and that the rise in awareness was associated with increased action to reduce heart disease risk. PMID:19122892

  7. Reorganization of the brain and heart rhythm during autogenic meditation.

    Science.gov (United States)

    Kim, Dae-Keun; Rhee, Jyoo-Hi; Kang, Seung Wan

    2014-01-13

    The underlying changes in heart coherence that are associated with reported EEG changes in response to meditation have been explored. We measured EEG and heart rate variability (HRV) before and during autogenic meditation. Fourteen subjects participated in the study. Heart coherence scores were significantly increased during meditation compared to the baseline. We found a near-significant decrease in high beta absolute power, an increase in alpha relative power, and significant increases in lower (alpha) and higher (above beta) band coherence during 3-min epochs of heart coherent meditation compared to 3-min epochs of heart non-coherence at baseline. The coherence and relative power increase in the alpha band and the absolute power decrease in the high beta band could reflect a relaxation state during heart coherent meditation. The coherence increase in the higher (above beta) band could reflect cortico-cortical local integration and thereby affect cognitive reorganization, simultaneously with relaxation. Further research is still needed for a confirmation of heart coherence as a simple window for the meditative state.

  8. Reorganization of the Brain and Heart Rhythm During Autogenic Meditation

    Directory of Open Access Journals (Sweden)

    Dae-Keun eKim

    2014-01-01

    Full Text Available The underlying changes in heart coherence that are associated with reported EEG changes in response to meditation have been explored. We measured EEG and heart rate variability (HRV) before and during autogenic meditation. Fourteen subjects participated in the study. Heart coherence scores were significantly increased during meditation compared to the baseline. We found a near-significant decrease in high beta absolute power, an increase in alpha relative power, and significant increases in lower (alpha) and higher (above beta) band coherence during 3-minute epochs of heart coherent meditation compared to 3-minute epochs of heart non-coherence at baseline. The coherence and relative power increase in the alpha band and the absolute power decrease in the high beta band could reflect a relaxation state during heart coherent meditation. The coherence increase in the higher (above beta) band could reflect cortico-cortical local integration and thereby affect cognitive reorganization, simultaneously with relaxation. Further research is still needed for a confirmation of heart coherence as a simple window for the meditative state.

  9. Using therapeutic sound with progressive audiologic tinnitus management.

    Science.gov (United States)

    Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A

    2008-09-01

    Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound (broadly categorized as environmental sound, music, and speech), resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).

  10. Letter-Sound Knowledge: Exploring Gender Differences in Children When They Start School Regarding Knowledge of Large Letters, Small Letters, Sound Large Letters, and Sound Small Letters

    Directory of Open Access Journals (Sweden)

    Hermundur Sigmundsson

    2017-09-01

    Full Text Available This study explored whether there is a gender difference in letter-sound knowledge when children start at school. 485 children aged 5–6 years completed assessment of letter-sound knowledge, i.e., large letters; sound of large letters; small letters; sound of small letters. The findings indicate a significant difference between girls and boys in all four factors tested in this study, in favor of the girls. There are still no clear explanations for the presumed gender difference in letter-sound knowledge. That the findings have their origin in neuro-biological factors cannot be excluded; however, the fact that girls have probably been exposed to more language experience/stimulation compared to boys lends support to explanations derived from environmental aspects.

  11. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Elena Geangu

    2015-04-01

    Full Text Available Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds' processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human-produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds.

  12. Deltas, freshwater discharge, and waves along the Young Sound, NE Greenland

    DEFF Research Database (Denmark)

    Kroon, Aart; Abermann, Jakob; Bendixen, Mette

    2017-01-01

    A wide range of delta morphologies occurs along the fringes of the Young Sound in Northeast Greenland due to spatial heterogeneity of delta regimes. In general, the delta regime is related to catchment and basin characteristics (geology, topography, drainage pattern, sediment availability, and bathymetry), fluvial discharges and associated sediment load, and processes by waves and currents. Main factors steering the Arctic fluvial discharges into the Young Sound are the snow and ice melt and precipitation in the catchment, and extreme events like glacier lake outburst floods (GLOFs). Waves are subordinate and only rework fringes of the delta plain, forming sandy bars if the exposure and fetch are optimal. Spatial gradients and variability in driving forces (snow and precipitation) and catchment characteristics (amount of glacier coverage, sediment characteristics) as well as the strong and local...

  13. Investigation of local heterogeneity of hbO2 and hb in working dog heart in situ under isovolemic hemodilution and critical coronary stenosis

    Science.gov (United States)

    Krug, Alfons; Kessler, Manfred D.; Khuri, Raja; Lust, Robert; Chitwood, Randolph

    1996-12-01

    A tissue spectrophotometer (EMPHO II) working with 70 micrometer micro lightguide sensors enables recording of spectra in the visible wavelength range (500 - 630 nm). During an initial period arterial hypoxia and hyperoxia were induced on working dog heart by mechanical ventilation with oxygen fractions (fiO2) of 0.1 and 0.5. Under these conditions the effects of low and high fiO2 on oxygenation distribution of intracapillary hemoglobin were investigated. In the second part of the experiment the relation between systemic hematocrit, local hemoglobin concentration, local hemoglobin oxygenation and the oxygen regulation mechanism were studied in detail. In the final part of the experiment the effect of critical coronary stenosis on hb and hbO2 was measured. Critical stenosis was achieved by partial clamping of the left anterior coronary artery (LAD).

  14. Sound as Popular Culture

    DEFF Research Database (Denmark)

    The wide-ranging texts in this book take as their premise the idea that sound is a subject through which popular culture can be analyzed in an innovative way. From an infant’s gurgles over a baby monitor to the roar of the crowd in a stadium to the sub-bass frequencies produced by sound systems in the disco era, sound—not necessarily aestheticized as music—is inextricably part of the many domains of popular culture. Expanding the view taken by many scholars of cultural studies, the contributors consider cultural practices concerning sound not merely as semiotic or signifying processes but as material, physical, perceptual, and sensory processes that integrate a multitude of cultural traditions and forms of knowledge. The chapters discuss conceptual issues as well as terminologies and research methods; analyze historical and contemporary case studies of listening in various sound cultures; and consider...

  15. Fourth sound in relativistic superfluidity theory

    International Nuclear Information System (INIS)

    Vil'chinskij, S.I.; Fomin, P.I.

    1995-01-01

    The Lorentz-covariant equations describing the propagation of fourth sound in the relativistic theory of superfluidity are derived. Expressions for the velocity of fourth sound are obtained, and the character of the oscillations in the sound is determined.

  16. The science of sound recording

    CERN Document Server

    Kadis, Jay

    2012-01-01

    The Science of Sound Recording will provide you with more than just an introduction to sound and recording; it will allow you to dive right into some of the technical areas that often appear overwhelming to anyone without an electrical engineering or physics background. The Science of Sound Recording helps you build a basic foundation of scientific principles, explaining how recording really works. Packed with valuable must-know information, illustrations and examples of worked-through equations, this book introduces the theory behind sound recording practices in a logical and prac

  17. Nuclear sound

    International Nuclear Information System (INIS)

    Wambach, J.

    1991-01-01

    Nuclei, like more familiar mechanical systems, undergo simple vibrational motion. Among these vibrations, sound modes are of particular interest since they reveal important information on the effective interactions among the constituents and, through extrapolation, on the bulk behaviour of nuclear and neutron matter. Sound wave propagation in nuclei shows strong quantum effects familiar from other quantum systems. Microscopic theory suggests that the restoring forces are caused by the complex structure of the many-Fermion wavefunction and, in some cases, have no classical analogue. The damping of the vibrational amplitude is strongly influenced by phase coherence among the particles participating in the motion. (author)

  18. Students' Learning of a Generalized Theory of Sound Transmission from a Teaching-Learning Sequence about Sound, Hearing and Health

    Science.gov (United States)

    West, Eva; Wallin, Anita

    2013-04-01

    Learning abstract concepts such as sound often involves an ontological shift because to conceptualize sound transmission as a process of motion demands abandoning sound transmission as a transfer of matter. Thus, for students to be able to grasp and use a generalized model of sound transmission poses great challenges for them. This study involved 199 students aged 10-14. Their views about sound transmission were investigated before and after teaching by comparing their written answers about sound transfer in different media. The teaching was built on a research-based teaching-learning sequence (TLS), which was developed within a framework of design research. The analysis involved interpreting students' underlying theories of sound transmission, including the different conceptual categories that were found in their answers. The results indicated a shift in students' understandings from the use of a theory of matter before the intervention to embracing a theory of process afterwards. The described pattern was found in all groups of students irrespective of age. Thus, teaching about sound and sound transmission is fruitful already at the ages of 10-11. However, the older the students, the more advanced is their understanding of the process of motion. In conclusion, the use of a TLS about sound, hearing and auditory health promotes students' conceptualization of sound transmission as a process in all grades. The results also imply some crucial points in teaching and learning about the scientific content of sound.

  19. Digitizing a sound archive

    DEFF Research Database (Denmark)

    Cone, Louise

    2017-01-01

    In 1990 an artist by the name of William Louis Sørensen was hired by the National Gallery of Denmark to collect important works of art – made from sound. His job was to acquire sound art, but also recordings that captured rare artistic occurrences, music, performances and happenings from both Danish and international artists. His methodology left us with a large collection of unique and inspirational time-based media sound artworks that have, until very recently, been inaccessible. Existing on an array of different media formats, such as open reel tapes, 8-track and 4-track cassettes, VHS...

  20. Parallel-plate third sound waveguides with fixed and variable plate spacings for the study of fifth sound in superfluid helium

    International Nuclear Information System (INIS)

    Jelatis, G.J.

    1983-01-01

    Third sound in superfluid helium four films has been investigated using two parallel-plate waveguides. These investigations led to the observation of fifth sound, a new mode of sound propagation. Both waveguides consisted of two parallel pieces of vitreous quartz. The sound speed was obtained by measuring the time-of-flight of pulsed third sound over a known distance. Investigations from 1.0-1.7K were possible with the use of superconducting bolometers, which measure the temperature component of the third sound wave. Observations were initially made with a waveguide having a plate separation fixed at five microns. Adiabatic third sound was measured in the geometry. Isothermal third sound was also observed, using the usual, single-substrate technique. Fifth sound speeds, calculated from the two-fluid theory of helium and the speeds of the two forms of third sound, agreed in size and temperature dependence with theoretical predictions. Nevertheless, only equivocal observations of fifth sound were made. As a result, the film-substrate interaction was examined, and estimates of the Kapitza conductance were made. Assuming the dominance of the effects of this conductance over those due to the ECEs led to a new expression for fifth sound. A reanalysis of the initial data was made, which contained no adjustable parameters. The observation of fifth sound was seen to be consistent with the existence of an anomalously low boundary conductance

  1. Compressive Sensing Based Source Localization for Controlled Acoustic Signals Using Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    Wei Ke

    2017-01-01

    Full Text Available In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at a few points in the discrete spatial domain, our method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT)-based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary to match the changes of audio signals, so that the sparse solution can better represent the location estimations. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate l0 norm minimization to enhance reconstruction performance for sparse signals in low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation results and experimental results, where substantial improvement in localization performance is obtained in noisy and reverberant conditions.
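
The sparse-recovery formulation in this abstract can be illustrated with a minimal sketch: microphone observations are modeled as y = Ax, where each column of A holds the propagation gains from one candidate grid point to the microphones, and the few active sources make x sparse. The paper's DCT feature extraction, online dictionary learning, and approximate l0 solver are replaced here by a plain orthogonal matching pursuit on synthetic data with a random stand-in dictionary; all sizes and indices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mics, n_grid, n_sources = 32, 60, 2

# Dictionary of candidate-location "steering" columns (random stand-ins
# for the physical propagation model of a real array geometry).
A = rng.standard_normal((n_mics, n_grid))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns

x_true = np.zeros(n_grid)
x_true[[12, 47]] = [1.0, 0.9]           # two active grid points
y = A @ x_true                          # noiseless observations

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k dictionary columns."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

print(omp(A, y, n_sources))
```

In this noiseless, well-conditioned setting the two active grid indices are typically recovered exactly; the paper's block-sparse approximate l0 algorithm targets the much harder low-SNR, reverberant case.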

  2. Sound propagation in cities

    NARCIS (Netherlands)

    Salomons, E.; Polinder, H.; Lohman, W.; Zhou, H.; Borst, H.

    2009-01-01

    A new engineering model for sound propagation in cities is presented. The model is based on numerical and experimental studies of sound propagation between street canyons. Multiple reflections in the source canyon and the receiver canyon are taken into account in an efficient way, while weak

  3. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  4. Exploring Noise: Sound Pollution.

    Science.gov (United States)

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  5. Photoacoustic Sounds from Meteors.

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, Richard E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tencer, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sweatt, William C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hogan, Roy E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Spurny, Pavel [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic)

    2015-03-01

    High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer’s ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  6. Urban Sound Interfaces

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2012-01-01

    This paper draws on the theories of Michel de Certeau and Gaston Bachelard to discuss how media architecture, in the form of urban sound interfaces, can help us perceive the complexity of the spaces we inhabit, by exploring the history and the narratives of the places in which we live. In this paper, three sound works are discussed in relation to the iPod, which is considered as a more private way to explore urban environments, and as a way to control the individual perception of urban spaces.

  7. Sound field separation with sound pressure and particle velocity measurements

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-01-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array; thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance...

  8. 21 CFR 876.4590 - Interlocking urethral sound.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Interlocking urethral sound. 876.4590 Section 876...) MEDICAL DEVICES GASTROENTEROLOGY-UROLOGY DEVICES Surgical Devices § 876.4590 Interlocking urethral sound. (a) Identification. An interlocking urethral sound is a device that consists of two metal sounds...

  9. Extended abstracts from the Coastal Habitats in Puget Sound (CHIPS) 2006 Workshop

    Science.gov (United States)

    Gelfenbaum, Guy R.; Fuentes, Tracy L.; Duda, Jeffrey J.; Grossman, Eric E.; Takesue, Renee K.

    2010-01-01

    Puget Sound is the second largest estuary in the United States. Its unique geology, climate, and nutrient-rich waters produce and sustain biologically productive coastal habitats. These same natural characteristics also contribute to a high quality of life that has led to a significant growth in human population and associated development. This population growth, and the accompanying rural and urban development, has played a role in degrading Puget Sound ecosystems, including declines in fish and wildlife populations, water-quality issues, and loss and degradation of coastal habitats.In response to these ecosystem declines and the potential for strategic large-scale preservation and restoration, a coalition of local, State, and Federal agencies, including the private sector, Tribes, and local universities, initiated the Puget Sound Nearshore Ecosystem Restoration Project (PSNERP). The Nearshore Science Team (NST) of PSNERP, along with the U.S. Geological Survey, developed a Science Strategy and Research Plan (Gelfenbaum and others, 2006) to help guide science activities associated with nearshore ecosystem restoration. Implementation of the Research Plan includes a call for State and Federal agencies to direct scientific studies to support PSNERP information needs. In addition, the overall Science Strategy promotes greater communication with decision makers and dissemination of scientific results to the broader scientific community.On November 14–16, 2006, the U.S. Geological Survey sponsored an interdisciplinary Coastal Habitats in Puget Sound (CHIPS) Research Workshop at Fort Worden State Park, Port Townsend, Washington. The main goals of the workshop were to coordinate, integrate, and link research on the nearshore of Puget Sound. Presented research focused on three themes: (1) restoration of large river deltas; (2) recovery of the nearshore ecosystem of the Elwha River; and (3) effects of urbanization on nearshore ecosystems. The more than 35 presentations

  10. Correlation Factors Describing Primary and Spatial Sensations of Sound Fields

    Science.gov (United States)

    ANDO, Y.

    2002-11-01

    The theory of subjective preference of the sound field in a concert hall is established based on the model of human auditory-brain system. The model consists of the autocorrelation function (ACF) mechanism and the interaural crosscorrelation function (IACF) mechanism for signals arriving at two ear entrances, and the specialization of human cerebral hemispheres. This theory can be developed to describe primary sensations such as pitch or missing fundamental, loudness, timbre and, in addition, duration sensation which is introduced here as a fourth. These four primary sensations may be formulated by the temporal factors extracted from the ACF associated with the left hemisphere and, spatial sensations such as localization in the horizontal plane, apparent source width and subjective diffuseness are described by the spatial factors extracted from the IACF associated with the right hemisphere. Any important subjective responses of sound fields may be described by both temporal and spatial factors.

  11. Poetry Pages. Sound Effects.

    Science.gov (United States)

    Fina, Allan de

    1992-01-01

    Explains how elementary teachers can help students understand onomatopoeia, suggesting that they define onomatopoeia, share examples of it, read poems and have students discuss onomatopoeic words, act out common household sounds, write about sound effects, and create choral readings of onomatopoeic poems. Two appropriate poems are included. (SM)

  12. Mobile sound: media art in hybrid spaces

    OpenAIRE

    Behrendt, Frauke

    2010-01-01

    The thesis explores the relationships between sound and mobility through an examination of sound art. The research engages with the intersection of sound, mobility and art through original empirical work and theoretically through a critical engagement with sound studies. In dialogue with the work of De Certeau, Lefebvre, Huhtamo and Habermas in terms of the poetics of walking, rhythms, media archeology and questions of publicness, I understand sound art as an experimental mobil...

  13. Sound source measurement by using a passive sound insulation and a statistical approach

    Science.gov (United States)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

    This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background-noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves, at low frequencies, the sound insulation given by the passive system alone. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method, and on the measurement error related to its application, are reported as well.

  14. Film sound in preservation and presentation

    NARCIS (Netherlands)

    Campanini, S.

    2014-01-01

    What is the nature of film sound? How does it change through time? How can film sound be conceptually defined? To address these issues, this work assumes the perspective of film preservation and presentation practices, describing the preservation of early sound systems, as well as the presentation

  15. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds produced by Javanese students. Focusing on the consonant sounds, the data were collected by recording students’ pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The results showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. These patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  17. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
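
The opponent channel model described in this abstract can be sketched numerically: two broadly tuned channels prefer opposite hemifields, and a level-dependent component common to both cancels when the channels are subtracted. The sigmoidal tuning curves and the additive level term below are illustrative toy choices, not quantities fitted to the fMRI data.

```python
import math

def channel_response(azimuth_deg, level_db, hemifield):
    # Broad sigmoidal tuning preferring one hemifield, plus an additive
    # level-dependent component shared by both channels (toy model).
    sign = 1.0 if hemifield == "right" else -1.0
    spatial = 1.0 / (1.0 + math.exp(-sign * azimuth_deg / 20.0))
    return spatial + 0.02 * level_db

def opponent_decode(azimuth_deg, level_db):
    # Subtracting the two hemifield channels cancels the shared level
    # term, leaving a monotonic, level-invariant code for azimuth.
    return (channel_response(azimuth_deg, level_db, "right")
            - channel_response(azimuth_deg, level_db, "left"))

for level in (50.0, 70.0):
    print([round(opponent_decode(az, level), 3) for az in (-60, 0, 60)])
    # → [-0.905, 0.0, 0.905] at both levels
```

The decoded value depends only on azimuth, mirroring the paper's finding that the opponent readout is unaffected by changes in sound level.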

  18. Breaking the Sound Barrier

    Science.gov (United States)

    Brown, Tom; Boehringer, Kim

    2007-01-01

    Students in a fourth-grade class participated in a series of dynamic sound learning centers followed by a dramatic capstone event--an exploration of the amazing Trashcan Whoosh Waves. It's a notoriously difficult subject to teach, but this hands-on, exploratory approach ignited student interest in sound, promoted language acquisition, and built…

  19. Sound therapies for tinnitus management.

    Science.gov (United States)

    Jastreboff, Margaret M

    2007-01-01

    Many people with bothersome (suffering) tinnitus notice that their tinnitus changes in different acoustical surroundings: it is more intrusive in silence and less profound in sound-enriched environments. This observation led to the development of treatment methods for tinnitus utilizing sound. Many of these methods are still under investigation with respect to their specific protocols and effectiveness, and only some have been objectively evaluated in clinical trials. This chapter will review therapies for tinnitus using sound stimulation.

  20. Numerical Model on Sound-Solid Coupling in Human Ear and Study on Sound Pressure of Tympanic Membrane

    Directory of Open Access Journals (Sweden)

    Yao Wen-juan

    2011-01-01

    Full Text Available A three-dimensional finite-element model of the whole auditory system, including the external ear, middle ear, and inner ear, was established. A sound-solid-liquid coupling frequency-response analysis of the model was carried out. The correctness of the FE model was verified by comparing the vibration modes of the tympanic membrane and stapes footplate with the experimental data. Based on the calculation results of the model, the least-squares method was used to fit the distribution of sound pressure in the external auditory canal and to obtain the sound pressure function on the tympanic membrane, which varies with frequency. Using this sound pressure function, the pressure distribution on the tympanic membrane can be derived directly from the sound pressure at the external auditory canal opening. The sound pressure function can make the boundary conditions of the middle-ear structure more accurate in mechanical research and improves on the previous boundary treatment, which applied only a uniform pressure to the tympanic membrane.
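
The least-squares fitting step mentioned in this abstract can be sketched as follows. The frequencies and gain values are synthetic placeholders; a real fit would use the pressures computed by the finite-element model along the ear canal.

```python
import numpy as np

# Toy version of the fitting step: given the sound pressure gain from the
# canal opening to the tympanic membrane sampled at a few frequencies, an
# ordinary least-squares polynomial fit (in log-frequency) yields a smooth
# "sound pressure function" of frequency.
freqs = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0])   # Hz
gain_db = np.array([0.5, 1.0, 2.5, 6.0, 11.0, 4.0])                # synthetic

x = np.log10(freqs)
coeffs = np.polyfit(x, gain_db, deg=3)     # cubic least-squares fit
fitted = np.polyval(coeffs, x)

# Evaluate the fitted pressure function at an unmeasured frequency.
print(round(float(np.polyval(coeffs, np.log10(3000.0))), 2))
```

Once fitted, the polynomial plays the role of the paper's sound pressure function: it maps the pressure measured at the canal opening to an estimate on the tympanic membrane at any frequency in the fitted range.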

  1. Sounds in one-dimensional superfluid helium

    International Nuclear Information System (INIS)

    Um, C.I.; Kahng, W.H.; Whang, E.H.; Hong, S.K.; Oh, H.G.; George, T.F.

    1989-01-01

    The temperature variations of the first-, second-, and third-sound velocities and attenuation coefficients in one-dimensional superfluid helium are evaluated explicitly for very low temperatures and frequencies (ω_sτ ≪ 1), and the ratio of second sound to first sound becomes unity as the temperature decreases to absolute zero

  2. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    Full Text Available This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds, irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  3. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can
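
The segmentation problem described here, determining where sound events begin and end in a continuous recording, can be illustrated with a far simpler stand-in than the dissertation's dynamic Bayesian network: thresholding short-time RMS energy on a synthetic signal. Frame length and threshold are arbitrary illustrative choices.

```python
import numpy as np

def segment(signal, rate, frame_ms=50, threshold=0.1):
    # Split the recording into fixed frames, mark frames whose RMS energy
    # exceeds the threshold, and merge runs of active frames into events.
    frame = int(rate * frame_ms / 1000)
    n = len(signal) // frame
    rms = np.sqrt(np.mean(signal[:n * frame].reshape(n, frame) ** 2, axis=1))
    active = rms > threshold
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            events.append((start * frame / rate, i * frame / rate))
            start = None
    if start is not None:
        events.append((start * frame / rate, n * frame / rate))
    return events  # list of (onset_s, offset_s)

# Synthetic recording: silence, a 0.5 s tone, silence.
rate = 8000
t = np.arange(rate) / rate  # 1 s
sig = np.where((t >= 0.25) & (t < 0.75), np.sin(2 * np.pi * 440 * t), 0.0)
print(segment(sig, rate))  # → [(0.25, 0.75)]
```

A real system faces overlapping events, varying noise floors, and gradual onsets, which is what motivates the probabilistic feature-tracking approach described in the abstract.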

  4. The stability of second sound waves in a rotating Darcy–Brinkman porous layer in local thermal non-equilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Eltayeb, I A; Elbashir, T B A, E-mail: ieltayeb@squ.edu.om, E-mail: elbashir@squ.edu.om [Department of Mathematics and Statistics, College of Science, Sultan Qaboos University, Muscat 123 (Oman)

    2017-08-15

    The linear and nonlinear stabilities of second sound waves in a rotating porous Darcy–Brinkman layer in local thermal non-equilibrium are studied when the heat flux in the solid obeys the Cattaneo law. The simultaneous action of the Brinkman effect (effective viscosity) and rotation is shown to destabilise the layer, as compared to either of them acting alone, for both stationary and overstable modes. The effective viscosity tends to favour overstable modes while rotation tends to favour stationary convection. Rapid rotation invokes a negative viscosity effect that suppresses the stabilising effect of porosity so that the stability characteristics resemble those of the classical rotating Benard layer. A formal weakly nonlinear analysis yields evolution equations of the Landau–Stuart type governing the slow time development of the amplitudes of the unstable waves. The equilibrium points of the evolution equations are analysed and the overall development of the amplitudes is examined. Both overstable and stationary modes can exhibit supercritical stability; supercritical instability, subcritical instability and stability are not possible. The dependence of the supercritical stability on the relative values of the six dimensionless parameters representing thermal non-equilibrium, rotation, porosity, relaxation time, thermal diffusivities and Brinkman effect is illustrated as regions in regime diagrams in the parameter space. The dependence of the heat transfer and the mean heat flux on the parameters of the problem is also discussed. (paper)

  5. Sound-Symbolism Boosts Novel Word Learning

    Science.gov (United States)

    Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…

  6. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common that a physical system resonates at a particular frequency that depends on physical parameters which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to utilize standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.
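    The demodulation step a lock-in amplifier performs can be sketched numerically: mix the received signal with quadrature references at the tracked frequency, then low-pass the products (here, by simple averaging) to recover the amplitude of that one component while rejecting everything else. This is a generic lock-in model, not the authors' instrument; the signal and frequencies below are invented.

```python
import numpy as np

def lock_in(signal, ref_freq, sr):
    """Demodulate `signal` at `ref_freq` (Hz): mix with quadrature
    references and low-pass by averaging, as a lock-in amplifier does."""
    t = np.arange(len(signal)) / sr
    i = np.mean(signal * np.cos(2 * np.pi * ref_freq * t))   # in-phase
    q = np.mean(signal * np.sin(2 * np.pi * ref_freq * t))   # quadrature
    return 2 * np.hypot(i, q)        # amplitude of the tracked component

sr, f = 100_000, 1_000.0
t = np.arange(sr) / sr               # 1 s record
sig = (0.3 * np.sin(2 * np.pi * f * t + 0.7)       # tone of interest
       + 0.1 * np.sin(2 * np.pi * 3 * f * t))      # out-of-band tone
amp = lock_in(sig, f, sr)            # recovers ~0.3 regardless of phase
```

    Averaging over an integer number of reference cycles makes the out-of-band component cancel exactly; a tracking system would additionally slave `ref_freq` to the resonance as it drifts.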

  7. Three-dimensional MR imaging of congenital heart disease

    International Nuclear Information System (INIS)

    Laschinger, J.C.; Vannier, M.W.; Knapp, R.H.; Gutierrez, F.R.; Cox, J.L.

    1987-01-01

    Contiguous 5-mm thick ECG-gated MR images of the thorax were edited using surface reconstruction techniques to produce three-dimensional (3D) images of the heart and great vessels in four healthy individuals and 25 patients with congenital heart disease (aged 3 months to 30 years). Anomalies studied include atrial and ventricular septal defects, aortic coarctation, AV canal defects, double outlet ventricles, hypoplastic left heart syndrome, and a wide spectrum of patients with tetralogy of Fallot. The results were correlated with echocardiographic and cineradiographic studies, and with surgical findings or pathologic specimens. Three-dimensional reconstructions accurately depicted the dimensions and locations of all cardiac and great vessel anomalies and often displayed anatomic findings not diagnosed or visualized with other forms of diagnostic imaging.

  8. Underwater Sound Propagation from Marine Pile Driving.

    Science.gov (United States)

    Reyff, James A

    2016-01-01

    Pile driving occurs in a variety of nearshore environments that typically have very shallow-water depths. The propagation of pile-driving sound in water is complex, where sound is directly radiated from the pile as well as through the ground substrate. Piles driven in the ground near water bodies can produce considerable underwater sound energy. This paper presents examples of sound propagation through shallow-water environments. Some of these examples illustrate the substantial variation in sound amplitude over time that can be critical to understand when computing an acoustic-based safety zone for aquatic species.

  9. Preattentive processing of heart cues and the perception of heart symptoms in congenital heart disease.

    Science.gov (United States)

    Karsdorp, Petra A; Kindt, Merel; Everaerd, Walter; Mulder, Barbara J M

    2007-08-01

    The present study was aimed at clarifying whether preattentive processing of heart cues results in biased perception of heart sensations in patients with congenital heart disease (ConHD) who are also highly trait anxious. Twenty-six patients with ConHD and 22 healthy participants categorized heart-related (heart rate) or neutral sensations (constant vibration) as either heart or neutral. Both sensations were evoked using a bass speaker that was attached on the chest of the participant. Before each physical sensation, a subliminal heart-related or neutral prime was presented. Biased perception of heart-sensations would become evident by a delayed categorization of the heart-related sensations. In line with the prediction, a combination of high trait anxiety and ConHD resulted in slower responses after a heart-related sensation that was preceded by a subliminal heart cue. Preattentive processing of harmless heart cues may easily elicit overperception of heart symptoms in highly trait anxious patients with ConHD.

  10. Sound topology, duality, coherence and wave-mixing an introduction to the emerging new science of sound

    CERN Document Server

    Deymier, Pierre

    2017-01-01

    This book offers an essential introduction to the notions of sound wave topology, duality, coherence and wave-mixing, which constitute the emerging new science of sound. It includes general principles and specific examples that illuminate new non-conventional forms of sound (sound topology), unconventional quantum-like behavior of phonons (duality), radical linear and nonlinear phenomena associated with loss and its control (coherence), and exquisite effects that emerge from the interaction of sound with other physical and biological waves (wave mixing). The book provides the reader with the foundations needed to master these complex notions through simple yet meaningful examples. General principles for unraveling and describing the topology of acoustic wave functions in the space of their eigenvalues are presented. These principles are then applied to uncover intrinsic and extrinsic approaches to achieving non-conventional topologies by breaking the time reversal symmetry of acoustic waves. Symmetry brea...

  11. Diffuse sound field: challenges and misconceptions

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho

    2016-01-01

    Diffuse sound field is a popular, yet widely misused concept. Although its definition is relatively well established, acousticians use this term for different meanings. The diffuse sound field is defined by a uniform sound pressure distribution (spatial diffusion or homogeneity) and uniform...... tremendously in different chambers because the chambers are non-diffuse in variously different ways. Therefore, good objective measures that can quantify the degree of diffusion and potentially indicate how to fix such problems in reverberation chambers are needed. Acousticians often blend the concept...... of mixing and diffuse sound field. Acousticians often refer diffuse reflections from surfaces to diffuseness in rooms, and vice versa. Subjective aspects of diffuseness have not been much investigated. Finally, ways to realize a diffuse sound field in a finite space are discussed....

  12. WODA Technical Guidance on Underwater Sound from Dredging.

    Science.gov (United States)

    Thomsen, Frank; Borsani, Fabrizio; Clarke, Douglas; de Jong, Christ; de Wit, Pim; Goethals, Fredrik; Holtkamp, Martine; Martin, Elena San; Spadaro, Philip; van Raalte, Gerard; Victor, George Yesu Vedha; Jensen, Anders

    2016-01-01

    The World Organization of Dredging Associations (WODA) has identified underwater sound as an environmental issue that needs further consideration. A WODA Expert Group on Underwater Sound (WEGUS) prepared a guidance paper in 2013 on dredging sound, including a summary of potential impacts on aquatic biota and advice on underwater sound monitoring procedures. The paper follows a risk-based approach and provides guidance for standardization of acoustic terminology and methods for data collection and analysis. Furthermore, the literature on dredging-related sounds and the effects of dredging sounds on marine life is surveyed and guidance on the management of dredging-related sound risks is provided.

  13. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds

    Science.gov (United States)

    Dietz, Mathias; Marquardt, Torsten; Salminen, Nelli H.; McAlpine, David

    2013-01-01

    The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues, characteristic to reverberant speech. This stimulus, named amplitude modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of “glimpsing” low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments. PMID:23980161
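    A rough sketch of such a stimulus can be generated in a few lines (this is not the authors' exact construction; the carrier, beat, and modulation rates below are arbitrary): two carriers differing by a few hertz, one per ear, share a common raised-cosine amplitude modulation, so the binaural fine-structure cue sweeps through a full cycle while the envelope rises and falls, isolating the "rising portion" in which listeners extract spatial information.

```python
import numpy as np

def am_binaural_beat(fc=500.0, beat=4.0, fm=8.0, sr=44100, dur=1.0):
    """Stereo stimulus in the spirit of the amplitude modulated binaural
    beat: carriers differing by `beat` Hz in the two ears share a common
    amplitude modulation, so the interaural phase difference sweeps through
    a full cycle per beat period while loudness rises and falls."""
    t = np.arange(int(sr * dur)) / sr
    env = 0.5 * (1.0 - np.cos(2 * np.pi * fm * t))   # raised-cosine AM
    left = env * np.sin(2 * np.pi * fc * t)
    right = env * np.sin(2 * np.pi * (fc + beat) * t)
    return np.stack([left, right], axis=0), env

stereo, env = am_binaural_beat()
```

    Varying `fm` and the phase of `env` relative to the beat cycle is what allows the parametric, isolated manipulation of modulation frequency and phase relations described in the abstract.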

  14. Detecting change in stochastic sound sequences.

    Directory of Open Access Journals (Sweden)

    Benjamin Skerritt-Davis

    2018-05-01

    Our ability to parse our acoustic environment relies on the brain's capacity to extract statistical regularities from surrounding sounds. Previous work in regularity extraction has predominantly focused on the brain's sensitivity to predictable patterns in sound sequences. However, natural sound environments are rarely completely predictable, often containing some level of randomness, yet the brain is able to effectively interpret its surroundings by extracting useful information from stochastic sounds. It has been previously shown that the brain is sensitive to the marginal lower-order statistics of sound sequences (i.e., mean and variance). In this work, we investigate the brain's sensitivity to higher-order statistics describing temporal dependencies between sound events through a series of change detection experiments, where listeners are asked to detect changes in randomness in the pitch of tone sequences. Behavioral data indicate listeners collect statistical estimates to process incoming sounds, and a perceptual model based on Bayesian inference shows a capacity in the brain to track higher-order statistics. Further analysis of individual subjects' behavior indicates an important role of perceptual constraints in listeners' ability to track these sensory statistics with high fidelity. In addition, the inference model facilitates analysis of neural electroencephalography (EEG) responses, anchoring the analysis relative to the statistics of each stochastic stimulus. This reveals both a deviance response and a change-related disruption in phase of the stimulus-locked response that follow the higher-order statistics. These results shed light on the brain's ability to process stochastic sound sequences.

  15. Directional sound radiation from substation transformers

    International Nuclear Information System (INIS)

    Maybee, N.

    2009-01-01

    This paper presented the results of a study in which acoustical measurements at two substations were analyzed to investigate the directional behaviour of typical arrays having 2 or 3 transformers. Substation transformers produce a characteristic humming sound that is caused primarily by vibration of the core at twice the frequency of the power supply. The humming noise radiates predominantly from the tank enclosing the core. The main components of the sound are harmonics of 120 Hz. Sound pressure level data were obtained for various directions and distances from the arrays, ranging from 0.5 m to over 100 m. The measured sound pressure levels of the transformer tones displayed substantial positive and negative excursions from the calculated average values for many distances and directions. The results support the concept that the directional effects are associated with constructive and destructive interference of tonal sound waves emanating from different parts of the array. Significant variations in the directional sound pattern can occur in the near field of a single transformer or an array, and the extent of the near field is significantly larger than the scale of the array. Based on typical dimensions for substation sites, the distance to the far field may be much beyond the substation boundary and beyond typical setbacks to the closest dwellings. As such, the directional sound radiation produced by transformer arrays introduces additional uncertainty in the prediction of substation sound levels at dwellings within a few hundred meters of a substation site. 4 refs., 4 figs.
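    The interference mechanism invoked here is easy to reproduce numerically. The sketch below sums the free-field pressure of two coherent, in-phase 120 Hz monopoles (a stand-in for two humming transformer tanks; the geometry is invented) at receivers on a surrounding circle, showing the deep direction-dependent lobing that causes large excursions from the average level.

```python
import numpy as np

def tone_levels(freq=120.0, c=343.0, src_x=(-5.0, 5.0), radius=100.0, n=360):
    """Sound pressure level (dB re arbitrary) of two coherent in-phase
    monopoles on the x-axis, sampled on a circle of receivers, showing
    constructive/destructive interference with direction."""
    k = 2 * np.pi * freq / c
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    rx = radius * np.cos(theta)
    ry = radius * np.sin(theta)
    p = np.zeros(n, dtype=complex)
    for sx in src_x:
        r = np.hypot(rx - sx, ry)
        p += np.exp(1j * k * r) / r      # free-field monopole contribution
    return theta, 20 * np.log10(np.abs(p) + 1e-16)

theta, levels = tone_levels()
span = levels.max() - levels.min()       # depth of the lobing pattern, dB
```

    With a 10 m source spacing and a 2.9 m wavelength the pattern has many lobes, and the null depth easily exceeds the few-dB uncertainty typically budgeted in substation noise predictions.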

  16. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2014-10-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

  17. On the sound absorption coefficient of porous asphalt pavements for oblique incident sound waves

    NARCIS (Netherlands)

    Bezemer-Krijnen, Marieke; Wijnant, Ysbrand H.; de Boer, Andries; Bekke, Dirk; Davy, J.; Don, Ch.; McMinn, T.; Dowsett, L.; Broner, N.; Burgess, M.

    2014-01-01

    A rolling tyre will radiate noise in all directions. However, conventional measurement techniques for the sound absorption of surfaces only give the absorption coefficient for normal incidence. In this paper, a measurement technique is described with which it is possible to perform in situ sound

  18. Neuroplasticity beyond sounds

    DEFF Research Database (Denmark)

    Reybrouck, Mark; Brattico, Elvira

    2015-01-01

    Capitalizing from neuroscience knowledge on how individuals are affected by the sound environment, we propose to adopt a cybernetic and ecological point of view on the musical aesthetic experience, which includes subprocesses, such as feature extraction and integration, early affective reactions...... and motor actions, style mastering and conceptualization, emotion and proprioception, evaluation and preference. In this perspective, the role of the listener/composer/performer is seen as that of an active "agent" coping in highly individual ways with the sounds. The findings concerning the neural...

  19. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling, to assess the average, the variance and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)

  20. Heart transplantation in adults with congenital heart disease.

    Science.gov (United States)

    Houyel, Lucile; To-Dumortier, Ngoc-Tram; Lepers, Yannick; Petit, Jérôme; Roussin, Régine; Ly, Mohamed; Lebret, Emmanuel; Fadel, Elie; Hörer, Jürgen; Hascoët, Sébastien

    2017-05-01

    With the advances in congenital cardiac surgery and postoperative care, an increasing number of children with complex congenital heart disease now reach adulthood. There are already more adults than children living with a congenital heart defect, including patients with complex congenital heart defects. Among these adults with congenital heart disease, a significant number will develop ventricular dysfunction over time. Heart failure accounts for 26-42% of deaths in adults with congenital heart defects. Heart transplantation, or heart-lung transplantation in Eisenmenger syndrome, then becomes the ultimate therapeutic possibility for these patients. This population is deemed to be at high risk of mortality after heart transplantation, although their long-term survival is similar to that of patients transplanted for other reasons. Indeed, heart transplantation in adults with congenital heart disease is often challenging, because of several potential problems: complex cardiac and vascular anatomy, multiple previous palliative and corrective surgeries, and effects on other organs (kidney, liver, lungs) of long-standing cardiac dysfunction or cyanosis, with frequent elevation of pulmonary vascular resistance. In this review, we focus on the specific problems relating to heart and heart-lung transplantation in this population, revisit the indications/contraindications, and update the long-term outcomes. Copyright © 2017. Published by Elsevier Masson SAS.

  1. Large Mammalian Animal Models of Heart Disease

    Directory of Open Access Journals (Sweden)

    Paula Camacho

    2016-10-01

    Due to the biological complexity of the cardiovascular system, animal models are an urgent pre-clinical need to advance our knowledge of cardiovascular disease and to explore new drugs to repair the damaged heart. Ideally, a model system should be inexpensive, easily manipulated, reproducible, a biological representative of human disease, and ethically sound. Although a larger animal model is more expensive and difficult to manipulate, its genetic, structural, functional, and even disease similarities to humans make it an ideal model to first consider. This review presents the commonly used large animals (dog, sheep, pig, and non-human primates), while less commonly used large animals (cows, horses) are excluded. The review attempts to introduce unique points for each species regarding its biological properties, degrees of susceptibility to develop certain types of heart disease, and methodology of induced conditions. For example, dogs rarely develop myocardial infarction, while dilated cardiomyopathy develops quite often. Based on the similarities of each species to the human, model selection may first consider non-human primates, then pig, sheep, and dog, but it also depends on other factors, for example, purposes, funding, ethics, and policy. We hope this review can serve as a basic outline of large animal models for cardiovascular researchers and clinicians.

  2. Estimation of probability of coastal flooding: A case study in the Norton Sound, Alaska

    Science.gov (United States)

    Kim, S.; Chapman, R. S.; Jensen, R. E.; Azleton, M. T.; Eisses, K. J.

    2010-12-01

    Along the Norton Sound, Alaska, coastal communities have been exposed to flooding induced by extra-tropical storms. A lack of observational data, especially on long-term variability, makes it difficult to assess the probability of coastal flooding, which is critical in planning for development and evacuation of the coastal communities. We estimated the probability of coastal flooding with the help of an existing storm surge model using ADCIRC and a wave model using WAM for Western Alaska, which includes the Norton Sound as well as the adjacent Bering Sea and Chukchi Sea. The surface pressure and winds as well as ice coverage were analyzed and put in a gridded format at a 3 hour interval over the entire Alaskan Shelf by Ocean Weather Inc. (OWI) for the period between 1985 and 2009. OWI also analyzed the surface conditions for the storm events over the 31 year time period between 1954 and 1984. The correlation between water levels recorded by the NOAA tide gage and local meteorological conditions at Nome between 1992 and 2005 suggested that strong local winds with prevailing Southerly components are good proxies for high water events. Heuristically selected local winds with prevailing Westerly components at Shaktoolik, at the eastern end of the Norton Sound, provided an extra selection of flood events during the continuous meteorological data record between 1985 and 2009. The frequency analyses were performed using the simulated water levels and wave heights for the 56 year time period between 1954 and 2009. Different methods of estimating return periods were compared, including the method according to the FEMA guideline, extreme value statistics, and fitting to statistical distributions such as Weibull and Gumbel. The estimates are similar, as expected, but with some variation.
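    One of the return-period methods mentioned, fitting a Gumbel (extreme-value type I) distribution to annual maxima, can be sketched as follows. The record below is synthetic, not Norton Sound data, and the method-of-moments fit is only one of several possible estimators.

```python
import numpy as np

def gumbel_return_levels(annual_max, return_periods=(10, 50, 100)):
    """Fit a Gumbel (EV-I) distribution to annual maxima by the method
    of moments and evaluate return levels:
        z_T = mu - beta * ln(-ln(1 - 1/T))"""
    x = np.asarray(annual_max, dtype=float)
    beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi
    mu = x.mean() - 0.5772156649 * beta      # Euler-Mascheroni constant
    return {T: mu - beta * np.log(-np.log(1.0 - 1.0 / T))
            for T in return_periods}

# Synthetic 56-year record of annual maximum surge heights (metres);
# the values are illustrative only.
rng = np.random.default_rng(7)
record = rng.gumbel(loc=2.0, scale=0.4, size=56)
levels = gumbel_return_levels(record)        # e.g. levels[100] = 100-yr level
```

    With only a few decades of record, the 100-year level is an extrapolation; comparing several fits (Weibull, Gumbel, empirical plotting positions) as the study does is a sensible check on that uncertainty.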

  3. Moth hearing and sound communication

    DEFF Research Database (Denmark)

    Nakano, Ryo; Takanashi, Takuma; Surlykke, Annemarie

    2015-01-01

    Active echolocation enables bats to orient and hunt the night sky for insects. As a counter-measure against the severe predation pressure many nocturnal insects have evolved ears sensitive to ultrasonic bat calls. In moths bat-detection was the principal purpose of hearing, as evidenced by comparable hearing physiology with best sensitivity in the bat echolocation range, 20–60 kHz, across moths in spite of diverse ear morphology. Some eared moths subsequently developed sound-producing organs to warn/startle/jam attacking bats and/or to communicate intraspecifically with sound. Not only the sounds for interaction with bats, but also mating signals are within the frequency range where bats echolocate, indicating that sound communication developed after hearing by "sensory exploitation". Recent findings on moth sound communication reveal that close-range (~ a few cm) communication with low...

  4. High frequency ion sound waves associated with Langmuir waves in type III radio burst source regions

    Directory of Open Access Journals (Sweden)

    G. Thejappa

    2004-01-01

    Short wavelength ion sound waves (2-4 kHz) are detected in association with the Langmuir waves (~15-30 kHz) in the source regions of several local type III radio bursts. They are most probably not due to any resonant wave-wave interactions such as the electrostatic decay instability because their wavelengths are much shorter than those of Langmuir waves. The Langmuir waves occur as coherent field structures with peak intensities exceeding the Langmuir collapse thresholds. Their scale sizes are of the order of the wavelength of an ion sound wave. These Langmuir wave field characteristics indicate that the observed short wavelength ion sound waves are most probably generated during the thermalization of the burnt-out cavitons left behind by the Langmuir collapse. Moreover, the peak intensities of the observed short wavelength ion sound waves are comparable to the expected intensities of those ion sound waves radiated by the burnt-out cavitons. However, the speeds of the electron beams derived from the frequency drift of type III radio bursts are too slow to satisfy the needed adiabatic ion approximation. Therefore, some non-linear process such as induced scattering on thermal ions most probably pumps the beam-excited Langmuir waves towards lower wavenumbers, where the adiabatic ion approximation is justified.

  5. Women's Heart Disease: Heart Attack Symptoms

    Science.gov (United States)

    The most common heart attack symptom in men and women is chest pain or discomfort. However, women also ...

  6. A caudal proliferating growth center contributes to both poles of the forming heart tube

    NARCIS (Netherlands)

    van den Berg, G.; Abu-Issa, R.; de Boer, B.A.; Hutson, M.R.; de Boer, P.A.J.; Soufan, A.T.; Ruijter, J.M.; Kirby, M.L.; van den Hoff, M.J.B.; Moorman, A.F.M.

    2009-01-01

    Recent studies have shown that the primary heart tube continues to grow by addition of cells from the coelomic wall. This growth occurs concomitantly with embryonic folding and formation of the coelomic cavity, making early heart formation morphologically complex. A scarcity of data on localized

  7. Linking evidence to action on social determinants of health using Urban HEART in the Americas.

    Science.gov (United States)

    Prasad, Amit; Groot, Ana Maria Mahecha; Monteiro, Teofilo; Murphy, Kelly; O'Campo, Patricia; Broide, Emilia Estivalet; Kano, Megumi

    2013-12-01

    To evaluate the experience of select cities in the Americas using the Urban Health Equity Assessment and Response Tool (Urban HEART) launched by the World Health Organization in 2010 and to determine its utility in supporting government efforts to improve health equity using the social determinants of health (SDH) approach. The Urban HEART experience was evaluated in four cities from 2010-2013: Guarulhos (Brazil), Toronto (Canada), and Bogotá and Medellín (Colombia). Reports were submitted by Urban HEART teams in each city and supplemented by first-hand accounts of key informants. The analysis considered each city's networks and the resources it used to implement Urban HEART; the process by which each city identified equity gaps and prioritized interventions; and finally, the facilitators and barriers encountered, along with next steps. In three cities, local governments spearheaded the process, while in the fourth (Toronto), academia initiated and led the process. All cities used Urban HEART as a platform to engage multiple stakeholders. Urban HEART's Matrix and Monitor were used to identify equity gaps within cities. While Bogotá and Medellín prioritized among existing interventions, Guarulhos adopted new interventions focused on deprived districts. Actions were taken on intermediate determinants, e.g., health systems access, and structural SDH, e.g., unemployment and human rights. Urban HEART provides local governments with a simple and systematic method for assessing and responding to health inequity. Through the SDH approach, the tool has provided a platform for intersectoral action and community involvement. While some areas of guidance could be strengthened, Urban HEART is a useful tool for directing local action on health inequities, and should be scaled up within the Region of the Americas, building upon current experience.

  8. Linking evidence to action on social determinants of health using Urban HEART in the Americas

    Directory of Open Access Journals (Sweden)

    Amit Prasad

    2013-12-01

    OBJECTIVE: To evaluate the experience of select cities in the Americas using the Urban Health Equity Assessment and Response Tool (Urban HEART) launched by the World Health Organization in 2010 and to determine its utility in supporting government efforts to improve health equity using the social determinants of health (SDH) approach. METHODS: The Urban HEART experience was evaluated in four cities from 2010-2013: Guarulhos (Brazil), Toronto (Canada), and Bogotá and Medellín (Colombia). Reports were submitted by Urban HEART teams in each city and supplemented by first-hand accounts of key informants. The analysis considered each city's networks and the resources it used to implement Urban HEART; the process by which each city identified equity gaps and prioritized interventions; and finally, the facilitators and barriers encountered, along with next steps. RESULTS: In three cities, local governments spearheaded the process, while in the fourth (Toronto), academia initiated and led the process. All cities used Urban HEART as a platform to engage multiple stakeholders. Urban HEART's Matrix and Monitor were used to identify equity gaps within cities. While Bogotá and Medellín prioritized among existing interventions, Guarulhos adopted new interventions focused on deprived districts. Actions were taken on intermediate determinants, e.g., health systems access, and structural SDH, e.g., unemployment and human rights. CONCLUSIONS: Urban HEART provides local governments with a simple and systematic method for assessing and responding to health inequity. Through the SDH approach, the tool has provided a platform for intersectoral action and community involvement. While some areas of guidance could be strengthened, Urban HEART is a useful tool for directing local action on health inequities, and should be scaled up within the Region of the Americas, building upon current experience.

  9. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

    Full Text Available As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, the sound directly collected from an industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by the wavelet transform (WT) to obtain the coefficients of each level. As the wavelet threshold functions proposed by Donoho are discontinuous, many modified functions with continuous first- and second-order derivatives have been presented to realize adaptive denoising. However, the function-based denoising process is time-consuming, and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced to the process. Moreover, to avoid falling into local extrema, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. Then, the sound signal of a motor was recorded in a soundproof laboratory, and Gaussian white noise was added to the signal. The simulation results illustrate the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer in a coal mining working face was performed to demonstrate the practical effect.
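The thresholding step this abstract builds on can be sketched in a few lines. This is not the paper's WTD-IFOA implementation: it is a minimal single-level Haar decomposition with Donoho-style soft thresholding, using a fixed threshold `t` where the paper would instead search for optimal thresholds with the improved FOA.

```python
import math

def haar_decompose(x):
    """One level of the Haar wavelet transform (len(x) must be even)."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Inverse of haar_decompose."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) / s, (a - d) / s])
    return x

def soft_threshold(coeffs, t):
    """Donoho soft-thresholding: shrink each coefficient toward zero by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(x, t):
    """Denoise by thresholding the detail coefficients only."""
    approx, detail = haar_decompose(x)
    return haar_reconstruct(approx, soft_threshold(detail, t))
```

With a large threshold the detail band is zeroed and only the smooth approximation survives; with `t = 0` the transform reconstructs the input exactly.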

  10. Simulation of sound waves using the Lattice Boltzmann Method for fluid flow: Benchmark cases for outdoor sound propagation

    NARCIS (Netherlands)

    Salomons, E.M.; Lohman, W.J.A.; Zhou, H.

    2016-01-01

    Propagation of sound waves in air can be considered a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used to simulate sound propagation. In this article, application of the LBM to sound propagation is illustrated for various cases:
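For readers unfamiliar with the method, a minimal one-dimensional lattice Boltzmann solver (D1Q3 with BGK collision) shows how a fluid-flow scheme also carries density, i.e. sound, waves. The grid size, relaxation time `tau`, and pulse amplitude below are illustrative choices, not values from the article.

```python
import math

W = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]  # D1Q3 lattice weights
C = [0, 1, -1]                          # discrete velocities
CS2 = 1.0 / 3.0                         # lattice speed of sound squared

def equilibrium(rho, u):
    """Second-order D1Q3 equilibrium distribution."""
    return [W[i] * rho * (1.0 + C[i] * u / CS2
                          + (C[i] * u) ** 2 / (2.0 * CS2 * CS2)
                          - u * u / (2.0 * CS2))
            for i in range(3)]

def step(f, tau):
    """One BGK collision + streaming step with periodic boundaries."""
    n = len(f)
    for x in range(n):
        rho = sum(f[x])
        u = (f[x][1] - f[x][2]) / rho
        feq = equilibrium(rho, u)
        for i in range(3):
            f[x][i] += (feq[i] - f[x][i]) / tau
    g = [[0.0] * 3 for _ in range(n)]
    for x in range(n):
        g[x][0] = f[x][0]            # rest population stays put
        g[(x + 1) % n][1] = f[x][1]  # right-movers stream right
        g[(x - 1) % n][2] = f[x][2]  # left-movers stream left
    return g

def run(n=64, steps=50, tau=0.6):
    """Propagate a small Gaussian density pulse; return the density field."""
    f = [equilibrium(1.0 + 0.01 * math.exp(-((x - n // 2) ** 2) / 8.0), 0.0)
         for x in range(n)]
    for _ in range(steps):
        f = step(f, tau)
    return [sum(fx) for fx in f]
```

The pulse splits into two acoustic fronts travelling at the lattice sound speed 1/sqrt(3); total mass is conserved exactly by both collision and streaming.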

  11. A hybrid finite element - statistical energy analysis approach to robust sound transmission modeling

    Science.gov (United States)

    Reynders, Edwin; Langley, Robin S.; Dijckmans, Arne; Vermeir, Gerrit

    2014-09-01

    When considering the sound transmission through a wall in between two rooms, in an important part of the audio frequency range, the local response of the rooms is highly sensitive to uncertainty in spatial variations in geometry, material properties and boundary conditions, which have a wave scattering effect, while the local response of the wall is rather insensitive to such uncertainty. For this mid-frequency range, a computationally efficient modeling strategy is adopted that accounts for this uncertainty. The partitioning wall is modeled deterministically, e.g. with finite elements. The rooms are modeled in a very efficient, nonparametric stochastic way, as in statistical energy analysis. All components are coupled by means of a rigorous power balance. This hybrid strategy is extended so that the mean and variance of the sound transmission loss can be computed as well as the transition frequency that loosely marks the boundary between low- and high-frequency behavior of a vibro-acoustic component. The method is first validated in a simulation study, and then applied for predicting the airborne sound insulation of a series of partition walls of increasing complexity: a thin plastic plate, a wall consisting of gypsum blocks, a thicker masonry wall and a double glazing. It is found that the uncertainty caused by random scattering is important except at very high frequencies, where the modal overlap of the rooms is very high. The results are compared with laboratory measurements, and both are found to agree within the prediction uncertainty in the considered frequency range.
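A full hybrid FE-SEA model is far beyond a short snippet, but the normal-incidence mass law, the classical baseline against which sound-transmission predictions like these are judged, is easy to state. The function below is a generic textbook formula, not code from the paper; the air impedance is an assumed round value for roughly 20 °C.

```python
import math

RHO_C_AIR = 413.0  # characteristic impedance of air, rho0*c0 [Pa.s/m], ~20 C

def mass_law_tl(f_hz, surface_mass):
    """Normal-incidence mass-law transmission loss [dB] of a limp panel.

    f_hz: frequency in Hz; surface_mass: mass per unit area in kg/m^2.
    Valid well below the panel's critical (coincidence) frequency.
    TL = 10*log10(1 + (pi*f*m / (rho0*c0))^2)
    """
    z = math.pi * f_hz * surface_mass / RHO_C_AIR
    return 10.0 * math.log10(1.0 + z * z)
```

In the asymptotic regime this reproduces the familiar rule of thumb: doubling either the frequency or the surface mass adds about 6 dB of transmission loss.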

  12. Beacons of Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2018-01-01

    The chapter discusses expectations and imaginations vis-à-vis the concert hall of the twenty-first century. It outlines some of the central historical implications of western culture’s haven for sounding music. Based on the author’s study of the Icelandic concert-house Harpa, the chapter considers… how these implications, together with the prime mover’s visions, have been transformed as private investors and politicians took over. The chapter furthermore investigates the objectives regarding musical sound and the far-reaching demands concerning acoustics that modern concert halls are required…

  13. Sound & The Senses

    DEFF Research Database (Denmark)

    Schulze, Holger

    2012-01-01

    How are those sounds you hear right now technically generated and post-produced, how are they aesthetically conceptualized, and how culturally dependent are they really? How is your ability to hear intertwined with all the other senses and their cultural, biographical and technological construction… over time? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Jonathan Sterne, AGF a.k.a. Antye Greie, Jens Gerrit Papenburg & Holger Schulze.

  14. Local experience on radionuclide myocardial imaging in the Philippines at the Philippine Heart Center for Asia

    International Nuclear Information System (INIS)

    Villacorta, E.V.

    1977-01-01

    The Nuclear Medicine Department of the Philippine Heart Center has introduced the detection of coronary heart disease through myocardial perfusion imaging. The cardiovascular procedures are available free of charge to registered PHCA patients, except for the costly Tl-201 imaging. In summary, coronary perfusion imaging should be an integral part of coronary arteriography. Cost aside, Tl-201 myocardial perfusion imaging is ideal for the detection of coronary heart disease. Experience shows better sensitivity of Tl-201 than exercise ECG for the detection of ischemia. Another non-invasive procedure for the detection of acute infarction is radionuclide imaging using the bone radiopharmaceutical Tc-99m pyrophosphate. In conclusion, acute infarct imaging is a valuable adjunct to ECG and enzyme studies. (RTD)

  15. Population diversity in Pacific herring of the Puget Sound, USA.

    Science.gov (United States)

    Siple, Margaret C; Francis, Tessa B

    2016-01-01

    Demographic, functional, or habitat diversity can confer stability on populations via portfolio effects (PEs) that integrate across multiple ecological responses and buffer against environmental impacts. The prevalence of these PEs in aquatic organisms is as yet unknown, and can be difficult to quantify; however, understanding mechanisms that stabilize populations in the face of environmental change is a key concern in ecology. Here, we examine PEs in Pacific herring (Clupea pallasii) in Puget Sound (USA) using a 40-year time series of biomass data for 19 distinct spawning population units collected using two survey types. Multivariate auto-regressive state-space models show independent dynamics among spawning subpopulations, suggesting that variation in herring production is partially driven by local effects at spawning grounds or during the earliest life history stages. This independence at the subpopulation level confers a stabilizing effect on the overall Puget Sound spawning stock, with herring being as much as three times more stable in the face of environmental perturbation than a single population unit of the same size. Herring populations within Puget Sound are highly asynchronous but share a common negative growth rate and may be influenced by the Pacific Decadal Oscillation. The biocomplexity in the herring stock shown here demonstrates that preserving spatial and demographic diversity can increase the stability of this herring population and its availability as a resource for consumers.
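The stabilizing effect described here is often summarized by a CV-based portfolio-effect ratio: the average coefficient of variation of the subpopulations divided by the CV of the aggregate stock. The sketch below is that simple diagnostic only, not the multivariate auto-regressive state-space (MARSS) analysis the study actually performs.

```python
from statistics import mean, pstdev

def cv(series):
    """Coefficient of variation: population std dev divided by the mean."""
    return pstdev(series) / mean(series)

def portfolio_effect(subpops):
    """Ratio of mean subpopulation CV to the CV of the summed stock.

    subpops: equal-length biomass time series, one per subpopulation.
    Values > 1 mean the aggregate is more stable than its average
    component, i.e. asynchrony among subpopulations buffers the whole.
    """
    total = [sum(vals) for vals in zip(*subpops)]
    return mean(cv(s) for s in subpops) / cv(total)
```

Perfectly synchronized subpopulations give a ratio of exactly 1; asynchronous dynamics, as reported for the Puget Sound spawning units, push it above 1.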

  16. Acoustic source localization : Exploring theory and practice

    NARCIS (Netherlands)

    Wind, Jelmer

    2009-01-01

    Over the past few decades, noise pollution has become an important issue in modern society. This has led to an increased effort in industry to reduce noise. Acoustic source localization methods determine the location and strength of the vibrations that are the cause of sound, based on measurements of

  17. Sound For Animation And Virtual Reality

    Science.gov (United States)

    Hahn, James K.; Docter, Pete; Foster, Scott H.; Mangini, Mark; Myers, Tom; Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)

    1995-01-01

    Sound is an integral part of the experience in computer animation and virtual reality. In this course, we will present some of the important technical issues in sound modeling, rendering, and synchronization as well as the "art" and business of sound that are being applied in animations, feature films, and virtual reality. The central theme is to bring leading researchers and practitioners from various disciplines to share their experiences in this interdisciplinary field. The course will give the participants an understanding of the problems and techniques involved in producing and synchronizing sounds, sound effects, dialogue, and music. The problem spans a number of domains including computer animation and virtual reality. Since sound has been an integral part of animations and films much longer than for computer-related domains, we have much to learn from traditional animation and film production. By bringing leading researchers and practitioners from a wide variety of disciplines, the course seeks to give the audience a rich mixture of experiences. It is expected that the audience will be able to apply what they have learned from this course in their research or production.

  18. Analysis and Synthesis of Musical Instrument Sounds

    Science.gov (United States)

    Beauchamp, James W.

    For synthesizing a wide variety of musical sounds, it is important to understand which acoustic properties of musical instrument sounds are related to specific perceptual features. Some properties are obvious: Amplitude and fundamental frequency easily control loudness and pitch. Other perceptual features are related to sound spectra and how they vary with time. For example, tonal "brightness" is strongly connected to the centroid or tilt of a spectrum. "Attack impact" (sometimes called "bite" or "attack sharpness") is strongly connected to spectral features during the first 20-100 ms of sound, as well as the rise time of the sound. Tonal "warmth" is connected to spectral features such as "incoherence" or "inharmonicity."
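The link between tonal brightness and the spectral centroid can be made concrete. The function below is a generic amplitude-weighted centroid over spectral components; it illustrates the measure discussed, and is not code from the author.

```python
def spectral_centroid(freqs, amps):
    """Amplitude-weighted mean frequency of a spectrum [Hz].

    freqs: component frequencies in Hz; amps: corresponding amplitudes.
    A higher centroid correlates with a perceptually 'brighter' tone.
    """
    total = sum(amps)
    return sum(f * a for f, a in zip(freqs, amps)) / total
```

Shifting energy into the upper partials raises the centroid, which matches the perceptual observation that stronger high harmonics sound brighter.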

  19. Audio-visual interactions in product sound design

    NARCIS (Netherlands)

    Özcan, E.; Van Egmond, R.

    2010-01-01

    Consistent product experience requires congruity between product properties such as visual appearance and sound. Therefore, for designing appropriate product sounds by manipulating their spectral-temporal structure, product sounds should preferably not be considered in isolation but as an integral

  20. PRESSURE EQUILIBRIUM BETWEEN THE LOCAL INTERSTELLAR CLOUDS AND THE LOCAL HOT BUBBLE

    Energy Technology Data Exchange (ETDEWEB)

    Snowden, S. L.; Chiao, M.; Collier, M. R.; Porter, F. S.; Thomas, N. E. [NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Cravens, T.; Robertson, I. P. [Department of Physics and Astronomy, University of Kansas, 1251 Wescoe Hall Drive, Lawrence, KS 66045 (United States); Galeazzi, M.; Uprety, Y.; Ursino, E. [Department of Physics, University of Miami, 1320 Campo Sano Drive, Coral Gables, FL 33146 (United States); Koutroumpa, D. [Université Versailles St-Quentin, Sorbonne Universités, UPMC Univ. Paris 06, CNRS/INSU, LATMOS-IPSL, 11 Boulevard d'Alembert, F-78280 Guyancourt (France); Kuntz, K. D. [The Henry A. Rowland Department of Physics and Astronomy, The Johns Hopkins University, Baltimore, MD 21218 (United States); Lallement, R.; Puspitarini, L. [GEPI, Observatoire de Paris, CNRS UMR8111, Université Paris Diderot, 5 Place Jules Janssen, F-92190 Meudon (France); Lepri, S. T. [University of Michigan, 2455 Hayward Street, Ann Arbor, MI 48109 (United States); McCammon, D.; Morgan, K. [Department of Physics, University of Wisconsin, 1150 University Avenue, Madison, WI 53706 (United States); Walsh, B. M., E-mail: steven.l.snowden@nasa.gov [Space Sciences Laboratory, 7 Gauss Way, Berkeley, CA 94720 (United States)

    2014-08-10

    Three recent results related to the heliosphere and the local interstellar medium (ISM) have provided an improved insight into the distribution and conditions of material in the solar neighborhood. These are the measurement of the magnetic field outside of the heliosphere by Voyager 1, the improved mapping of the three-dimensional structure of neutral material surrounding the Local Cavity using extensive ISM absorption line and reddening data, and a sounding rocket flight which observed the heliospheric helium focusing cone in X-rays and provided a robust estimate of the contribution of solar wind charge exchange emission to the ROSAT All-Sky Survey 1/4 keV band data. Combining these disparate results, we show that the thermal pressure of the plasma in the Local Hot Bubble (LHB) is P/k = 10,700 cm⁻³ K. If the LHB is relatively free of a global magnetic field, it can easily be in pressure (thermal plus magnetic field) equilibrium with the local interstellar clouds, eliminating a long-standing discrepancy in models of the local ISM.