WorldWideScience

Sample records for sound recording

  1. The science of sound recording

    CERN Document Server

    Kadis, Jay

    2012-01-01

    The Science of Sound Recording will provide you with more than just an introduction to sound and recording; it will allow you to dive right into some of the technical areas that often appear overwhelming to anyone without an electrical engineering or physics background. The Science of Sound Recording helps you build a basic foundation of scientific principles, explaining how recording really works. Packed with valuable must-know information, illustrations, and examples of worked-through equations, this book introduces the theory behind sound recording practices in a logical and prac

  2. Sound and recording applications and theory

    CERN Document Server

    Rumsey, Francis

    2014-01-01

    Providing vital reading for audio students and trainee engineers, this guide is ideal for anyone who wants a solid grounding in both theory and industry practices in audio, sound and recording. There are many books on the market covering "how to work it" when it comes to audio equipment, but Sound and Recording isn't one of them. Instead, you'll gain an understanding of "how it works" with this approachable guide to audio systems. New to this edition: Digital audio section revised substantially to include the latest developments in audio networking (e.g. RAVENNA, AES X-192, AVB), high-resolut

  3. Noise detection in heart sound recordings.

    Science.gov (United States)

    Zia, Mohammad K; Griffel, Benjamin; Fridman, Vladimir; Saponieri, Cesare; Semmlow, John L

    2011-01-01

    Coronary artery disease (CAD) is the leading cause of death in the United States. Although progression of CAD can be controlled using drugs and diet, it is usually detected in advanced stages when invasive treatment is required. Current methods to detect CAD are invasive and/or costly, hence not suitable as a regular screening tool to detect CAD in early stages. Currently, we are developing a noninvasive and cost-effective system to detect CAD using the acoustic approach. This method identifies sounds generated by turbulent flow through partially narrowed coronary arteries to detect CAD. The limiting factor of this method is sensitivity to noises commonly encountered in the clinical setting. Because the CAD sounds are faint, these noises can easily obscure the CAD sounds and make detection impossible. In this paper, we propose a method to detect and eliminate noise encountered in the clinical setting using a reference channel. We show that our method is effective in detecting noise, which is essential to the success of the acoustic approach.

  4. Optical Reading and Playing of Sound Signals from Vinyl Records

    OpenAIRE

    Hensman, Arnold; Casey, Kevin

    2007-01-01

    While advanced digital music systems such as compact disk players and MP3 have become the standard in sound reproduction technology, critics claim that conversion to digital often results in a loss of sound quality and richness. For this reason, vinyl records remain the medium of choice for many audiophiles involved in specialist areas. The waveform cut into a vinyl record is an exact replica of the analogue version from the original source. However, while some perceive this medium as reproduc...

  5. Automatic Bowel Motility Evaluation Technique for Noncontact Sound Recordings

    Directory of Open Access Journals (Sweden)

    Ryunosuke Sato

    2018-06-01

    Information on bowel motility can be obtained via magnetic resonance imaging (MRI) and X-ray imaging. However, these approaches require expensive medical instruments and are unsuitable for frequent monitoring. Bowel sounds (BS) can be conveniently obtained using electronic stethoscopes and have recently been employed for the evaluation of bowel motility. More recently, our group proposed a novel method to evaluate bowel motility on the basis of BS acquired using a noncontact microphone. However, the method required manually detecting BS in the sound recordings, and manual segmentation is inconvenient and time-consuming. To address this issue, herein, we propose a new method to automatically evaluate bowel motility for noncontact sound recordings. Using simulations for the sound recordings obtained from 20 human participants, we showed that the proposed method achieves an accuracy of approximately 90% in automatic bowel sound detection when power-normalized cepstral coefficients are used as acoustic features and inputs to artificial neural networks. Furthermore, we showed that bowel motility can be evaluated based on the three acoustic features in the time domain extracted by our method: BS per minute, signal-to-noise ratio, and sound-to-sound interval. The proposed method has the potential to contribute towards the development of noncontact evaluation methods for bowel motility.
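
    The pipeline above pairs frame-level cepstral features with a small neural network. A minimal sketch follows, assuming librosa and scikit-learn are available and using MFCCs as a stand-in for the power-normalized cepstral coefficients named in the abstract; the window length, network size, and labelled training arrays (X_train, y_train) are illustrative assumptions, not the authors' settings.

      # Sketch: cepstral features per frame + a small neural network to flag bowel-sound frames.
      import numpy as np
      import librosa
      from sklearn.neural_network import MLPClassifier

      def frame_features(path, sr=8000, frame_s=0.05):
          y, sr = librosa.load(path, sr=sr)
          hop = int(frame_s * sr)
          mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                      n_fft=2 * hop, hop_length=hop)
          return mfcc.T  # one 13-dimensional feature vector per frame

      # X_train / y_train: labelled frames (1 = bowel sound, 0 = background), hypothetical data
      # clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_train, y_train)
      # frame_labels = clf.predict(frame_features("abdominal_recording.wav"))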

  6. Noise detection during heart sound recording using periodicity signatures

    International Nuclear Information System (INIS)

    Kumar, D; Carvalho, P; Paiva, R P; Henriques, J; Antunes, M

    2011-01-01

    Heart sound is a valuable biosignal for diagnosis of a large set of cardiac diseases. Ambient and physiological noise interference is one of the most usual and highly probable incidents during heart sound acquisition. It tends to change the morphological characteristics of heart sound that may carry important information for heart disease diagnosis. In this paper, we propose a new method applicable in real time to detect ambient and internal body noises manifested in heart sound during acquisition. The algorithm is developed on the basis of the periodic nature of heart sounds and physiologically inspired criteria. A small segment of uncontaminated heart sound exhibiting periodicity in time as well as in the time-frequency domain is first detected and applied as a reference signal in discriminating noise from the sound. The proposed technique has been tested with a database of heart sounds collected from 71 subjects with several types of heart disease inducing several noises during recording. The achieved average sensitivity and specificity are 95.88% and 97.56%, respectively
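
    A rough illustration of the periodicity idea described above: a clean reference segment defines the expected cardiac period, and a candidate segment is flagged as noisy when its autocorrelation no longer shows a comparable peak. The heart-rate range and thresholds are assumptions made for the sketch, not the paper's criteria.

      # Sketch: flag noisy heart-sound segments by comparing their periodicity with a clean reference.
      import numpy as np

      def periodicity(x, fs, min_hr=40, max_hr=180):
          """Return (strength, period_s) of the strongest periodicity in the cardiac range."""
          x = x - np.mean(x)
          ac = np.correlate(x, x, mode="full")[len(x) - 1:]
          ac = ac / (ac[0] + 1e-12)
          lo, hi = int(fs * 60 / max_hr), int(fs * 60 / min_hr)
          lag = lo + int(np.argmax(ac[lo:hi]))
          return ac[lag], lag / fs

      def is_noisy(segment, reference, fs, tol=0.25):
          s_ref, t_ref = periodicity(reference, fs)
          s_seg, t_seg = periodicity(segment, fs)
          # Noise weakens the periodicity or shifts the apparent period away from the reference.
          return s_seg < 0.5 * s_ref or abs(t_seg - t_ref) > tol * t_ref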

  7. Recent paleoseismicity record in Prince William Sound, Alaska, USA

    Science.gov (United States)

    Kuehl, Steven A.; Miller, Eric J.; Marshall, Nicole R.; Dellapenna, Timothy M.

    2017-12-01

    Sedimentological and geochemical investigation of sediment cores collected in the deep (>400 m) central basin of Prince William Sound, along with geochemical fingerprinting of sediment source areas, are used to identify earthquake-generated sediment gravity flows. Prince William Sound receives sediment from two distinct sources: from offshore (primarily Copper River) through Hinchinbrook Inlet, and from sources within the Sound (primarily Columbia Glacier). These sources are found to have diagnostic elemental ratios indicative of provenance; Copper River Basin sediments were significantly higher in Sr/Pb and Cu/Pb, whereas Prince William Sound sediments were significantly higher in K/Ca and Rb/Sr. Within the past century, sediment gravity flows deposited within the deep central channel of Prince William Sound have robust geochemical (provenance) signatures that can be correlated with known moderate to large earthquakes in the region. Given the thick Holocene sequence in the Sound (~200 m) and correspondingly high sedimentation rates (>1 cm year⁻¹), this relationship suggests that sediments within the central basin of Prince William Sound may contain an extraordinarily high-resolution record of paleoseismicity in the region.

  8. Low complexity lossless compression of underwater sound recordings.

    Science.gov (United States)

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
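
    A minimal sketch of the kind of low-complexity scheme described above: a fixed second-order linear predictor followed by Rice coding of the residuals. The predictor order and Rice parameter are assumptions; the published compressor is not reproduced here.

      # Sketch: fixed linear prediction + Rice coding, counting the coded size in bits.
      import numpy as np

      def zigzag(r):
          # Map a signed residual to an unsigned index (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...).
          return 2 * r if r >= 0 else -2 * r - 1

      def rice_bits(residuals, k=4):
          """Bits needed to Rice-code the residuals with parameter k."""
          u = np.array([zigzag(int(r)) for r in residuals], dtype=np.int64)
          return int(np.sum((u >> k) + 1 + k))  # unary quotient + stop bit + k remainder bits

      def compress_block(samples, k=4):
          """Second-order fixed predictor: x[n] is predicted as 2*x[n-1] - x[n-2]."""
          x = np.asarray(samples, dtype=np.int64)
          pred = np.concatenate(([0, 0], 2 * x[1:-1] - x[:-2]))
          resid = x - pred
          return resid, rice_bits(resid, k)

      # For 16-bit samples, the compression factor is roughly 16 * len(x) / rice_bits(resid, k).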

  9. Tipping point analysis of a large ocean ambient sound record

    Science.gov (United States)

    Livina, Valerie N.; Harris, Peter; Brower, Albert; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2017-04-01

    We study a long (2003-2015) high-resolution (250 Hz) sound pressure record provided by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO) from the hydro-acoustic station Cape Leeuwin (Australia). We transform the hydrophone waveforms into five bands of 10-min-average sound pressure levels (including the third-octave band) and apply tipping point analysis techniques [1-3]. We report the results of the analysis of fluctuations and trends in the data and discuss the Big Data challenges in processing this record, including handling data segments of large size and possible HPC solutions. References: [1] Livina et al, GRL 2007, [2] Livina et al, Climate of the Past 2010, [3] Livina et al, Chaos 2015.
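
    A sketch of the waveform-to-level reduction described above, for a single third-octave band: band-pass filter the hydrophone signal, then average the squared pressure over 10-minute blocks. The centre frequency, filter order, and pressure units (pascals, with the 1 µPa underwater reference) are assumptions, not the CTBTO processing chain.

      # Sketch: 10-minute average sound pressure levels in one third-octave band.
      import numpy as np
      from scipy.signal import butter, sosfilt

      def band_spl_series(x, fs=250, fc=50.0, avg_s=600, p_ref=1e-6):
          lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # third-octave band edges
          sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
          xb = sosfilt(sos, x)
          n = int(avg_s * fs)
          blocks = xb[: len(xb) // n * n].reshape(-1, n)
          return 10 * np.log10(np.mean(blocks ** 2, axis=1) / p_ref ** 2)  # dB re 1 uPa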

  10. 75 FR 3666 - Digital Performance Right in Sound Recordings and Ephemeral Recordings for a New Subscription...

    Science.gov (United States)

    2010-01-22

    ... additions to Sec. 383.3 read as follows: Sec. 383.3 Royalty fees for public performances of sound recordings.... 383.4 to read as follows: Sec. 383.4 Terms for making payment of royalty fees. (a) Terms in general... Collective, late fees, statements of account, audit and verification of royalty payments and distributions...

  11. 37 CFR 380.3 - Royalty fees for the public performance of sound recordings and for ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the public... EPHEMERAL REPRODUCTIONS § 380.3 Royalty fees for the public performance of sound recordings and for ephemeral recordings. (a) Royalty rates and fees for eligible digital transmissions of sound recordings made...

  12. The Technique of the Sound Studio: Radio, Record Production, Television, and Film. Revised Edition.

    Science.gov (United States)

    Nisbett, Alec

    Detailed explanations of the studio techniques used in radio, record, television, and film sound production are presented in as non-technical language as possible. An introductory chapter discusses the physics and physiology of sound. Subsequent chapters detail standards for sound control in the studio; explain the planning and routine of a sound…

  13. Segmentation of heart sound recordings by a duration-dependent hidden Markov model

    International Nuclear Information System (INIS)

    Schmidt, S E; Graff, C; Toft, E; Struijk, J J; Holst-Hansen, C

    2010-01-01

    Digital stethoscopes offer new opportunities for computerized analysis of heart sounds. Segmentation of heart sound recordings into periods related to the first and second heart sound (S1 and S2) is fundamental in the analysis process. However, segmentation of heart sounds recorded with handheld stethoscopes in clinical environments is often complicated by background noise. A duration-dependent hidden Markov model (DHMM) is proposed for robust segmentation of heart sounds. The DHMM identifies the most likely sequence of physiological heart sounds, based on the duration of the events, the amplitude of the signal envelope and a predefined model structure. The DHMM model was developed and tested with heart sounds recorded bedside with a commercially available handheld stethoscope from a population of patients referred for coronary arterioangiography. The DHMM identified 890 S1 and S2 sounds out of 901, which corresponds to 98.8% (CI: 97.8–99.3%) sensitivity in 73 test patients, and 13 misplaced sounds out of 903 identified sounds, which corresponds to 98.6% (CI: 97.6–99.1%) positive predictivity. These results indicate that the DHMM is an appropriate model of the heart cycle and suitable for segmentation of clinically recorded heart sounds.

  14. Similarity and pleasantness assessments of water-fountain sounds recorded in urban public spaces.

    Science.gov (United States)

    Ekman, Maria Rådsten; Lundén, Peter; Nilsson, Mats E

    2015-11-01

    Water fountains are potential tools for soundscape improvement, but little is known about their perceptual properties. To explore this, sounds were recorded from 32 fountains installed in urban parks. The sounds were recorded with a sound-field microphone and were reproduced using an ambisonic loudspeaker setup. Fifty-seven listeners assessed the sounds with regard to similarity and pleasantness. Multidimensional scaling of similarity data revealed distinct groups of soft variable and loud steady-state sounds. Acoustically, the soft variable sounds were characterized by low overall levels and high temporal variability, whereas the opposite pattern characterized the loud steady-state sounds. The perceived pleasantness of the sounds was negatively related to their overall level and positively related to their temporal variability, whereas spectral centroid was weakly correlated to pleasantness. However, the results of an additional experiment, using the same sounds set equal in overall level, found a negative relationship between pleasantness and spectral centroid, suggesting that spectral factors may influence pleasantness scores in experiments where overall level does not dominate pleasantness assessments. The equal-level experiment also showed that several loud steady-state sounds remained unpleasant, suggesting an inherently unpleasant sound character. From a soundscape design perspective, it may be advisable to avoid fountains generating such sounds.
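
    The three acoustic descriptors discussed above can be computed roughly as follows; the window length and spectral settings are assumptions made for the sketch, not the study's analysis parameters.

      # Sketch: overall level, temporal variability, and spectral centroid of a fountain recording.
      import numpy as np
      from scipy.signal import welch

      def fountain_features(x, fs, win_s=0.125):
          n = int(win_s * fs)
          frames = x[: len(x) // n * n].reshape(-1, n)
          level_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)  # short-term levels
          f, pxx = welch(x, fs=fs, nperseg=4096)
          centroid = float(np.sum(f * pxx) / np.sum(pxx))  # spectral centroid in Hz
          # overall level, temporal variability, spectral centroid
          return level_db.mean(), level_db.std(), centroid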

  15. A description of externally recorded womb sounds in human subjects during gestation.

    Science.gov (United States)

    Parga, Joanna J; Daland, Robert; Kesavan, Kalpashri; Macey, Paul M; Zeltzer, Lonnie; Harper, Ronald M

    2018-01-01

    Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants, and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500-5,000 Hz) and mid-frequency (100-500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10-100 Hz) with gestational age. Variation appeared between mothers, suggesting a resonant chamber role for intra
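
    A sketch of the band-wise long-term spectral summary implied above, using Welch's method and the low/mid/high bands mentioned in the abstract; the FFT length and level units are assumptions.

      # Sketch: long-term average spectrum of an abdominal recording, summarised per band.
      import numpy as np
      from scipy.signal import welch

      def band_levels(x, fs, bands=((10, 100), (100, 500), (500, 5000))):
          f, pxx = welch(x, fs=fs, nperseg=8192)
          levels = {}
          for lo, hi in bands:
              m = (f >= lo) & (f < hi)
              levels[(lo, hi)] = 10 * np.log10(np.trapz(pxx[m], f[m]) + 1e-20)  # band level, dB
          return levels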

  16. Multi-Century Record of Anthropogenic Impacts on an Urbanized Mesotidal Estuary: Salem Sound, MA

    Science.gov (United States)

    Salem, MA, located north of Boston, has a rich, well-documented history dating back to settlement in 1626 CE, but the associated anthropogenic impacts on Salem Sound are poorly constrained. This project utilized dated sediment cores from the sound to assess the proxy record of an...

  17. Sound recordings of road maintenance equipment on the Lincoln National Forest, New Mexico

    Science.gov (United States)

    D. K. Delaney; T. G. Grubb

    2004-01-01

    The purpose of this pilot study was to record, characterize, and quantify road maintenance activity in Mexican spotted owl (Strix occidentalis lucida) habitat to gauge potential sound level exposure for owls during road maintenance activities. We measured sound levels from three different types of road maintenance equipment (rock crusher/loader,...

  18. 37 CFR 270.1 - Notice of use of sound recordings under statutory license.

    Science.gov (United States)

    2010-07-01

    ..., and the primary purpose of the service is not to sell, advertise, or promote particular products or services other than sound recordings, live concerts, or other music-related events. (iv) A new subscription...

  19. 37 CFR 261.3 - Royalty fees for public performances of sound recordings and for ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for public... § 261.3 Royalty fees for public performances of sound recordings and for ephemeral recordings. (a) For the period October 28, 1998, through December 31, 2002, royalty rates and fees for eligible digital...

  20. 37 CFR 262.3 - Royalty fees for public performances of sound recordings and for ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for public... MAKING OF EPHEMERAL REPRODUCTIONS § 262.3 Royalty fees for public performances of sound recordings and for ephemeral recordings. (a) Basic royalty rate. Royalty rates and fees for eligible nonsubscription...

  1. 37 CFR 382.12 - Royalty fees for the public performance of sound recordings and the making of ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the public... Preexisting Satellite Digital Audio Radio Services § 382.12 Royalty fees for the public performance of sound recordings and the making of ephemeral recordings. (a) In general. The monthly royalty fee to be paid by a...

  2. Graphic recording of heart sounds in high-altitude native subjects

    OpenAIRE

    Rotta, Andrés; Ascenzo C., Jorge

    2014-01-01

    The phonocardiogram series obtained from normal subjects show that it is not always possible to record the atrial and third heart sounds, with different authors reporting diverse detection rates. The reason why graphic registration of these sounds fails in largely normal individuals has not yet been explained in concrete terms, but various influencing factors have been suggested, such as age, the determinants of the sounds, the transmissibility of the chest wall, and the sensitivity of the recording apparatus. Los fonocardiog...

  3. 37 CFR 383.3 - Royalty fees for public performances of sound recordings and the making of ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for public... SUBSCRIPTION SERVICES § 383.3 Royalty fees for public performances of sound recordings and the making of... regulations for all years 2007 and earlier. Such fee shall be recoupable and credited against royalties due in...

  4. Beaming teaching application: recording techniques for spatial xylophone sound rendering

    DEFF Research Database (Denmark)

    Markovic, Milos; Madsen, Esben; Olesen, Søren Krarup

    2012-01-01

    BEAMING is a telepresence research project aiming at providing a multimodal interaction between two or more participants located at distant locations. One of the BEAMING applications allows a distant teacher to give a xylophone playing lecture to the students. Therefore, rendering of the xylophon...... to spatial improvements mainly in terms of the Apparent Source Width (ASW). Rendered examples are subjectively evaluated in listening tests by comparing them with binaural recording....

  5. Design of an Automatic Octave Sound Analyzer and Recorder

    Science.gov (United States)

    1942-11-21


  6. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! Series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  7. 37 CFR 270.2 - Reports of use of sound recordings under statutory license for preexisting subscription services.

    Science.gov (United States)

    2010-07-01

    ... “Intended Playlists” for each channel and each day of the reported month. The “Intended Playlists” shall...; (2) The channel; (3) The sound recording title; (4) The featured recording artist, group, or... sound recording); (6) The marketing label of the commercially available album or other product on which...

  8. 37 CFR 370.3 - Reports of use of sound recordings under statutory license for preexisting subscription services.

    Science.gov (United States)

    2010-07-01

    ... “Intended Playlists” for each channel and each day of the reported month. The “Intended Playlists” shall...; (2) The channel; (3) The sound recording title; (4) The featured recording artist, group, or... sound recording); (6) The marketing label of the commercially available album or other product on which...

  9. Comparison of snoring sounds between natural and drug-induced sleep recorded using a smartphone.

    Science.gov (United States)

    Koo, Soo Kweon; Kwon, Soon Bok; Moon, Ji Seung; Lee, Sang Hoon; Lee, Ho Byung; Lee, Sang Jun

    2018-08-01

    Snoring is an important clinical feature of obstructive sleep apnea (OSA), and recent studies suggest that the acoustic quality of snoring sounds is markedly different in drug-induced sleep compared with natural sleep. However, considering differences in sound recording methods and analysis parameters, further studies are required. This study explored whether acoustic analysis of drug-induced sleep is useful as a screening test that reflects the characteristics of natural sleep in snoring patients. The snoring sounds of 30 male subjects (mean age = 41.8 years) were recorded using a smartphone during natural and induced sleep, with the site of vibration noted during drug-induced sleep endoscopy (DISE); then, we compared the sound intensity (dB), formant frequencies, and spectrograms of snoring sounds. Regarding the intensity of snoring sounds, there were minor differences within the retrolingual level obstruction group, but there was no significant difference between natural and induced sleep at either obstruction site. There was no significant difference in the F1 and F2 formant frequencies of snoring sounds between natural sleep and induced sleep at either obstruction site. Compared with natural sleep, induced sleep was slightly more irregular, with a stronger intensity on the spectrogram, but the spectrograms showed the same pattern at both obstruction sites. Although further studies are required, the spectrograms and formant frequencies of the snoring sounds of induced sleep did not differ significantly from those of natural sleep, and may be used as a screening test that reflects the characteristics of natural sleep according to the obstruction site. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. 75 FR 14074 - Digital Performance Right in Sound Recordings and Ephemeral Recordings for a New Subscription...

    Science.gov (United States)

    2010-03-24

    ...). The additions to Sec. 383.3 read as follows: Sec. 383.3 Royalty fees for public performances of sound... Sec. 383.4 to read as follows: Sec. 383.4 Terms for making payment of royalty fees. (a) Terms in... payments to the Collective, late fees, statements of account, audit and verification of royalty payments...

  11. 76 FR 56483 - Distribution of 2010 DART Sound Recordings Fund Royalties

    Science.gov (United States)

    2011-09-13

    ... responses to the motion to ascertain whether any claimant entitled to receive such royalty fees has a... LIBRARY OF CONGRESS Copyright Royalty Board [Docket No. 2011-6 CRB DD 2010] Distribution of 2010 DART Sound Recordings Fund Royalties AGENCY: Copyright Royalty Board, Library of Congress. ACTION...

  12. 77 FR 47120 - Distribution of 2011 DART Sound Recordings Fund Royalties

    Science.gov (United States)

    2012-08-07

    ... the motion to ascertain whether any claimant entitled to receive such royalty fees has a reasonable... LIBRARY OF CONGRESS Copyright Royalty Board [Docket No. 2012-3 CRB DD 2011] Distribution of 2011 DART Sound Recordings Fund Royalties AGENCY: Copyright Royalty Board, Library of Congress. ACTION...

  13. 76 FR 45695 - Notice and Recordkeeping for Use of Sound Recordings Under Statutory License

    Science.gov (United States)

    2011-08-01

    ... operating under these licenses are required to, among other things, pay royalty fees and report to copyright... LIBRARY OF CONGRESS Copyright Royalty Board 37 CFR Parts 370 and 382 [Docket No. RM 2011-5] Notice and Recordkeeping for Use of Sound Recordings Under Statutory License AGENCY: Copyright Royalty Board...

  14. DESIGN AND APPLICATION OF SENSOR FOR RECORDING SOUNDS OVER HUMAN EYE AND NOSE

    NARCIS (Netherlands)

    JOURNEE, HL; VANBRUGGEN, AC; VANDERMEER, JJ; DEJONGE, AB; MOOIJ, JJA

    The recording of sounds over the orbit of the eye has been found to be useful in the detection of intracranial aneurysms. A hydrophone for auscultation over the eye has been developed and is tested under controlled conditions. The tests consist of measurements over the eyes in three healthy

  15. Learning with Sound Recordings: A History of Suzuki's Mediated Pedagogy

    Science.gov (United States)

    Thibeault, Matthew D.

    2018-01-01

    This article presents a history of mediated pedagogy in the Suzuki Method, the first widespread approach to learning an instrument in which sound recordings were central. Media are conceptualized as socially constituted: philosophical ideas, pedagogic practices, and cultural values that together form a contingent and changing technological…

  16. 37 CFR 260.2 - Royalty fees for the digital performance of sound recordings and the making of ephemeral...

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the digital... Licensee's monthly royalty fee for the public performance of sound recordings pursuant to 17 U.S.C. 114(d..., 2007, a Licensee's monthly royalty fee for the public performance of sound recordings pursuant to 17 U...

  17. Surround by Sound: A Review of Spatial Audio Recording and Reproduction

    Directory of Open Access Journals (Sweden)

    Wen Zhang

    2017-05-01

    In this article, a systematic overview of various recording and reproduction techniques for spatial audio is presented. While binaural recording and rendering is designed to resemble the human two-ear auditory system and reproduce sounds specifically for a listener’s two ears, soundfield recording and reproduction using a large number of microphones and loudspeakers replicate an acoustic scene within a region. These two fundamentally different types of techniques are discussed in the paper. A recent popular area, multi-zone reproduction, is also briefly reviewed in the paper. The paper is concluded with a discussion of the current state of the field and open problems.

  18. Wheezing recognition algorithm using recordings of respiratory sounds at the mouth in a pediatric population.

    Science.gov (United States)

    Bokov, Plamen; Mahut, Bruno; Flaud, Patrice; Delclaux, Christophe

    2016-03-01

    Respiratory diseases in children are a common reason for physician visits. A diagnostic difficulty arises when parents hear wheezing that is no longer present during the medical consultation. Thus, an outpatient objective tool for recognition of wheezing is of clinical value. We developed a wheezing recognition algorithm from recorded respiratory sounds with a Smartphone placed near the mouth. A total of 186 recordings were obtained in a pediatric emergency department, mostly in toddlers (mean age 20 months). After exclusion of recordings with artefacts and those with a single clinical operator auscultation, 95 recordings with the agreement of two operators on auscultation diagnosis (27 with wheezing and 68 without) were subjected to a two-phase algorithm (signal analysis and pattern classifier using machine learning algorithms) to classify records. The best performance (71.4% sensitivity and 88.9% specificity) was observed with a Support Vector Machine-based algorithm. We further tested the algorithm over a set of 39 recordings having a single operator and found a fair agreement (kappa=0.28, CI95% [0.12, 0.45]) between the algorithm and the operator. The main advantage of such an algorithm is its use in contact-free sound recording, making it valuable in the pediatric population. Copyright © 2016 Elsevier Ltd. All rights reserved.
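
    A minimal sketch of the two-phase idea above (signal analysis, then a Support Vector Machine), assuming librosa and scikit-learn; the two spectral descriptors and their summary statistics are illustrative stand-ins for the paper's feature set.

      # Sketch: per-recording spectral features + SVM to separate wheezing from non-wheezing.
      import numpy as np
      import librosa
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def recording_features(path, sr=8000):
          y, sr = librosa.load(path, sr=sr)
          S = np.abs(librosa.stft(y, n_fft=1024, hop_length=512))
          centroid = librosa.feature.spectral_centroid(S=S, sr=sr)
          flatness = librosa.feature.spectral_flatness(S=S)
          # Summary statistics over frames give one fixed-length vector per recording.
          return np.hstack([centroid.mean(), centroid.std(), flatness.mean(), flatness.std()])

      # X = np.vstack([recording_features(p) for p in paths]); y = labels (1 = wheezing), hypothetical
      # clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)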

  19. 75 FR 16377 - Digital Performance Right in Sound Recordings and Ephemeral Recordings

    Science.gov (United States)

    2010-04-01

    ...). Petitions to Participate were received from: Intercollegiate Broadcast System, Inc./ Harvard Radio...), respectively, and the references to January 1, 2009, have been deleted. Next, for the reasons stated above in... State. (j) Retention of records. Books and records of a Broadcaster and of the Collective relating to...

  20. Acoustic analysis of snoring sounds recorded with a smartphone according to obstruction site in OSAS patients.

    Science.gov (United States)

    Koo, Soo Kweon; Kwon, Soon Bok; Kim, Yang Jae; Moon, J I Seung; Kim, Young Jun; Jung, Sung Hoon

    2017-03-01

    Snoring is a sign of increased upper airway resistance and is the most common symptom suggestive of obstructive sleep apnea. Acoustic analysis of snoring sounds is a non-invasive diagnostic technique and may provide a screening test that can determine the location of obstruction sites. We recorded snoring sounds according to obstruction level, measured by DISE, using a smartphone and focused on the analysis of formant frequencies. The study group comprised 32 male patients (mean age 42.9 years). The spectrogram pattern, intensity (dB), fundamental frequency (F0), and formant frequencies (F1, F2, and F3) of the snoring sounds were analyzed for each subject. On spectrographic analysis, retropalatal level obstruction tended to produce sharp and regular peaks, while retrolingual level obstruction tended to show peaks with a gradual onset and decay. On formant frequency analysis, F1 (retropalatal level vs. retrolingual level: 488.1 ± 125.8 vs. 634.7 ± 196.6 Hz) and F2 (retropalatal level vs. retrolingual level: 1267.3 ± 306.6 vs. 1723.7 ± 550.0 Hz) of retrolingual level obstructions showed significantly higher values than retropalatal level obstruction. These findings suggest that a smartphone can be effective for recording snoring sounds.
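
    Formant frequencies such as F1-F3 are commonly estimated from the roots of a linear-prediction (LPC) polynomial; a rough sketch follows, assuming librosa. The LPC order and sampling rate are generic choices, not the study's analysis settings.

      # Sketch: LPC-based estimate of the lowest three formant frequencies of a snore segment.
      import numpy as np
      import librosa

      def formants(x, fs, order=12):
          a = librosa.lpc(np.asarray(x, dtype=float), order=order)
          roots = [r for r in np.roots(a) if np.imag(r) > 0]  # keep one root per conjugate pair
          freqs = sorted(np.angle(r) * fs / (2 * np.pi) for r in roots)
          return freqs[:3]  # approximate F1, F2, F3 in Hz

      # y, fs = librosa.load("snore_segment.wav", sr=11025)   # hypothetical input file
      # print(formants(y, fs))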

  1. Computer analysis of sound recordings from two Anasazi sites in northwestern New Mexico

    Science.gov (United States)

    Loose, Richard

    2002-11-01

    Sound recordings were made at a natural outdoor amphitheater in Chaco Canyon and in a reconstructed great kiva at Aztec Ruins. Recordings included computer-generated tones and swept sine waves, classical concert flute, Native American flute, conch shell trumpet, and prerecorded music. Recording equipment included analog tape deck, digital minidisk recorder, and direct digital recording to a laptop computer disk. Microphones and geophones were used as transducers. The natural amphitheater lies between the ruins of Pueblo Bonito and Chetro Ketl. It is a semicircular arc in a sandstone cliff measuring 500 ft. wide and 75 ft. high. The radius of the arc was verified with aerial photography, and an acoustic ray trace was generated using CAD software. The arc is in an overhanging cliff face and brings distant sounds to a line focus. Along this line, there are unusual acoustic effects at conjugate foci. Time history analysis of recordings from both sites showed that a 60-dB reverb decay lasted from 1.8 to 2.0 s, nearly ideal for public performances of music. Echoes from the amphitheater were perceived to be upshifted in pitch, but this was not seen in FFT analysis. Geophones placed on the floor of the great kiva showed a resonance at 95 Hz.
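
    The 60 dB decay times quoted above are conventionally estimated by Schroeder backward integration of a recorded impulse response; a minimal sketch follows. Fitting the -5 to -35 dB span and extrapolating to 60 dB is a common convention assumed here, not a detail taken from the study.

      # Sketch: reverberation time (T60) from an impulse-response recording via Schroeder integration.
      import numpy as np

      def rt60(ir, fs):
          edc = np.cumsum(ir[::-1] ** 2)[::-1]              # energy decay curve
          edc_db = 10 * np.log10(edc / edc[0] + 1e-12)
          t = np.arange(len(ir)) / fs
          m = (edc_db <= -5) & (edc_db >= -35)              # fit the -5..-35 dB portion
          slope, _ = np.polyfit(t[m], edc_db[m], 1)
          return -60.0 / slope                              # seconds to decay by 60 dB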

  2. Segmentation of expiratory and inspiratory sounds in baby cry audio recordings using hidden Markov models.

    Science.gov (United States)

    Aucouturier, Jean-Julien; Nonaka, Yulri; Katahira, Kentaro; Okanoya, Kazuo

    2011-11-01

    The paper describes an application of machine learning techniques to identify expiratory and inspiratory phases from the audio recording of human baby cries. Crying episodes were recorded from 14 infants, spanning four vocalization contexts in their first 12 months of age; recordings from three individuals were annotated manually to identify expiratory and inspiratory sounds and used as training examples to segment automatically the recordings of the other 11 individuals. The proposed algorithm uses a hidden Markov model architecture, in which state likelihoods are estimated either with Gaussian mixture models or by converting the classification decisions of a support vector machine. The algorithm yields up to 95% classification precision (86% average), and it generalizes over different babies, different ages, and vocalization contexts. The technique offers an opportunity to quantify expiration duration, crying rate, and other time-related characteristics of baby crying for screening, diagnosis, and research purposes over large populations of infants.
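
    A compact sketch of the GMM-flavoured HMM architecture described above, assuming librosa for MFCC observations and hmmlearn for the model; the two states stand for expiratory and inspiratory sound, and the state count, mixture count, and training data are assumptions rather than the authors' configuration.

      # Sketch: two-state GMM-HMM over MFCC frames to segment expiration vs. inspiration in cries.
      import numpy as np
      import librosa
      from hmmlearn.hmm import GMMHMM

      def mfcc_frames(path, sr=16000):
          y, sr = librosa.load(path, sr=sr)
          return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # frames x 13

      # Hypothetical training on annotated recordings, then frame-wise decoding of new ones:
      # X = np.vstack([mfcc_frames(p) for p in train_paths])
      # model = GMMHMM(n_components=2, n_mix=4, covariance_type="diag", n_iter=50).fit(X)
      # states = model.predict(mfcc_frames("cry.wav"))  # 0/1 sequence = the two sound classes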

  3. Automated signal quality assessment of mobile phone-recorded heart sound signals.

    Science.gov (United States)

    Springer, David B; Brennan, Thomas; Ntusi, Ntobeko; Abdelrahman, Hassan Y; Zühlke, Liesl J; Mayosi, Bongani M; Tarassenko, Lionel; Clifford, Gari D

    Mobile phones, due to their audio processing capabilities, have the potential to facilitate the diagnosis of heart disease through automated auscultation. However, such a platform is likely to be used by non-experts, and hence, it is essential that such a device is able to automatically differentiate poor quality from diagnostically useful recordings since non-experts are more likely to make poor-quality recordings. This paper investigates the automated signal quality assessment of heart sound recordings performed using both mobile phone-based and commercial medical-grade electronic stethoscopes. The recordings, each 60 s long, were taken from 151 random adult individuals with varying diagnoses referred to a cardiac clinic and were professionally annotated by five experts. A mean voting procedure was used to compute a final quality label for each recording. Nine signal quality indices were defined and calculated for each recording. A logistic regression model for classifying binary quality was then trained and tested. The inter-rater agreement level for the stethoscope and mobile phone recordings was measured using Conger's kappa for multiclass sets and found to be 0.24 and 0.54, respectively. One-third of all the mobile phone-recorded phonocardiogram (PCG) signals were found to be of sufficient quality for analysis. The classifier was able to distinguish good- and poor-quality mobile phone recordings with 82.2% accuracy, and those made with the electronic stethoscope with an accuracy of 86.5%. We conclude that our classification approach provides a mechanism for substantially improving auscultation recordings by non-experts. This work is the first systematic evaluation of a PCG signal quality classification algorithm (using a separate test dataset) and assessment of the quality of PCG recordings captured by non-experts, using both a medical-grade digital stethoscope and a mobile phone.
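
    The quality-classification step can be illustrated with a few simple signal quality indices per recording feeding a logistic regression; the three indices below are generic examples, not the nine indices defined in the paper.

      # Sketch: crude signal quality indices for a phonocardiogram + a binary quality classifier.
      import numpy as np
      from scipy.stats import kurtosis, skew
      from sklearn.linear_model import LogisticRegression

      def quality_indices(x, fs):
          x = x / (np.max(np.abs(x)) + 1e-12)
          frames = x[: len(x) // fs * fs].reshape(-1, fs)      # 1-second frames
          rms = np.sqrt(np.mean(frames ** 2, axis=1))
          # Heavy tails, asymmetry, and unstable frame energy all hint at noisy recordings.
          return np.array([kurtosis(x), skew(x), rms.std() / (rms.mean() + 1e-12)])

      # X = np.vstack([quality_indices(sig, fs) for sig in recordings]); y = expert_labels (hypothetical)
      # clf = LogisticRegression().fit(X, y)  # 1 = diagnostically useful, 0 = poor quality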

  4. NOAA Climate Data Record (CDR) of Advanced Microwave Sounding Unit (AMSU)-A Brightness Temperature, Version 1

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Climate Data Record (CDR) for Advanced Microwave Sounding Unit-A (AMSU-A) brightness temperature in "window channels". The data cover a time period from...

  5. [Encapsulated voices : Estonian sound recordings from the German prisoner-of-war camps in 1916-1918] / Tõnu Tannberg

    Index Scriptorium Estoniae

    Tannberg, Tõnu, 1961-

    2013-01-01

    Review of: Encapsulated voices : Estonian sound recordings from the German prisoner-of-war camps in 1916-1918 (Das Baltikum in Geschichte und Gegenwart, 5). Edited by Jaan Ross. Böhlau Verlag, Köln, Weimar und Wien 2012

  6. Why live recording sounds better: A case study of Schumann’s Träumerei

    Directory of Open Access Journals (Sweden)

    Haruka eShoda

    2015-01-01

    We explore the concept that artists perform best in front of an audience. The negative effects of performance anxiety are much better known than their related cousin on the other shoulder: the positive effects of social facilitation. The present study, however, reveals a listener's preference for performances recorded in front of an audience. In Study 1, we prepared two types of recordings of Träumerei performed by 13 pianists: recordings in front of an audience and those with no audience. According to the evaluation by 153 listeners, the recordings performed in front of an audience sounded better, suggesting that the presence of an audience enhanced or facilitated the performance. In Study 2, we analyzed pianists' durational and dynamic expressions. According to the functional principal components analyses, we found that the expression of Träumerei consisted of three components: the overall quantity, the cross-sectional contrast between the final and the remaining sections, and the control of the expressive variability. Pianists' expressions were targeted more to the average of the cross-sectional variation in the audience-present than in the audience-absent recordings. In Study 3, we explored a model that explained listeners' responses induced by pianists' acoustical expressions, using path analyses. The final model indicated that the cross-sectional variation of the duration and that of the dynamics determined listeners' evaluations of the quality and the emotionally moving experience, respectively. In line with humans' preference for commonality, the more average the durational expressions were in live recording, the better the listeners' evaluations were regardless of their musical experiences. Only the well-experienced listeners (at least 16 years of musical training) were moved more by the deviated dynamic expressions in live recording, suggesting a link between the experienced listener's emotional experience and the unique dynamics in

  7. 37 CFR 382.2 - Royalty fees for the digital performance of sound recordings and the making of ephemeral...

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the digital... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Subscription Services § 382.2 Royalty fees for the... monthly royalty fee for the public performance of sound recordings pursuant to 17 U.S.C. 114(d)(2) and the...

  8. Sound production in recorder-like instruments : II. a simulation model

    NARCIS (Netherlands)

    Verge, M.P.; Hirschberg, A.; Causse, R.

    1997-01-01

    A simple one-dimensional representation of recorder-like instruments, which can be used for sound synthesis by physical modeling of flute-like instruments, is presented. This model combines the effects on the instrument's sound production of the jet oscillations, vortex shedding at the edge of the

  9. 75 FR 67777 - Copyright Office; Federal Copyright Protection of Sound Recordings Fixed Before February 15, 1972

    Science.gov (United States)

    2010-11-03

    ... (not an image); Microsoft Word; WordPerfect; Rich Text Format (RTF); or ASCII text file format (not a..., spoken, or other sounds, but not including the sounds accompanying a motion picture or other audiovisual... general, Federal law is better defined, both as to the rights and the exceptions, and more consistent than...

  10. Reconstruction of mechanically recorded sound from an edison cylinder using three dimensional non-contact optical surface metrology

    Energy Technology Data Exchange (ETDEWEB)

    Fadeyev, V.; Haber, C.; Maul, C.; McBride, J.W.; Golden, M.

    2004-04-20

    Audio information stored in the undulations of grooves in a medium such as a phonograph disc record or cylinder may be reconstructed, without contact, by measuring the groove shape using precision optical metrology methods and digital image processing. The viability of this approach was recently demonstrated on a 78 rpm shellac disc using two dimensional image acquisition and analysis methods. The present work reports the first three dimensional reconstruction of mechanically recorded sound. The source material, a celluloid cylinder, was scanned using color coded confocal microscopy techniques and resulted in a faithful playback of the recorded information.

  11. Fractal dimension to classify the heart sound recordings with KNN and fuzzy c-mean clustering methods

    Science.gov (United States)

    Juniati, D.; Khotimah, C.; Wardani, D. E. K.; Budayasa, K.

    2018-01-01

    Heart abnormalities can be detected from heart sounds. A heart sound can be heard directly with a stethoscope or indirectly by a phonocardiograph, a machine for recording heart sounds. This paper presents the implementation of fractal dimension theory to make a classification of phonocardiograms into a normal heart sound, a murmur, or an extrasystole. The main algorithm used to calculate the fractal dimension was Higuchi's Algorithm. There were two steps in classifying the phonocardiograms: feature extraction and classification. For feature extraction, we used the Discrete Wavelet Transform to decompose the heart sound signal into several sub-bands depending on the selected level. After the decomposition process, the signal was processed using the Fast Fourier Transform (FFT) to determine the spectral frequency. The fractal dimension of the FFT output was calculated using Higuchi's Algorithm. The classification of the fractal dimensions of all phonocardiograms was done with KNN and fuzzy c-means clustering methods. Based on the research results, the best accuracy obtained was 86.17%, with feature extraction by DWT decomposition at level 3, kmax = 50, 5-fold cross-validation, and 5 neighbors in the KNN algorithm. Meanwhile, for fuzzy c-means clustering, the accuracy was 78.56%.
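
    Higuchi's algorithm, the core feature extractor named above, can be written compactly as follows; kmax defaults to the value mentioned in the abstract, and the input is any 1-D signal (here it would be the FFT output of a wavelet sub-band).

      # Sketch: Higuchi fractal dimension of a one-dimensional signal.
      import numpy as np

      def higuchi_fd(x, kmax=50):
          x = np.asarray(x, dtype=float)
          n = len(x)
          curve_lengths = []
          for k in range(1, kmax + 1):
              lengths = []
              for m in range(k):
                  idx = np.arange(m, n, k)
                  if len(idx) < 2:
                      continue
                  d = np.sum(np.abs(np.diff(x[idx])))
                  lengths.append(d * (n - 1) / ((len(idx) - 1) * k * k))
              curve_lengths.append(np.mean(lengths))
          log_k = np.log(np.arange(1, kmax + 1))
          slope, _ = np.polyfit(-log_k, np.log(curve_lengths), 1)
          return slope  # the fractal dimension is the slope of log L(k) vs log(1/k)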

  12. Gateway of Sound: Reassessing the Role of Audio Mastering in the Art of Record Production

    Directory of Open Access Journals (Sweden)

    Carlo Nardi

    2014-06-01

    Audio mastering, notwithstanding an apparent lack of scholarly attention, is a crucial gateway between production and consumption and, as such, is worth further scrutiny, especially in music genres like house or techno, which place great emphasis on sound production qualities. In this article, drawing on personal interviews with mastering engineers and field research in mastering studios in Italy and Germany, I investigate the practice of mastering engineering, paying close attention to the negotiation of techniques and sound aesthetics in relation to changes in industry formats and, in particular, to the growing shift among DJs from vinyl to compressed digital formats. I then discuss the specificity of audio mastering in relation to EDM, insofar as DJs and controllerists conceive of the master not as a finished product intended for listening, but as raw material that can be reworked in performance.

  13. Seawater Temperature and Salinity Moored Time-Series Records, Collected During 2010 and 2011 in Vieques Sound and Virgin Passage (NODC Accession 0088063)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea-Bird SBE37SM MicroCat Conductivity/Temperature (CT) recorders were deployed between March 2010 and April 2011 on shallow water moorings located in Vieques Sound,...

  14. Seawater Temperature and Salinity Moored Time-Series Records, Collected During 2010 and 2011 in Vieques Sound and Virgin Passage (NODC Accession 0077910)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea-Bird SBE37SM MicroCat Conductivity/Temperature (CT) recorders were deployed between March 2010 and April 2011 on shallow water moorings located in Vieques Sound,...

  15. Validation of PC-based Sound Card with Biopac for Digitalization of ECG Recording in Short-term HRV Analysis.

    Science.gov (United States)

    Maheshkumar, K; Dilara, K; Maruthy, K N; Sundareswaren, L

    2016-07-01

    Heart rate variability (HRV) analysis is a simple and noninvasive technique capable of assessing autonomic nervous system modulation on heart rate (HR) in healthy as well as disease conditions. The aim of the present study was to compare (validate) the HRV using a temporal series of electrocardiograms (ECG) obtained by a simple analog amplifier with a PC-based sound card (Audacity) and by the Biopac MP36 module. Based on the inclusion criteria, 120 healthy participants, including 72 males and 48 females, participated in the present study. Following standard protocol, a 5-min ECG was recorded after 10 min of supine rest by the portable simple analog amplifier with PC-based sound card as well as by the Biopac module, with surface electrodes in the Lead II position, simultaneously. All the ECG data were visually screened and found to be free of ectopic beats and noise. RR intervals from both ECG recordings were analyzed separately in Kubios software. Short-term HRV indexes in both the time and frequency domains were used. The unpaired Student's t-test and Pearson correlation coefficient test were used for the analysis using the R statistical software. No statistically significant differences were observed when comparing the values analyzed by means of the two devices for HRV. Correlation analysis revealed perfect positive correlation (r = 0.99, P < 0.001) between the values in the time and frequency domains obtained by the devices. On the basis of the results of the present study, we suggest that the calculation of HRV values in the time and frequency domains from RR series obtained from the PC-based sound card is probably as reliable as that obtained by the gold standard Biopac MP36.
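
    Typical short-term HRV indices of the kind compared above can be computed from an RR-interval series as follows; the resampling rate, Welch settings, and LF/HF band edges are standard conventions assumed for the sketch, not the study's exact configuration.

      # Sketch: time- and frequency-domain HRV indices from RR intervals given in seconds.
      import numpy as np
      from scipy.interpolate import interp1d
      from scipy.signal import welch

      def hrv_indices(rr, fs_interp=4.0):
          rr = np.asarray(rr, dtype=float)
          sdnn = np.std(rr) * 1000.0                                 # ms
          rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000.0        # ms
          t = np.cumsum(rr)
          ti = np.arange(t[0], t[-1], 1.0 / fs_interp)               # evenly resampled tachogram
          rri = interp1d(t, rr)(ti)
          f, pxx = welch(rri - rri.mean(), fs=fs_interp, nperseg=256)
          lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
          hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
          return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd, "LF_HF": lf / hf}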

  16. Saved from the Teeth of Time. Folk music on historical sound recordings

    Czech Academy of Sciences Publication Activity Database

    Kratochvíl, Matěj

    2007-01-01

    Vol. 10, No. 3 (2007), pp. 24-26 ISSN 1211-0264 Institutional research plan: CEZ:AV0Z90580513 Keywords: traditional music * recording * wax cylinders * Bohemian music * Moravian music Subject RIV: AC - Archeology, Anthropology, Ethnology

  17. Practical system for recording spatially lifelike 5.1 surround sound and 3D fully periphonic reproduction

    Science.gov (United States)

    Miller, Robert E. (Robin)

    2005-04-01

    In acoustic spaces that are played as extensions of musical instruments, tonality is a major contributor to the experience of reality. Tonality is described as a process of integration in our consciousness, over the reverberation time of the room, of many sonic arrivals in three dimensions, each directionally coded in a learned response by the listener's unique head-related transfer function (HRTF). Preserving this complex 3D directionality is key to lifelike reproduction of a recording. Conventional techniques such as stereo or 5.1-channel surround sound position the listener at the apex of a triangle or the center of a circle, not the center of the sphere of lifelike hearing. A periphonic reproduction system for music and movie entertainment, Virtual Reality, and Training Simulation, termed PerAmbio 3D/2D (Pat. pending), is described in theory and in subjective tests; it captures the 3D sound field with a microphone array and transforms the periphonic signals into ordinary 6-channel media for either decoderless 2D replay on 5.1 systems, or lossless 3D replay with a decoder and five additional speakers. PerAmbio 3D/2D is described as a practical approach to preserving the spatial perception of reality, where the listening room and speakers disappear, leaving the acoustical impression of the original venue.

  18. AGGLOMERATIVE CLUSTERING OF SOUND RECORD SPEECH SEGMENTS BASED ON BAYESIAN INFORMATION CRITERION

    Directory of Open Access Journals (Sweden)

    O. Yu. Kydashev

    2013-01-01

    This paper presents a detailed description of an agglomerative clustering system implementation for speech segments based on the Bayesian information criterion. Numerical experiment results with different acoustic features, as well as with full and diagonal covariance matrices, are given. A diarization error rate (DER) of 6.4% was achieved by the designed system for audio records of radio «Svoboda».
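
    The merging criterion at the heart of BIC-based agglomerative clustering is the delta-BIC between two segments modelled as full-covariance Gaussians over their feature frames; a sketch follows, with the penalty weight lambda = 1 assumed.

      # Sketch: delta-BIC between two speech segments (rows = frames, columns = acoustic features).
      import numpy as np

      def delta_bic(seg_x, seg_y, lam=1.0):
          """Negative values favour merging the two segments into one cluster."""
          seg_z = np.vstack([seg_x, seg_y])
          n_x, n_y, n_z = len(seg_x), len(seg_y), len(seg_z)
          d = seg_x.shape[1]
          logdet = lambda a: np.linalg.slogdet(np.cov(a, rowvar=False))[1]
          likelihood_term = 0.5 * (n_z * logdet(seg_z) - n_x * logdet(seg_x) - n_y * logdet(seg_y))
          penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n_z)
          return likelihood_term - penalty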

  19. Detection of explosive cough events in audio recordings by internal sound analysis.

    Science.gov (United States)

    Rocha, B M; Mendes, L; Couceiro, R; Henriques, J; Carvalho, P; Paiva, R P

    2017-07-01

    We present a new method for the discrimination of explosive cough events, which is based on a combination of spectral content descriptors and pitch-related features. After the removal of near-silent segments, a vector of event boundaries is obtained and a proposed set of 9 features is extracted for each event. Two data sets, recorded using electronic stethoscopes and comprising a total of 46 healthy subjects and 13 patients, were employed to evaluate the method. The proposed feature set is compared to three other sets of descriptors: a baseline, a combination of both sets, and an automatic selection of the best 10 features from both sets. The combined feature set yields good results on the cross-validated database, attaining a sensitivity of 92.3±2.3% and a specificity of 84.7±3.3%. Besides, this feature set seems to generalize well when it is trained on a small data set of patients, with a variety of respiratory and cardiovascular diseases, and tested on a bigger data set of mostly healthy subjects: a sensitivity of 93.4% and a specificity of 83.4% are achieved in those conditions. These results demonstrate that complementing the proposed feature set with a baseline set is a promising approach.

  20. Copyright and Related Issues Relevant to Digital Preservation and Dissemination of Unpublished Pre-1972 Sound Recordings by Libraries and Archives. CLIR Publication No. 144

    Science.gov (United States)

    Besek, June M.

    2009-01-01

    This report addresses the question of what libraries and archives are legally empowered to do to preserve and make accessible for research their holdings of unpublished pre-1972 sound recordings. The report's author, June M. Besek, is executive director of the Kernochan Center for Law, Media and the Arts at Columbia Law School. Unpublished sound…

  1. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals) and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter...... is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many...... of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...

  2. NOAA Climate Data Record of Microwave Sounding Unit (MSU) and Advanced Microwave Sounding Unit (AMSU-A) Mean Layer Temperature, Version 3.0

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dataset contains three channel-based, monthly gridded atmospheric layer temperature Climate Data Records generated by merging nine MSU NOAA polar orbiting...

  3. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  4. Integrating terrestrial and marine records of the LGM in McMurdo Sound, Antarctica: implications for grounded ice expansion, ice flow, and deglaciation of the Ross Sea Embayment

    Science.gov (United States)

    Christ, A. J.; Marchant, D. R.

    2017-12-01

    During the LGM, grounded glacier ice filled the Ross Embayment and deposited glacial drift on volcanic islands and peninsulas in McMurdo Sound, as well as along coastal regions of the Transantarctic Mountains (TAM), including the McMurdo Dry Valleys and Royal Society Range. The flow geometry and retreat history of this ice remain debated, with contrasting views yielding divergent implications for both the fundamental cause of Antarctic ice expansion and the interaction and behavior of ice derived from East and West Antarctica during late Quaternary time. We present terrestrial geomorphologic evidence that enables the reconstruction of former ice elevations, ice-flow paths, and ice-marginal environments in McMurdo Sound. Radiocarbon dates of fossil algae interbedded with ice-marginal sediments provide a coherent timeline for local ice retreat. These data are integrated with marine-sediment records and multi-beam data to reconstruct late glacial dynamics of grounded ice in McMurdo Sound and the western Ross Sea. The combined dataset suggests a dominance of ice flow toward the TAM in McMurdo Sound during all phases of glaciation, with thick, grounded ice at or near its maximum extent between 19.6 and 12.3 calibrated thousands of years before present (cal. ka). Our data show no significant advance of locally derived ice from the TAM into McMurdo Sound, consistent with the assertion that Late Pleistocene expansion of grounded ice in McMurdo Sound, and throughout the wider Ross Embayment, occurs in response to lower eustatic sea level and the resulting advance of marine-based outlet glaciers and ice streams (and perhaps also reduced oceanic heat flux), rather than local increases in precipitation and ice accumulation. Finally, when combined with allied data across the wider Ross Embayment, which show that widespread deglaciation outside McMurdo Sound did not commence until 13.1 ka, the implication is that retreat of grounded glacier ice in the Ross Embayment did

  5. Design of a Multi-Week Sound and Motion Recording and Telemetry (SMRT) Tag for Behavioral Studies on Whales

    Science.gov (United States)

    2015-09-30

    Computers to develop a medium-term attachment method for cetaceans involving a set of short barbed darts that anchor in the dermis. In the current project...configuration required for the SMRT tag. Ambient noise monitoring: Work is advancing on a paper describing an in situ processing method, developed...in a previous ONR project, for estimating the ambient noise from tag sound samples. In this paper we show that a modified form of spectral analysis

  6. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds II: single-neuron recordings

    Science.gov (United States)

    Marquardt, Torsten; Stange, Annette; Pecka, Michael; Grothe, Benedikt; McAlpine, David

    2014-01-01

    Recently, with the use of an amplitude-modulated binaural beat (AMBB), in which sound amplitude and interaural-phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al. 2013b), we demonstrated that the human auditory system uses interaural timing differences in the temporal fine structure of modulated sounds only during the rising portion of each modulation cycle. However, the degree to which peripheral or central mechanisms contribute to the observed strong dominance of the rising slope remains to be determined. Here, by recording responses of single neurons in the medial superior olive (MSO) of anesthetized gerbils and in the inferior colliculus (IC) of anesthetized guinea pigs to AMBBs, we report a correlation between the position within the amplitude-modulation (AM) cycle generating the maximum response rate and the position at which the instantaneous IPD dominates the total neural response. The IPD during the rising segment dominates the total response in 78% of MSO neurons and 69% of IC neurons, with responses of the remaining neurons predominantly coding the IPD around the modulation maximum. The observed diversity of dominance regions within the AM cycle, especially in the IC, and its comparison with the human behavioral data suggest that only the subpopulation of neurons with rising slope dominance codes the sound-source location in complex listening conditions. A comparison of two models to account for the data suggests that emphasis on IPDs during the rising slope of the AM cycle depends on adaptation processes occurring before binaural interaction. PMID:24554782
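
    The stimulus construction can be illustrated with a short sketch. The following Python fragment generates a basic amplitude-modulated binaural beat in the spirit of the stimuli described above: the right-ear carrier is offset by the beat rate, so the interaural phase difference sweeps through a full cycle within each modulation period. The parameter values and the raised-cosine envelope are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

def ambb(carrier_hz=500.0, beat_hz=8.0, dur_s=1.0, fs=44100):
    """Generate a simple amplitude-modulated binaural beat.

    The right-ear carrier is offset by beat_hz, so the interaural phase
    difference cycles once per modulation period while the envelope rises
    and falls identically in both ears. Illustrative parameters only.
    """
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 0.5 * (1 - np.cos(2 * np.pi * beat_hz * t))   # raised-cosine AM
    left = envelope * np.sin(2 * np.pi * carrier_hz * t)
    right = envelope * np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)                    # samples x 2
```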

  7. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to discuss spatial aspects of recorded sound analytically, William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  8. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  9. Landslides and megathrust splay faults captured by the late Holocene sediment record of eastern Prince William Sound, Alaska

    Science.gov (United States)

    Finn, S.P.; Liberty, Lee M.; Haeussler, Peter J.; Pratt, Thomas L.

    2015-01-01

    We present new marine seismic‐reflection profiles and bathymetric maps to characterize Holocene depositional patterns, submarine landslides, and active faults beneath eastern and central Prince William Sound (PWS), Alaska, which is the eastern rupture patch of the 1964 Mw 9.2 earthquake. We show evidence that submarine landslides, many of which are likely earthquake triggered, repeatedly released along the southern margin of Orca Bay in eastern PWS. We document motion on reverse faults during the 1964 Great Alaska earthquake and estimate late Holocene slip rates for these growth faults, which splay from the subduction zone megathrust. Regional bathymetric lineations help define the faults that extend 40–70 km in length, some of which show slip rates as great as 3.75  mm/yr. We infer that faults mapped below eastern PWS connect to faults mapped beneath central PWS and possibly onto the Alaska mainland via an en echelon style of faulting. Moderate (Mw>4) upper‐plate earthquakes since 1964 give rise to the possibility that these faults may rupture independently to potentially generate Mw 7–8 earthquakes, and that these earthquakes could damage local infrastructure from ground shaking. Submarine landslides, regardless of the source of initiation, could generate local tsunamis to produce large run‐ups along nearby shorelines. In a more general sense, the PWS area shows that faults that splay from the underlying plate boundary present proximal, perhaps independent seismic sources within the accretionary prism, creating a broad zone of potential surface rupture that can extend inland 150 km or more from subduction zone trenches.

  10. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Throughout several audio and audio-video tests we have compared both Foley and real sounds originated by an identical action. The main purpose was to evaluate if sound effects...

  11. A method for estimating the orientation of a directional sound source from source directivity and multi-microphone recordings: principles and application

    DEFF Research Database (Denmark)

    Guarato, Francesco; Jakobsen, Lasse; Vanderelst, Dieter

    2011-01-01

    Taking into account directivity of real sound sources makes it possible to try solving an interesting and biologically relevant problem: estimating the orientation in three-dimensional space of a directional sound source. The source, of known directivity, produces a broadband signal (in the ultra...

  12. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound...... that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology....

  13. Digitizing a sound archive

    DEFF Research Database (Denmark)

    Cone, Louise

    2017-01-01

    In 1990 an artist by the name of William Louis Sørensen was hired by the National Gallery of Denmark to collect important works of art – made from sound. His job was to acquire sound art, but also recordings that captured rare artistic occurrences, music, performances and happenings from both Danish and international artists. His methodology left us with a large collection of unique and inspirational time-based media sound artworks that have, until very recently, been inaccessible. Existing on an array of different media formats, such as open reel tapes, 8-track and 4 track cassettes, VHS......

  14. The Impact of the 1989 Exxon Valdez Oil Spill on Phytoplankton as Evidenced Through the Sedimentary Dinoflagellate Cyst Records in Prince William Sound (Alaska, USA).

    Science.gov (United States)

    Genest, M.; Pospelova, V.; Williams, J. R.; Dellapenna, T.; Mertens, K.; Kuehl, S. A.

    2016-12-01

    Large volumes of crude oil are extracted from marine environments and transported via the sea, putting coastal communities at a greater risk of oil spills. It is therefore crucial for these communities to properly assess the risk. The first step is to understand the effects of such events on the environment, which is limited by the lack of research on the impact of oil spills on phytoplankton. This first-of-its-kind research aims to identify how one of the major groups of phytoplankton, dinoflagellates, has been affected by the 1989 Exxon Valdez oil spill in Prince William Sound (PWS), Alaska. To do this, sedimentary records of dinoflagellate cysts, produced during dinoflagellate reproduction and preserved in the sediment, were analyzed. Two sediment cores were collected from PWS in 2012. The sediments are mainly composed of silt with a small fraction of clay. Both well-dated with 210Pb and 137Cs, the cores have high sedimentation rates, allowing for an annual to biannual resolution. Core 10 has a sedimentation rate of 1.1 cm yr-1 and provides a continuous record since 1957, while Core 12 has a sedimentation rate of 1.3 cm yr-1 and spans from 1934. The cores were subsampled every centimeter for a total of 110 samples. Samples were treated using a standard palynological processing technique to extract dinoflagellate cysts and 300 cysts were counted per sample. In both cores, cysts were abundant, diverse and well preserved with the average cyst assemblage being characterized by an equal number of cysts produced by autotrophic and heterotrophic dinoflagellates. Of the 40 dinoflagellate cyst taxa, the most abundant are: Operculodinium centrocarpum and Brigantedinium spp. Other common species are: Spiniferites ramosus, cysts of Pentapharsodinium dalei, Echinidinium delicatum, E. zonneveldiae, E. transparantum, Islandinium minutum, and a thin pale brown Brigantedinium type. Changes in the sedimentary sequence of dinoflagellate cysts were analyzed by determining cyst

  15. Listening panel agreement and characteristics of lung sounds digitally recorded from children aged 1–59 months enrolled in the Pneumonia Etiology Research for Child Health (PERCH) case–control study

    Science.gov (United States)

    Park, Daniel E; Watson, Nora L; Buck, W Chris; Bunthi, Charatdao; Devendra, Akash; Ebruke, Bernard E; Elhilali, Mounya; Emmanouilidou, Dimitra; Garcia-Prats, Anthony J; Githinji, Leah; Hossain, Lokman; Madhi, Shabir A; Moore, David P; Mulindwa, Justin; Olson, Dan; Awori, Juliet O; Vandepitte, Warunee P; Verwey, Charl; West, James E; Knoll, Maria D; O'Brien, Katherine L; Feikin, Daniel R; Hammit, Laura L

    2017-01-01

    Introduction: Paediatric lung sound recordings can be systematically assessed, but methodological feasibility and validity are unknown, especially from developing countries. We examined the performance of acoustically interpreting recorded paediatric lung sounds and compared sound characteristics between cases and controls. Methods: Pneumonia Etiology Research for Child Health staff in six African and Asian sites recorded lung sounds with a digital stethoscope in cases and controls. Cases aged 1–59 months had WHO severe or very severe pneumonia; age-matched community controls did not. A listening panel assigned examination results of normal, crackle, wheeze, crackle and wheeze or uninterpretable, with adjudication of discordant interpretations. Classifications were recategorised into any crackle, any wheeze or abnormal (any crackle or wheeze) and primary listener agreement (first two listeners) was analysed among interpretable examinations using the prevalence-adjusted, bias-adjusted kappa (PABAK). We examined predictors of disagreement with logistic regression and compared case and control lung sounds with descriptive statistics. Results: Primary listeners considered 89.5% of 792 case and 92.4% of 301 control recordings interpretable. Among interpretable recordings, listeners agreed on the presence or absence of any abnormality in 74.9% (PABAK 0.50) of cases and 69.8% (PABAK 0.40) of controls, presence/absence of crackles in 70.6% (PABAK 0.41) of cases and 82.4% (PABAK 0.65) of controls and presence/absence of wheeze in 72.6% (PABAK 0.45) of cases and 73.8% (PABAK 0.48) of controls. Controls, tachypnoea, >3 uninterpretable chest positions, crying, upper airway noises and study site predicted listener disagreement. Among all interpretable examinations, 38.0% of cases and 84.9% of controls were normal (p<0.0001); wheezing was the most common sound (49.9%) in cases. Conclusions: Listening panel and case–control data suggest our methodology is feasible, likely valid
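
    For binary presence/absence calls, PABAK reduces to a simple function of raw agreement, PABAK = 2·Po − 1. The short sketch below illustrates the computation; the listener labels are hypothetical, not PERCH data.

```python
def pabak(ratings_a, ratings_b):
    """Prevalence- and bias-adjusted kappa for two raters, binary categories.

    PABAK = 2 * Po - 1, where Po is the observed proportion of agreement.
    """
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("ratings must be non-empty and of equal length")
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)
    return 2 * po - 1

# Hypothetical presence(1)/absence(0) calls by two primary listeners:
listener_1 = [1, 0, 1, 1, 0, 0, 1, 0]
listener_2 = [1, 0, 0, 1, 0, 1, 1, 0]
print(pabak(listener_1, listener_2))   # 6/8 agreement -> PABAK = 0.5
```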

  16. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2013-01-01

    Handbook for Sound Engineers is the most comprehensive reference available for audio engineers. All audio topics are explored: if you work on anything related to audio you should not be without this book! The 4th edition of this trusted reference has been updated to reflect changes in the industry since the publication of the 3rd edition in 2002 -- including new technologies like software-based recording systems such as Pro Tools and Sound Forge; digital recording using MP3, wave files and others; mobile audio devices such as iPods and MP3 players. Over 40 topic

  17. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  18. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption indicates the transformation of sound energy into heat. It is, for instance, employed to design the acoustics in rooms. The noise emitted by machinery and plants must be reduced before it reaches a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components at the walls with well-defined absorption characteristics, which are adjusted to the corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise-intensive environments into the neighbourhood.

  19. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  20. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  1. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  2. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: First, the filtered responses should generate...... an acoustic separation between the control regions. Secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...
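
    A minimal frequency-domain illustration of the underlying filter-design problem is a regularized pressure-matching solve for one frequency bin, sketched below. This is a generic formulation under assumed matrix names and does not reproduce the paper's weighted L2-norm penalty on pre- and post-ringing.

```python
import numpy as np

def pressure_matching_filters(G_bright, G_dark, p_target, mu=1.0, reg=1e-3):
    """Single-frequency pressure-matching solve for loudspeaker weights q.

    Minimises ||G_bright q - p_target||^2 + mu ||G_dark q||^2 + reg ||q||^2,
    where G_* are (microphones x loudspeakers) transfer matrices for the
    bright and dark zones. Illustrative sketch only.
    """
    A = (G_bright.conj().T @ G_bright
         + mu * G_dark.conj().T @ G_dark
         + reg * np.eye(G_bright.shape[1]))
    b = G_bright.conj().T @ p_target
    return np.linalg.solve(A, b)
```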

  3. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement from the fundamental theoretical background to practical applications of the measurement technique.

  4. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement from the fundamental theoretical background to practical applications of the measurement technique.

  5. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.
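
    The spectrum exploration mentioned above can be reproduced with a few lines of general-purpose code. The following NumPy sketch is an illustration and is not tied to any specific software named in the book; it returns the magnitude spectrum of a mono excerpt such as a single sustained note.

```python
import numpy as np

def magnitude_spectrum(samples, fs):
    """Return (frequencies in Hz, magnitude in dB) for a mono excerpt.

    Applies a Hann window before the FFT to reduce spectral leakage.
    `samples` is a 1-D NumPy array, `fs` the sample rate in Hz.
    """
    windowed = samples * np.hanning(len(samples))
    spectrum = np.fft.rfft(windowed)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    mag_db = 20 * np.log10(np.abs(spectrum) + 1e-12)   # avoid log(0)
    return freqs, mag_db
```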

  6. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  7. Vocal Noise Cancellation From Respiratory Sounds

    National Research Council Canada - National Science Library

    Moussavi, Zahra

    2001-01-01

    Although background noise cancellation for speech or electrocardiographic recordings is well established, when the background noise contains vocal noises and the main signal is a breath sound...
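
    The record does not state which algorithm is used; a common reference-channel approach to this kind of problem is adaptive noise cancellation, for instance an LMS filter that estimates the vocal-noise contribution from a second microphone and subtracts it from the breath-sound channel. The sketch below is a generic LMS illustration, not the cited method.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=64, mu=0.01):
    """Subtract an adaptively filtered reference (noise) channel from the
    primary (breath sound) channel and return the cleaned signal.

    Generic LMS adaptive filter sketch; `primary` and `reference` are
    1-D NumPy arrays sampled synchronously.
    """
    w = np.zeros(n_taps)                        # adaptive FIR coefficients
    cleaned = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]       # most recent reference samples
        noise_estimate = np.dot(w, x)
        e = primary[n] - noise_estimate         # error = desired breath sound
        w += 2 * mu * e * x                     # LMS weight update
        cleaned[n] = e
    return cleaned
```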

  8. Fluid Sounds

    DEFF Research Database (Denmark)

    Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970's, developed into various established research and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation...

  9. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  10. Nuclear sound

    International Nuclear Information System (INIS)

    Wambach, J.

    1991-01-01

    Nuclei, like more familiar mechanical systems, undergo simple vibrational motion. Among these vibrations, sound modes are of particular interest since they reveal important information on the effective interactions among the constituents and, through extrapolation, on the bulk behaviour of nuclear and neutron matter. Sound wave propagation in nuclei shows strong quantum effects familiar from other quantum systems. Microscopic theory suggests that the restoring forces are caused by the complex structure of the many-Fermion wavefunction and, in some cases, have no classical analogue. The damping of the vibrational amplitude is strongly influenced by phase coherence among the particles participating in the motion. (author)

  11. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice...

  12. Second Sound

    Indian Academy of Sciences (India)

    Second Sound – The Role of Elastic Waves. R Srinivasan. General Article, Resonance – Journal of Science Education, Volume 4, Issue 6, June 1999, pp 15–19. Permanent link: https://www.ias.ac.in/article/fulltext/reso/004/06/0015-0019

  13. Airspace: Antarctic Sound Transmission

    OpenAIRE

    Polli, Andrea

    2009-01-01

    This paper investigates how sound transmission can contribute to the public understanding of climate change within the context of the Poles. How have such transmission-based projects developed specifically in the Arctic and Antarctic, and how do these works create alternative pathways in order to help audiences better understand climate change? The author has created the media project Sonic Antarctica from a personal experience of the Antarctic. The work combines soundscape recordings and son...

  14. PREFACE: Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated to write the papers by the jet noise emitted by the newly commercialized jet-engined airplanes of that time. The technology of aerodynamic sound is closely tied to environmental problems. Therefore the theory should always be applied to newly emerging public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University first organized it. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays, and is expected to reform the theoretical model of generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research. They apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the

  15. Sound Visualisation

    OpenAIRE

    Dolenc, Peter

    2013-01-01

    This thesis contains a description of the construction of a subwoofer case that has the extra functionality of being able to produce special visual effects and display visualizations that match the currently playing sound. For this reason, multiple lighting elements made of LEDs (Light Emitting Diodes) were installed onto the subwoofer case. The lighting elements are controlled by dedicated software that was also developed. The software runs on an STM32F4-Discovery evaluation board inside a ...

  16. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  17. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes as no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  18. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    During the recording of lung sound (LS) signals from the chest wall of a subject, there is always a heart sound (HS) signal interfering with it. This obscures the features of lung sound signals and creates confusion about the pathological state, if any, of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference from the desired lung sound signals. In this, the mixed signal is split into several components. Some of these components contain larger proportions of interfering signals like heart sound, environmental noise etc. and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart sound and lung sound. The proposed method is found to be superior in terms of time domain, frequency domain, and time-frequency domain representations and also in a listening test performed by a pulmonologist.
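
    A rough illustration of the EMD-based idea is given below. It uses the third-party PyEMD package (pip install EMD-signal) as an assumed dependency and a simplified IMF-selection rule; the selection criterion is not the one used in the cited paper.

```python
import numpy as np
from PyEMD import EMD   # third-party package, assumed available (pip install EMD-signal)

def reduce_heart_sound(mixed, fs, hs_band=(20.0, 150.0)):
    """Crude illustration of EMD-based heart-sound reduction.

    Decompose the mixed recording into IMFs, discard those whose dominant
    frequency falls in a typical heart-sound band, and sum the rest.
    """
    imfs = EMD().emd(mixed)
    freqs = np.fft.rfftfreq(len(mixed), d=1.0 / fs)
    kept = []
    for imf in imfs:
        dominant = freqs[np.argmax(np.abs(np.fft.rfft(imf)))]
        if not (hs_band[0] <= dominant <= hs_band[1]):
            kept.append(imf)
    return np.sum(kept, axis=0) if kept else np.zeros_like(mixed)
```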

  19. Sound knowledge

    DEFF Research Database (Denmark)

    Kauffmann, Lene Teglhus

    ...... of the research is to investigate what is considered to ‘work as evidence’ in health promotion and how the ‘evidence discourse’ influences social practices in policymaking and in research. From investigating knowledge practices in the field of health promotion, I develop the concept of sound knowledge...... as knowledge based on reflexive practices. I chose ‘health promotion’ as the field for my research as it utilises knowledge produced in several research disciplines, among these both quantitative and qualitative. I mapped out the institutions, actors, events, and documents that constituted the field of health...... result of a rigorous and standardized research method. However, this anthropological analysis shows that evidence and evidence-based is a hegemonic ‘way of knowing’ that sometimes transposes everyday reasoning into an epistemological form. However, the empirical material shows a variety of understandings

  20. The Perception of Sounds in Phonographic Space

    DEFF Research Database (Denmark)

    Walther-Hansen, Mads

    This thesis is about the perception of space in recorded music, with particular reference to stereo recordings of popular music. It explores how sound engineers create imaginary musical environments in which sounds appear to listeners in different ways. It also investigates some of the conditions...... The third chapter examines how listeners understand and make sense of phonographic space. In the form of a critique of Pierre Schaeffer and Roger Scruton’s notion of the acousmatic situation, I argue that our experience of recorded music has a twofold focus: the sound-in-itself and the sound’s causality...... the use of metaphors and image schemas in the experience and conceptualisation of phonographic space. With reference to descriptions of recordings by sound engineers, I argue that metaphors are central to our understanding of recorded music. This work is grounded in the tradition of cognitive linguistics......

  1. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released in June...

  2. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
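
    The binaural rendering step described above, convolving a mono recording with a measured HRTF/HRIR pair, can be sketched as follows. The function and array names are illustrative, not taken from the study.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono source recording (e.g. a wood or bongo hit) with a
    measured head-related impulse response pair to place it at the HRIR's
    direction. All arrays are 1-D floats at the same sample rate.
    """
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    stereo = np.stack([left, right], axis=1)
    peak = np.max(np.abs(stereo))
    return stereo / peak if peak > 0 else stereo   # normalise to avoid clipping
```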

  3. Sound Synthesis of Objects Swinging through Air Using Physical Models

    Directory of Open Access Journals (Sweden)

    Rod Selfridge

    2017-11-01

    Full Text Available A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.
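
    One of the fluid-dynamic relations such models typically build on is the Aeolian (vortex-shedding) tone frequency, f = St·U/d, with Strouhal number St ≈ 0.2. The sketch below is a heavily reduced illustration of that relation only, driving a single oscillator from a speed profile; it is not the published synthesis model.

```python
import numpy as np

def swing_tone(speed_profile, diameter, fs=44100, strouhal=0.2):
    """Map the instantaneous speed of a swung rod (m/s, one value per output
    sample) to the Aeolian shedding frequency f = St * U / d and synthesise
    a sine at that frequency, louder when the object moves faster.
    """
    speed = np.asarray(speed_profile, dtype=float)
    freq = strouhal * speed / diameter            # Hz over time
    phase = 2 * np.pi * np.cumsum(freq) / fs      # integrate frequency
    amplitude = (speed / speed.max()) ** 2        # crude loudness mapping
    return amplitude * np.sin(phase)

# e.g. a 6 mm rod accelerating from 1 to 20 m/s over half a second:
profile = np.linspace(1.0, 20.0, 22050)
audio = swing_tone(profile, diameter=0.006)
```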

  4. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via SoundCloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  5. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

    Environmental sound archives - casual recordings of people's daily life - are easily collected by MP3 players or camcorders with low cost and high reliability, and shared on websites. There are two kinds of user-generated recordings we would like to be able to handle in this thesis: continuous long-duration personal audio and soundtracks of short consumer video clips. These environmental recordings contain a lot of useful information (semantic concepts) related to activity, location, occasion and content. As a consequence, these environmental archives present many new opportunities for the automatic extraction of information that can be used in intelligent browsing systems. This thesis proposes systems for detecting these interesting concepts on a collection of these real-world recordings. The first system is to segment and label personal audio archives - continuous recordings of an individual's everyday experiences - into 'episodes' (relatively consistent acoustic situations lasting a few minutes or more) using the Bayesian Information Criterion and spectral clustering. The second system is for identifying regions of speech or music in the kinds of energetic and highly-variable noise present in this real-world sound. Motivated by psychoacoustic evidence that pitch is crucial in the perception and organization of sound, we develop a noise-robust pitch detection algorithm to locate speech or music-like regions. To avoid false alarms resulting from background noise with strong periodic components (such as air-conditioning), a new scheme is added in order to suppress these noises in the autocorrelogram domain. In addition, the third system is to automatically detect a large set of interesting semantic concepts, which we chose for being both informative and useful to users, as well as being technically feasible. These 25 concepts are associated with people's activities, locations, occasions, objects, scenes and sounds, and are based on a large collection of
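
    The Bayesian Information Criterion step can be illustrated with a single-change-point test on Gaussian-modelled feature frames. The following sketch is a textbook formulation under assumed names, not the thesis implementation, and it omits the spectral-clustering stage.

```python
import numpy as np

def delta_bic(features, split, penalty_weight=1.0):
    """BIC change-point statistic for one candidate split of a sequence of
    feature vectors (rows of `features`). Positive values favour placing an
    episode boundary at `split`.
    """
    n, d = features.shape

    def logdet_cov(x):
        cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(d)   # regularise
        return np.linalg.slogdet(cov)[1]

    full = n * logdet_cov(features)
    left = split * logdet_cov(features[:split])
    right = (n - split) * logdet_cov(features[split:])
    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n) * penalty_weight
    return 0.5 * (full - left - right) - penalty
```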

  6. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    Full Text Available Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions however, binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much weaker across frequency channels and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that the real-world sound localization must rely on mechanisms more complex than a mere cue extraction.

  7. Statistics of natural binaural sounds.

    Science.gov (United States)

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions however, binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much weaker across frequency channels and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that the real-world sound localization must rely on mechanisms more complex than a mere cue extraction.
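
    The waveform ICA analysis can be approximated with scikit-learn's FastICA applied to short signal frames, where the columns of the learned mixing matrix play the role of time-domain basis functions. This is a loose illustration of the approach under assumed frame-length and component-count choices, not a reproduction of the paper's setup.

```python
import numpy as np
from sklearn.decomposition import FastICA

def learn_waveform_basis(signal, frame_len=256, n_components=32):
    """Cut a single-channel waveform into frames and learn ICA basis
    functions over them. Returns one basis function per row.
    """
    signal = np.asarray(signal, dtype=float)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    frames = frames - frames.mean(axis=1, keepdims=True)   # remove per-frame DC
    ica = FastICA(n_components=n_components, max_iter=500, tol=1e-3)
    ica.fit(frames)
    # Columns of the mixing matrix are the generative (time-domain) basis functions.
    return ica.mixing_.T
```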

  8. US market. Sound below the line

    Energy Technology Data Exchange (ETDEWEB)

    Iken, Joern

    2012-07-01

    The American Wind Energy Association AWEA is publishing warnings almost daily. The lack of political support is endangering jobs. The year 2011 broke no records, but there was a sound plus in expansion figures. (orig.)

  9. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can
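
    The frame-wise feature extraction described above might, for example, use MFCCs computed on roughly 50 ms frames. The dissertation does not prescribe a toolkit, so the librosa-based sketch below is only an assumed stand-in for whatever feature set it adopts.

```python
import librosa
import numpy as np

def frame_features(path, frame_ms=50.0, n_mfcc=13):
    """Load a recording and summarise each ~50 ms frame as an MFCC vector.

    Returns an array with one row per frame. MFCCs stand in here for the
    acoustic feature vector described in the text.
    """
    y, sr = librosa.load(path, sr=None, mono=True)
    hop = int(sr * frame_ms / 1000.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=2 * hop, hop_length=hop)
    return mfcc.T
```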

  10. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  11. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  12. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  13. Detecting the temporal structure of sound sequences in newborn infants

    NARCIS (Netherlands)

    Háden, G.P.; Honing, H.; Török, M.; Winkler, I.

    2015-01-01

    Most high-level auditory functions require one to detect the onset and offset of sound sequences as well as registering the rate at which sounds are presented within the sound trains. By recording event-related brain potentials to onsets and offsets of tone trains as well as to changes in the

  14. Real, foley or synthetic? An evaluation of everyday walking sounds

    DEFF Research Database (Denmark)

    Götzen, Amalia De; Sikström, Erik; Grani, Francesco

    2013-01-01

    in using foley sounds for a film track. In particular this work focuses on walking sounds: five different scenes of a walking person were video recorded and each video was then mixed with the three different kinds of sounds mentioned above. Subjects were asked to recognise and describe the action performed...

  15. Little Sounds

    Directory of Open Access Journals (Sweden)

    Baker M. Bani-Khair

    2017-10-01

    Full Text Available The Spider and the Fly   You little spider, To death you aspire... Or seeking a web wider, To death all walking, No escape you all fighters… Weak and fragile in shape and might, Whatever you see in the horizon, That is destiny whatever sight. And tomorrow the spring comes, And the flowers bloom, And the grasshopper leaps high, And the frogs happily cry, And the flies smile nearby, To that end, The spider has a plot, To catch the flies by his net, A mosquito has fallen down in his net, Begging him to set her free, Out of that prison, To her freedom she aspires, Begging...Imploring...crying,  That is all what she requires, But the spider vows never let her free, His power he admires, Turning blind to light, And with his teeth he shall bite, Leaving her in desperate might, Unable to move from site to site, Tied up with strings in white, Wrapped up like a dead man, Waiting for his grave at night,   The mosquito says, Oh little spider, A stronger you are than me in power, But listen to my words before death hour, Today is mine and tomorrow is yours, No escape from death... Whatever the color of your flower…     Little sounds The Ant The ant is a little creature with a ferocious soul, Looking and looking for more and more, You can simply crush it like dead mold, Or you can simply leave it alone, I wonder how strong and strong they are! Working day and night in a small hole, Their motto is work or whatever you call… A big boon they have and joy in fall, Because they found what they store, A lesson to learn and memorize all in all, Work is something that you should not ignore!   The butterfly: I’m the butterfly Beautiful like a blue clear sky, Or sometimes look like snow, Different in colors, shapes and might, But something to know that we always die, So fragile, weak and thin, Lighter than a glimpse and delicate as light, Something to know for sure… Whatever you have in life and all these fields, You are not happier than a butterfly

  16. Review of sound card photogates

    International Nuclear Information System (INIS)

    Gingl, Zoltan; Mingesz, Robert; Mellar, Janos; Makra, Peter

    2011-01-01

    Photogates are probably the most commonly used electronic instruments to aid experiments in the field of mechanics. Although they are offered by many manufacturers, they can be too expensive to be widely used in all classrooms, in multiple experiments or even for home experimentation. Today all computers have a sound card - an interface for analogue signals. It is possible to make very simple yet highly accurate photogates for cents, while much more sophisticated solutions are also available at a still very low cost. In our paper we show several experimentally tested ways of implementing sound card photogates in detail, and we also provide full-featured, free, open-source photogate software as a much more efficient experimentation tool than the sound recording programs usually used. Further information is provided on a dedicated web page, www.noise.physx.u-szeged.hu/edudev.
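
    A bare-bones sound-card photogate amounts to recording the card's input and timestamping threshold crossings. The sketch below, which uses the python-sounddevice package as an assumed dependency, only illustrates the idea; it is not the authors' open-source software.

```python
import numpy as np
import sounddevice as sd   # assumed third-party dependency (pip install sounddevice)

def gate_times(duration_s=5.0, fs=44100, threshold=0.2):
    """Record the sound-card input (a phototransistor wired to the microphone
    or line input) and return the times, in seconds, at which the signal
    crosses the threshold upwards, i.e. when the light-beam state changes.
    """
    rec = sd.rec(int(duration_s * fs), samplerate=fs, channels=1)
    sd.wait()                                   # block until recording finishes
    x = np.abs(rec[:, 0])
    above = x > threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return crossings / fs
```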

  17. Chaotic dynamics of respiratory sounds

    International Nuclear Information System (INIS)

    Ahlstrom, C.; Johansson, A.; Hult, P.; Ask, P.

    2006-01-01

    There is a growing interest in nonlinear analysis of respiratory sounds (RS), but little has been done to justify the use of nonlinear tools on such data. The aim of this paper is to investigate the stationarity, linearity and chaotic dynamics of recorded RS. Two independent data sets from 8 + 8 healthy subjects were recorded and investigated. The first set consisted of lung sounds (LS) recorded with an electronic stethoscope and the other of tracheal sounds (TS) recorded with a contact accelerometer. Recurrence plot analysis revealed that both LS and TS are quasistationary, with the parts corresponding to inspiratory and expiratory flow plateaus being stationary. Surrogate data tests could not provide statistically sufficient evidence regarding the nonlinearity of the data. The null hypothesis could not be rejected in 4 out of 32 LS cases and in 15 out of 32 TS cases. However, the Lyapunov spectra, the correlation dimension (D2) and the Kaplan-Yorke dimension (DKY) all indicate chaotic behavior. The Lyapunov analysis showed that the sum of the exponents was negative in all cases and that the largest exponent was found to be positive. The results are partly ambiguous, but provide some evidence of chaotic dynamics of RS, both concerning LS and TS. The results motivate continuous use of nonlinear tools for analysing RS data

  18. Chaotic dynamics of respiratory sounds

    Energy Technology Data Exchange (ETDEWEB)

    Ahlstrom, C. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden) and Biomedical Engineering, Orebro University Hospital, S-70185 Orebro (Sweden)]. E-mail: christer@imt.liu.se; Johansson, A. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden); Hult, P. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden); Biomedical Engineering, Orebro University Hospital, S-70185 Orebro (Sweden); Ask, P. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden); Biomedical Engineering, Orebro University Hospital, S-70185 Orebro (Sweden)

    2006-09-15

    There is a growing interest in nonlinear analysis of respiratory sounds (RS), but little has been done to justify the use of nonlinear tools on such data. The aim of this paper is to investigate the stationarity, linearity and chaotic dynamics of recorded RS. Two independent data sets from 8 + 8 healthy subjects were recorded and investigated. The first set consisted of lung sounds (LS) recorded with an electronic stethoscope and the other of tracheal sounds (TS) recorded with a contact accelerometer. Recurrence plot analysis revealed that both LS and TS are quasistationary, with the parts corresponding to inspiratory and expiratory flow plateaus being stationary. Surrogate data tests could not provide statistically sufficient evidence regarding the nonlinearity of the data. The null hypothesis could not be rejected in 4 out of 32 LS cases and in 15 out of 32 TS cases. However, the Lyapunov spectra, the correlation dimension (D2) and the Kaplan-Yorke dimension (DKY) all indicate chaotic behavior. The Lyapunov analysis showed that the sum of the exponents was negative in all cases and that the largest exponent was found to be positive. The results are partly ambiguous, but provide some evidence of chaotic dynamics of RS, both concerning LS and TS. The results motivate continuous use of nonlinear tools for analysing RS data.
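
    The correlation dimension D2 reported above is commonly estimated from the Grassberger–Procaccia correlation sum of a delay embedding: the slope of log C(r) versus log r over a suitable scaling range approximates D2. The following sketch shows that textbook computation (on short segments, since it is O(n²) in memory), not the pipeline used in the cited study.

```python
import numpy as np

def correlation_sum(signal, dim=5, delay=10, radii=None):
    """Correlation sums C(r) of a delay embedding of a 1-D signal.

    Returns (radii, C(r)); the slope of log C(r) vs. log r over the scaling
    region estimates the correlation dimension D2.
    """
    signal = np.asarray(signal, dtype=float)
    n = len(signal) - (dim - 1) * delay
    emb = np.stack([signal[i * delay:i * delay + n] for i in range(dim)], axis=1)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                    # unique pairwise distances
    if radii is None:
        radii = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), 20)
    return radii, np.array([(d < r).mean() for r in radii])
```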

  19. Sound wave transmission (image)

    Science.gov (United States)

    When sound waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain, where they are interpreted as sound. The hearing mechanisms within the inner ear can ...

  20. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related...... to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy...... of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences....

  1. Principles of underwater sound

    National Research Council Canada - National Science Library

    Urick, Robert J

    1983-01-01

    ... the immediately useful help they need for sonar problem solving. Its coverage is broad, ranging from the basic concepts of sound in the sea to making performance predictions in such applications as depth sounding, fish finding, and submarine detection...

  2. THE SOUND OF CINEMA: TECHNOLOGY AND CREATIVITY

    Directory of Open Access Journals (Sweden)

    Poznin Vitaly F.

    2017-12-01

    Technology is a means of creating any product. In the onscreen arts, however, it is also one of the elements that create the artistic space of a film. Considering the main stages in the development of cinematography, this article explores the influence of sound recording technology on the creation of a special artistic and physical space in film: the beginning of the use of sound in movies; the mastering of the artistic means of the audiovisual work; the expansion of the spatial characteristics of screen sound; and sound in modern cinema. Today, thanks to new technologies, the sound of a film forms a specific quasi-realistic landscape, greatly enhancing the impact of the virtual screen images on the viewer.

  3. Visual bias in subjective assessments of automotive sounds

    DEFF Research Database (Denmark)

    Ellermeier, Wolfgang; Legarth, Søren Vase

    2006-01-01

    In order to evaluate how strong the influence of visual input on sound quality evaluation may be, a naive sample of 20 participants was asked to judge interior automotive sound recordings while simultaneously being exposed to pictures of cars. Twenty-two recordings of second-gear acceleration...

  4. An Anthropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, for a talk about the anthropology of sound, sound studies, musical canons and ideology.

  5. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    properties can be modified by sound absorption, refraction, and interference from multiple paths caused by reflections. The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms influencing communication sounds in airborne acoustics, with bottom and surface effects playing that role for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals.

  6. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic signals, and the presence or absence of an abnormality is judged from the magnitude of these synchronized components. The synchronized-component sampling means can remove resonance sounds and other acoustic sounds that are not synchronized with the rotation, based on the knowledge that the acoustic components generated in the normal state are essentially resonance sounds and are not precisely synchronized with the rotation speed. Abnormal sounds of a rotating body, on the other hand, are often driven by forces accompanying the rotation, so they can be detected by extracting only the rotation-synchronized components. Since the normal acoustic components are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing is avoided and, as a result, the detection sensitivity for abnormal sounds is improved. Furthermore, because the device discriminates the occurrence of abnormal sounds within the actually detected signal, frequency components that are predicted but not actually generated are not removed, which further improves the detection sensitivity. (N.H.)
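
    A hedged, minimal sketch of how "sampling only the rotation-synchronized components" might look in software is given below. It is not the device described in the record; it simply sums spectral magnitude near harmonics of a known shaft rotation frequency (assumed available, e.g. from a tachometer), and the sample rate, harmonic count and tolerance are illustrative assumptions.

```python
import numpy as np

def rotation_synchronized_level(signal, fs, rot_hz, n_harmonics=10, tol_hz=0.5):
    """Sum the spectral magnitude found near the first harmonics of the rotation
    frequency; components away from those harmonics are ignored."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    level = 0.0
    for k in range(1, n_harmonics + 1):
        band = (freqs > k * rot_hz - tol_hz) & (freqs < k * rot_hz + tol_hz)
        if band.any():
            level += spectrum[band].max()
    return level

# Toy usage: a component at 3x the shaft rate buried in broadband noise.
fs, rot_hz = 8000, 25.0
t = np.arange(0, 2.0, 1.0 / fs)
x = 0.2 * np.sin(2 * np.pi * 3 * rot_hz * t) + 0.1 * np.random.randn(len(t))
print(rotation_synchronized_level(x, fs, rot_hz))
```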

  7. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

    The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using both physical and digital models at three distinct scales. The results suggest hyperboloid geometry, while difficult to fabricate, facilitates sound scattering.

  8. Infra-sound cancellation and mitigation in wind turbines

    Science.gov (United States)

    Boretti, Albert; Ordys, Andrew; Al Zubaidy, Sarim

    2018-03-01

    The infra-sound spectra recorded inside homes located even several kilometres from wind turbine installations are characterized by large pressure fluctuations in the low frequency range. There is a significant body of literature suggesting that inaudible sounds at low frequency are sensed by humans and affect wellbeing through different mechanisms. These mechanisms include amplitude modulation of heard sounds, stimulation of subconscious pathways, endolymphatic hydrops, and possibly the potentiation of noise-induced hearing loss. We suggest the study of active infra-sound cancellation and mitigation to address these low frequency noise issues. Loudspeakers generate pressure wave components of the same amplitude and frequency as, but opposite phase to, the recorded infra-sound. They also produce pressure wave components within the audible range, reducing the perception of the infra-sound and minimizing the sensing of the residual infra-sound.
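
    The cancellation principle described here (equal amplitude and frequency, opposite phase) can be sketched for a single tone. This is only an idealized, hypothetical illustration of destructive interference, not the control system proposed in the record; the sample rate, tone frequency and amplitude are arbitrary assumptions.

```python
import numpy as np

fs = 2000                        # Hz, sample rate (assumption)
t = np.arange(0, 5, 1 / fs)      # 5 s of signal
f0 = 1.2                         # Hz, an illustrative infra-sound tone

primary = 0.8 * np.sin(2 * np.pi * f0 * t)              # recorded infra-sound component
secondary = 0.8 * np.sin(2 * np.pi * f0 * t + np.pi)    # same amplitude/frequency, opposite phase
residual = primary + secondary                          # superposition at the listener

print("RMS before:", np.sqrt(np.mean(primary ** 2)))
print("RMS after :", np.sqrt(np.mean(residual ** 2)))   # ~0 for a perfect anti-phase copy
```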

  9. The cinematic soundscape: conceptualising the use of sound in Indian films

    OpenAIRE

    Budhaditya Chattopadhyay

    2012-01-01

    This article examines the trajectories of sound practice in Indian cinema and conceptualises the use of sound since the advent of talkies. By studying and analysing a number of sound films from different technological phases of direct recording, magnetic recording and present-day digital recording, the article proposes three corresponding models that are developed on the basis of observations on the use of sound in Indian cinema. These models take their point of departure in specific phases...

  10. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

    A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From analysis of the heart sound S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician to diagnose the heart problem. The results show that most known heart sounds were successfully detected; detection failed in some murmur cases. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
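
    As a rough illustration of a deterministic (rule-based) detector of the kind described, the sketch below picks candidate S1/S2 events from an amplitude envelope using fixed amplitude and timing rules. It deliberately replaces the S-transform analysis of the record with a simpler band-pass/Hilbert envelope, so it is a hedged stand-in rather than the published method; the cut-off frequencies, threshold and refractory period are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks, hilbert

def detect_heart_sound_peaks(pcg, fs):
    """Rule-based (no machine learning) peak picking on a phonocardiogram:
    band-limit to a typical S1/S2 range, take an amplitude envelope, and keep
    peaks separated by at least 200 ms (a plausible refractory period)."""
    b, a = butter(4, [25 / (fs / 2), 150 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pcg)
    envelope = np.abs(hilbert(filtered))
    peaks, _ = find_peaks(envelope,
                          height=0.3 * envelope.max(),   # noise threshold (assumption)
                          distance=int(0.2 * fs))        # >= 200 ms apart
    return peaks  # sample indices of candidate S1/S2 events
```

    Alternating candidate peaks can then be labelled S1 or S2 by comparing the shorter systolic interval with the longer diastolic interval, in the spirit of the timing rules the record describes.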

  11. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  12. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  13. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity ... and initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. A European COST Action TU0901, "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24 ...

  14. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted ... and the modulation produced by sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  15. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods to determine the velocity measurements are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  16. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the every-day understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  17. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear, but our uses for light and sound go far beyond simply seeing a photo or hearing a song. Lasers, concentrated beams of light, are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  18. EUVS Sounding Rocket Payload

    Science.gov (United States)

    Stern, Alan S.

    1996-01-01

    During the first half of this year (CY 1996), the EUVS project began preparations of the EUVS payload for the upcoming NASA sounding rocket flight 36.148CL, slated for launch on July 26, 1996 to observe and record a high-resolution (approx. 2 A FWHM) EUV spectrum of the planet Venus. These preparations were designed to improve the spectral resolution and sensitivity performance of the EUVS payload as well as prepare the payload for this upcoming mission. The following is a list of the EUVS project activities that have taken place since the beginning of this CY: (1) Applied a fresh, new SiC optical coating to our existing 2400 groove/mm grating to boost its reflectivity; (2) modified the Ranicon science detector to boost its detective quantum efficiency with the addition of a repeller grid; (3) constructed a new entrance slit plane to achieve 2 A FWHM spectral resolution; (4) prepared and held the Payload Initiation Conference (PIC) with the assigned NASA support team from Wallops Island for the upcoming 36.148CL flight (PIC held on March 8, 1996; see Attachment A); (5) began wavelength calibration activities of EUVS in the laboratory; (6) made arrangements for travel to WSMR to begin integration activities in preparation for the July 1996 launch; (7) paper detailing our previous EUVS Venus mission (NASA flight 36.117CL) published in Icarus (see Attachment B); and (8) continued data analysis of the previous EUVS mission 36.137CL (Spica occultation flight).

  19. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  20. Evaluating Environmental Sounds from a Presence Perspective for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf

    2010-01-01

    We propose a methodology to design and evaluate environmental sounds for virtual environments. We propose to combine physically modeled sound events with recorded soundscapes. Physical models are used to provide feedback to users’ actions, while soundscapes reproduce the characteristic soundmarks ... as well as self-induced interactive sounds simulated using physical models. Results show that subjects’ motion in the environment is significantly enhanced when dynamic sound sources and the sound of egomotion are rendered in the environment.

  1. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using Takens' method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.
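
    The pipeline described above starts from a delay-embedded trajectory and recurrence plot features. The sketch below shows, under simplifying assumptions, how such a trajectory and one basic recurrence-plot feature (the recurrence rate) can be computed; it is an illustrative toy, not the authors' three specific features or their HMM/Viterbi stage, and the embedding parameters, threshold and random test signal are arbitrary.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay embedding of a 1-D signal into a dim-dimensional trajectory."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def recurrence_matrix(traj, eps):
    """Binary recurrence plot: 1 where two trajectory points lie closer than eps."""
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    return (d < eps).astype(int)

# Toy usage on a short random segment; real tracheal sound windows would be used.
x = np.random.default_rng(1).standard_normal(600)
traj = delay_embed(x, dim=3, tau=5)
rp = recurrence_matrix(traj, eps=1.5)
recurrence_rate = rp.mean()   # one simple recurrence-plot feature per window
print(recurrence_rate)

# Sequences of such per-window features are what the record models with HMMs
# and decodes with the Viterbi algorithm to flag swallowing sounds.
```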

  2. Swallowing sound detection using hidden markov modeling of recurrence plot features

    Energy Technology Data Exchange (ETDEWEB)

    Aboofazeli, Mohammad [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: umaboofa@cc.umanitoba.ca; Moussavi, Zahra [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: mousavi@ee.umanitoba.ca

    2009-01-30

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using Takens' method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.

  3. SOUND VELOCITY and Other Data from USS STUMP DD-978) (NCEI Accession 9400069)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The sound velocity data in this accession were collected from USS STUMP DD-978 by US Navy. The sound velocity in water is analog profiles data that was recorded in...

  4. The influence of meaning on the perception of speech sounds.

    Science.gov (United States)

    Kazanina, Nina; Phillips, Colin; Idsardi, William

    2006-07-25

    As part of knowledge of language, an adult speaker possesses information on which sounds are used in the language and on the distribution of these sounds in a multidimensional acoustic space. However, a speaker must know not only the sound categories of his language but also the functional significance of these categories, in particular, which sound contrasts are relevant for storing words in memory and which sound contrasts are not. Using magnetoencephalographic brain recordings with speakers of Russian and Korean, we demonstrate that a speaker's perceptual space, as reflected in early auditory brain responses, is shaped not only by bottom-up analysis of the distribution of sounds in his language but also by more abstract analysis of the functional significance of those sounds.

  5. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  6. Breaking the Sound Barrier

    Science.gov (United States)

    Brown, Tom; Boehringer, Kim

    2007-01-01

    Students in a fourth-grade class participated in a series of dynamic sound learning centers followed by a dramatic capstone event--an exploration of the amazing Trashcan Whoosh Waves. It's a notoriously difficult subject to teach, but this hands-on, exploratory approach ignited student interest in sound, promoted language acquisition, and built…

  7. Sound propagation in cities

    NARCIS (Netherlands)

    Salomons, E.; Polinder, H.; Lohman, W.; Zhou, H.; Borst, H.

    2009-01-01

    A new engineering model for sound propagation in cities is presented. The model is based on numerical and experimental studies of sound propagation between street canyons. Multiple reflections in the source canyon and the receiver canyon are taken into account in an efficient way, while weak

  8. OMNIDIRECTIONAL SOUND SOURCE

    DEFF Research Database (Denmark)

    1996-01-01

    A sound source comprising a loudspeaker (6) and a hollow coupler (4) with an open inlet which communicates with and is closed by the loudspeaker (6) and an open outlet, said coupler (4) comprising rigid walls which cannot respond to the sound pressures produced by the loudspeaker (6). According...

  9. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    The Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  10. Poetry Pages. Sound Effects.

    Science.gov (United States)

    Fina, Allan de

    1992-01-01

    Explains how elementary teachers can help students understand onomatopoeia, suggesting that they define onomatopoeia, share examples of it, read poems and have students discuss onomatopoeic words, act out common household sounds, write about sound effects, and create choral readings of onomatopoeic poems. Two appropriate poems are included. (SM)

  11. Exploring Noise: Sound Pollution.

    Science.gov (United States)

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  12. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds produced by Javanese students. Focusing on the consonant sounds, the data were collected by recording students’ pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The results showed that the patterning of L1 sounds onto L2 sounds happened especially for eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. These patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution; one further case was caused by the difference in consonant clusters between L1 and L2.

  13. ANALYZING THE PATTERN OF L1 SOUNDS ON L2 SOUNDS PRODUCED BY JAVANESE STUDENTS OF STKIP PGRI JOMBANG

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds produced by Javanese students. Focusing on the consonant sounds, the data were collected by recording students’ pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The results showed that the patterning of L1 sounds onto L2 sounds happened especially for eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. These patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution; one further case was caused by the difference in consonant clusters between L1 and L2.

  14. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  15. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. The main criteria concern airborne and impact sound insulation between dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity ... Harmonization is needed, and a European COST Action TU0901, "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality classes ...

  16. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  17. March 1964 Prince William Sound, USA Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Prince William Sound magnitude 9.2 Mw earthquake on March 28, 1964 at 03:36 GMT (March 27 at 5:36 pm local time), was the largest U.S. earthquake ever recorded...

  18. The frequency range of TMJ sounds.

    Science.gov (United States)

    Widmalm, S E; Williams, W J; Djurdjanovic, D; McKay, D C

    2003-04-01

    There are conflicting opinions about the frequency range of temporomandibular joint (TMJ) sounds. Some authors claim that the upper limit is about 650 Hz. The aim was to test the hypothesis that TMJ sounds may contain frequencies well above 650 Hz but that significant amounts of their energy are lost if the vibrations are recorded using contact sensors and/or travel far through the head tissues. Time-frequency distributions of 172 TMJ clickings (three subjects) were compared between recordings with one microphone in the ear canal and a skin contact transducer above the clicking joint, and between recordings from two microphones, one in each ear canal. The energy peaks of the clickings recorded with a microphone in the ear canal on the clicking side were often well above 650 Hz and always in a significantly higher range (117-1922 Hz, P ...) than in the skin contact sensor recordings (... 375 Hz) or in the microphone recordings from the opposite ear canal (range 141-703 Hz). Future studies are required to establish normative frequency range values of TMJ sounds but need methods also capable of recording the high frequency vibrations.

  19. Investigation of genesis of gallop sounds in dogs by quantitative phonocardiography and digital frequency analysis.

    Science.gov (United States)

    Aubert, A E; Denys, B G; Meno, F; Reddy, P S

    1985-05-01

    Several investigators have noted external gallop sounds to be of higher amplitude than their corresponding internal sounds (S3 and S4). In this study we hoped to determine if S3 and S4 are transmitted in the same manner as S1. In 11 closed-chest dogs, external (apical) and left ventricular pressures and sounds were recorded simultaneously with transducers with identical sensitivity and frequency responses. Volume and pressure overload and positive and negative inotropic drugs were used to generate gallop sounds. Recordings were made in the control state and after the various interventions. S3 and S4 were recorded in 17 experiments each. The amplitude of the external S1 was uniformly higher than that of internal S1 and internal gallop sounds were inconspicuous. With use of Fourier transforms, the gain function was determined by comparing internal to external S1. By inverse transform, the amplitude of the internal gallop sounds was predicted from external sounds. The internal sounds of significant amplitude were predicted in many instances, but the actual recordings showed no conspicuous sounds. The absence of internal gallop sounds of expected amplitude as calculated from the external gallop sounds and the gain function derived from the comparison of internal and external S1 make it very unlikely that external gallop sounds are derived from internal sounds.
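
    The transfer-path reasoning in this record (estimate a gain function from S1 recorded at both sites, then predict the internal gallop sound from the external one by inverse transform) can be sketched in a few lines. The following is a hypothetical, simplified illustration, not the study's processing chain; it assumes the internal and external segments are time-aligned, equal-length NumPy arrays and ignores windowing, averaging and noise handling.

```python
import numpy as np

def gain_function(internal_s1, external_s1):
    """Frequency-domain gain relating the internal S1 to its external (apical)
    recording, estimated from simultaneously recorded S1 segments."""
    return np.fft.rfft(external_s1) / (np.fft.rfft(internal_s1) + 1e-12)

def predict_internal(external_sound, gain):
    """Divide the external spectrum (e.g. a gallop sound) by the gain and
    transform back, predicting what should appear internally.
    Assumes external_sound has the same length as the S1 segments above."""
    spec = np.fft.rfft(external_sound) / (gain + 1e-12)
    return np.fft.irfft(spec, n=len(external_sound))
```

    Comparing the predicted internal gallop amplitude with the inconspicuous recorded one is what leads the authors to doubt that external gallop sounds are simply transmitted internal sounds.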

  20. Physics and music the science of musical sound

    CERN Document Server

    White, Harvey E

    2014-01-01

    Comprehensive and accessible, this foundational text surveys general principles of sound, musical scales, characteristics of instruments, mechanical and electronic recording devices, and many other topics. More than 300 illustrations plus questions, problems, and projects.

  1. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as visual phenomena–commercial messages, as for instance banner ads that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content.

  2. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

    This article is an analysis of two sound art performances that took place June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors of this article suggest exploring the events through approaching them as ‘situations’ (Doherty 2009). With this approach it becomes possible to engage ... and combine theories from several fields. Aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces: visual, auditory, performative, social, spatial and durational dimensions become ...

  3. Sound as Popular Culture

    DEFF Research Database (Denmark)

    The wide-ranging texts in this book take as their premise the idea that sound is a subject through which popular culture can be analyzed in an innovative way. From an infant’s gurgles over a baby monitor to the roar of the crowd in a stadium to the sub-bass frequencies produced by sound systems in the disco era, sound—not necessarily aestheticized as music—is inextricably part of the many domains of popular culture. Expanding the view taken by many scholars of cultural studies, the contributors consider cultural practices concerning sound not merely as semiotic or signifying processes but as material, physical, perceptual, and sensory processes that integrate a multitude of cultural traditions and forms of knowledge. The chapters discuss conceptual issues as well as terminologies and research methods; analyze historical and contemporary case studies of listening in various sound cultures; and consider...

  4. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...

  5. Sound Visualization and Holography

    Science.gov (United States)

    Kock, Winston E.

    1975-01-01

    Describes liquid surface holograms including their application to medicine. Discusses interference and diffraction phenomena using sound wave scanning techniques. Compares focussing by zone plate to holographic image development. (GH)

  6. Modern recording techniques

    CERN Document Server

    Huber, David Miles

    2013-01-01

    As the most popular and authoritative guide to recording, Modern Recording Techniques provides everything you need to master the tools and day-to-day practice of music recording and production. From room acoustics and running a session to mic placement and designing a studio, Modern Recording Techniques will give you a really good grounding in both theory and industry practice. Expanded to include the latest digital audio technology, the 7th edition now includes sections on podcasting, new surround sound formats, and HD audio. If you are just starting out or looking for a step up ...

  7. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material reveal the form ... geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form.

  8. Effects of spectral complexity and sound duration on automatic complex-sound pitch processing in humans - a mismatch negativity study.

    Science.gov (United States)

    Tervaniemi, M; Schröger, E; Saher, M; Näätänen, R

    2000-08-18

    The pitch of a spectrally rich sound is known to be more easily perceived than that of a sinusoidal tone. The present study compared the importance of spectral complexity and sound duration in facilitated pitch discrimination. The mismatch negativity (MMN), which reflects automatic neural discrimination, was recorded to a 2.5% pitch change in pure tones with only one sinusoidal frequency component (500 Hz) and in spectrally rich tones with three (500-1500 Hz) and five (500-2500 Hz) harmonic partials. During the recordings, subjects concentrated on watching a silent movie. In separate blocks, stimuli were of 100 and 250 ms in duration. The MMN amplitude was enhanced with both spectrally rich sounds when compared with pure tones. The prolonged sound duration did not significantly enhance the MMN. This suggests that increased spectral rather than temporal information facilitates pitch processing of spectrally rich sounds.

  9. Perceived loudness of spatially distributed sound sources

    DEFF Research Database (Denmark)

    Song, Woo-keun; Ellermeier, Wolfgang; Minnaar, Pauli

    2005-01-01

    ... psychoacoustic attributes into account. Therefore, a method for deriving loudness maps was developed in an earlier study [Song, Internoise2004, paper 271]. The present experiment investigates to what extent perceived loudness depends on the distribution of individual sound sources. Three loudspeakers were positioned 1.5 m from the centre of the listener’s head, one straight ahead and two 10 degrees to the right and left, respectively. Six participants matched the loudness of either one or two simultaneous sounds (narrow-band noises with 1-kHz and 3.15-kHz centre frequencies) to a 2-kHz, 60-dB SPL narrow-band noise placed in the frontal loudspeaker. The two sounds originated either from the central speaker or from the two offset loudspeakers. It turned out that the subjects perceived the noises to be softer when they were distributed in space. In addition, loudness was calculated from the recordings ...

  10. Investigating the relationship between pressure force and acoustic waveform in footstep sounds

    DEFF Research Database (Denmark)

    Grani, Francesco; Serafin, Stefania; Götzen, Amalia De

    2013-01-01

    In this paper we present an inquiry into the relationships between audio waveforms and ground reaction force in recorded footstep sounds. In an anechoic room, we recorded several footstep sounds produced while walking on creaking wood and gravel. The recordings were performed using a pair of sandals embedded with six pressure sensors each. An investigation of the relationships between recorded force and footstep sounds is presented, together with several possible applications of the system.

  11. Sound & The Society

    DEFF Research Database (Denmark)

    Schulze, Holger

    2014-01-01

    How are those sounds you hear right now socially constructed and evaluated, how are they architecturally conceptualized, and how dependent on urban planning, industrial developments and political decisions are they really? How is your ability to hear intertwined with social interactions and their professional design? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Nina Backmann, Jochen Bonz, Stefan Krebs, Esther Schelander & Holger Schulze.

  12. Urban Sound Interfaces

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2012-01-01

    This paper draws on the theories of Michel de Certeau and Gaston Bachelard to discuss how media architecture, in the form of urban sound interfaces, can help us perceive the complexity of the spaces we inhabit, by exploring the history and the narratives of the places in which we live. In this paper, three sound works are discussed in relation to the iPod, which is considered a more private way to explore urban environments and a way to control the individual perception of urban spaces.

  13. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    1. Introduction  2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium  3. Predicting the Acoustical Properties of Outdoor Ground Surfaces  4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models  5. Predicting Effects of Source Characteristics on Outdoor Sound  6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects  7. Influence of Source Motion on Ground Effect and Diffraction  8. Predicting Effects of Mixed Impedance Ground  9. Predicting the Performance of Outdoor Noise Barriers  10. Predicting Effects of Vegetation, Trees and Turbulence  11. Analytical Approximations including Ground Effect, Refraction and Turbulence  12. Prediction Schemes  13. Predicting Sound in an Urban Environment.

  14. Sound & The Senses

    DEFF Research Database (Denmark)

    Schulze, Holger

    2012-01-01

    How are those sounds you hear right now technically generated and post-produced, how are they aesthetically conceptualized, and how culturally dependent are they really? How is your ability to hear intertwined with all the other senses and their cultural, biographical and technological construction over time? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Jonathan Sterne, AGF a.k.a. Antye Greie, Jens Gerrit Papenburg & Holger Schulze.

  15. Beacons of Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2018-01-01

    The chapter discusses expectations and imaginations vis-à-vis the concert hall of the twenty-first century. It outlines some of the central historical implications of western culture’s haven for sounding music. Based on the author’s study of the Icelandic concert-house Harpa, the chapter considers how these implications, together with the prime mover’s visions, have been transformed as private investors and politicians took over. The chapter furthermore investigates the objectives regarding musical sound and the far-reaching demands concerning acoustics that modern concert halls are required...

  16. Neuroplasticity beyond sounds

    DEFF Research Database (Denmark)

    Reybrouck, Mark; Brattico, Elvira

    2015-01-01

    Capitalizing on neuroscience knowledge about how individuals are affected by the sound environment, we propose to adopt a cybernetic and ecological point of view on the musical aesthetic experience, which includes subprocesses such as feature extraction and integration, early affective reactions and motor actions, style mastering and conceptualization, emotion and proprioception, evaluation and preference. In this perspective, the role of the listener/composer/performer is seen as that of an active "agent" coping in highly individual ways with the sounds. The findings concerning the neural...

  17. Eliciting Sound Memories.

    Science.gov (United States)

    Harris, Anna

    2015-11-01

    Sensory experiences are often considered triggers of memory, most famously a little French cake dipped in lime blossom tea. Sense memory can also be evoked in public history research through techniques of elicitation. In this article I reflect on different social science methods for eliciting sound memories such as the use of sonic prompts, emplaced interviewing, and sound walks. I include examples from my research on medical listening. The article considers the relevance of this work for the conduct of oral histories, arguing that such methods "break the frame," allowing room for collaborative research connections and insights into the otherwise unarticulatable.

  18. SoleSound

    DEFF Research Database (Denmark)

    Zanotto, Damiano; Turchet, Luca; Boggs, Emily Marie

    2014-01-01

    This paper introduces the design of SoleSound, a wearable system designed to deliver ecological, audio-tactile, underfoot feedback. The device, which primarily targets clinical applications, uses an audio-tactile footstep synthesis engine informed by the readings of pressure and inertial sensors embedded in the footwear to integrate enhanced feedback modalities into the authors' previously developed instrumented footwear. The synthesis models currently implemented in the SoleSound simulate different ground surface interactions. Unlike similar devices, the system presented here is fully portable...

  19. Otolith research for Puget Sound

    Science.gov (United States)

    Larsen, K.; Reisenbichler, R.

    2007-01-01

    Otoliths are hard structures located in the brain cavity of fish. These structures are formed by a buildup of calcium carbonate within a gelatinous matrix that produces light and dark bands similar to the growth rings in trees. The width of the bands corresponds to environmental factors such as temperature and food availability. As juvenile salmon encounter different environments in their migration to sea, they produce growth increments of varying widths and visible 'checks' corresponding to times of stress or change. The resulting pattern of band variations and check marks leave a record of fish growth and residence time in each habitat type. This information helps Puget Sound restoration by determining the importance of different habitats for the optimal health and management of different salmon populations. The USGS Western Fisheries Research Center (WFRC) provides otolith research findings directly to resource managers who put this information to work.

  20. Do top predators cue on sound production by mesopelagic prey?

    Science.gov (United States)

    Baumann-Pickering, S.; Checkley, D. M., Jr.; Demer, D. A.

    2016-02-01

    Deep-scattering layer (DSL) organisms, comprising a variety of mesopelagic fishes, and squids, siphonophores, crustaceans, and other invertebrates, are preferred prey for numerous large marine predators, e.g. cetaceans, seabirds, and fishes. Some of the DSL species migrate from depth during daylight to feed near the surface at night, transitioning during dusk and dawn. We investigated if any DSL organisms create sound, particularly during the crepuscular periods. Over several nights in summer 2015, underwater sound was recorded in the San Diego Trough using a high-frequency acoustic recording package (HARP, 10 Hz to 100 kHz), suspended from a drifting surface float. Acoustic backscatter from the DSL was monitored nearby using a calibrated multiple-frequency (38, 70, 120, and 200 kHz) split-beam echosounder (Simrad EK60) on a small boat. DSL organisms produced sound, between 300 and 1000 Hz, and the received levels were highest when the animals migrated past the recorder during ascent and descent. The DSL are globally present, so the observed acoustic phenomenon, if also ubiquitous, has wide-reaching implications. Sound travels farther than light or chemicals and thus can be sensed at greater distances by predators, prey, and mates. If sound is a characteristic feature of pelagic ecosystems, it likely plays a role in predator-prey relationships and overall ecosystem dynamics. Our new finding inspires numerous questions such as: Which, how, and why have DSL organisms evolved to create sound, for what do they use it and under what circumstances? Is sound production by DSL organisms truly ubiquitous, or does it depend on the local environment and species composition? How may sound production and perception be adapted to a changing environment? Do predators react to changes in sound? Can sound be used to quantify the composition of mixed-species assemblages, component densities and abundances, and hence be used in stock assessment or predictive modeling?

  1. 12 CFR 1732.7 - Record hold.

    Science.gov (United States)

    2010-01-01

    ... Banking OFFICE OF FEDERAL HOUSING ENTERPRISE OVERSIGHT, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT SAFETY AND SOUNDNESS RECORD RETENTION Record Retention Program § 1732.7 Record hold. (a) Definition. For... Enterprise or OFHEO that the Enterprise is to retain records relating to a particular issue in connection...

  2. Pectoral sound generation in the blue catfish Ictalurus furcatus.

    Science.gov (United States)

    Mohajer, Yasha; Ghahramani, Zachary; Fine, Michael L

    2015-03-01

    Catfishes produce pectoral stridulatory sounds by "jerk" movements that rub ridges on the dorsal process against the cleithrum. We recorded sound synchronized with high-speed video to investigate the hypothesis that blue catfish Ictalurus furcatus produce sounds by a slip-stick mechanism, previously described only in invertebrates. Blue catfish produce a variably paced series of sound pulses during abduction sweeps (pulsers) although some individuals (sliders) form longer duration sound units (slides) interspersed with pulses. Typical pulser sounds are evoked by short 1-2 ms movements with a rotation of 2°-3°. Jerks excite sounds that increase in amplitude after motion stops, suggesting constructive interference, which decays before the next jerk. Longer contact of the ridges produces a more steady-state sound in slides. Pulse pattern during stridulation is determined by pauses without movement: the spine moves during about 14 % of the abduction sweep in pulsers (~45 % in sliders) although movement appears continuous to the human eye. Spine rotation parameters do not predict pulse amplitude, but amplitude correlates with pause duration suggesting that force between the dorsal process and cleithrum increases with longer pauses. Sound production, stimulated by a series of rapid movements that set the pectoral girdle into resonance, is caused by a slip-stick mechanism.

  3. Usefulness of bowel sound auscultation: a prospective evaluation.

    Science.gov (United States)

    Felder, Seth; Margel, David; Murrell, Zuri; Fleshner, Phillip

    2014-01-01

    Although the auscultation of bowel sounds is considered an essential component of an adequate physical examination, its clinical value remains largely unstudied and subjective. The aim of this study was to determine whether an accurate diagnosis of normal controls, mechanical small bowel obstruction (SBO), or postoperative ileus (POI) is possible based on bowel sound characteristics. Prospectively collected recordings of bowel sounds from patients with normal gastrointestinal motility, SBO diagnosed by computed tomography and confirmed at surgery, and POI diagnosed by clinical symptoms and a computed tomography without a transition point. Study clinicians were instructed to categorize the patient recording as normal, obstructed, ileus, or not sure. Using an electronic stethoscope, bowel sounds of healthy volunteers (n = 177), patients with SBO (n = 19), and patients with POI (n = 15) were recorded. A total of 10 recordings randomly selected from each category were replayed through speakers, with 15 of the recordings duplicated to surgical and internal medicine clinicians (n = 41) blinded to the clinical scenario. The sensitivity, positive predictive value, and intra-rater variability were determined based on the clinician's ability to properly categorize the bowel sound recording when blinded to additional clinical information. Secondary outcomes were the clinician's perceived level of expertise in interpreting bowel sounds. The overall sensitivity for normal, SBO, and POI recordings was 32%, 22%, and 22%, respectively. The positive predictive value of normal, SBO, and POI recordings was 23%, 28%, and 44%, respectively. Intra-rater reliability of duplicated recordings was 59%, 52%, and 53% for normal, SBO, and POI, respectively. No statistically significant differences were found between the surgical and internal medicine clinicians for sensitivity, positive predictive value, or intra-rater variability. Overall, 44% of clinicians reported that they rarely listened
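
    For readers unfamiliar with the evaluation metrics used here, the sketch below shows one straightforward way to compute per-category sensitivity and positive predictive value from blinded ratings. It is a generic, hypothetical illustration with made-up labels, not the study's data or analysis code.

```python
import numpy as np

def sensitivity_and_ppv(true_labels, predicted_labels, category):
    """Per-category sensitivity (recall) and positive predictive value (precision)
    for a multi-class listening test such as normal / SBO / POI."""
    true_labels = np.asarray(true_labels)
    predicted_labels = np.asarray(predicted_labels)
    tp = np.sum((predicted_labels == category) & (true_labels == category))
    sensitivity = tp / max(np.sum(true_labels == category), 1)
    ppv = tp / max(np.sum(predicted_labels == category), 1)
    return sensitivity, ppv

# Toy usage with hypothetical ratings; the labels are illustrative only.
truth = ["normal", "SBO", "POI", "normal", "SBO", "POI"]
rated = ["normal", "POI", "POI", "SBO", "SBO", "normal"]
for c in ("normal", "SBO", "POI"):
    print(c, sensitivity_and_ppv(truth, rated, c))
```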

  4. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    Full Text Available The relationship between meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages, we find commonalities among sound shapes for words referring to the same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  5. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology for the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. The article also attempts to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds, and offers a range of research methods that treat video game scoring as a contemporary creative practice.

  6. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.

  7. Exploring Sound with Insects

    Science.gov (United States)

    Robertson, Laura; Meyer, John R.

    2010-01-01

    Differences in insect morphology and movement during singing provide a fascinating opportunity for students to investigate insects while learning about the characteristics of sound. In the activities described here, students use a free online computer software program to explore the songs of the major singing insects and experiment with making…

  8. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common for a physical system to resonate at a particular frequency that depends on physical parameters, which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to utilize standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, second sound signals were used to probe turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuations when the tracking system is used.
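
    The record above relies on lock-in detection: the received second sound signal is multiplied by in-phase and quadrature references at the drive frequency and low-pass filtered to recover amplitude and phase. The sketch below illustrates only that demodulation step in software; the sampling rate, filter order, and bandwidth are illustrative assumptions, not values from the paper.

```python
# Minimal software lock-in demodulation sketch (assumed parameters).
import numpy as np
from scipy.signal import butter, filtfilt

def lock_in(signal, fs, f0, bw=5.0):
    """Return amplitude and phase of the component of `signal` at f0."""
    t = np.arange(len(signal)) / fs
    ref_i = np.cos(2 * np.pi * f0 * t)            # in-phase reference
    ref_q = np.sin(2 * np.pi * f0 * t)            # quadrature reference
    b, a = butter(4, bw / (fs / 2), btype="low")  # low-pass rejects the 2*f0 terms
    i = filtfilt(b, a, signal * ref_i)
    q = filtfilt(b, a, signal * ref_q)
    return 2 * np.hypot(i, q), np.arctan2(q, i)   # demodulated amplitude and phase

# Example: a weak 1 kHz tone buried in noise (stand-in for a second sound signal)
fs, f0 = 50_000, 1_000.0
t = np.arange(0, 1.0, 1 / fs)
sig = 0.1 * np.sin(2 * np.pi * f0 * t) + 0.5 * np.random.randn(len(t))
amp, phase = lock_in(sig, fs, f0)
print(amp.mean())  # roughly 0.1, the amplitude of the buried tone
```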

  9. See This Sound

    DEFF Research Database (Denmark)

    Kristensen, Thomas Bjørnsten

    2009-01-01

    Review of the exhibition See This Sound at Lentos Kunstmuseum Linz, Austria, which marks the provisional culmination of a collaboration between Lentos Kunstmuseum and the Ludwig Boltzmann Institute Media.Art.Research. Beyond the exhibition itself, the collaboration is conceived as an ambitious, interdisciplinary...

  10. Photoacoustic Sounds from Meteors.

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, Richard E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tencer, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sweatt, William C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hogan, Roy E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Spurny, Pavel [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic)

    2015-03-01

    High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer’s ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  11. Sound of Stockholm

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2013-01-01

    With only four years behind it, Sound of Stockholm is relatively new in the international festival landscape. The festival reportedly grew out of a greater or lesser frustration that the various associations and organizations of the Swedish experimental music scene were getting in each other's way, and...

  12. Making Sense of Sound

    Science.gov (United States)

    Menon, Deepika; Lankford, Deanna

    2016-01-01

    From the earliest days of their lives, children are exposed to all kinds of sound, from soft, comforting voices to the frightening rumble of thunder. Consequently, children develop their own naïve explanations largely based upon their experiences with phenomena encountered every day. When new information does not support existing conceptions,…

  13. The Sounds of Metal

    DEFF Research Database (Denmark)

    Grund, Cynthia M.

    2015-01-01

    Two, I propose that this framework allows for at least a theoretical distinction between the way in which extreme metal – e.g. black metal, doom metal, funeral doom metal, death metal – relates to its sound as music and the way in which much other music may be conceived of as being constituted...

  14. The Universe of Sound

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Sound sculptor Bill Fontana, the second winner of the Prix Ars Electronica Collide@CERN residency award, and his science inspiration partner, CERN cosmologist Subodh Patil, present their work in art and science at the CERN Globe of Science and Innovation on 4 July 2013 at 19:00.

  15. Urban Sound Ecologies

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh; Samson, Kristine

    2013-01-01

    The article concludes that the ways in which recent sound installations work with urban ecologies vary. While two of the examples blend into the urban environment, the other transfers the concert format and its mode of listening to urban space. Last, and in accordance with recent soundscape research, we point...

  16. A Brief Discussion on the Career Orientation of Higher Vocational Sound Recording Graduates Engaged in Music Editing

    Institute of Scientific and Technical Information of China (English)

    郑晓钰

    2015-01-01

    This paper analyzes the career orientation of higher vocational sound recording graduates who go on to work in music editing. Starting from the graduates' own skills, it shows that different skill sets lead to different employment paths and to different kinds of music editing work, and it proposes an emphasis on cultivating both "specialized" and "interdisciplinary" professionals with strong technical skills.

  17. Effects of sounds of locomotion on speech perception

    Directory of Open Access Journals (Sweden)

    Matz Larsson

    2015-01-01

    Full Text Available Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in pace or walking out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal ("just follow conversation" or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.

  18. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," which is a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  19. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  20. Product sounds : Fundamentals and application

    NARCIS (Netherlands)

    Ozcan-Vieira, E.

    2008-01-01

    Products are ubiquitous, so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about

  1. Sonic mediations: body, sound, technology

    NARCIS (Netherlands)

    Birdsall, C.; Enns, A.

    2008-01-01

    Sonic Mediations: Body, Sound, Technology is a collection of original essays that represents an invaluable contribution to the burgeoning field of sound studies. While sound is often posited as having a bridging function, as a passive in-between, this volume invites readers to rethink the concept of

  2. System for actively reducing sound

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2005-01-01

    A system for actively reducing sound from a primary noise source, such as traffic noise, comprising: a loudspeaker connector for connecting to at least one loudspeaker for generating anti-sound for reducing said noisy sound; a microphone connector for connecting to at least a first microphone placed

  3. Frequency shifting approach towards textual transcription of heartbeat sounds.

    Science.gov (United States)

    Arvin, Farshad; Doraisamy, Shyamala; Safar Khorasani, Ehsan

    2011-10-04

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows recording a continuous heart sound stream in a text format, which can be stored in very small memory compared with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring a patient's behavior over a long duration of time. This paper proposes a frequency shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.
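
    The abstract does not spell out the exact shifting scheme, so the following is only an illustrative sketch of the general idea of moving low-frequency heart sounds into a musical register: estimate the dominant frequency of a frame, scale it by a constant shift factor, and map it to the nearest MIDI note, which could then be written out as text. The sampling rate, shift factor, and frame are assumptions.

```python
# Hypothetical sketch, not the authors' algorithm.
import numpy as np

def dominant_frequency(frame, fs):
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]               # skip the DC bin

def to_midi_note(freq_hz):
    return int(round(69 + 12 * np.log2(freq_hz / 440.0)))   # standard MIDI mapping

fs = 4000                      # assumed sampling rate of the heart recording
shift_factor = 8               # assumed upward shift into a musical register
frame = np.sin(2 * np.pi * 40 * np.arange(2048) / fs)       # stand-in 40 Hz heart tone
note = to_midi_note(dominant_frequency(frame, fs) * shift_factor)
print(note)                    # around MIDI note 63-64 (close to E4)
```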

  4. Frequency shifting approach towards textual transcription of heartbeat sounds

    Directory of Open Access Journals (Sweden)

    Safar Khorasani Ehsan

    2011-10-01

    Full Text Available Abstract Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows recording a continuous heart sound stream in a text format, which can be stored in very small memory compared with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring a patient's behavior over a long duration of time. This paper proposes a frequency shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.

  5. Wood for sound.

    Science.gov (United States)

    Wegst, Ulrike G K

    2006-10-01

    The unique mechanical and acoustical properties of wood and its aesthetic appeal still make it the material of choice for musical instruments and the interior of concert halls. Worldwide, several hundred wood species are available for making wind, string, or percussion instruments. Over generations, first by trial and error and more recently by scientific approach, the most appropriate species were found for each instrument and application. Using material property charts on which acoustic properties such as the speed of sound, the characteristic impedance, the sound radiation coefficient, and the loss coefficient are plotted against one another for woods, we analyze and explain why spruce is the preferred choice for soundboards, why tropical species are favored for xylophone bars and woodwind instruments, why violinists still prefer pernambuco over other species as a bow material, and why hornbeam and birch are used in piano actions.

  6. Sounds in context

    DEFF Research Database (Denmark)

    Weed, Ethan

    A sound is never just a sound. It is becoming increasingly clear that auditory processing is best thought of not as a one-way afferent stream, but rather as an ongoing interaction between interior processes and the environment. Even the earliest stages of auditory processing in the nervous system...... time-course of contextual influence on auditory processing in three different paradigms: a simple mismatch negativity paradigm with tones of differing pitch, a multi-feature mismatch negativity paradigm in which tones were embedded in a complex musical context, and a cross-modal paradigm, in which...... auditory processing of emotional speech was modulated by an accompanying visual context. I then discuss these results in terms of their implication for how we conceive of the auditory processing stream....

  7. Sound for Health

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    From astronomy to biomedical sciences: music and sound as tools for scientific investigation Music and science are probably two of the most intrinsically linked disciplines in the spectrum of human knowledge. Science and technology have revolutionised the way artists work, interact, and create. The impact of innovative materials, new communication media, more powerful computers, and faster networks on the creative process is evident: we all can become artists in the digital era. What is less known is that the arts, and music in particular, are having a profound impact on the way scientists operate and think. From the early experiments by Kepler to the modern data sonification applications in medicine, sound and music are playing an increasingly crucial role in supporting science and driving innovation. In this talk, Dr. Domenico Vicinanza will highlight the complementarity and the natural synergy between music and science, with specific reference to biomedical sciences. Dr. Vicinanza will take t...

  8. Sound in Ergonomics

    Directory of Open Access Journals (Sweden)

    Jebreil Seraji

    1999-03-01

    Full Text Available The word "Ergonomics" is composed of two separate parts, "Ergo" and "Nomos", and refers to human factors engineering. Indeed, ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance. It draws on different sciences such as anatomy and physiology, anthropometry, engineering, psychology, biophysics and biochemistry for different ergonomic purposes. Sound, when it takes the form of noise pollution, can disturb this balance in human life. Industrial noise caused by factories, traffic, media, and modern human activity can affect the health of society. Here we aim to discuss sound from an ergonomic point of view.

  9. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as a quadratic, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
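
    A minimal sketch of the harmonic product spectrum (HPS) pitch estimator mentioned above: the magnitude spectrum is multiplied by downsampled copies of itself so that energy at the fundamental and its harmonics reinforces a single peak. Frame length, window, and number of harmonics are assumptions for illustration.

```python
import numpy as np

def hps_pitch(frame, fs, n_harmonics=4):
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    hps = spectrum.copy()
    for h in range(2, n_harmonics + 1):
        decimated = spectrum[::h]              # spectrum compressed by factor h
        hps[:len(decimated)] *= decimated      # product aligns the harmonics
    peak_bin = np.argmax(hps[1:]) + 1          # ignore the DC bin
    return peak_bin * fs / len(frame)

fs = 16000
t = np.arange(1024) / fs
frame = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 5))
print(hps_pitch(frame, fs))                    # close to 220 Hz
```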

  10. Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter

    Science.gov (United States)

    Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.

    2017-04-01

    The development of science and technology in the field of healthcare increasingly provides convenience in diagnosing respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and then stored in a file in a sound format. These breath sounds are analyzed by health practitioners to diagnose the symptoms of disease or illness. However, breath sounds are not free from interference signals. Therefore, a noise filter or signal interference reduction system is required so that the breath sound component that contains the information signal can be clarified. In this study, we designed a filter called a wavelet transform based filter. The filter designed in this study uses a Daubechies wavelet with four wavelet transform coefficients. Based on testing with ten types of breath sound data, the largest SNR (dB) was obtained for bronchial sounds, at 74.3685 decibels.
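
    A minimal sketch of wavelet-threshold denoising with a Daubechies wavelet ("db4" in PyWavelets), in the spirit of the filter described above. The decomposition depth and the universal soft-threshold rule are assumptions, not the authors' exact settings.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise level from finest details
    thr = sigma * np.sqrt(2 * np.log(len(x)))            # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

# Example: a noisy low-frequency component standing in for a breath sound
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
noisy = np.sin(2 * np.pi * 100 * t) + 0.4 * np.random.randn(len(t))
denoised = wavelet_denoise(noisy)
```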

  11. A framework for automatic heart sound analysis without segmentation

    Directory of Open Access Journals (Sweden)

    Tungpimolrut Kanokvate

    2011-02-01

    Full Text Available Abstract Background A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method An equal number of cardiac cycles was extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion The proposed method showed promising results and high noise robustness for a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients, then further evaluating the method based on this new training set.
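
    A sketch of the cycle-length step described in the framework: smooth an amplitude envelope of the heart sound, autocorrelate it, and take the strongest peak within a physiologically plausible lag range as the cardiac cycle length. The filter settings and the 30-200 bpm search range are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks, hilbert

def cycle_length_seconds(heart_sound, fs):
    envelope = np.abs(hilbert(heart_sound))                  # amplitude envelope
    b, a = butter(2, 20 / (fs / 2), btype="low")
    env = filtfilt(b, a, envelope)                           # smooth the envelope
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]  # autocorrelation, lags >= 0
    lo, hi = int(0.3 * fs), int(2.0 * fs)                    # 30-200 bpm search window
    peaks, _ = find_peaks(ac[lo:hi])
    if len(peaks) == 0:
        return None
    best = peaks[np.argmax(ac[lo:hi][peaks])]                # strongest peak in the window
    return (lo + best) / fs
```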

  12. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  13. Physiological and psychological assessment of sound

    Science.gov (United States)

    Yanagihashi, R.; Ohira, Masayoshi; Kimura, Teiji; Fujiwara, Takayuki

    The psycho-physiological effects of several sound stimulations were investigated to evaluate the relationship between a psychological parameter, such as subjective perception, and a physiological parameter, such as heart rate variability (HRV). Eight female students aged 21-22 years were tested. Electrocardiogram (ECG) and the movement of the chest wall for estimating respiratory rate were recorded during three different sound stimulations: (1) music provided by a synthesizer (condition A); (2) bird twitters (condition B); and (3) mechanical sounds (condition C). The percentage power of the low-frequency (LF; 0.05-0.15 Hz) and high-frequency (HF; 0.15-0.40 Hz) components in the HRV (LF%, HF%) were assessed by a frequency analysis of 5 min of time-series data obtained from R-R intervals in the ECG. Quantitative assessment of subjective perception was also obtained using a visual analog scale (VAS). The HF% and VAS value for comfort in C were significantly lower than in either A and/or B. The respiratory rate and VAS value for awakening in C were significantly higher than in A and/or B. There was a significant correlation between the HF% and the value of the VAS, and between the respiratory rate and the value of the VAS. These results indicate that mechanical sounds similar to C inhibit the parasympathetic nervous system and promote a feeling that is unpleasant but alert, also suggesting that the HRV reflects subjective perception.
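
    A sketch of the LF/HF computation described above: the R-R interval series is resampled onto an even time grid, a Welch spectrum is estimated, and power is integrated over the LF (0.05-0.15 Hz) and HF (0.15-0.40 Hz) bands to give percentage powers. The resampling rate and normalization are assumptions.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_percent(rr_intervals_s, fs_resample=4.0):
    t = np.cumsum(rr_intervals_s)                            # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_even = interp1d(t, rr_intervals_s, kind="cubic")(grid)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs_resample, nperseg=256)

    def band_power(lo, hi):
        m = (f >= lo) & (f < hi)
        return np.trapz(pxx[m], f[m])

    lf, hf = band_power(0.05, 0.15), band_power(0.15, 0.40)
    total = band_power(0.05, 0.40)
    return 100.0 * lf / total, 100.0 * hf / total            # LF%, HF%
```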

  14. Heart sounds analysis using probability assessment.

    Science.gov (United States)

    Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P

    2017-07-31

    This paper describes a method for automated discrimination of heart sound recordings according to the Physionet Challenge 2016. The goal was to decide if the recording refers to normal or abnormal heart sounds or if it is not possible to decide (i.e. 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.
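
    A sketch of the first stage described above: band-pass the phonocardiogram to 15-90 Hz, take an amplitude envelope, and pick well-separated peaks as S1/S2 candidates. The smoothing window, minimum spacing, and height threshold are illustrative assumptions, not the authors' values.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks, hilbert

def s1_s2_candidates(pcg, fs):
    b, a = butter(4, [15 / (fs / 2), 90 / (fs / 2)], btype="band")
    banded = filtfilt(b, a, pcg)                      # keep the 15-90 Hz band
    envelope = np.abs(hilbert(banded))                # amplitude envelope
    win = int(0.05 * fs)                              # 50 ms moving-average smoothing
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")
    peaks, _ = find_peaks(envelope,
                          distance=int(0.2 * fs),     # at least 200 ms apart
                          height=0.3 * envelope.max())
    return peaks / fs                                 # candidate S1/S2 times in seconds
```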

  15. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  16. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    Science.gov (United States)

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  17. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound

  18. Onboard Acoustic Recording from Diving Elephant Seals

    National Research Council Canada - National Science Library

    Fletcher, Stacia

    1996-01-01

    The aim of this project was to record sounds impinging on free-ranging northern elephant seals, Mirounga angustirostris, a first step in determining the importance of LFS to these animals as they dive...

  19. Magnetospheric radio sounding

    International Nuclear Information System (INIS)

    Ondoh, Tadanori; Nakamura, Yoshikatsu; Koseki, Teruo; Watanabe, Sigeaki; Murakami, Toshimitsu

    1977-01-01

    Radio sounding of the plasmapause from a geostationary satellite has been investigated to observe time variations of the plasmapause structure and effects of the plasma convection. In the equatorial plane, the plasmapause is located, on average, at 4 R sub(E) (R sub(E): Earth radius), and the plasma density drops outwards from 10²-10³ /cm³ to 1-10 /cm³ over a plasmapause width of about 600 km. Plasmagrams showing the relation between virtual range and sounding frequency are computed by ray tracing of LF-VLF waves transmitted from a geostationary satellite, using model distributions of the electron density in the vicinity of the plasmapause. The general features of the plasmagrams are similar to topside ionograms. The plasmagram has no penetration frequency such as f0F2, but the virtual range increases rapidly with frequency above 100 kHz, since the distance between the satellite and the wave reflection point increases rapidly with increasing electron density inside the plasmapause. The plasmapause sounder on a geostationary satellite has been designed by taking account of an average propagation distance of 2 x 2.6 R sub(E) between the satellite (6.6 R sub(E)) and the plasmapause (4.0 R sub(E)), background noise, range resolution, power consumption, and a receiver S/N of 10 dB. The 13-bit Barker coded pulses of 0.5 msec baud length should be transmitted in a direction parallel to the orbital plane at frequencies of 10 kHz-2 MHz with a pulse interval of 0.5 sec. Transmitter peak powers of 70 watts and 700 watts are required, respectively, in geomagnetically quiet and disturbed (strong nonthermal continuum emissions) conditions for a 400 meter cylindrical dipole of 1.2 cm diameter on the geostationary satellite. This technique will open a new area of radio sounding in the magnetosphere. (auth.)

  20. Records Management And Private Sector Organizations | Mnjama ...

    African Journals Online (AJOL)

    This article begins by examining the role of records management in private organizations. It identifies the major reason why organizations ought to manage their records effectively and efficiently. Its major emphasis is that a sound records management programme is a pre-requisite to quality management system programme ...

  1. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2015-01-01

    Handbook for Sound Engineers is the most comprehensive reference available for audio engineers, and is a must-read for all who work in audio. With contributions from many of the top professionals in the field, including Glen Ballou on interpretation systems, intercoms, assistive listening, and fundamentals and units of measurement, David Miles Huber on MIDI, Bill Whitlock on audio transformers and preamplifiers, Steve Dove on consoles, DAWs, and computers, Pat Brown on fundamentals, gain structures, and test and measurement, Ray Rayburn on virtual systems, digital interfacing, and preamplifiers

  2. Facing Sound - Voicing Art

    DEFF Research Database (Denmark)

    Lønstrup, Ansa

    2013-01-01

    This article is based on examples of contemporary audiovisual art, with a special focus on the Tony Oursler exhibition Face to Face at Aarhus Art Museum ARoS in Denmark in March-July 2012. My investigation involves a combination of qualitative interviews with visitors, observations of the audience's...... interactions with the exhibition and the artwork in the museum space and short analyses of individual works of art based on reception aesthetics and phenomenology and inspired by newer writings on sound, voice and listening....

  3. JINGLE: THE SOUNDING SYMBOL

    Directory of Open Access Journals (Sweden)

    Bysko Maxim V.

    2013-12-01

    Full Text Available The article considers the role of jingles in the industrial era, from the emergence of regular radio broadcasting, sound film and television up to modern video games, audio and video podcasts, online broadcasts, and mobile communications. Jingles are examined from the point of view of the theory of symbols: a forward motion is detected in the development of jingles from social symbols (radio callsigns) to individual signs-images (ringtones). The article also shows the role of technical progress in shaping jingles as important cultural audio elements of modern digital civilization.

  4. Method for measuring violin sound radiation based on bowed glissandi and its application to sound synthesis.

    Science.gov (United States)

    Perez Carrillo, Alfonso; Bonada, Jordi; Patynen, Jukka; Valimaki, Vesa

    2011-08-01

    This work presents a method for measuring and computing violin-body directional frequency responses, which are used for violin sound synthesis. The approach is based on a frame-weighted deconvolution of excitation and response signals. The excitation, consisting of bowed glissandi, is measured with piezoelectric transducers built into the bridge. Radiation responses are recorded in an anechoic chamber with multiple microphones placed at different angles around the violin. The proposed deconvolution algorithm computes impulse responses that, when convolved with any source signal (captured with the same transducer), produce a highly realistic violin sound very similar to that of a microphone recording. The use of motion sensors allows for tracking violin movements. Combining this information with the directional responses and using a dynamic convolution algorithm helps to improve the listening experience by incorporating the violinist motion effect in stereo.
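
    The frame-weighted deconvolution itself is not reproduced here; the following is a simplified stand-in showing the basic idea of estimating a body/radiation impulse response by regularized spectral division of the recorded response by the bridge-sensor excitation. The FFT length and regularization constant are assumptions.

```python
import numpy as np

def estimate_impulse_response(excitation, response, n_fft=None, eps=1e-3):
    n = n_fft or len(response)
    X = np.fft.rfft(excitation, n)
    Y = np.fft.rfft(response, n)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)   # regularized spectral division
    return np.fft.irfft(H, n)

# Convolving a new bridge-sensor signal with the estimated response h would then
# approximate a microphone recording in that direction, e.g.:
# synthetic = np.convolve(new_excitation, h, mode="full")
```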

  5. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is experimentally studied using the time difference method. It is found that the sound velocity increases with increasing bubble diameter and asymptotically approaches the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to scattering at the bubble walls is equivalently described as the effect of an additional length. This simplification reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that increasing the frequency markedly slows down the sound velocity, whereas the latter does not display a strong dependence on the solution concentration.

  6. Sounds like Team Spirit

    Science.gov (United States)

    Hoffman, Edward

    2002-01-01

    I recently accompanied my son Dan to one of his guitar lessons. As I sat in a separate room, I focused on the music he was playing and the beautiful, robust sound that comes from a well-played guitar. Later that night, I woke up around 3 am. I tend to have my best thoughts at this hour. The trouble is I usually roll over and fall back asleep. This time I was still awake an hour later, so I got up and jotted some notes down in my study. I was thinking about the pure, honest sound of a well-played instrument. From there my mind wandered into the realm of high-performance teams and successful projects. (I know this sounds weird, but this is the sort of thing I think about at 3 am. Maybe you have your own weird thoughts around that time.) Consider a team in relation to music. It seems to me that a crack team can achieve a beautiful, perfect unity in the same way that a band of brilliant musicians can when they're in harmony with one another. With more than a little satisfaction I have to admit, I started to think about the great work performed for you by the Knowledge Sharing team, including this magazine you are reading. Over the past two years I personally have received some of my greatest pleasures as the APPL Director from the Knowledge Sharing activities - the Masters Forums, NASA Center visits, ASK Magazine. The Knowledge Sharing team expresses such passion for their work, just like great musicians convey their passion in the music they play. In the case of Knowledge Sharing, there are many factors that have made this so enjoyable (and hopefully worthwhile for NASA). Three ingredients come to mind -- ingredients that have produced a signature sound. First, through the crazy, passionate playing of Alex Laufer, Michelle Collins, Denise Lee, and Todd Post, I always know that something startling and original is going to come out of their activities. This team has consistently done things that are unique and innovative. For me, best of all is that they are always

  7. Sound therapies for tinnitus management.

    Science.gov (United States)

    Jastreboff, Margaret M

    2007-01-01

    Many people with bothersome (suffering) tinnitus notice that their tinnitus changes in different acoustical surroundings: it is more intrusive in silence and less pronounced in sound-enriched environments. This observation led to the development of treatment methods for tinnitus utilizing sound. Many of these methods are still under investigation with respect to their specific protocols and effectiveness, and only some have been objectively evaluated in clinical trials. This chapter will review therapies for tinnitus using sound stimulation.

  8. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade, increased attention has been paid to, for instance, a category such as ‘sound art’, together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we would traditionally term musical sound – a recurring example being ‘noise’....

  9. Sounding out the logo shot

    OpenAIRE

    Nicolai Jørgensgaard Graakjær

    2013-01-01

    This article focuses on how sound in combination with visuals (i.e. ‘branding by’) may possibly affect the signifying potentials (i.e. ‘branding effect’) of products and corporate brands (i.e. ‘branding of’) during logo shots in television commercials (i.e. ‘branding through’). This particular focus adds both to the understanding of sound in television commercials and to the understanding of sound brands. The article firstly presents a typology of sounds. Secondly, this typology is applied...

  10. Jordan Banks Financial Soundness Indicators

    Directory of Open Access Journals (Sweden)

    Imad Kutum

    2015-09-01

    Full Text Available The aim of this research paper is to examine Jordanian banks using financial soundness indicators, in order to establish whether Jordanian banks were affected by the 2007/2008 financial crisis and to determine the underlying reasons. The research was conducted on 25 banks in Jordan listed on the country's securities exchange. The methodology consisted of examining the banks' financial records in order to derive four crucial Basel III ratios: the capital adequacy ratio, the leverage ratio, the liquidity ratio and, finally, total provisions as a percentage of non-performing loans. The results revealed that, of the four hypotheses under examination, Jordanian banks do not meet the Basel indicator for the capital adequacy ratio, do not meet the Basel indicator for the liquidity ratio, do not meet the Basel indicator for the leverage ratio, and do not meet the Basel indicator for total provisions as a percentage of non-performing loans. Only one hypothesis was accepted based on the research outcomes; the rest were rejected since the average trend line did not go below the required Basel III ratio level. The general outcome of the research revealed that Jordanian banks were not affected significantly by the financial crisis.

  11. Frog sound identification using extended k-nearest neighbor classifier

    Science.gov (United States)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

    Frog sound identification based on vocalization is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest-neighbor and mutual-neighborhood concepts, with the aim of improving classification performance. It makes a prediction based on which samples are the nearest neighbors of the testing sample and which samples consider the testing sample as their nearest neighbor. In order to evaluate the classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN), on recorded sounds of 15 frog species obtained in Malaysian forest. The recorded sounds were segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted by Mel Frequency Cepstrum Coefficients (MFCC). The experimental results show that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers, KNN, FKNN, KGNN and MKNN, for all cases.
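
    The EKNN classifier itself is not reproduced here; the sketch below is only a plain k-NN baseline on mean MFCC vectors, the conventional starting point such studies compare against. File names, labels, and feature settings are placeholders and assumptions.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                            # one feature vector per recording

train_paths = ["frog_sp1_01.wav", "frog_sp2_01.wav"]    # placeholder file names
train_labels = ["species_1", "species_2"]               # placeholder labels
X_train = np.array([mfcc_features(p) for p in train_paths])

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_train, train_labels)
print(clf.predict([mfcc_features("unknown_call.wav")]))
```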

  12. What the Toadfish Ear Tells the Toadfish Brain About Sound.

    Science.gov (United States)

    Edds-Walton, Peggy L

    2016-01-01

    Of the three, paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.

  13. Seismic and Biological Sources of Ambient Ocean Sound

    Science.gov (United States)

    Freeman, Simon Eric

    Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional 'image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. A two-dimensional 'map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array in the third case. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed
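
    A sketch of azimuth estimation by delay-and-sum beamforming on a linear hydrophone array, the kind of array processing referred to above. The far-field geometry, element spacing, and sound speed of 1500 m/s are illustrative assumptions.

```python
import numpy as np

def beamform_azimuth(frames, fs, spacing_m, c=1500.0,
                     angles_deg=np.arange(-90, 91)):
    """frames: (n_hydrophones, n_samples) array from a linear array."""
    n_ch, n_samp = frames.shape
    freqs = np.fft.rfftfreq(n_samp, 1 / fs)
    spectra = np.fft.rfft(frames, axis=1)
    positions = np.arange(n_ch) * spacing_m
    power = []
    for theta in np.deg2rad(angles_deg):
        delays = positions * np.sin(theta) / c              # arrival delay per element
        steering = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        beam = (spectra * steering).sum(axis=0)             # advance each channel, then sum
        power.append(np.sum(np.abs(beam) ** 2))
    return angles_deg[int(np.argmax(power))]                # azimuth with most beam energy
```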

  14. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    Full Text Available In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  15. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse...... response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were...... recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...

  16. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For a more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of a laminated structure and handy

  17. Sound, memory and interruption

    DEFF Research Database (Denmark)

    Pinder, David

    2016-01-01

    This chapter considers how art can interrupt the times and spaces of urban development so they might be imagined, experienced and understood differently. It focuses on the construction of the M11 Link Road through north-east London during the 1990s that demolished hundreds of homes and displaced...... around a thousand people. The highway was strongly resisted and it became the site of one of the country’s longest and largest anti-road struggles. The chapter addresses specifically Graeme Miller’s sound walk LINKED (2003), which for more than a decade has been broadcasting memories and stories...... of people who were violently displaced by the road as well as those who actively sought to halt it. Attention is given to the walk’s interruption of senses of the given and inevitable in two main ways. The first is in relation to the pace of the work and its deployment of slowness and arrest in a context...

  18. Recycling Sounds in Commercials

    DEFF Research Database (Denmark)

    Larsen, Charlotte Rørdam

    2012-01-01

    Commercials offer the opportunity for intergenerational memory and impinge on cultural memory. TV commercials for foodstuffs often make reference to past times as a way of authenticating products. This is frequently achieved using visual cues, but in this paper I would like to demonstrate how...... such references to the past and ‘the good old days’ can be achieved through sounds. In particular, I will look at commercials for Danish non-dairy spreads, especially for OMA margarine. These commercials are notable in that they contain a melody and a slogan – ‘Say the name: OMA margarine’ – that have basically...... remained the same for 70 years. Together these identifiers make OMA an interesting Danish case to study. With reference to Ann Rigney’s memorial practices or mechanisms, the study aims to demonstrate how the auditory aspects of Danish margarine commercials for frying tend to be limited in variety...

  19. The sounds of science

    Science.gov (United States)

    Carlowicz, Michael

    As scientists carefully study some aspects of the ocean environment, are they unintentionally distressing others? That is a question to be answered by Robert Benson and his colleagues in the Center for Bioacoustics at Texas A&M University.With help from a 3-year, $316,000 grant from the U.S. Office of Naval Research, Benson will study how underwater noise produced by naval operations and other sources may affect marine mammals. In Benson's study, researchers will generate random sequences of low-frequency, high-intensity (180-decibel) sounds in the Gulf of Mexico, working at an approximate distance of 1 km from sperm whale herds. Using an array of hydrophones, the scientists will listen to the characteristic clicks and whistles of the sperm whales to detect changes in the animals' direction, speed, and depth, as derived from fluctuations in their calls.

  20. Sound of proteins

    DEFF Research Database (Denmark)

    2007-01-01

    In my group we work with Molecular Dynamics to model several different proteins and protein systems. We submit our modelled molecules to changes in temperature, changes in solvent composition and even external pulling forces. To analyze our simulation results we have so far used visual inspection...... and statistical analysis of the resulting molecular trajectories (as everybody else!). However, recently I started assigning a particular sound frequency to each amino acid in the protein, and by setting the amplitude of each frequency according to the movement amplitude we can "hear" whenever two amino acids...... example of a sound file was obtained by using Steered Molecular Dynamics for stretching the neck region of the scallop myosin molecule (in rigor, PDB-id: 1SR6), in such a way as to cause a rotation of the myosin head. Myosin is the molecule responsible for producing the force during muscle contraction...

  1. Early Morphology and Recurring Sound Patterns

    DEFF Research Database (Denmark)

    Kjærbæk, Laila; Basbøll, Hans; Lambertsen, Claus

    Corpus is a longitudinal corpus of spontaneous Child Speech and Child Directed Speech recorded in the children's homes in interaction with their parents or caretaker and transcribed in CHILDES (MacWhinney 2007 a, b), supplemented by parts of Kim Plunkett's Danish corpus (CHILDES) (Plunkett 1985, 1986...... in creating the typologically characteristic syllable structure of Danish with extreme sound reductions (Rischel 2003, Basbøll 2005) presenting a challenge to the language acquiring child (Bleses & Basbøll 2004). Building upon the Danish CDI-studies as well as on the Odense Twin Corpus and experimental data...

  2. Designing a Sound Reducing Wall

    Science.gov (United States)

    Erk, Kendra; Lumkes, John; Shambach, Jill; Braile, Larry; Brickler, Anne; Matthys, Anna

    2015-01-01

    Acoustical engineers use their knowledge of sound to design quiet environments (e.g., classrooms and libraries) as well as to design environments that are supposed to be loud (e.g., concert halls and football stadiums). They also design sound barriers, such as the walls along busy roadways that decrease the traffic noise heard by people in…

  3. Thinking The City Through Sound

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2011-01-01

    In Acoustic Territories: Sound Culture and Everyday Life, Brandon LaBelle sets out to chart an urban topology through sound. Working his way through six acoustic territories: underground, home, sidewalk, street, shopping mall and sky/radio, LaBelle investigates tensions and potentials inherent in mo...

  4. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    2010-01-01

    The aim of this article is to shed light on a small part of the research taking place in the textile field. The article describes an ongoing PhD research project on textiles and sound and outlines the project's two main questions: how sound can be shaped by textiles and conversely how textiles can...

  5. Basic semantics of product sounds

    NARCIS (Netherlands)

    Özcan Vieira, E.; Van Egmond, R.

    2012-01-01

    Product experience is a result of sensory and semantic experiences with product properties. In this paper, we focus on the semantic attributes of product sounds and explore the basic components for product sound related semantics using a semantic differential paradigm and factor analysis. With two

  6. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    Abstract fragments: specialized regions of the brain analyse different types of sounds [1]; the left panel of figure 1 shows examples of sound-pressure waveforms, with the corresponding spectrographic representation using a 45 Hz analysis shown in the right panels; SFM(t) is plotted against time for different environmental sounds.
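
    A minimal sketch of how a spectral flatness trace SFM(t) of the kind plotted in this record could be computed is given below. The frame length, hop size and Hann window are illustrative assumptions, not parameters taken from the paper.

        # Spectral flatness per short-time frame: ratio of the geometric to the
        # arithmetic mean of the power spectrum (near 1 = noise-like, near 0 = tonal).
        import numpy as np

        def sfm_trace(x, fs, frame_len=1024, hop=512):
            times, sfm = [], []
            for start in range(0, len(x) - frame_len, hop):
                frame = x[start:start + frame_len] * np.hanning(frame_len)
                power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12   # avoid log(0)
                geometric = np.exp(np.mean(np.log(power)))
                arithmetic = np.mean(power)
                times.append(start / fs)
                sfm.append(geometric / arithmetic)
            return np.array(times), np.array(sfm)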

  7. Emotional sounds modulate early neural processing of emotional pictures

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2013-10-01

    Full Text Available In our natural environment, emotional information is conveyed by converging visual and auditory information; multimodal integration is of utmost importance. In the laboratory, however, emotion researchers have mostly focused on the examination of unimodal stimuli. Few existing studies on multimodal emotion processing have focused on human communication such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials these pictures were paired with pleasant, unpleasant and neutral sounds. Sound presentation started 500 ms before picture onset and each stimulus presentation lasted for 2 s. EEG was recorded from 64 channels and ERP analyses focused on the picture onset. In addition, valence and arousal ratings were obtained. Previous findings for the neural processing of emotional pictures were replicated. Specifically, unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP), independent of the accompanying sound valence. For audiovisual stimulation, increased parietal P100 and P200 were found in response to all pictures which were accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced parietal P100 and P200 compared to pairings with congruent sounds. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and, therefore, provide an avenue by which multimodal experience may enhance perception.

  8. The Aesthetic Experience of Sound

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2005-01-01

    to react on. In an ecological understanding of hearing our detection of audible information affords us ways of responding to our environment. In my paper I will address both these ways of using sound in relation to computer games. Since a game player is responsible for the unfolding of the game, his......The use of sound in (3D) computer games basically falls in two. Sound is used as an element in the design of the set and as a narrative. As set design sound stages the nature of the environment, it brings it to life. As a narrative it brings us information that we can choose to or perhaps need...... exploration of the virtual space laid out before him is pertinent. In this mood of exploration sound is important and heavily contributing to the aesthetic of the experience....

  9. Controlling sound with acoustic metamaterials

    DEFF Research Database (Denmark)

    Cummer, Steven A. ; Christensen, Johan; Alù, Andrea

    2016-01-01

    Acoustic metamaterials can manipulate and control sound waves in ways that are not possible in conventional materials. Metamaterials with zero, or even negative, refractive index for sound offer new possibilities for acoustic imaging and for the control of sound at subwavelength scales....... The combination of transformation acoustics theory and highly anisotropic acoustic metamaterials enables precise control over the deformation of sound fields, which can be used, for example, to hide or cloak objects from incident acoustic energy. Active acoustic metamaterials use external control to create......-scale metamaterial structures and converting laboratory experiments into useful devices. In this Review, we outline the designs and properties of materials with unusual acoustic parameters (for example, negative refractive index), discuss examples of extreme manipulation of sound and, finally, provide an overview...

  10. Wearable Eating Habit Sensing System Using Internal Body Sound

    Science.gov (United States)

    Shuzo, Masaki; Komori, Shintaro; Takashima, Tomoko; Lopez, Guillaume; Tatsuta, Seiji; Yanagimoto, Shintaro; Warisawa, Shin'ichi; Delaunay, Jean-Jacques; Yamada, Ichiro

    Continuous monitoring of eating habits could be useful in preventing lifestyle diseases such as metabolic syndrome. Conventional methods consist of self-reporting and calculating mastication frequency based on the myoelectric potential of the masseter muscle. Both these methods are significant burdens for the user. We developed a non-invasive, wearable sensing system that can record eating habits over a long period of time in daily life. Our sensing system is composed of two bone conduction microphones placed in the ears that send internal body sound data to a portable IC recorder. Applying frequency spectrum analysis on the collected sound data, we could not only count the number of mastications during eating, but also accurately differentiate between eating, drinking, and speaking activities. This information can be used to evaluate the regularity of meals. Moreover, we were able to analyze sound features to classify the types of foods eaten by food texture.
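
    A minimal sketch of the kind of short-time spectral processing described above, used here to count chewing (mastication) events from an in-ear body-sound recording, is given below. The analysis band, threshold and minimum event spacing are illustrative assumptions, not the authors' values.

        # Count chewing events by peak-picking on band-limited short-time energy.
        import numpy as np
        from scipy.signal import spectrogram, find_peaks

        def count_chews(x, fs):
            f, t, Sxx = spectrogram(x, fs, nperseg=int(0.05 * fs))   # ~50 ms frames
            band = (f >= 100) & (f <= 1000)                          # assumed chewing band
            energy = Sxx[band].sum(axis=0)
            peaks, _ = find_peaks(energy,
                                  height=energy.mean() + 2 * energy.std(),
                                  distance=int(0.3 / (t[1] - t[0])))  # >= 0.3 s apart
            return len(peaks)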

  11. Sexual dimorphism of sonic apparatus and extreme intersexual variation of sounds in Ophidion rochei (Ophidiidae: first evidence of a tight relationship between morphology and sound characteristics in Ophidiidae

    Directory of Open Access Journals (Sweden)

    Kéver Loïc

    2012-12-01

    Full Text Available Abstract Background Many Ophidiidae are active in dark environments and display complex sonic apparatus morphologies. However, sound recordings are scarce and little is known about acoustic communication in this family. This paper focuses on Ophidion rochei, which is known to display an important sexual dimorphism in swimbladder and anterior skeleton. The aims of this study were to compare the sound-producing morphology and the resulting sounds in juveniles, females and males of O. rochei. Results Males, females, and juveniles possessed different morphotypes. Females and juveniles contrasted with males because they possessed dramatic differences in morphology of their sonic muscles, swimbladder, supraoccipital crest, and first vertebrae and associated ribs. Further, they lacked the ‘rocker bone’ typically found in males. Sounds from each morphotype were highly divergent. Males generally produced non-harmonic, multiple-pulsed sounds that lasted for several seconds (3.5 ± 1.3 s) with a pulse period of ca. 100 ms. Juvenile and female sounds were recorded for the first time in ophidiids. Female sounds were harmonic, had a shorter pulse period (±3.7 ms), and never exceeded a few dozen milliseconds (18 ± 11 ms). Moreover, unlike male sounds, female sounds did not have alternating long and short pulse periods. Juvenile sounds were weaker but appear to be similar to female sounds. Conclusions Although it is not possible to distinguish externally male from female in O. rochei, they show a sonic apparatus and sounds that are dramatically different. This difference is likely due to their nocturnal habits that may have favored the evolution of internal secondary sexual characters that help to distinguish males from females and that could facilitate mate choice by females. Moreover, the comparison of different morphotypes in this study shows that these morphological differences result from a peramorphosis that takes place during the development of

  12. Beaches and Bluffs of Puget Sound and the Northern Straits

    Science.gov (United States)

    2007-04-01

    sand up to pebbles, cobbles, and occasionally boulders, often also containing shelly material. Puget Sound beaches commonly have two distinct... very limited historic wind records (wave hindcasting). Drift directions indicated in the Atlas have repeatedly been proven inaccurate (Johannessen

  13. Evoked responses to sinusoidally modulated sound in unanaesthetized dogs

    NARCIS (Netherlands)

    Tielen, A.M.; Kamp, A.; Lopes da Silva, F.H.; Reneau, J.P.; Storm van Leeuwen, W.

    1. Responses evoked by sinusoidally amplitude-modulated sound in unanaesthetized dogs have been recorded from inferior colliculus and from auditory cortex structures by means of chronically indwelling stainless steel wire electrodes. 2. Harmonic analysis of the average responses demonstrated

  14. The effect of sound sources on soundscape appraisal

    NARCIS (Netherlands)

    van den Bosch, Kirsten; Andringa, Tjeerd

    2014-01-01

    In this paper we explore how the perception of sound sources (like traffic, birds, and the presence of distant people) influences the appraisal of soundscapes (as calm, lively, chaotic, or boring). We have used 60 one-minute recordings, selected from 21 days (502 hours) in March and July 2010.

  15. Completely reproducible description of digital sound data with cellular automata

    International Nuclear Information System (INIS)

    Wada, Masato; Kuroiwa, Jousuke; Nara, Shigetoshi

    2002-01-01

    A novel method of compressive and completely reproducible description of digital sound data by means of rule dynamics of CA (cellular automata) is proposed. The digital data of spoken words and music recorded with the standard format of a compact disk are reproduced completely by this method with use of only two rules in a one-dimensional CA without loss of information
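
    For illustration only, the sketch below evolves a one-dimensional binary cellular automaton under an elementary (Wolfram-numbered) rule. It shows the kind of rule dynamics the record refers to; the paper's actual scheme for encoding sound data with two CA rules is not reproduced here.

        # One synchronous update of a 1D binary CA with periodic boundaries.
        import numpy as np

        def step(state, rule_number):
            rule = [(rule_number >> i) & 1 for i in range(8)]   # lookup table for the 8 neighbourhoods
            left, right = np.roll(state, 1), np.roll(state, -1)
            index = 4 * left + 2 * state + right                # neighbourhood as a 3-bit number
            return np.array([rule[i] for i in index], dtype=np.uint8)

        state = np.zeros(64, dtype=np.uint8)
        state[32] = 1
        for _ in range(32):
            state = step(state, 110)   # e.g. elementary rule 110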

  16. Lung function interpolation by means of neural-network-supported analysis of respiration sounds

    NARCIS (Netherlands)

    Oud, M

    Respiration sounds of individual asthmatic patients were analysed within the scope of the development of a method for computerised recognition of the degree of airways obstruction. Respiration sounds were recorded during laboratory sessions of allergen-provoked airways obstruction, during several stages

  17. Understanding the Doppler Effect by Analysing Spectrograms of the Sound of a Passing Vehicle

    Science.gov (United States)

    Lubyako, Dmitry; Martinez-Piedra, Gordon; Ushenin, Arthur; Ushenin, Arthur; Denvir, Patrick; Dunlop, John; Hall, Alex; Le Roux, Gus; van Someren, Laurence; Weinberger, Harvey

    2017-01-01

    The purpose of this paper is to demonstrate how the Doppler effect can be analysed to deduce information about a moving source of sound waves. Specifically, we find the speed of a car and the distance of its closest approach to an observer using sound recordings from smartphones. A key focus of this paper is how this can be achieved in a…
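
    The speed follows from the asymptotic frequencies of the passing tone read off the spectrogram before and after the closest approach. The sketch below uses the classical Doppler relations for a moving source; the frequency values and the 343 m/s speed of sound are made-up illustrative assumptions, not data from the paper.

        # v = c * (f_approach - f_recede) / (f_approach + f_recede)
        def source_speed(f_approach, f_recede, c=343.0):
            """Speed of a passing source from its approach and recession frequencies."""
            return c * (f_approach - f_recede) / (f_approach + f_recede)

        # Example: a tone heard at 820 Hz while approaching and 760 Hz while receding
        v = source_speed(820.0, 760.0)      # about 13 m/s (~47 km/h)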

  18. Direct Measurement of the Speed of Sound Using a Microphone and a Speaker

    Science.gov (United States)

    Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.

    2014-01-01

    We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free open source digital audio editor and recording computer software application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is…
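
    A minimal sketch of the final calculation is given below: the speed of sound is the slope of a straight-line fit of distance against measured time-of-flight. The distances and delays are made-up illustrative values, not measurements from the paper.

        import numpy as np

        distances = np.array([0.50, 1.00, 1.50, 2.00])           # metres
        delays = np.array([0.00146, 0.00291, 0.00437, 0.00583])  # seconds (measured delays)

        slope, intercept = np.polyfit(delays, distances, 1)      # slope = speed of sound
        print(f"speed of sound ~ {slope:.0f} m/s")               # ~343 m/s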

  19. Sound production and pectoral spine locking in a Neotropical catfish (Iheringichthys labrosus, Pimelodidae

    Directory of Open Access Journals (Sweden)

    Javier S. Tellechea

    Full Text Available Catfishes may have two sonic organs: pectoral spines for stridulation and swimbladder drumming muscles. The aim of this study was to characterize the sound production of the catfish Iheringichthys labrosus. I. labrosus males and females emit two different types of sounds: stridulatory sounds (655.8 ± 230 Hz), consisting of a train of pulses, and drumming sounds (220 ± 46 Hz), which are composed of single-pulse harmonic signals. Stridulatory sounds are emitted during abduction of the pectoral spine. At the base of the spine there is a dorsal process that bears a series of ridges on its latero-ventral surface, and by pressing the ridges against the groove (with an unspecialized rough surface) during a fin sweep, the animal produces a series of short pulses. The drumming sound is produced by an extrinsic sonic muscle, originating on a flat tendon of the transverse process of the fourth vertebra and inserting on the rostral and ventral surface of the swimbladder. Sounds from both mechanisms are emitted in distress situations. Distress was induced by manipulating fish in a laboratory tank while sounds were recorded. Our results indicate that the catfish initially emits a stridulatory sound, which is followed by a drumming sound. Simultaneous production of stridulatory and drumming sounds was also observed. The catfish drumming sounds were lower in dominant frequency than stridulatory sounds, and also exhibited a small degree of dominant frequency modulation. Another behaviour observed in this catfish was pectoral spine locking. This reaction was always observed before the distress sound production. As other authors have outlined, our results suggest that in the catfish I. labrosus stridulatory and drumming sounds may function primarily as distress calls.

  20. Fourth sound in relativistic superfluidity theory

    International Nuclear Information System (INIS)

    Vil'chinskij, S.I.; Fomin, P.I.

    1995-01-01

    The Lorentz-covariant equations describing propagation of the fourth sound in the relativistic theory of superfluidity are derived. The expressions for the velocity of the fourth sound are obtained. The character of oscillation in sound is determined

  1. Sound production in Onuxodon fowleri (Carapidae) and its amplification by the host shell.

    Science.gov (United States)

    Kéver, Loïc; Colleye, Orphal; Lugli, Marco; Lecchini, David; Lerouvreur, Franck; Herrel, Anthony; Parmentier, Eric

    2014-12-15

    Onuxodon species are well known for living inside pearl oysters. As in other carapids, their anatomy highlights their ability to make sounds but sound production has never been documented in Onuxodon. This paper describes sound production in Onuxodon fowleri as well as the anatomy of the sound production apparatus. Single-pulsed sounds and multiple-pulsed sounds that sometimes last more than 3 s were recorded in the field and in captivity (Makemo Island, French Polynesia). These pulses are characterized by a broadband frequency spectrum from 100 to 1000 Hz. Onuxodon fowleri is mainly characterized by its ability to modulate the pulse period, meaning that this species can produce pulsed sounds and tonal-like sounds using the same mechanism. In addition, the sound can be remarkably amplified by the shell cavity (peak gain can exceed 10 dB for some frequencies). The sonic apparatus of O. fowleri is characterized by a rocker bone in front of the swimbladder, modified vertebrae and epineurals, and two pairs of sonic muscles, one of which (primary sonic muscle) inserts on the rocker bone. The latter structure, which is absent in other carapid genera, appears to be sexually dimorphic suggesting differences in sound production in males and females. Sound production in O. fowleri could be an example of adaptation where an animal exploits features of its environment to enhance communication. © 2014. Published by The Company of Biologists Ltd.

  2. An Exceptional Purity of Sound: Noise Reduction Technology and the Inevitable Noise of Sound Recording

    NARCIS (Netherlands)

    Kromhout, M.

    2014-01-01

    The phenomenon of noise has resisted many attempts at framing it within a singular conceptual framework. Critically questioning the tendency to do so, this article asserts the complexities of different noise-phenomena by analysing a specific technology: technological noise reduction systems. Whereas

  3. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which 'acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ, with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
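
    The Lorentz factor referred to above takes its familiar special-relativistic form, with the invariant speed reinterpreted as the speed of sound in the laboratory medium (standard formula, not quoted from the paper):

        \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad c \equiv \text{speed of sound in the medium}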

  4. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

    Full Text Available OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and frequencies among normal-hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between the speakers' axes in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p<0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  5. Fourth sound of holographic superfluids

    International Nuclear Information System (INIS)

    Yarom, Amos

    2009-01-01

    We compute fourth sound for superfluids dual to a charged scalar and a gauge field in an AdS4 background. For holographic superfluids with condensates that have a large scaling dimension (greater than approximately two), we find that fourth sound approaches first sound at low temperatures. For condensates that have a small scaling dimension it exhibits non-conformal behavior at low temperatures, which may be tied to the non-conformal behavior of the order parameter of the superfluid. We show that by introducing an appropriate scalar potential, conformal invariance can be enforced at low temperatures.

  6. Sound intensity as a function of sound insulation partition

    OpenAIRE

    Cvetkovic , S.; Prascevic , R.

    1994-01-01

    In modern engineering practice, the sound insulation of partitions is a synthesis of theory and of the experience acquired in field and laboratory measurements. The scientific and research community treats sound insulation in the context of the emission and propagation of acoustic energy in media with different acoustic impedances. In this paper, starting from the essence of the physical concept of intensity as an energy vector, the authors g...

  7. An Integrated Approach to Motion and Sound

    National Research Council Canada - National Science Library

    Hahn, James K; Geigel, Joe; Lee, Jong W; Gritz, Larry; Takala, Tapio; Mishra, Suneil

    1995-01-01

    Until recently, sound has been given little attention in computer graphics and related domains of computer animation and virtual environments, although sounds which are properly synchronized to motion...

  8. THE INTONATION AND SOUND CHARACTERISTICS OF ADVERTISING PRONUNCIATION STYLE

    Directory of Open Access Journals (Sweden)

    Chernyavskaya Elena Sergeevna

    2014-06-01

    Full Text Available The article aims at describing the intonation and sound characteristics of the advertising pronunciation style. On the basis of acoustic analysis of transcripts of radio advertising tape recordings broadcast at different radio stations, as well as the processing of a representative set of phrases with the help of special computer programs, the author determines the parameters of superfix means. The article proves that the stylistic parameters of the advertising pronunciation style are oriented towards modern orthoepy, and that the originality of the sound of radio advertising is determined by two tendencies – the reduction of stressed vowel duration in terminal and non-terminal words and the increase of pre-tonic and post-tonic vowel duration in non-terminal words in a phrase. The article also shows that the peculiar rhythmic structure of terminal and non-terminal words in radio advertising is formed by levelling stressed and unstressed sounds in length. The specificity of the intonational structure of an advertising text consists in the following peculiarities: the matching of syntactic and syntagmatic division, which allows the blocks of semantic models forming the text of radio advertising to be identified; the allocation of keywords into separate syntagmas; the design of informative parts of the advertising text by means of symmetric length correlation of minimal speech segments; and the combination of interstyle prosodic elements within the sounding text. Thus, the analysis leads to the conclusion that the texts of spoken advertising are designed using a special pronunciation style, marked by sound duration.

  9. Heart sounds analysis via esophageal stethoscope system in beagles.

    Science.gov (United States)

    Park, Sang Hi; Shin, Young Duck; Bae, Jin Ho; Kwon, Eun Jung; Lee, Tae-Soo; Shin, Ji-Yun; Kim, Yeong-Cheol; Min, Gyeong-Deuk; Kim, Myoung hwan

    2013-10-01

    The esophageal stethoscope is less invasive, easy to handle, and provides a great deal of information. The purpose of this study was to investigate the correlation between blood pressure and heart sounds measured with an esophageal stethoscope. Four male beagles weighing 10 to 12 kg were selected as experimental subjects. After general anesthesia, the esophageal stethoscope was inserted. After connecting the microphone, the heart sounds were visualized and recorded through self-developed equipment and software. The amplitudes of S1 and S2 were monitored in real time to examine changes as the blood pressure increased and decreased. The relationship between the ratio of S1 to S2 (S1/S2) and changes in blood pressure due to ephedrine was evaluated. The same experiment was performed with different concentrations of isoflurane. In the inotrope experiment, S1 showed a high correlation with changes in blood pressure. The relationship between S1/S2 and change in blood pressure showed a positive correlation in each experimental subject. In the volatile anesthetic experiment, the heart sounds decreased as MAC increased. Heart sounds were analyzed successfully with the esophageal stethoscope through the self-developed program and equipment. A proportional change in heart sounds was confirmed when blood pressure was changed using inotropes or volatile anesthetics. The esophageal stethoscope achieves the closest possible proximity to the heart for hearing sounds in a non-invasive manner.

  10. Visualization of Broadband Sound Sources

    OpenAIRE

    Sukhanov Dmitry; Erzakova Nadezhda

    2016-01-01

    In this paper a method of imaging wideband audio sources based on 2D microphone array measurements of the sound field, taken at the same time in all the microphones, is proposed. The designed microphone array consists of 160 microphones allowing signals to be digitized at a frequency of 7200 Hz. Measured signals are processed using a special algorithm that makes it possible to obtain a flat image of wideband sound sources. It is shown experimentally that the visualization is not dependent on the...

  11. An open access database for the evaluation of heart sound algorithms.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.
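
    As a minimal sketch of a first processing step on such recordings, the snippet below loads a phonocardiogram and band-limits it before segmentation. The file name is hypothetical and the 25-400 Hz passband is a commonly used choice, not a value prescribed by the database description above.

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import butter, filtfilt

        fs, pcg = wavfile.read("a0001.wav")        # hypothetical recording file
        pcg = pcg.astype(float)
        pcg /= np.max(np.abs(pcg))                 # normalise to [-1, 1]

        low, high = 25.0, 400.0                    # Hz, assumed passband
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
        filtered = filtfilt(b, a, pcg)             # zero-phase band-pass filtering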

  12. Home recording for musicians for dummies

    CERN Document Server

    Strong, Jeff

    2008-01-01

    Invaluable advice that will be music to your ears! Are you thinking of getting started in home recording? Do you want to know the latest home recording technologies? Home Recording For Musicians For Dummies will get you recording music at home in no time. It shows you how to set up a home studio, record and edit your music, master it, and even distribute your songs. With this guide, you'll learn how to compare studio-in-a-box, computer-based, and stand-alone recording systems and choose what you need. You'll gain the skills to manage your sound, take full advantage of MIDI, m

  13. 27 CFR 9.151 - Puget Sound.

    Science.gov (United States)

    2010-04-01

    27 CFR 9.151 Puget Sound. (a) Name. The name of the viticultural area described in this section is “Puget Sound.” (b) Approved maps. The appropriate maps for determining the boundary of the Puget Sound viticultural area are...

  14. Songbirds use pulse tone register in two voices to generate low-frequency sound

    DEFF Research Database (Denmark)

    Jensen, Kenneth Kragh; Cooper, Brenton G.; Larsen, Ole Næsbye

    2007-01-01

    , the syrinx, is unknown. We present the first high-speed video records of the intact syrinx during induced phonation. The syrinx of anaesthetized crows shows a vibration pattern of the labia similar to that of the human vocal fry register. Acoustic pulses result from short opening of the labia, and pulse...... generation alternates between the left and right sound sources. Spontaneously calling crows can also generate similar pulse characteristics with only one sound generator. Airflow recordings in zebra finches and starlings show that pulse tone sounds can be generated unilaterally, synchronously...

  15. Spectral analysis of bowel sounds in intestinal obstruction using an electronic stethoscope.

    Science.gov (United States)

    Ching, Siok Siong; Tan, Yih Kai

    2012-09-07

    To determine the value of bowel sounds analysis using an electronic stethoscope to support a clinical diagnosis of intestinal obstruction. Subjects were patients who presented with a diagnosis of possible intestinal obstruction based on symptoms, signs, and radiological findings. A 3M™ Littmann® Model 4100 electronic stethoscope was used in this study. With the patients lying supine, six 8-second recordings of bowel sounds were taken from each patient from the lower abdomen. The recordings were analysed for sound duration, sound-to-sound interval, dominant frequency, and peak frequency. Clinical and radiological data were reviewed and the patients were classified as having either acute, subacute, or no bowel obstruction. Comparison of bowel sound characteristics was made between these subgroups of patients. In the presence of an obstruction, the site of obstruction was identified and bowel calibre was also measured to correlate with bowel sounds. A total of 71 patients were studied during the period July 2009 to January 2011. Forty patients had acute bowel obstruction (27 small bowel obstruction and 13 large bowel obstruction), 11 had subacute bowel obstruction (eight in the small bowel and three in large bowel) and 20 had no bowel obstruction (diagnoses of other conditions were made). Twenty-five patients received surgical intervention (35.2%) during the same admission for acute abdominal conditions. A total of 426 recordings were made and 420 recordings were used for analysis. There was no significant difference in sound-to-sound interval, dominant frequency, and peak frequency among patients with acute bowel obstruction, subacute bowel obstruction, and no bowel obstruction. In acute large bowel obstruction, the sound duration was significantly longer (median 0.81 s vs 0.55 s, P = 0.021) and the dominant frequency was significantly higher (median 440 Hz vs 288 Hz, P = 0.003) when compared to acute small bowel obstruction. No significant difference was seen
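
    A minimal sketch of how a dominant frequency could be estimated from one such 8-second recording is given below. The exact definitions of "dominant" and "peak" frequency used in the study may differ; this is an assumption for illustration.

        import numpy as np

        def dominant_frequency(x, fs):
            """Frequency of the largest power-spectrum peak of one recording."""
            x = x - np.mean(x)                         # remove DC offset
            power = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            return freqs[np.argmax(power)]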

  16. Combined Amplification and Sound Generation for Tinnitus: A Scoping Review.

    Science.gov (United States)

    Tutaj, Lindsey; Hoare, Derek J; Sereda, Magdalena

    In most cases, tinnitus is accompanied by some degree of hearing loss. Current tinnitus management guidelines recognize the importance of addressing hearing difficulties, with hearing aids being a common option. Sound therapy is the preferred mode of audiological tinnitus management in many countries, including in the United Kingdom. Combination instruments provide a further option for those with an aidable hearing loss, as they combine amplification with a sound generation option. The aims of this scoping review were to catalog the existing body of evidence on combined amplification and sound generation for tinnitus and consider opportunities for further research or evidence synthesis. A scoping review is a rigorous way to identify and review an established body of knowledge in the field for suggestive but not definitive findings and gaps in current knowledge. A wide variety of databases were used to ensure that all relevant records within the scope of this review were captured, including gray literature, conference proceedings, dissertations and theses, and peer-reviewed articles. Data were gathered using scoping review methodology and consisted of the following steps: (1) identifying potentially relevant records; (2) selecting relevant records; (3) extracting data; and (4) collating, summarizing, and reporting results. Searches using 20 different databases covered peer-reviewed and gray literature and returned 5959 records. After exclusion of duplicates and works that were out of scope, 89 records remained for further analysis. A large number of records identified varied considerably in methodology, applied management programs, and type of devices. There were significant differences in practice between different countries and clinics regarding candidature and fitting of combination aids, partly driven by the application of different management programs. Further studies on the use and effects of combined amplification and sound generation for tinnitus are

  17. Sounds of a Star

    Science.gov (United States)

    2001-06-01

    Acoustic Oscillations in Solar-Twin "Alpha Cen A" Observed from La Silla by Swiss Team. Summary: Sound waves running through a star can help astronomers reveal its inner properties. This particular branch of modern astrophysics is known as "asteroseismology". In the case of our Sun, the brightest star in the sky, such waves have been observed for some time and have greatly improved our knowledge about what is going on inside. However, because they are much fainter, it has turned out to be very difficult to detect similar waves in other stars. Nevertheless, tiny oscillations in a solar-twin star have now been unambiguously detected by Swiss astronomers François Bouchy and Fabien Carrier from the Geneva Observatory, using the CORALIE spectrometer on the Swiss 1.2-m Leonard Euler telescope at the ESO La Silla Observatory. This telescope is mostly used for discovering exoplanets (see ESO PR 07/01). The star Alpha Centauri A is the nearest star visible to the naked eye, at a distance of a little more than 4 light-years. The new measurements show that it pulsates with a 7-minute cycle, very similar to what is observed in the Sun. Asteroseismology for Sun-like stars is likely to become an important probe of stellar theory in the near future. The state-of-the-art HARPS spectrograph, to be mounted on the ESO 3.6-m telescope at La Silla, will be able to search for oscillations in stars that are 100 times fainter than those for which such demanding observations are possible with CORALIE. PR Photo 23a/01: Oscillations in a solar-like star (schematic picture). PR Photo 23b/01: Acoustic spectrum of Alpha Centauri A, as observed with CORALIE. Caption: PR Photo 23a/01 is a graphical representation of resonating acoustic waves in the interior of a solar-like star. Red and blue

  18. Sound engineering for diesel engines; Sound Engineering an Dieselmotoren

    Energy Technology Data Exchange (ETDEWEB)

    Enderich, A.; Fischer, R. [MAHLE Filtersysteme GmbH, Stuttgart (Germany)

    2006-07-01

    The strong acceptance of vehicles powered by turbo-charged diesel engines encourages several manufacturers to think about sportive diesel concepts. The approach of suppressing unpleasant noise by the application of distinctive insulation steps is not adequate to satisfy sportive needs: the acoustics cannot follow the engine's performance. This report documents that it is possible to give diesel-powered vehicles a sportive sound characteristic by using an advanced MAHLE motor-sound-system with a pressure-resistant membrane and an integrated load-controlled flap. With this, the specific acoustic disadvantages of the diesel engine, like the "diesel knock" or rough engine running, can be masked. However, the application of a motor-sound-system must not negate the original character of the diesel engine concept, but should accentuate its strong torque characteristic in the middle engine speed range. (orig.)

  19. Sound field separation with sound pressure and particle velocity measurements

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-01-01

    separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure...... and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance......In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field...

  20. Sounds of silence: How to animate virtual worlds with sound

    Science.gov (United States)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  1. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

    , one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor......Previous works in the literature about one tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases......, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings together with microphone positions and source directivity cues allows to get information about source position and bearing. Moreover, sound sources have been included into sensor systems together...

  2. Subjective evaluation of restaurant acoustics in a virtual sound environment

    DEFF Research Database (Denmark)

    Nielsen, Nicolaj Østergaard; Marschall, Marton; Santurette, Sébastien

    2016-01-01

    Many restaurants have smooth rigid surfaces made of wood, steel, glass, and concrete. This often results in a lack of sound absorption. Such restaurants are notorious for high noise levels during service, which many owners actually desire as representing a vibrant eating environment, although...... surveys report that noise complaints are on par with poor service. This study investigated the relation between objective acoustic parameters and subjective evaluation of acoustic comfort at five restaurants in terms of three parameters: noise annoyance, speech intelligibility, and privacy. At each...... location, customers filled out questionnaire surveys, acoustic parameters were measured, and recordings of restaurant acoustic scenes were obtained with a 64-channel spherical array. The acoustic scenes were reproduced in a virtual sound environment (VSE) with 64 loudspeakers placed in an anechoic room...

  3. Path length entropy analysis of diastolic heart sounds.

    Science.gov (United States)

    Griffel, Benjamin; Zia, Mohammad K; Fridman, Vladimir; Saponieri, Cesare; Semmlow, John L

    2013-09-01

    Early detection of coronary artery disease (CAD) using the acoustic approach, a noninvasive and cost-effective method, would greatly improve the outcome of CAD patients. To detect CAD, we analyze diastolic sounds for possible CAD murmurs. We observed diastolic sounds to exhibit 1/f structure and developed a new method, path length entropy (PLE) and a scaled version (SPLE), to characterize this structure to improve CAD detection. We compare SPLE results to Hurst exponent, Sample entropy and Multiscale entropy for distinguishing between normal and CAD patients. SPLE achieved a sensitivity-specificity of 80%-81%, the best of the tested methods. However, PLE and SPLE are not sufficient to prove nonlinearity, and evaluation using surrogate data suggests that our cardiovascular sound recordings do not contain significant nonlinear properties. Copyright © 2013 Elsevier Ltd. All rights reserved.
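
    The paper's own path length entropy (PLE) algorithm is not detailed in this record; for orientation, a minimal sketch of sample entropy, one of the standard measures it is compared against, is given below. The parameters m and r are the usual defaults, not values taken from the paper, and the template counting is slightly simplified.

        import numpy as np

        def sample_entropy(x, m=2, r_factor=0.2):
            """SampEn = -ln(A/B), A and B being counts of m+1- and m-length template matches."""
            x = np.asarray(x, dtype=float)
            r = r_factor * np.std(x)
            n = len(x)

            def count_matches(length):
                templates = np.array([x[i:i + length] for i in range(n - length)])
                count = 0
                for i in range(len(templates)):
                    dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
                    count += np.sum(dist <= r) - 1                           # exclude self-match
                return count

            b, a = count_matches(m), count_matches(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf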

  4. Mobile sound: media art in hybrid spaces

    OpenAIRE

    Behrendt, Frauke

    2010-01-01

    The thesis explores the relationships between sound and mobility through an examination of sound art. The research engages with the intersection of sound, mobility and art through original empirical work and theoretically through a critical engagement with sound studies. In dialogue with the work of De Certeau, Lefebvre, Huhtamo and Habermas in terms of the poetics of walking, rhythms, media archeology and questions of publicness, I understand sound art as an experimental mobil...

  5. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    33 CFR 167.1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  6. Sounding the field: recent works in sound studies.

    Science.gov (United States)

    Boon, Tim

    2015-09-01

    For sound studies, the publication of a 593-page handbook, not to mention the establishment of at least one society - the European Sound Studies Association - might seem to signify the emergence of a new academic discipline. Certainly, the books under consideration here, alongside many others, testify to an intensification of concern with the aural dimensions of culture. Some of this work comes from HPS and STS, some from musicology and cultural studies. But all of it should concern members of our disciplines, as it represents a long-overdue foregrounding of the aural in how we think about the intersections of science, technology and culture.

  7. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    Full Text Available This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  8. Moth hearing and sound communication

    DEFF Research Database (Denmark)

    Nakano, Ryo; Takanashi, Takuma; Surlykke, Annemarie

    2015-01-01

    Active echolocation enables bats to orient and hunt the night sky for insects. As a counter-measure against the severe predation pressure many nocturnal insects have evolved ears sensitive to ultrasonic bat calls. In moths bat-detection was the principal purpose of hearing, as evidenced by compar......Active echolocation enables bats to orient and hunt the night sky for insects. As a counter-measure against the severe predation pressure many nocturnal insects have evolved ears sensitive to ultrasonic bat calls. In moths bat-detection was the principal purpose of hearing, as evidenced...... by comparable hearing physiology with best sensitivity in the bat echolocation range, 20–60 kHz, across moths in spite of diverse ear morphology. Some eared moths subsequently developed sound-producing organs to warn/startle/jam attacking bats and/or to communicate intraspecifically with sound. Not only...... the sounds for interaction with bats, but also mating signals are within the frequency range where bats echolocate, indicating that sound communication developed after hearing by “sensory exploitation”. Recent findings on moth sound communication reveal that close-range (~ a few cm) communication with low...

  9. Common sole larvae survive high levels of pile-driving sound in controlled exposure experiments.

    Directory of Open Access Journals (Sweden)

    Loes J Bolle

    Full Text Available In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at which (sub-lethal) effects occur is limited for juvenile and adult fish, and virtually non-existent for fish eggs and larvae. A device was developed in which fish larvae can be exposed to underwater sound. It consists of a rigid-walled cylindrical chamber driven by an electro-dynamical sound projector. Samples of up to 100 larvae can be exposed simultaneously to a homogeneously distributed sound pressure and particle velocity field. Recorded pile-driving sounds could be reproduced accurately in the frequency range between 50 and 1000 Hz, at zero-to-peak pressure levels up to 210 dB re 1 µPa² (zero-to-peak pressures up to 32 kPa) and single-pulse sound exposure levels up to 186 dB re 1 µPa²·s. The device was used to examine lethal effects of sound exposure in common sole (Solea solea) larvae. Different developmental stages were exposed to various levels and durations of pile-driving sound. The highest cumulative sound exposure level applied was 206 dB re 1 µPa²·s, which corresponds to 100 strikes at a distance of 100 m from a typical North Sea pile-driving site. The results showed no statistically significant differences in mortality between exposure and control groups at sound exposure levels which were well above the US interim criteria for non-auditory tissue damage in fish. Although our findings cannot be extrapolated to fish larvae in general, as interspecific differences in vulnerability to sound exposure may occur, they do indicate that previous assumptions and criteria may need to be revised.
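
    As a rough consistency check on the quoted numbers (standard sound-pressure-level arithmetic, not taken from the paper), a zero-to-peak pressure of 32 kPa referenced to 1 µPa corresponds to

        L_{pk} = 10\log_{10}\!\left(\frac{(32\times 10^{3}\,\mathrm{Pa})^{2}}{(1\,\mu\mathrm{Pa})^{2}}\right) = 20\log_{10}\!\left(3.2\times 10^{10}\right) \approx 210\ \mathrm{dB\ re}\ 1\,\mu\mathrm{Pa}^{2}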

  10. Ultrahromatizm as a Sound Meditation

    Directory of Open Access Journals (Sweden)

    Zaytseva Marina

    2016-08-01

    Full Text Available The article scientifically substantiates insights into the theory and practice of using microchromatics in modern musical art, and defines the compositional and expressive possibilities of the microtonal system in the works of composers of the XXI century. It justifies the author's interpretation of the concept of "ultrahromatizm" as a principle of musical thinking, connected with the conception of sound space as a space-time continuum. The paper identifies the correlation of the notions "microchromatism" and "ultrahromatizm". If microchromatism is understood, first and foremost, as the technique of dividing the sound into microparticles, ultrahromatizm is interpreted as a principle of musical and artistic consciousness, as the focus of musical consciousness on the formation of a specific model of sound meditation and understanding of the world.

  11. Sound quality assessment of wood for xylophone bars.

    Science.gov (United States)

    Aramaki, Mitsuko; Baillères, Henri; Brancheriau, Loïc; Kronland-Martinet, Richard; Ystad, Sølvi

    2007-04-01

    Xylophone sounds produced by striking wooden bars with a mallet are strongly influenced by the mechanical properties of the wood species chosen by the xylophone maker. In this paper, we address the relationship between the sound quality based on the timbre attribute of impacted wooden bars and the physical parameters characterizing wood species. For this, a methodology is proposed that associates an analysis-synthesis process and a perceptual classification test. Sounds generated by impacting 59 wooden bars of different species but with the same geometry were recorded and classified by a renowned instrument maker. The sounds were further digitally processed and adjusted to the same pitch before being once again classified. The processing is based on a physical model ensuring the main characteristics of the wood are preserved during the sound transformation. Statistical analysis of both classifications showed the influence of the pitch in the xylophone maker judgement and pointed out the importance of two timbre descriptors: the frequency-dependent damping and the spectral bandwidth. These descriptors are linked with physical and anatomical characteristics of wood species, providing new clues in the choice of attractive wood species from a musical point of view.

  12. Behavioral response of manatees to variations in environmental sound levels

    Science.gov (United States)

    Miksis-Olds, Jennifer L.; Wagner, Tyler

    2011-01-01

    Florida manatees (Trichechus manatus latirostris) inhabit coastal regions because they feed on the aquatic vegetation that grows in shallow waters, which are the same areas where human activities are greatest. Noise produced from anthropogenic and natural sources has the potential to affect these animals by eliciting responses ranging from mild behavioral changes to extreme aversion. Sound levels were calculated from recordings made throughout behavioral observation periods. An information theoretic approach was used to investigate the relationship between behavior patterns and sound level. Results indicated that elevated sound levels affect manatee activity and are a function of behavioral state. The proportion of time manatees spent feeding and milling changed in response to sound level. When ambient sound levels were highest, more time was spent in the directed, goal-oriented behavior of feeding, whereas less time was spent engaged in undirected behavior such as milling. This work illustrates how shifts in activity of individual manatees may be useful parameters for identifying impacts of noise on manatees and might inform population level effects.

  13. Urban Noise Recorded by Stationary Monitoring Stations

    Science.gov (United States)

    Bąkowski, Andrzej; Radziszewski, Leszek; Dekýš, Vladimir

    2017-10-01

    The paper presents the analysis results of the equivalent sound level recorded by two road traffic noise monitoring stations. The stations were located in Kielce (an example of a medium-size town in Poland) on the roads in the town in the direction of Łódź and Lublin. The measurements were carried out through stationary stations monitoring the noise and traffic of motor vehicles. The RMS values based on A-weighted sound level were recorded every 1 s in the buffer and the results were registered every 1 min over the period of investigations. The registered data were the basis for calculating the equivalent sound level for three time intervals: from 6:00 to 18:00, from 18:00 to 22:00 and from 22:00 to 6:00. The analysis included the values of the equivalent sound level recorded for different days of the week split into 24-hour periods, nights, days and evenings. The data analysed included recordings from 2013. The agreement of the distribution of the variable under analysis with a normal distribution was evaluated. It was demonstrated that in most cases (for both roads) there was sufficient evidence to reject the null hypothesis at the significance level of 0.05. It was noted that, compared with Łódź Road, in the case of Lublin Road data more cases were recorded for which the null hypothesis could not be rejected. Uncertainties of the equivalent sound level measurements were compared within the periods under analysis. The standard deviation, the coefficient of variation, the positional coefficient of variation and the quartile deviation were proposed for a comparative analysis of the scatter in the obtained data. The investigations indicated that the recorded data varied depending on the traffic routes and time intervals. The differences concerned the values of uncertainties and coefficients of variation of the equivalent sound levels.
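
    The equivalent continuous level for an interval follows from energy-averaging the stored short-term A-weighted levels. A minimal sketch is given below; the level values are made up for illustration, not data from the monitoring stations.

        import numpy as np

        def leq(levels_db):
            """Energy-average a sequence of equal-duration A-weighted levels (dB)."""
            levels_db = np.asarray(levels_db, dtype=float)
            return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

        night_levels = [58.2, 57.9, 61.4, 55.0, 63.7]   # dB(A), hypothetical 1-min values
        print(leq(night_levels))                         # ~60 dB(A)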

  14. Making sound vortices by metasurfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ye, Liping; Qiu, Chunyin, E-mail: cyqiu@whu.edu.cn; Lu, Jiuyang; Tang, Kun; Ke, Manzhu; Peng, Shasha [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education and School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Jia, Han [State Key Laboratory of Acoustics and Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190 (China); Liu, Zhengyou [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education and School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Institute for Advanced Studies, Wuhan University, Wuhan 430072 (China)

    2016-08-15

    Based on the Huygens-Fresnel principle, a metasurface structure is designed to generate a sound vortex beam in an airborne environment. The metasurface is constructed from a thin planar plate perforated with a circular array of deep-subwavelength resonators with the desired phase and amplitude responses. The metasurface approach to making sound vortices is well validated by full-wave simulations and experimental measurements. Potential applications of such artificial spiral beams can be anticipated, as exemplified experimentally by the torque exerted on an absorbing disk.

  15. Antenna for Ultrawideband Channel Sounding

    DEFF Research Database (Denmark)

    Zhekov, Stanislav Stefanov; Tatomirescu, Alexandru; Pedersen, Gert F.

    2016-01-01

    A novel compact antenna for ultrawideband channel sounding is presented. The antenna is composed of a symmetrical biconical antenna modified by adding a cylinder and a ring to each cone. A feeding coaxial cable is employed during the simulations in order to evaluate and reduce its impact on the a…

  16. Visualization of Broadband Sound Sources

    Directory of Open Access Journals (Sweden)

    Sukhanov Dmitry

    2016-01-01

    In this paper, a method for imaging wideband audio sources is proposed, based on 2D microphone-array measurements of the sound field acquired simultaneously at all microphones. The designed microphone array consists of 160 microphones and digitizes the signals at a rate of 7200 Hz. The measured signals are processed with a special algorithm that makes it possible to obtain a flat image of wideband sound sources. It is shown experimentally that the visualization does not depend on the waveform but is determined by the bandwidth. The developed system allows sources to be visualized with a resolution of up to 10 cm.
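
    The abstract does not describe the imaging algorithm itself; a conventional delay-and-sum sketch is shown below only to illustrate the general idea of mapping source power over an image plane from simultaneous microphone recordings. The geometry, function name, and data are assumptions.

      # Hedged sketch of delay-and-sum mapping for a planar microphone array.
      import numpy as np

      def delay_and_sum_map(signals, mic_xy, grid_xy, z_src, fs, c=343.0):
          """signals: (n_mics, n_samples); mic_xy: (n_mics, 2) in metres;
          grid_xy: (n_pix, 2) image-plane points at distance z_src from the array."""
          n_mics, n_samp = signals.shape
          out = np.zeros(len(grid_xy))
          for p, (gx, gy) in enumerate(grid_xy):
              dists = np.sqrt((mic_xy[:, 0] - gx) ** 2 + (mic_xy[:, 1] - gy) ** 2 + z_src ** 2)
              delays = np.round((dists - dists.min()) / c * fs).astype(int)
              aligned = np.zeros(n_samp)
              for m in range(n_mics):
                  aligned[: n_samp - delays[m]] += signals[m, delays[m]:]
              out[p] = np.mean(aligned ** 2)        # steered output power at this pixel
          return out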

  17. The Multisensory Sound Lab: Sounds You Can See and Feel.

    Science.gov (United States)

    Lederman, Norman; Hendricks, Paula

    1994-01-01

    A multisensory sound lab has been developed at the Model Secondary School for the Deaf (District of Columbia). A special floor allows vibrations to be felt, and a spectrum analyzer displays frequencies and harmonics visually. The lab is used for science education, auditory training, speech therapy, music and dance instruction, and relaxation…

  18. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

    The question of whether there is a natural connection between sound and meaning, or whether they are related only by convention, has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming more and more clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects such as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441

  19. Offshore dredger sounds: Source levels, sound maps, and risk assessment

    NARCIS (Netherlands)

    Jong, C.A.F. de; Ainslie, M.A.; Heinis, F.; Janmaat, J.

    2016-01-01

    The underwater sound produced during construction of the Port of Rotterdam harbor extension (Maasvlakte 2) was measured, with emphasis on the contribution of the trailing suction hopper dredgers during their various activities: dredging, transport, and discharge of sediment. Measured source levels

  20. The Voice of the Heart: Vowel-Like Sound in Pulmonary Artery Hypertension

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    2018-04-01

    Increased blood pressure in the pulmonary artery is referred to as pulmonary hypertension and is often linked to loud pulmonic valve closures. For the purpose of this paper, it was hypothesized that pulmonary circulation vibrations create sounds similar to those created by the vocal cords during speech, and that subjects with pulmonary artery hypertension (PAH) could have unique sound signatures across four auscultatory sites. Using a digital stethoscope, heart sounds were recorded at the cardiac apex, 2nd left intercostal space (2LICS), 2nd right intercostal space (2RICS), and 4th left intercostal space (4LICS) while subjects underwent simultaneous cardiac catheterization. From the collected heart sounds, the relative power of the frequency band, the energy of the sinusoid formants, and the entropy were extracted. PAH subjects were differentiated by applying linear discriminant analysis with leave-one-out cross-validation. The entropy of the first sinusoid formant decreased significantly in subjects with a mean pulmonary artery pressure (mPAP) ≥ 25 mmHg versus subjects with an mPAP < 25 mmHg, with a sensitivity of 84% and specificity of 88.57%, within a 10-s optimized window length for heart sounds recorded at the 2LICS. The reduction in first-sinusoid-formant entropy of heart sounds in PAH subjects suggests the existence of a vowel-like pattern. Pattern analysis revealed a unique sound signature, which could be used in non-invasive screening tools.
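
    A minimal sketch of the classification step described above (linear discriminant analysis with leave-one-out cross-validation) is given below. It is not the study's code: the feature matrix, labels, and sizes are placeholders, and feature extraction is assumed to have been done separately.

      # Illustrative LDA + leave-one-out cross-validation on heart-sound features.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      X = np.random.randn(40, 3)               # placeholder: band power, formant energy, entropy
      y = np.random.randint(0, 2, size=40)     # placeholder: 1 if mPAP >= 25 mmHg else 0

      pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
      tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
      tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
      print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))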

  1. Sound Exposure of Symphony Orchestra Musicians

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Pedersen, Ellen Raben; Juhl, Peter Møller

    2011-01-01

    Background: Assessment of sound exposure by noise dosimetry can be challenging, especially when measuring the exposure of classical orchestra musicians, where sound originates from many different instruments. A new measurement method of bilateral sound exposure of classical musicians was developed and used to characterize sound exposure of the left and right ear simultaneously in two different symphony orchestras. Objectives: To measure binaural sound exposure of professional classical musicians and to identify possible exposure risk factors of specific musicians. Methods: Sound exposure was measured … dBA, and musicians' left ears were exposed 4.6 dB more than the right ear. Percussionists were exposed to high sound peaks >115 dBC, but less continuous sound exposure was observed in this group. Musicians were exposed up to an LAeq8h of 92 dB, and a majority of musicians were exposed to sound levels exceeding …

  2. Acoustic analysis of swallowing sounds: a new technique for assessing dysphagia.

    Science.gov (United States)

    Santamato, Andrea; Panza, Francesco; Solfrizzi, Vincenzo; Russo, Anna; Frisardi, Vincenza; Megna, Marisa; Ranieri, Maurizio; Fiore, Pietro

    2009-07-01

    To perform acoustic analysis of swallowing sounds, using a microphone and a notebook computer system, in healthy subjects and patients with dysphagia affected by neurological diseases, testing the positive/negative predictive value of a pathological pattern of swallowing sounds for penetration/aspiration. Diagnostic test study, prospective, not blinded, with the penetration/aspiration evaluated by fibreoptic endoscopy of swallowing as criterion standard. Data from a previously recorded database of normal swallowing sounds for 60 healthy subjects according to gender, age, and bolus consistency was compared with those of 15 patients with dysphagia from a university hospital referral centre who were affected by various neurological diseases. Mean duration of the swallowing sounds and post-swallowing apnoea were recorded. Penetration/aspiration was verified by fibreoptic endoscopy of swallowing in all patients with dysphagia. The mean duration of swallowing sounds for a liquid bolus of 10 ml water was significantly different between patients with dysphagia and healthy patients. We also described patterns of swallowing sounds and tested the negative/positive predictive values of post-swallowing apnoea for penetration/aspiration verified by fibreoptic endoscopy of swallowing (sensitivity 0.67 (95% confidence interval 0.24-0.94); specificity 1.00 (95% confidence interval 0.56-1.00)). The proposed technique for recording and measuring swallowing sounds could be incorporated into the bedside evaluation, but it should not replace the use of more diagnostic and valuable measures.
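
    The sensitivity and specificity above are reported with 95% confidence intervals. A small sketch of computing such an interval for a proportion is given below, assuming an exact binomial (Clopper-Pearson) interval and hypothetical counts; the paper does not state its CI method or the raw counts.

      # Exact binomial (Clopper-Pearson) confidence interval for a proportion.
      from scipy.stats import beta

      def clopper_pearson(k, n, alpha=0.05):
          lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
          hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
          return lo, hi

      # e.g. a sensitivity of 0.67 could correspond (hypothetically) to 4 of 6 detected:
      print(clopper_pearson(4, 6))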

  3. The Flooding of Long Island Sound

    Science.gov (United States)

    Thomas, E.; Varekamp, J. C.; Lewis, R. S.

    2007-12-01

    Between the Last Glacial Maximum (22-19 ka) and the Holocene (10 ka), regions marginal to the Laurentide Ice Sheets saw complex environmental changes from moraines to lake basins to dry land to estuaries and marginal ocean basins, as a result of the interplay between the topography of moraines formed at the maximum extent and during stages of the retreat of the ice sheet, regional glacial rebound, and global eustatic sea level rise. In New England, the history of deglaciation and relative sea level rise has been studied extensively, and the sequence of events has been documented in detail. The Laurentide Ice Sheet reached its maximum extent (Long Island) at 21.3-20.4 ka according to radiocarbon dating (calibrated ages), 19.0-18.4 ka according to radionuclide dating. Periglacial Lake Connecticut formed behind the moraines in what is now the Long Island Sound Basin. The lake drained through the moraine at its eastern end. Seismic records show that a fluvial system was cut into the exposed lake beds, and a wave-cut unconformity was produced during the marine flooding, which has been inferred to have occurred at about 15.5 ka (Melt Water Pulse 1A) through correlation with dated events on land. Vibracores from eastern Long Island Sound penetrate the unconformity and contain red, varved lake beds overlain by marine grey sands and silts with a dense concentration of oysters in life position above the erosional contact. The marine sediments consist of intertidal to shallow subtidal deposits with oysters, shallow-water foraminifera and littoral diatoms, overlain by somewhat laminated sandy silts, in turn overlain by coarser-grained, sandy to silty sediments with reworked foraminifera and bivalve fragments. The latter may have been deposited in a sand-wave environment as present today at the core locations. We provide direct age control of the transgression with 30 radiocarbon dates on oysters, and compared the ages with those obtained on macrophytes and bulk organic carbon in …

  4. The Contribution of Sound Intensity in Vocal Emotion Perception: Behavioral and Electrophysiological Evidence

    Science.gov (United States)

    Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang

    2012-01-01

    Although its role is frequently stressed in acoustic profile for vocal emotion, sound intensity is frequently regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgment in Experiment 2. It was found that sound intensity modification had significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in N2/P3 complex and smaller theta band synchronization. These findings suggest that though it cannot categorically affect emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and its unique role needs to be specified in vocal emotion studies. PMID:22291928

  5. Development of an Amplifier for Electronic Stethoscope System and Heart Sound Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. J.; Kang, D. K. [Chongju University, Chongju (Korea)

    2001-05-01

    The conventional stethoscope cannot store its stethoscopic sounds. A doctor therefore diagnoses a patient from the instantaneous stethoscopic sounds heard at that moment and cannot recall the state of the patient's stethoscopic sounds at the next examination. This prevents accurate and objective diagnosis. If an electronic stethoscope that can store the stethoscopic sound is developed, auscultation will be greatly improved. This study describes an amplifier for an electronic stethoscope system that can extract the heart sounds of a fetus as well as an adult and allows us to hear and record the sounds. Using the developed stethoscopic amplifier, clean heart sounds of a fetus and an adult can be heard in noisy environments, such as a consultation room of a university hospital or a university laboratory. Surprisingly, the heart sound of a 22-week fetus was heard through the developed electronic stethoscope. Pitch detection experiments using the detected heart sounds showed that the signal exhibits distinct periodicity. It can be expected that the developed electronic stethoscope can substitute for conventional stethoscopes and that, if a proper analysis method for the stethoscopic signal is developed, a good electronic stethoscope system can be produced. (author). 17 refs., 6 figs.
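
    The pitch detection mentioned above can be illustrated by a generic autocorrelation sketch over a heart-sound segment; this is not necessarily the authors' method, and the function name, search range, and expected rates are assumptions. The input segment should span several heartbeats.

      # Dominant periodicity (Hz) of a heart-sound segment via autocorrelation.
      import numpy as np

      def fundamental_frequency(x, fs, fmin=0.5, fmax=4.0):
          x = x - np.mean(x)
          ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation, lags >= 0
          lag_min, lag_max = int(fs / fmax), int(fs / fmin)
          lag = lag_min + np.argmax(ac[lag_min:lag_max])
          return fs / lag

      # For a 22-week fetus one would expect roughly 2-2.5 Hz (about 120-150 beats/min).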

  6. The contribution of sound intensity in vocal emotion perception: behavioral and electrophysiological evidence.

    Directory of Open Access Journals (Sweden)

    Xuhai Chen

    Although its role is frequently stressed in the acoustic profile of vocal emotion, sound intensity is often regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgments in Experiment 2. It was found that sound intensity modification had a significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and smaller theta band synchronization. These findings suggest that though it cannot categorically affect the emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and its unique role needs to be specified in vocal emotion studies.

  7. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time-domain solution of the KZK equation.

  8. Sound intensity and its measurement

    DEFF Research Database (Denmark)

    Jacobsen, Finn

    1997-01-01

    The paper summarises the basic theory of sound intensity and its measurement and gives an overview of the state of the art with particular emphasis on recent developments in the field. Eighty references are given, most of which to literature published in the past two years. The paper describes...

  9. Sound Stories for General Music

    Science.gov (United States)

    Cardany, Audrey Berger

    2013-01-01

    Language and music literacy share a similar process of understanding that progresses from sensory experience to symbolic representation. The author identifies Bruner’s modes of understanding as they relate to using narrative in the music classroom to enhance music reading at iconic and symbolic levels. Two sound stories are included for…

  10. Sound / Märt Milter

    Index Scriptorium Estoniae

    Milter, Märt

    1999-01-01

    Plaatide "Hip Hop Forever. Mixed by Kenny Dope", "Permaculture", Ronnye & Clyde "In Glorious Black and Blue", "E-Z Rollers presents Drumfunk Hooliganz. Liquid Cooled Tunez From The Original Superfly Drum & Bass Generation", Iron Savior "Unification", Peter Thomas Sound Orchestra "Futuremuzik", "Sushi 4004.The Return Of Spectacular Japanese Clubpop"

  11. Intercepting a sound without vision

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early-blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a small bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  12. Towards an open sound card

    DEFF Research Database (Denmark)

    Dimitrov, Smilen; Serafin, Stefania

    2011-01-01

    The architecture of a sound card can, in simple terms, be described as an electronic board containing a digital bus interface hardware, and analog-to-digital (A/D) and digital-to-analog (D/A) converters; then, a soundcard driver software on a personal computer's (PC) operating system (OS) can con...

  13. Sound Probabilistic #SAT with Projection

    Directory of Open Access Journals (Sweden)

    Vladimir Klebanov

    2016-10-01

    We present an improved method for a sound probabilistic estimation of the model count of a Boolean formula under projection. The problem solved can be used to encode a variety of quantitative program analyses, such as those concerning security or resource consumption. We implement the technique and discuss its application to quantifying information flow in programs.

  14. 76 FR 13025 - Digital Performance Right in Sound Recordings and Ephemeral Recordings

    Science.gov (United States)

    2011-03-09

    ... varying royalty rates on a firm's financial viability. In other words, it is an accounting model that... financial health of any particular service when it proposed the rates.''). Dr. Fratrik's notion of a..., Microeconomics: Theory and Applications, (W.W. Norton & Company, 2004) at 296, 407; see also 7/ 28/10 Tr. at 54:2...

  15. 75 FR 56873 - Digital Performance Right in Sound Recordings and Ephemeral Recordings

    Science.gov (United States)

    2010-09-17

    ... 14 (McCrady). The proposed minimum fee is fully recoupable against royalty fees owed and this feature... payments are pursuant to the royalty minimum fee that is the subject of this remand proceeding, 5/18/10 Tr... LIBRARY OF CONGRESS Copyright Royalty Board 37 CFR Part 380 [Docket No. 2005-1 CRB DTRA] Digital...

  16. 75 FR 6097 - Digital Performance Right in Sound Recordings and Ephemeral Recordings

    Science.gov (United States)

    2010-02-08

    ... amended by revising paragraph (b) to read as follows: Sec. 380.3 Royalty fees for the public performance... receive a credit in the amount of the minimum fee against any royalty fees payable in the same calendar... against any additional royalty fees payable in the same calendar year. Dated: February 2, 2010. James...

  17. Urban sound energy reduction by means of sound barriers

    Directory of Open Access Journals (Sweden)

    Iordache Vlad

    2018-01-01

    In the urban environment, various heating, ventilation and air conditioning appliances designed to maintain indoor comfort become vectors of urban acoustic pollution due to the sound energy they produce. Acoustic barriers are the recommended method for sound energy reduction in the urban environment. The current sizing method for these acoustic barriers is difficult to apply and is not practical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a new simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses for acoustic barrier sizing are built that can be used for different 3D locations of the source and reception points, for several frequencies and several acoustic barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.

  18. Urban sound energy reduction by means of sound barriers

    Science.gov (United States)

    Iordache, Vlad; Ionita, Mihai Vlad

    2018-02-01

    In the urban environment, various heating, ventilation and air conditioning appliances designed to maintain indoor comfort become vectors of urban acoustic pollution due to the sound energy they produce. Acoustic barriers are the recommended method for sound energy reduction in the urban environment. The current sizing method for these acoustic barriers is difficult to apply and is not practical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a new simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses for acoustic barrier sizing are built that can be used for different 3D locations of the source and reception points, for several frequencies and several acoustic barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.

  19. Juvenile Pacific Salmon in Puget Sound

    National Research Council Canada - National Science Library

    Fresh, Kurt L

    2006-01-01

    Puget sound salmon (genus Oncorhynchus) spawn in freshwater and feed, grow and mature in marine waters, During their transition from freshwater to saltwater, juvenile salmon occupy nearshore ecosystems in Puget Sound...

  20. Dredged Material Management in Long Island Sound

    Science.gov (United States)

    Information on Western and Central Long Island Sound Dredged Material Disposal Sites including the Dredged Material Management Plan and Regional Dredging Team. Information regarding the Eastern Long Island Sound Selected Site including public meetings.

  1. Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.

    Science.gov (United States)

    Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi

    2017-07-01

    We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data. Copyright © 2017 Elsevier B.V. All rights reserved.
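
    The clustering step that the paper extends is the self-organizing map; a bare-bones, standard SOM over sound-event feature vectors is sketched below purely for orientation. The authors' kernelized, sequence-based extension is not reproduced, and the feature file, grid size, and schedules are assumptions.

      # Minimal self-organizing map over (n_events, n_features) sound-event features.
      import numpy as np

      def train_som(X, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
          rng = np.random.default_rng(seed)
          W = rng.normal(size=(rows * cols, X.shape[1]))                 # codebook vectors
          grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
          n_steps, step = epochs * len(X), 0
          for _ in range(epochs):
              for x in rng.permutation(X):
                  frac = step / n_steps
                  lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
                  bmu = np.argmin(np.sum((W - x) ** 2, axis=1))          # best matching unit
                  d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
                  h = np.exp(-d2 / (2 * sigma ** 2))                     # neighborhood kernel
                  W += lr * h[:, None] * (x - W)
                  step += 1
          return W

      # X = np.load("sleep_sound_features.npy")   # hypothetical feature array
      # codebook = train_som(X)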

  2. Phonemic versus allophonic status modulates early brain responses to language sounds: an MEG/ERF study

    DEFF Research Database (Denmark)

    Nielsen, Andreas Højlund; Gebauer, Line; Mcgregor, William

    … allophonic sound contrasts. So far this has only been attested between languages. In the present study we wished to investigate this effect within the same language: does the same sound contrast that is phonemic in one environment, but allophonic in another, elicit different MMNm responses in native … 'that')? This allowed us to manipulate the phonemic/allophonic status of exactly the same sound contrast (/t/-/d/) by presenting it in different immediate phonetic contexts (preceding a vowel (CV) versus following a vowel (VC)), in order to investigate the auditory event-related fields of native Danish listeners to a sound contrast that is both phonemic and allophonic within Danish. Methods: Relevant syllables were recorded by a male native Danish speaker. The stimuli were then created by cross-splicing the sounds so that the same vowel [æ] was used for all syllables, and the same [t] and [d] were used …

  3. Phenological Records

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Phenology is the scientific study of periodic biological phenomena, such as flowering, breeding, and migration, in relation to climatic conditions. The few records...

  4. Analysis of adventitious lung sounds originating from pulmonary tuberculosis.

    Science.gov (United States)

    Becker, K W; Scheffer, C; Blanckenberg, M M; Diacon, A H

    2013-01-01

    Tuberculosis is a common and potentially deadly infectious disease, usually affecting the respiratory system and causing the sound properties of symptomatic infected lungs to differ from non-infected lungs. Auscultation is often ruled out as a reliable diagnostic technique for TB due to the random distribution of the infection and the varying severity of damage to the lungs. However, advancements in signal processing techniques for respiratory sounds can improve the potential of auscultation far beyond the capabilities of the conventional mechanical stethoscope. Though computer-based signal analysis of respiratory sounds has produced a significant body of research, there have not been any recent investigations into the computer-aided analysis of lung sounds associated with pulmonary Tuberculosis (TB), despite the severity of the disease in many countries. In this paper, respiratory sounds were recorded from 14 locations around the posterior and anterior chest walls of healthy volunteers and patients infected with pulmonary TB. The most significant signal features in both the time and frequency domains associated with the presence of TB, were identified by using the statistical overlap factor (SOF). These features were then employed to train a neural network to automatically classify the auscultation recordings into their respective healthy or TB-origin categories. The neural network yielded a diagnostic accuracy of 73%, but it is believed that automated filtering of the noise in the clinics, more training samples and perhaps other signal processing methods can improve the results of future studies. This work demonstrates the potential of computer-aided auscultation as an aid for the diagnosis and treatment of TB.
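
    The classification stage described above (features ranked by the statistical overlap factor, then a neural network) can be illustrated with a generic sketch; it stands in for, but is not, the paper's classifier, and the feature matrix, labels, and network size are placeholders.

      # Illustrative feed-forward classifier over pre-extracted auscultation features.
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      X = np.random.randn(120, 10)             # placeholder: selected time/frequency features
      y = np.random.randint(0, 2, size=120)    # 1 = TB-origin recording, 0 = healthy

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())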

  5. A Lexical Analysis of Environmental Sound Categories

    Science.gov (United States)

    Houix, Olivier; Lemaitre, Guillaume; Misdariis, Nicolas; Susini, Patrick; Urdapilleta, Isabel

    2012-01-01

    In this article we report on listener categorization of meaningful environmental sounds. A starting point for this study was the phenomenological taxonomy proposed by Gaver (1993b). In the first experimental study, 15 participants classified 60 environmental sounds and indicated the properties shared by the sounds in each class. In a second…

  6. Film sound in preservation and presentation

    NARCIS (Netherlands)

    Campanini, S.

    2014-01-01

    What is the nature of film sound? How does it change through time? How can film sound be conceptually defined? To address these issues, this work assumes the perspective of film preservation and presentation practices, describing the preservation of early sound systems, as well as the presentation

  7. Measuring the 'complexity'of sound

    Indian Academy of Sciences (India)

    Sounds in the natural environment form an important class of biologically relevant nonstationary signals. We propose a dynamic spectral measure to characterize the spectral dynamics of such non-stationary sound signals and classify them based on rate of change of spectral dynamics. We categorize sounds with slowly ...

  8. Sounds in one-dimensional superfluid helium

    International Nuclear Information System (INIS)

    Um, C.I.; Kahng, W.H.; Whang, E.H.; Hong, S.K.; Oh, H.G.; George, T.F.

    1989-01-01

    The temperature variations of first-, second-, and third-sound velocity and attenuation coefficients in one-dimensional superfluid helium are evaluated explicitly for very low temperatures and frequencies (ω_sτ_2 …), and the ratio of second sound to first sound becomes unity as the temperature decreases to absolute zero.

  9. Sound-Symbolism Boosts Novel Word Learning

    Science.gov (United States)

    Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…

  10. The Early Years: Becoming Attuned to Sound

    Science.gov (United States)

    Ashbrook, Peggy

    2014-01-01

    Exploration of making and changing sounds is part of the first-grade performance expectation 1-PS4-1, "Plan and conduct investigations to provide evidence that vibrating materials can make sound and that sound can make materials vibrate" (NGSS Lead States 2013, p. 10; see Internet Resource). Early learning experiences build toward…

  11. Bubbles That Change the Speed of Sound

    Science.gov (United States)

    Planinsic, Gorazd; Etkina, Eugenia

    2012-01-01

    The influence of bubbles on sound has long attracted the attention of physicists. In his 1920 book Sir William Bragg described sound absorption caused by foam in a glass of beer tapped by a spoon. Frank S. Crawford described and analyzed the change in the pitch of sound in a similar experiment and named the phenomenon the "hot chocolate effect."…

  12. Sounding rockets explore the ionosphere

    International Nuclear Information System (INIS)

    Mendillo, M.

    1990-01-01

    It is suggested that small, expendable, solid-fuel rockets used to explore ionospheric plasma can offer insight into all the processes and complexities common to space plasma. NASA's sounding rocket program for ionospheric research focuses on the flight of instruments to measure parameters governing the natural state of the ionosphere. Parameters include input functions, such as photons, particles, and composition of the neutral atmosphere; resultant structures, such as electron and ion densities, temperatures and drifts; and emerging signals such as photons and electric and magnetic fields. Systematic study of the aurora is also conducted by these rockets, allowing sampling at relatively high spatial and temporal rates as well as investigation of parameters, such as energetic particle fluxes, not accessible to ground based systems. Recent active experiments in the ionosphere are discussed, and future sounding rocket missions are cited

  13. Sparse representation of Gravitational Sound

    Science.gov (United States)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

    Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local sparsity measure is shown to render meaningful information about the variation of a signal along time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to divulge Gravitational Sound in the form of a ring tone.
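
    A toy matching-pursuit sketch over a redundant cosine dictionary illustrates the general idea of approximating a signal with few elementary components; the paper's actual dictionaries, signals, and stopping criteria are not specified here and everything below is an assumption.

      # Greedy sparse approximation of a signal over an overcomplete dictionary.
      import numpy as np

      def matching_pursuit(x, D, n_atoms=50):
          """x: signal; D: dictionary with unit-norm columns; returns coefficients and residual."""
          r, coeffs = x.copy(), np.zeros(D.shape[1])
          for _ in range(n_atoms):
              c = D.T @ r                       # correlations with all atoms
              k = np.argmax(np.abs(c))
              coeffs[k] += c[k]
              r -= c[k] * D[:, k]               # update the residual
          return coeffs, r

      # Assumed setup: 1024-sample clip, 4x-overcomplete cosine dictionary.
      N, M = 1024, 4096
      n = np.arange(N)[:, None]
      D = np.cos(np.pi * (n + 0.5) * np.arange(M)[None, :] / M)
      D /= np.linalg.norm(D, axis=0)
      x = 2.0 * D[:, 100] + 0.5 * D[:, 900]     # synthetic two-component signal
      coeffs, residual = matching_pursuit(x, D, n_atoms=10)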

  14. Sound Beams with Shockwave Pulses

    Science.gov (United States)

    Enflo, B. O.

    2000-11-01

    The beam equation for a sound beam in a diffusive medium, called the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, has a class of solutions, which are power series in the transverse variable with the terms given by a solution of a generalized Burgers’ equation. A free parameter in this generalized Burgers’ equation can be chosen so that the equation describes an N-wave which does not decay. If the beam source has the form of a spherical cap, then a beam with a preserved shock can be prepared. This is done by satisfying an inequality containing the spherical radius, the N-wave pulse duration, the N-wave pulse amplitude, and the sound velocity in the fluid.
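
    For reference, the KZK equation is not written out in the abstract; one common textbook form, stated here as background rather than as the author's exact formulation, is

      \frac{\partial^2 p}{\partial z\,\partial\tau} = \frac{c_0}{2}\nabla_\perp^2 p + \frac{\delta}{2c_0^3}\frac{\partial^3 p}{\partial\tau^3} + \frac{\beta}{2\rho_0 c_0^3}\frac{\partial^2 p^2}{\partial\tau^2}

    where p is the acoustic pressure, z the coordinate along the beam axis, τ = t − z/c₀ the retarded time, c₀ the small-signal sound speed, δ the diffusivity of sound, β the nonlinearity coefficient, and ρ₀ the ambient density.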

  15. The Sound of Being There

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Nilsson, Niels Christian

    2014-01-01

    The concept “presence”—often defined as the sensation of “being there”—has received increasing attention in the last decades. Out of the many domains of application, presence is particularly relevant in relation to Immersive Virtual Reality (IVR). Despite the growing attention to the concept of presence, … to illustrating how sound production and perception relate to the four constituents of the framework: immersion, illusions of place, illusions of plausibility, and virtual body ownership.

  16. Propagation of sound in oceans

    Digital Repository Service at National Institute of Oceanography (India)

    Advilkar, P.J.

    … Isaac Newton wrote his Mathematical Principles of Natural Philosophy, which included the first mathematical treatment of sound. The modern study of underwater acoustics can be considered to have started in the early 19th century. In 1826, on Lake Geneva, the speed …

  17. Operator performance and annunciation sounds

    International Nuclear Information System (INIS)

    Patterson, B.K.; Bradley, M.T.; Artiss, W.G.

    1997-01-01

    This paper discusses the audible component of annunciation found in typical operating power stations. The purpose of the audible alarm is stated, and the psychological elements involved in the human processing of alarm sounds are explored. Psychological problems with audible annunciation are noted. Simple and more complex improvements to existing systems are described. A modern alarm system is suggested for retrofits or new plant designs. (author)

  18. Numerical value biases sound localization

    OpenAIRE

    Golob, Edward J.; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R.

    2017-01-01

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perce...

  19. Operator performance and annunciation sounds

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, B K; Bradley, M T; Artiss, W G [Human Factors Practical, Dipper Harbour, NB (Canada)

    1998-12-31

    This paper discusses the audible component of annunciation found in typical operating power stations. The purpose of the audible alarm is stated, and the psychological elements involved in the human processing of alarm sounds are explored. Psychological problems with audible annunciation are noted. Simple and more complex improvements to existing systems are described. A modern alarm system is suggested for retrofits or new plant designs. (author) 3 refs.

  20. Acoustic analysis of trill sounds.

    Science.gov (United States)

    Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri

    2012-04-01

    In this paper, the acoustic-phonetic characteristics of steady apical trills--trill sounds produced by the periodic vibration of the apex of the tongue--are studied. Signal processing methods, namely, zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect the effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are glottal epochs, strength of impulses at the glottal epochs, and instantaneous fundamental frequency of the glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of vocal tract system during the production of trill sounds. Qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help spotting trills in continuous speech are discussed.
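
    The excitation analysis mentioned above relies on zero-frequency filtering. A hedged sketch of the general recipe from the literature (two passes through a resonator at 0 Hz followed by repeated local-mean trend removal, with epochs at negative-to-positive zero crossings) is given below; the window length and number of trend-removal passes are assumptions, not the authors' exact settings.

      # Zero-frequency filtering for glottal epoch extraction (generic recipe).
      import numpy as np
      from scipy.signal import lfilter

      def zero_frequency_filter(s, fs, avg_pitch_period_s=0.005):
          x = np.diff(s, prepend=s[0])                      # remove DC offset
          y = lfilter([1.0], [1.0, -2.0, 1.0], x)           # zero-frequency resonator, pass 1
          y = lfilter([1.0], [1.0, -2.0, 1.0], y)           # pass 2
          win = max(3, int(avg_pitch_period_s * fs)) | 1    # odd trend-removal window (assumed)
          for _ in range(3):                                # repeated local-mean subtraction
              y = y - np.convolve(y, np.ones(win) / win, mode="same")
          epochs = np.where((y[:-1] < 0) & (y[1:] >= 0))[0] # glottal epoch locations (samples)
          return y, epochs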

  1. Replacing the Orchestra? - The Discernibility of Sample Library and Live Orchestra Sounds.

    Directory of Open Access Journals (Sweden)

    Reinhard Kopiez

    Recently, musical sounds from pre-recorded orchestra sample libraries (OSLs) have become indispensable in music production for the stage or popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. It could be shown that the entire sample of listeners (N = 602) on average identified the correct sound source 72.5% of the time. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing, simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resembled the majority of music consumers, only achieved 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons.

  2. Differences in directional sound source behavior and perception between assorted computer room models

    DEFF Research Database (Denmark)

    Vigeant, Michelle C.; Wang, Lily M.; Rindel, Jens Holger

    2004-01-01

    Source directivity is an important input variable when using room acoustic computer modeling programs to generate auralizations. Previous research has shown that using a multichannel anechoic recording can produce a more natural sounding auralization, particularly as the number of channels … considering reverberation time. However, for the three other parameters evaluated (sound pressure level, clarity index and lateral fraction), the changing diffusivity of the room does not diminish the importance of the directivity. The study therefore shows the importance of considering source directivity …

  3. Computerized Respiratory Sounds: Novel Outcomes for Pulmonary Rehabilitation in COPD.

    Science.gov (United States)

    Jácome, Cristina; Marques, Alda

    2017-02-01

    Computerized respiratory sounds are a simple and noninvasive measure to assess lung function. Nevertheless, their potential to detect changes after pulmonary rehabilitation (PR) is unknown and needs clarification if respiratory acoustics are to be used in clinical practice. Thus, this study investigated the short- and mid-term effects of PR on computerized respiratory sounds in subjects with COPD. Forty-one subjects with COPD completed a 12-week PR program and a 3-month follow-up. Secondary outcome measures included dyspnea, self-reported sputum, FEV 1 , exercise tolerance, self-reported physical activity, health-related quality of life, and peripheral muscle strength. Computerized respiratory sounds, the primary outcomes, were recorded at right/left posterior chest using 2 stethoscopes. Air flow was recorded with a pneumotachograph. Normal respiratory sounds, crackles, and wheezes were analyzed with validated algorithms. There was a significant effect over time in all secondary outcomes, with the exception of FEV 1 and of the impact domain of the St George Respiratory Questionnaire. Inspiratory and expiratory median frequencies of normal respiratory sounds in the 100-300 Hz band were significantly lower immediately (-2.3 Hz [95% CI -4 to -0.7] and -1.9 Hz [95% CI -3.3 to -0.5]) and at 3 months (-2.1 Hz [95% CI -3.6 to -0.7] and -2 Hz [95% CI -3.6 to -0.5]) post-PR. The mean number of expiratory crackles (-0.8, 95% CI -1.3 to -0.3) and inspiratory wheeze occupation rate (median 5.9 vs 0) were significantly lower immediately post-PR. Computerized respiratory sounds were sensitive to short- and mid-term effects of PR in subjects with COPD. These findings are encouraging for the clinical use of respiratory acoustics. Future research is needed to strengthen these findings and explore the potential of computerized respiratory sounds to assess the effectiveness of other clinical interventions in COPD. Copyright © 2017 by Daedalus Enterprises.

  4. L-type calcium channels refine the neural population code of sound level

    Science.gov (United States)

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  5. Analysis and Synthesis of Musical Instrument Sounds

    Science.gov (United States)

    Beauchamp, James W.

    For synthesizing a wide variety of musical sounds, it is important to understand which acoustic properties of musical instrument sounds are related to specific perceptual features. Some properties are obvious: Amplitude and fundamental frequency easily control loudness and pitch. Other perceptual features are related to sound spectra and how they vary with time. For example, tonal "brightness" is strongly connected to the centroid or tilt of a spectrum. "Attack impact" (sometimes called "bite" or "attack sharpness") is strongly connected to spectral features during the first 20-100 ms of sound, as well as the rise time of the sound. Tonal "warmth" is connected to spectral features such as "incoherence" or "inharmonicity."
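
    The link between "brightness" and the spectral centroid mentioned above can be made concrete with a small sketch; frame length, windowing, and the smoothing constant are arbitrary illustrative choices.

      # Spectral centroid of one short frame; higher values ~ brighter tone quality.
      import numpy as np

      def spectral_centroid(frame, fs):
          spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
          freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
          return np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)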

  6. Underwater Sound Propagation from Marine Pile Driving.

    Science.gov (United States)

    Reyff, James A

    2016-01-01

    Pile driving occurs in a variety of nearshore environments that typically have very shallow-water depths. The propagation of pile-driving sound in water is complex, where sound is directly radiated from the pile as well as through the ground substrate. Piles driven in the ground near water bodies can produce considerable underwater sound energy. This paper presents examples of sound propagation through shallow-water environments. Some of these examples illustrate the substantial variation in sound amplitude over time that can be critical to understand when computing an acoustic-based safety zone for aquatic species.

  7. Atypical pattern of discriminating sound features in adults with Asperger syndrome as reflected by the mismatch negativity.

    Science.gov (United States)

    Kujala, T; Aho, E; Lepistö, T; Jansson-Verkasalo, E; Nieminen-von Wendt, T; von Wendt, L; Näätänen, R

    2007-04-01

    Asperger syndrome, which belongs to the autistic spectrum of disorders, is characterized by deficits of social interaction and abnormal perception, like hypo- or hypersensitivity in reacting to sounds and discriminating certain sound features. We determined auditory feature discrimination in adults with Asperger syndrome with the mismatch negativity (MMN), a neural response which is an index of cortical change detection. We recorded MMN for five different sound features (duration, frequency, intensity, location, and gap). Our results suggest hypersensitive auditory change detection in Asperger syndrome, as reflected in the enhanced MMN for deviant sounds with a gap or shorter duration, and speeded MMN elicitation for frequency changes.

  8. Sound quality indicators for urban places in Paris cross-validated by Milan data.

    Science.gov (United States)

    Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre

    2015-10-01

    A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross validation of the quality models extracted from Paris data was carried out by conducting the same survey in Milan. The proposed sound quality general model is correlated with the real perceived sound quality (72%). Another model without visual amenity and familiarity is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed by Kohonen's Artificial Neural Network algorithm, and seven specific class models were developed. These specific models attribute more importance on source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments assessed by Italian people.

  9. Effect of the radiofrequency volumetric tissue reduction of inferior turbinate on expiratory nasal sound frequency.

    Science.gov (United States)

    Seren, Erdal

    2009-01-01

    We sought to evaluate the short-term efficacy of radiofrequency volumetric tissue reduction (RFVTR) in treatment of inferior turbinate hypertrophy (TH) as measured by expiratory nasal sound spectra. In our study, we aimed to investigate the Odiosoft-rhino (OR) as a new diagnostic method to evaluate the nasal airflow of patients before and after RFVTR. In this study, we have analyzed and recorded the expiratory nasal sound in patients with inferior TH before and after RFVTR. This analysis includes the time expanded waveform, the spectral analysis with time averaged fast Fourier transform (FFT), and the waveform analysis of nasal sound. We found an increase in sound intensity at high frequency (Hf) in the sound analyses of the patients before RFVTR and a decrease in sound intensity at Hf was found in patients after RFVTR. This study indicates that RFVTR is an effective procedure to improve nasal airflow in the patients with nasal obstruction with inferior TH. We found significant decreases in the sound intensity level at Hf in the sound spectra after RFVTR. The OR results from the 2000- to 4000-Hz frequency (Hf) interval may be more useful in assessing patients with nasal obstruction than other frequency intervals. OR may be used as a noninvasive diagnostic tool to evaluate the nasal airflow.

  10. Towards a more sonically inclusive museum practice: a new definition of the ‘sound object’

    Directory of Open Access Journals (Sweden)

    John Kannenberg

    2017-11-01

    As museums continue to search for new ways to attract visitors, recent trends within museum practice have focused on providing audiences with multisensory experiences. Books such as 2014's The Multisensory Museum present preliminary strategies by which museums might help visitors engage with collections using senses beyond the visual. In this article, an overview of the multisensory roots of museum display and an exploration of the shifting definition of 'object' lead to a discussion of Pierre Schaeffer's musical term objet sonore – the 'sound object', which has traditionally stood for recorded sounds on magnetic tape used as source material for electroacoustic musical composition. A problematic term within sound studies, this article proposes a revised definition of 'sound object', shifting it from experimental music into the realm of the author's own experimental curatorial practice of establishing The Museum of Portable Sound, an institution dedicated to the collection and display of sounds as cultural objects. Utilising Brian Kane's critique of Schaeffer, Christoph Cox and Casey O'Callaghan's thoughts on sonic materialism, Dan Novak and Matt Sakakeeny's anthropological approach to sound theory, and art historian Alexander Nagel's thoughts on the origins of art forgery, this article presents a new working definition of the sound object as a museological (rather than a musical) concept.

  11. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

    The method presented in this thesis combines ultrasound techniques with magnetic resonance tomography (MRT). In absorbing media, an ultrasonic wave generates a static force in the direction of sound propagation. At sound intensities of a few W/cm² and sound frequencies in the lower MHz range, this force leads to a tissue displacement in the micrometer range. This tissue shift depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence from Siemens Healthcare AG was modified so that it measures the tissue shift (indirectly), encodes it as grey values, and presents it as a 2D image. By means of the grey values, the course of the sound beam in the tissue can be visualized, and in addition sound obstacles (changes of the sound impedance) can be detected. From the acquired MRT images, spatial changes of the tissue parameters sound absorption and elasticity can be detected. In this thesis, measurements are presented which show the feasibility and future potential of this method, especially for breast cancer diagnostics. [de]

  12. RECORDS REACHING RECORDING DATA TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    G. W. L. Gresik

    2013-07-01

    The goal of RECORDS (Reaching Recording Data Technologies) is the digital capturing of buildings and cultural heritage objects in hard-to-reach areas and the combination of the resulting data. It is achieved by using a modified crane from the film industry, which is able to carry different measuring systems. Low-vibration measurement is to be guaranteed by a gyroscopically controlled device that has been developed for the project. The data were acquired using digital photography, UV-fluorescence photography, infrared reflectography, infrared thermography and shearography. A terrestrial 3D laser scanner and a light stripe topography scanner have also been used. The combination of the recorded data should ensure a complementary analysis of monuments and buildings.

  13. Records Reaching Recording Data Technologies

    Science.gov (United States)

    Gresik, G. W. L.; Siebe, S.; Drewello, R.

    2013-07-01

    The goal of RECORDS (Reaching Recording Data Technologies) is the digital capturing of buildings and cultural heritage objects in hard-to-reach areas and the combination of the resulting data. This is achieved by using a modified crane from the film industry, which is able to carry different measuring systems. Low-vibration measurement is ensured by a gyroscopically controlled device that has been developed for the project. The data were acquired using digital photography, UV-fluorescence photography, infrared reflectography, infrared thermography and shearography. A terrestrial 3D laser scanner and a light-stripe topography scanner have also been used. The combination of the recorded data should enable a complementary analysis of monuments and buildings.

  14. Making magnetic recording commercial: 1920-1955

    Science.gov (United States)

    Clark, Mark H.

    1999-03-01

    Although magnetic recording had been invented in 1898, it was not until the late 1920s that the technology was successfully marketed to the public. Firms in Germany, the United Kingdom, and the United States developed and sold magnetic recorders for specialized markets in broadcasting and telephone systems through the 1930s. The demands of World War II considerably expanded the use of magnetic recording, and with the end of the war, firms in the United States sought to bring magnetic recording to home and professional music recording. Using a combination of captured German technology and American wartime research, American companies such as Ampex, Magnecord, 3M, the Brush Development Company, and others created a vast new industry. By the mid-1950s, magnetic recording was firmly established as a method for recording both sound and data.

  15. Feasibility of an electronic stethoscope system for monitoring neonatal bowel sounds.

    Science.gov (United States)

    Dumas, Jasmine; Hill, Krista M; Adrezin, Ronald S; Alba, Jorge; Curry, Raquel; Campagna, Eric; Fernandes, Cecilia; Lamba, Vineet; Eisenfeld, Leonard

    2013-09-01

    Bowel dysfunction remains a major problem in neonates. Traditional auscultation of bowel sounds as a diagnostic aid in neonatal gastrointestinal complications is limited by skill and inability to document and reassess. Consequently, we built a unique prototype to investigate the feasibility of an electronic monitoring system for continuous assessment of bowel sounds. We attained approval by the Institutional Review Boards for the investigational study to test our system. The system incorporated a prototype stethoscope head with a built-in microphone connected to a digital recorder. Recordings made over extended periods were evaluated for quality. We also considered the acoustic environment of the hospital, where the stethoscope was used. The stethoscope head was attached to the abdomen with a hydrogel patch designed especially for this purpose. We used the system to obtain recordings from eight healthy, full-term babies. A scoring system was used to determine loudness, clarity, and ease of recognition comparing it to the traditional stethoscope. The recording duration was initially two hours and was increased to a maximum of eight hours. Median duration of attachment was three hours (3.75, 2.68). Based on the scoring, the bowel sound recording was perceived to be as loud and clear in sound reproduction as a traditional stethoscope. We determined that room noise and other noises were significant forms of interference in the recordings, which at times prevented analysis. However, no sound quality drift was noted in the recordings and no patient discomfort was noted. Minimal erythema was observed over the fixation site which subsided within one hour. We demonstrated the long-term recording of infant bowel sounds. Our contributions included a prototype stethoscope head, which was affixed using a specially designed hydrogel adhesive patch. Such a recording can be reviewed and reassessed, which is new technology and an improvement over current practice. The use of this

  16. Lymphocytes on sounding rocket flights.

    Science.gov (United States)

    Cogoli-Greuter, M; Pippia, P; Sciola, L; Cogoli, A

    1994-05-01

    Cell-cell interactions and the formation of cell aggregates are important events in the mitogen-induced lymphocyte activation. The fact that the formation of cell aggregates is only slightly reduced in microgravity suggests that cells are moving and interacting also in space, but direct evidence was still lacking. Here we report on two experiments carried out on a flight of the sounding rocket MAXUS 1B, launched in November 1992 from the base of Esrange in Sweden. The rocket reached the altitude of 716 km and provided 12.5 min of microgravity conditions.

  17. Consort 1 sounding rocket flight

    Science.gov (United States)

    Wessling, Francis C.; Maybee, George W.

    1989-01-01

    This paper describes a payload of six experiments developed for a 7-min microgravity flight aboard a sounding rocket Consort 1, in order to investigate the effects of low gravity on certain material processes. The experiments in question were designed to test the effect of microgravity on the demixing of aqueous polymer two-phase systems, the electrodeposition process, the production of elastomer-modified epoxy resins, the foam formation process and the characteristics of foam, the material dispersion, and metal sintering. The apparatuses designed for these experiments are examined, and the rocket-payload integration and operations are discussed.

  18. The Swedish sounding rocket programme

    International Nuclear Information System (INIS)

    Bostroem, R.

    1980-01-01

    Within the Swedish Sounding Rocket Program the scientific groups perform experimental studies of magnetospheric and ionospheric physics, upper atmosphere physics, astrophysics, and material sciences in zero g. New projects are planned for studies of auroral electrodynamics using high altitude rockets, investigations of noctilucent clouds, and active release experiments. These will require increased technical capabilities with respect to payload design, rocket performance and ground support as compared with the current program. Coordination with EISCAT and the planned Viking satellite is essential for the future projects. (Auth.)

  19. Sound is Multi-Dimensional

    DEFF Research Database (Denmark)

    Bergstrøm-Nielsen, Carl

    2006-01-01

    First part of this work examines the concept of musical parameter theory and discusses its methodical use. Second part is an annotated catalogue of 33 different students' compositions, presented in their totality with English translations, created between 1985 and 2006 as part of the subject...... Intuitive Music at Music Therapy, AAU. 20 of these have sound files as well. The work thus serves as an anthology of this form of composition. All the compositions are systematically presented according to parameters: pitch, duration, dynamics, timbre, density, pulse-no pulse, tempo, stylistic...

  20. Evaluation of multichannel reproduced sound

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian Maria

    2007-01-01

    A study was conducted with the goal of quantifying auditory attributes which underlie listener preference for multichannel reproduced sound. Short musical excerpts were presented in mono, stereo and several multichannel formats to a panel of forty selected listeners. Scaling of auditory attributes......, as well as overall preference, was based on consistency tests of binary paired-comparison judgments and on modeling the choice frequencies using probabilistic choice models. As a result, the preferences of non-expert listeners could be measured reliably at a ratio scale level. Principal components derived...
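
    The probabilistic choice models mentioned in this record are commonly fitted to binary paired-comparison counts. The abstract does not name the exact model used, so the sketch below only illustrates one standard option, a Bradley-Terry model fitted with the usual minorization-maximization update, on an invented win-count matrix.

```python
# Hedged illustration (not necessarily the study's model): Bradley-Terry fit to
# paired-comparison counts via the standard MM update. The win matrix is invented.
import numpy as np

# wins[i, j] = number of times reproduction format i was preferred over format j
wins = np.array([[ 0, 12, 15, 20],
                 [ 8,  0, 14, 18],
                 [ 5,  6,  0, 13],
                 [ 0,  2,  7,  0]], dtype=float)

n = wins + wins.T                      # comparisons per pair
p = np.ones(wins.shape[0])             # initial worth parameters

for _ in range(200):
    w = wins.sum(axis=1)                                              # wins per format
    denom = np.where(n > 0, n / (p[:, None] + p[None, :]), 0.0).sum(axis=1)
    p = w / denom
    p /= p.sum()                                                      # normalize (ratio scale)

print("Estimated preference scale:", np.round(p, 3))
```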

  1. Developing a reference of normal lung sounds in healthy Peruvian children.

    Science.gov (United States)

    Ellington, Laura E; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H; Tielsch, James M; Chavez, Miguel A; Marin-Concha, Julio; Figueroa, Dante; West, James; Checkley, William

    2014-10-01

    Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings, where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope. 151 (81%) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected and features extracted from spectral and temporal signal representations contributed to profiling of lung sounds. Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively; and, 47% were boys. We identified ten distinct spectral and spectro-temporal signal parameters and most demonstrated linear relationships with age, height, and weight, while no differences with genders were noted. Older children had a faster decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Lung sound extracted features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments.

  2. Developing a Reference of Normal Lung Sounds in Healthy Peruvian Children

    Science.gov (United States)

    Ellington, Laura E.; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H.; Tielsch, James M.; Chavez, Miguel A.; Marin-Concha, Julio; Figueroa, Dante; West, James

    2018-01-01

    Purpose Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings, where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. Methods 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope. 151 (81 %) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected and features extracted from spectral and temporal signal representations contributed to profiling of lung sounds. Results Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively; and, 47 % were boys. We identified ten distinct spectral and spectro-temporal signal parameters and most demonstrated linear relationships with age, height, and weight, while no differences with genders were noted. Older children had a faster decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Conclusions Lung sound extracted features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments. PMID:24943262
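
    As a concrete illustration of the kind of features this record describes (a spectral peak, its width, and lower-order Mel-frequency cepstral coefficients), the sketch below computes them for a placeholder signal; it is not the authors' pipeline, and the sample rate and parameter values are assumptions.

```python
import numpy as np
import librosa
from scipy.signal import welch

sr = 4000
rng = np.random.default_rng(0)
y = rng.normal(size=10 * sr)                      # placeholder for a recorded lung-sound segment

freqs, psd = welch(y, fs=sr, nperseg=1024)        # power spectral density
peak_freq = freqs[np.argmax(psd)]                 # frequency of the spectral peak
above_half = freqs[psd >= psd.max() / 2]
peak_width = above_half[-1] - above_half[0]       # crude peak width at half maximum

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=8) # lower-order MFCCs, averaged over time
mfcc_mean = mfcc.mean(axis=1)

print(f"Spectral peak: {peak_freq:.1f} Hz, width: {peak_width:.1f} Hz")
print("Mean MFCCs:", np.round(mfcc_mean, 2))
```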

  3. Snoring classified: The Munich-Passau Snore Sound Corpus.

    Science.gov (United States)

    Janott, Christoph; Schmitt, Maximilian; Zhang, Yue; Qian, Kun; Pandit, Vedhas; Zhang, Zixing; Heiser, Clemens; Hohenhorst, Winfried; Herzog, Michael; Hemmert, Werner; Schuller, Björn

    2018-03-01

    Snoring can be excited in different locations within the upper airways during sleep. It was hypothesised that the excitation locations are correlated with distinct acoustic characteristics of the snoring noise. To verify this hypothesis, a database of snore sounds is developed, labelled with the location of sound excitation. Video and audio recordings taken during drug induced sleep endoscopy (DISE) examinations from three medical centres have been semi-automatically screened for snore events, which subsequently have been classified by ENT experts into four classes based on the VOTE classification. The resulting dataset containing 828 snore events from 219 subjects has been split into Train, Development, and Test sets. An SVM classifier has been trained using low level descriptors (LLDs) related to energy, spectral features, mel frequency cepstral coefficients (MFCC), formants, voicing, harmonic-to-noise ratio (HNR), spectral harmonicity, pitch, and microprosodic features. An unweighted average recall (UAR) of 55.8% could be achieved using the full set of LLDs including formants. Best performing subset is the MFCC-related set of LLDs. A strong difference in performance could be observed between the permutations of train, development, and test partition, which may be caused by the relatively low number of subjects included in the smaller classes of the strongly unbalanced data set. A database of snoring sounds is presented which are classified according to their sound excitation location based on objective criteria and verifiable video material. With the database, it could be demonstrated that machine classifiers can distinguish different excitation location of snoring sounds in the upper airway based on acoustic parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
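
    The classification step described above (an SVM over MFCC-related low-level descriptors, scored by unweighted average recall) can be sketched as follows. The feature matrices are random placeholders standing in for per-event descriptors, and the kernel and C value are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import balanced_accuracy_score   # equivalent to UAR

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 26)), rng.integers(0, 4, 500)  # placeholder VOTE-labelled events
X_dev,   y_dev   = rng.normal(size=(150, 26)), rng.integers(0, 4, 150)

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="linear", C=1.0, class_weight="balanced"))
clf.fit(X_train, y_train)

uar = balanced_accuracy_score(y_dev, clf.predict(X_dev))
print(f"UAR on development set: {uar:.3f}")
```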

  4. Evaluation of Sound Quality, Boominess and Boxiness in Small Rooms

    DEFF Research Database (Denmark)

    Weisser, Adam; Rindel, Jens Holger

    2006-01-01

    The acoustics of small rooms has been studied with emphasis on sound quality, boominess and boxiness when the rooms are used for speech or music. Seven rooms with very different characteristics have been used for the study. Subjective listening tests were made using binaural recordings of reproduced speech and music. The test results were compared with a large number of objective acoustic parameters based on the frequency-dependent reverberation times measured in the rooms. This has led to the proposal of three new acoustic parameters, which have shown high correlation with the subjective ratings. The classical bass ratio definitions showed poor correlation with all subjective ratings. The overall sound quality ratings gave different results for speech and music. For speech the preferred mean RT should be as low as possible, whereas for music there was found a preferred range between 0...
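
    For orientation, the classical bass ratio referred to above is usually computed from octave-band reverberation times; a minimal sketch with invented RT values (the exact definition used in the study is not given here) follows.

```python
# Hedged sketch: a common bass-ratio definition from octave-band reverberation times.
rt = {125: 0.82, 250: 0.74, 500: 0.61, 1000: 0.58}   # seconds per octave band (invented values)

bass_ratio = (rt[125] + rt[250]) / (rt[500] + rt[1000])
mean_rt_mid = (rt[500] + rt[1000]) / 2

print(f"Bass ratio: {bass_ratio:.2f}, mid-frequency mean RT: {mean_rt_mid:.2f} s")
```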

  5. Numerical value biases sound localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R

    2017-12-08

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1-9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.

  6. Cortical representations of communication sounds.

    Science.gov (United States)

    Heiser, Marc A; Cheung, Steven W

    2008-10-01

    This review summarizes recent research into cortical processing of vocalizations in animals and humans. There has been a resurgent interest in this topic accompanied by an increased number of studies using animal models with complex vocalizations and new methods in human brain imaging. Recent results from such studies are discussed. Experiments have begun to reveal the bilateral cortical fields involved in communication sound processing and the transformations of neural representations that occur among those fields. Advances have also been made in understanding the neuronal basis of interaction between developmental exposures and behavioral experiences with vocalization perception. Exposure to sounds during the developmental period produces large effects on brain responses, as do a variety of specific trained tasks in adults. Studies have also uncovered a neural link between the motor production of vocalizations and the representation of vocalizations in cortex. Parallel experiments in humans and animals are answering important questions about vocalization processing in the central nervous system. This dual approach promises to reveal microscopic, mesoscopic, and macroscopic principles of large-scale dynamic interactions between brain regions that underlie the complex phenomenon of vocalization perception. Such advances will yield a greater understanding of the causes, consequences, and treatment of disorders related to speech processing.

  7. Evaluative conditioning induces changes in sound valence

    Directory of Open Access Journals (Sweden)

    Anna C. Bolders

    2012-04-01

    Full Text Available Evaluative Conditioning (EC) has hardly been tested in the auditory domain, but it is a potentially valuable research tool. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruence effects on an affective priming task (APT) for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US, or whether extinction occurs. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results warrant the use of EC to study processing of short environmental sounds with acquired valence, even if this requires repeated stimulus presentations. This paves the way for studying processing of affective environmental sounds while effectively controlling low-level stimulus properties.

  8. Electromagnetic sounding of the Earth's interior

    CERN Document Server

    Spichak, Viacheslav V

    2015-01-01

    Electromagnetic Sounding of the Earth's Interior 2nd edition provides a comprehensive, up-to-date collection of contributions covering methodological, computational and practical aspects of electromagnetic sounding of the Earth by different techniques at global, regional and local scales. Moreover, it contains new developments such as the concept of self-consistent tasks of geophysics and 3-D interpretation of the TEM sounding which, so far, have not all been covered by one book. Electromagnetic Sounding of the Earth's Interior 2nd edition consists of three parts: I - EM sounding methods, II - Forward modelling and inversion techniques, and III - Data processing, analysis, modelling and interpretation. The new edition includes brand new chapters on pulse and frequency electromagnetic sounding for hydrocarbon offshore exploration. Additionally, all other chapters have been extensively updated to include new developments. Presents recently developed methodological findings of the earth's study, including seism...

  9. Neuroanatomic organization of sound memory in humans.

    Science.gov (United States)

    Kraut, Michael A; Pitcock, Jeffery A; Calhoun, Vince; Li, Juan; Freeman, Thomas; Hart, John

    2006-11-01

    The neural interface between sensory perception and memory is a central issue in neuroscience, particularly initial memory organization following perceptual analyses. We used functional magnetic resonance imaging to identify anatomic regions extracting initial auditory semantic memory information related to environmental sounds. Two distinct anatomic foci were detected in the right superior temporal gyrus when subjects identified sounds representing either animals or threatening items. Threatening animal stimuli elicited signal changes in both foci, suggesting a distributed neural representation. Our results demonstrate both category- and feature-specific responses to nonverbal sounds in early stages of extracting semantic memory information from these sounds. This organization allows for these category-feature detection nodes to extract early semantic memory information for efficient processing of transient sound stimuli. Neural regions selective for threatening sounds are similar to those of nonhuman primates, demonstrating that semantic memory organization for basic biological/survival primitives is present across species.

  10. Musical Sounds, Motor Resonance, and Detectable Agency

    Directory of Open Access Journals (Sweden)

    Jacques Launay

    2015-09-01

    Full Text Available This paper discusses the paradox that while human music making evolved and spread in an environment where it could only occur in groups, it is now often apparently an enjoyable asocial phenomenon. Here I argue that music is, by definition, sound that we believe has been in some way organized by a human agent, meaning that listening to any musical sounds can be a social experience. There are a number of distinct mechanisms by which we might associate musical sound with agency. While some of these mechanisms involve learning motor associations with that sound, it is also possible to have a more direct relationship from musical sound to agency, and the relative importance of these potentially independent mechanisms should be further explored. Overall, I conclude that the apparent paradox of solipsistic musical engagement is in fact unproblematic, because the way that we perceive and experience musical sounds is inherently social.

  11. First and second sound in He films

    International Nuclear Information System (INIS)

    Oh, H.G.; Um, C.I.; Kahng, W.H.; Isihara, A.

    1986-01-01

    Taking into account a collision integral in the Boltzmann equation and using kinetic and hydrodynamical equations, the velocities of first and second sound in liquid ⁴He films are evaluated as functions of temperature, and the attenuation coefficients are obtained. In the low-temperature and low-frequency limit the second-sound velocity is 2^(-1/2) times the first-sound velocity.

  12. Visualizing Sound Directivity via Smartphone Sensors

    OpenAIRE

    Hawley, Scott H.; McClain Jr, Robert E.

    2017-01-01

    We present a fast, simple method for automated data acquisition and visualization of sound directivity, made convenient and accessible via a smartphone app, "Polar Pattern Plotter." The app synchronizes measurements of sound volume with the phone's angular orientation obtained from either compass, gyroscope or accelerometer sensors and produces a graph and exportable data file. It is generalizable to various sound sources and receivers via the use of an input-jack-adaptor to supplant the smar...
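
    The core of such an app, pairing a level reading with the phone's orientation and drawing a polar plot, can be sketched in a few lines; the angle/level values below are invented and the plotting choices are not taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

angles_deg = np.arange(0, 360, 15)                              # phone orientation readings
levels_db = 60 - 10 * np.abs(np.sin(np.radians(angles_deg)))    # hypothetical level readings (dB)

ax = plt.subplot(projection="polar")
ax.plot(np.radians(angles_deg), levels_db, marker="o")
ax.set_title("Measured sound directivity (dB)")
plt.savefig("polar_pattern.png")
```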

  13. Improving Sound Systems by Electrical Means

    OpenAIRE

    Schneider, Henrik; Andersen, Michael A. E.; Knott, Arnold

    2015-01-01

    The availability and flexibility of audio services on various digital platforms have created a high demand for a large range of sound systems. The fundamental components of sound systems such as docking stations, sound bars and wireless mobile speakers consist of a power supply, amplifiers and transducers. For historical reasons the design of each of these components is commonly handled separately, which limits the full performance potential of such systems. To state some exa...

  14. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, JoseJ; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.
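
    The time-frequency, sparsity-based analysis mentioned above can be illustrated in a much-simplified two-microphone form: assuming one dominant source per time-frequency bin, the inter-channel phase difference gives a per-bin direction estimate. This is only a sketch of the general idea, not the authors' tetrahedral-array algorithm; signals, spacing, and sample rate are invented.

```python
import numpy as np
from scipy.signal import stft

fs, d, c = 16000, 0.05, 343.0                 # sample rate, mic spacing (m), speed of sound (m/s)
rng = np.random.default_rng(1)
x1 = rng.normal(size=fs)                      # placeholder microphone signals
x2 = np.roll(x1, 2)                           # crude 2-sample delay mimicking an off-axis source

f, _, X1 = stft(x1, fs=fs, nperseg=512)
_, _, X2 = stft(x2, fs=fs, nperseg=512)

phase_diff = np.angle(X1[1:] * np.conj(X2[1:]))     # inter-channel phase per bin (DC bin skipped)
tau = phase_diff / (2 * np.pi * f[1:, None])        # time difference of arrival per bin
sin_theta = np.clip(tau * c / d, -1.0, 1.0)
azimuth = np.degrees(np.arcsin(sin_theta))          # per-bin direction estimate

valid = f[1:] < c / (2 * d)                         # keep bins below the spatial-aliasing limit
print("Median azimuth estimate (deg):", round(float(np.median(azimuth[valid])), 1))
```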

  15. Summary Record

    International Nuclear Information System (INIS)

    2008-01-01

    The first workshop of the OECD/NRC Benchmark based on NUPEC BWR Full-size Fine-mesh Bundle Tests (BFBT) was held on 4 October 2004. The workshop was hosted by the Japan Nuclear Energy Safety (JNES) Organization. The BFBT Benchmark is sponsored by the US Nuclear Regulatory Commission (NRC), the OECD, and the Nuclear Engineering Program (NEP) of the Pennsylvania State University. The experimental data was produced during a measurement campaign by the NUPEC, Japan, and sponsored by the Japan Ministry of Economy, Trade and Industry (METI). This international benchmark, based on the NUPEC database, encourages advancement in this un-investigated field of two-phase flow theory with very important relevance to the nuclear reactor's safety margins evaluation. Considering the immaturity of the theoretical approach, the benchmark specification is being designed so that it systematically assesses and compares the participants' numerical models on the prediction of detailed void distributions and critical powers. Furthermore, the following points are kept in mind while the benchmark specification is being established. As concerns the numerical model of void distributions, no sound theoretical approach that can be applied to a wide range of geometrical and operating conditions has been developed. In the past decade, experimental and computational technologies have improved tremendously through the study of the two-phase flow structure. Over the next decade, it can be expected that mechanistic approaches will be more widely applied to the complicated two-phase fluid phenomena inside fuel bundles. The development of truly mechanistic models for critical power prediction is currently underway. These models must include elementary processes such as void distributions, droplet deposit, liquid film entrainment, etc. The BFBT benchmark consists of two parts (phases), each part consisting of different exercises: - Phase 1 Void Distribution Benchmark: Exercise 1: Steady-state sub

  16. Vinyl Record

    DEFF Research Database (Denmark)

    Bartmanski, Dominik; Woodward, Ian

    2018-01-01

    In this paper, we use the case of the vinyl record to show that iconic objects become meaningful via a dual process. First, they offer immersive engagements which structure user interpretations through various material experiences of handling, use, and extension. Second, they always work via... This relational process means that both the material affordances and entanglements of vinyl allow us to feel, handle, experience, project, and share its iconicity. The materially mediated meanings of vinyl enabled it to retain currency in independent and collector’s markets and thus resist the planned...

  17. A sound worth saving: acoustic characteristics of a massive fish spawning aggregation.

    Science.gov (United States)

    Erisman, Brad E; Rowell, Timothy J

    2017-12-01

    Group choruses of marine animals can produce extraordinarily loud sounds that markedly elevate levels of the ambient soundscape. We investigated sound production in the Gulf corvina (Cynoscion othonopterus), a soniferous marine fish with a unique reproductive behaviour threatened by overfishing, to compare with sounds produced by other marine animals. We coupled echosounder and hydrophone surveys to estimate the magnitude of the aggregation and sounds produced during spawning. We characterized individual calls and documented changes in the soundscape generated by the presence of as many as 1.5 million corvina within a spawning aggregation spanning distances up to 27 km. We show that calls by male corvina represent the loudest sounds recorded in a marine fish, and the spatio-temporal magnitude of their collective choruses is among the loudest animal sounds recorded in aquatic environments. While this wildlife spectacle is at great risk of disappearing due to overfishing, regional conservation efforts are focused on other endangered marine animals. © 2017 The Author(s).
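
    The received-level figures behind statements like "loudest sounds recorded in a marine fish" reduce to a sound pressure level computed from a calibrated hydrophone signal. The sketch below shows that calculation on a placeholder signal, with an assumed (not the study's) hydrophone sensitivity.

```python
import numpy as np

fs = 24000
rng = np.random.default_rng(5)
samples = 0.05 * rng.normal(size=10 * fs)          # placeholder for a normalized hydrophone clip

sensitivity_db = -165.0                            # hydrophone sensitivity, dB re 1 V/µPa (assumed)
volts_full_scale = 1.0                             # volts at digital full scale (assumed)

pressure_upa = samples * volts_full_scale / (10 ** (sensitivity_db / 20))  # pressure in µPa
rms = np.sqrt(np.mean(pressure_upa ** 2))
spl_db = 20 * np.log10(rms / 1.0)                  # dB re 1 µPa

print(f"Received level: {spl_db:.1f} dB re 1 µPa")
```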

  18. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain.

    Science.gov (United States)

    Lu, Kai; Vicario, David S

    2014-10-07

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used in human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.

  19. Understanding and crafting the mix the art of recording

    CERN Document Server

    Moylan, William

    2014-01-01

    Understanding and Crafting the Mix, 3rd edition provides the framework to identify, evaluate, and shape your recordings with clear and systematic methods. Featuring numerous exercises, this third edition allows you to develop critical listening and analytical skills to gain greater control over the quality of your recordings. Sample production sequences and descriptions of the recording engineer's role as composer, conductor, and performer provide you with a clear view of the entire recording process. Dr. William Moylan takes an inside look into a range of iconic popular music, thus offering insights into making meaningful sound judgments during recording. His unique focus on the aesthetic of recording and mixing will allow you to immediately and artfully apply his expertise while at the mixing desk. A companion website features recorded tracks to use in exercises, reference materials, additional examples of mixes and sound qualities, and mixed tracks.

  20. Computerised respiratory sounds can differentiate smokers and non-smokers.

    Science.gov (United States)

    Oliveira, Ana; Sen, Ipek; Kahya, Yasemin P; Afreixo, Vera; Marques, Alda

    2017-06-01

    Cigarette smoking is often associated with the development of several respiratory diseases; however, if diagnosed early, the changes in the lung tissue caused by smoking may be reversible. Computerised respiratory sounds have been shown to be sensitive to changes within the lung tissue before any other measure, but it is unknown whether they can detect changes in the lungs of healthy smokers. This study investigated the differences between computerised respiratory sounds of healthy smokers and non-smokers. Healthy smokers and non-smokers were recruited from a university campus. Respiratory sounds were recorded simultaneously at 6 chest locations (right and left anterior, lateral and posterior) using air-coupled electret microphones. Airflow (1.0-1.5 l/s) was recorded with a pneumotachograph. Breathing phases were detected using airflow signals and respiratory sounds with validated algorithms. Forty-four participants were enrolled: 18 smokers (mean age 26.2, SD = 7 years; mean FEV1 % predicted 104.7, SD = 9) and 26 non-smokers (mean age 25.9, SD = 3.7 years; mean FEV1 % predicted 96.8, SD = 20.2). Smokers presented a significantly higher frequency at maximum sound intensity during inspiration (M = 117, SD = 16.2 Hz vs. M = 106.4, SD = 21.6 Hz; t(43) = -2.62, p = 0.0081, d_z = 0.55), lower expiratory sound intensities (maximum intensity: M = 48.2, SD = 3.8 dB vs. M = 50.9, SD = 3.2 dB; t(43) = 2.68, p = 0.001, d_z = -0.78; mean intensity: M = 31.2, SD = 3.6 dB vs. M = 33.7, SD = 3 dB; t(43) = 2.42, p = 0.001, d_z = 0.75) and a higher number of inspiratory crackles (median [interquartile range] 2.2 [1.7-3.7] vs. 1.5 [1.2-2.2], p = 0.081, U = 110, r = -0.41) than non-smokers. Significant differences between computerised respiratory sounds of smokers and non-smokers have been found. Changes in respiratory sounds are often the earliest sign of disease. Thus, computerised respiratory sounds
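
    The group comparisons reported above are standard two-sample tests with an effect size; a minimal sketch of that calculation on invented feature values (not the study data) is given below.

```python
import numpy as np
from scipy import stats

smokers     = np.array([118, 120, 111, 125, 115, 119, 113, 122])  # Hz, invented values
non_smokers = np.array([104, 108, 101, 112,  99, 107, 110, 103])

t, p = stats.ttest_ind(smokers, non_smokers)

# Cohen's d with pooled standard deviation
n1, n2 = len(smokers), len(non_smokers)
pooled_sd = np.sqrt(((n1 - 1) * smokers.var(ddof=1) + (n2 - 1) * non_smokers.var(ddof=1)) / (n1 + n2 - 2))
d = (smokers.mean() - non_smokers.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```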

  1. High frequency components of tracheal sound are emphasized during prolonged flow limitation

    International Nuclear Information System (INIS)

    Tenhunen, M; Huupponen, E; Saastamoinen, A; Kulkas, A; Himanen, S-L; Rauhala, E

    2009-01-01

    A nasal pressure transducer, which is used to study nocturnal airflow, also provides information about the inspiratory flow waveform. A round flow shape is presented during normal breathing. A flattened, non-round shape is found during hypopneas, and it can also appear in prolonged episodes. The significance of this prolonged flow limitation is still not established. The tracheal sound spectrum has been analyzed further in order to obtain additional information about breathing during sleep. Increased sound frequencies above 500 Hz have been connected to obstruction of the upper airway. The aim of the present study was to examine the tracheal sound signal content of prolonged flow limitation and to find out whether prolonged flow limitation consists of abundant high-frequency activity. Sleep recordings of 36 consecutive patients were examined. Tracheal sound spectral analysis was performed on 10 min episodes of prolonged flow limitation, normal breathing and periodic apnea-hypopnea breathing. The highest total spectral amplitude, indicating the loudest sounds, occurred during flow-limited breathing, which also presented the loudest sounds in all frequency bands above 100 Hz. In addition, the tracheal sound signal during flow-limited breathing contained proportionally more high-frequency activity compared to normal breathing and even periodic apnea-hypopnea breathing.
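
    The band-wise spectral amplitudes compared in this study can be illustrated with a Welch power-spectral-density estimate integrated over frequency bands; the sketch below uses placeholder noise and illustrative band edges rather than the study's segments.

```python
import numpy as np
from scipy.signal import welch

fs = 8000
rng = np.random.default_rng(2)
segment = rng.normal(size=10 * fs)                 # placeholder 10 s tracheal-sound segment

freqs, psd = welch(segment, fs=fs, nperseg=2048)
df = freqs[1] - freqs[0]

for lo, hi in [(100, 500), (500, 1000), (1000, 2000)]:   # Hz, illustrative bands
    mask = (freqs >= lo) & (freqs < hi)
    band_power_db = 10 * np.log10(psd[mask].sum() * df)  # total band power in dB (relative)
    print(f"{lo}-{hi} Hz: {band_power_db:.1f} dB")
```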

  2. Evidence of sound production by spawning lake trout (Salvelinus namaycush) in lakes Huron and Champlain

    Science.gov (United States)

    Johnson, Nicholas S.; Higgs, Dennis; Binder, Thomas R.; Marsden, J. Ellen; Buchinger, Tyler John; Brege, Linnea; Bruning, Tyler; Farha, Steve A.; Krueger, Charles C.

    2018-01-01

    Two sounds associated with spawning lake trout (Salvelinus namaycush) in lakes Huron and Champlain were characterized by comparing sound recordings to behavioral data collected using acoustic telemetry and video. These sounds were named growls and snaps, and were heard on lake trout spawning reefs, but not on a non-spawning reef, and were more common at night than during the day. Growls also occurred more often during the spawning period than the pre-spawning period, while the trend for snaps was reversed. In a laboratory flume, sounds occurred when male lake trout were displaying spawning behaviors; growls when males were quivering and parallel swimming, and snaps when males moved their jaw. Combining our results with the observation of possible sound production by spawning splake (Salvelinus fontinalis × Salvelinus namaycush hybrid), provides rare evidence for spawning-related sound production by a salmonid, or any other fish in the superorder Protacanthopterygii. Further characterization of these sounds could be useful for lake trout assessment, restoration, and control.

  3. Evaluating Environmental Sounds from a Presence Perspective for Virtual Reality Applications

    Directory of Open Access Journals (Sweden)

    Nordahl Rolf

    2010-01-01

    Full Text Available We propose a methodology to design and evaluate environmental sounds for virtual environments. We propose to combine physically modeled sound events with recorded soundscapes. Physical models are used to provide feedback to users' actions, while soundscapes reproduce the characteristic soundmarks of an environment. In this particular case, physical models are used to simulate the act of walking in the botanical garden of the city of Prague, while soundscapes are used to reproduce the particular sound of the garden. The auditory feedback designed was combined with a photorealistic reproduction of the same garden. A between-subject experiment was conducted, in which 126 subjects participated, involving six different experimental conditions, including both uni- and bimodal stimuli (auditory and visual). The auditory stimuli consisted of several combinations of auditory feedback, including static sound sources as well as self-induced interactive sounds simulated using physical models. Results show that subjects' motion in the environment is significantly enhanced when dynamic sound sources and the sound of egomotion are rendered in the environment.

  4. Application of semi-supervised deep learning to lung sound analysis.

    Science.gov (United States)

    Chamberlain, Daniel; Kodgule, Rahul; Ganelin, Daniela; Miglani, Vivek; Fletcher, Richard Ribon

    2016-08-01

    The analysis of lung sounds, collected through auscultation, is a fundamental component of pulmonary disease diagnostics for primary care and general patient monitoring for telemedicine. Despite advances in computation and algorithms, the goal of automated lung sound identification and classification has remained elusive. Over the past 40 years, published work in this field has demonstrated only limited success in identifying lung sounds, with most published studies using only a small number of patients. In this study, we present a semi-supervised deep learning algorithm for automatically classifying lung sounds from a relatively large number of patients (N=284). Focusing on the two most common lung sounds, wheeze and crackle, we present results from 11,627 sound files recorded from 11 different auscultation locations on these 284 patients with pulmonary disease. 890 of these sound files were labeled to evaluate the model, which is significantly larger than previously published studies. Data was collected with a custom mobile phone application and a low-cost (US$30) electronic stethoscope. On this data set, our algorithm achieves ROC curves with AUCs of 0.86 for wheeze and 0.74 for crackle. Most importantly, this study demonstrates how semi-supervised deep learning can be used with larger data sets without requiring extensive labeling of data.
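
    The reported AUCs are areas under receiver operating characteristic curves for binary labels such as "wheeze present"; the evaluation step can be sketched as below, with invented scores and labels standing in for a trained model's output.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])                          # expert labels (invented)
y_score = np.array([0.1, 0.3, 0.8, 0.6, 0.4, 0.9, 0.2, 0.55, 0.7, 0.35])    # model probabilities (invented)

print(f"Wheeze AUC: {roc_auc_score(y_true, y_score):.2f}")
```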

  5. Developmental changes in brain activation involved in the production of novel speech sounds in children.

    Science.gov (United States)

    Hashizume, Hiroshi; Taki, Yasuyuki; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Asano, Kohei; Takeuchi, Hikaru; Nouchi, Rui; Kotozaki, Yuka; Jeong, Hyeonjeong; Sugiura, Motoaki; Kawashima, Ryuta

    2014-08-01

    Older children are more successful at producing unfamiliar, non-native speech sounds than younger children during the initial stages of learning. To reveal the neuronal underpinning of the age-related increase in the accuracy of non-native speech production, we examined the developmental changes in activation involved in the production of novel speech sounds using functional magnetic resonance imaging. Healthy right-handed children (aged 6-18 years) were scanned while performing an overt repetition task and a perceptual task involving aurally presented non-native and native syllables. Productions of non-native speech sounds were recorded and evaluated by native speakers. The mouth regions in the bilateral primary sensorimotor areas were activated more significantly during the repetition task relative to the perceptual task. The hemodynamic response in the left inferior frontal gyrus pars opercularis (IFG pOp) specific to non-native speech sound production (defined by prior hypothesis) increased with age. Additionally, the accuracy of non-native speech sound production increased with age. These results provide the first evidence of developmental changes in the neural processes underlying the production of novel speech sounds. Our data further suggest that the recruitment of the left IFG pOp during the production of novel speech sounds was possibly enhanced due to the maturation of the neuronal circuits needed for speech motor planning. This, in turn, would lead to improvement in the ability to immediately imitate non-native speech. Copyright © 2014 Wiley Periodicals, Inc.

  6. Acoustic Performance of a Real-Time Three-Dimensional Sound-Reproduction System

    Science.gov (United States)

    Faller, Kenneth J., II; Rizzi, Stephen A.; Aumann, Aric R.

    2013-01-01

    The Exterior Effects Room (EER) is a 39-seat auditorium at the NASA Langley Research Center and was built to support psychoacoustic studies of aircraft community noise. The EER has a real-time simulation environment which includes a three-dimensional sound-reproduction system. This system requires real-time application of equalization filters to compensate for spectral coloration of the sound reproduction due to installation and room effects. This paper describes the efforts taken to develop the equalization filters for use in the real-time sound-reproduction system and the subsequent analysis of the system's acoustic performance. The acoustic performance of the compensated and uncompensated sound-reproduction system is assessed for its crossover performance, its performance under stationary and dynamic conditions, the maximum spatialized sound pressure level it can produce from a single virtual source, and for the spatial uniformity of a generated sound field. Additionally, application examples are given to illustrate the compensated sound-reproduction system performance using recorded aircraft flyovers.
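
    The equalization step described here amounts to filtering each loudspeaker feed with a measured compensation filter before playback; a minimal sketch, with placeholder FIR coefficients and signals rather than the EER's measured filters, follows.

```python
import numpy as np
from scipy.signal import lfilter

fs = 48000
rng = np.random.default_rng(3)
eq_filters = [rng.normal(scale=0.1, size=512) for _ in range(4)]   # placeholder FIR filter per channel
feeds = [rng.normal(size=fs) for _ in range(4)]                    # 1 s of placeholder signal per channel

# Convolve each channel's feed with its equalization filter before playback
equalized = [lfilter(h, [1.0], x) for h, x in zip(eq_filters, feeds)]
print("Equalized", len(equalized), "channels of", len(equalized[0]), "samples each")
```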

  7. Time course of the influence of musical expertise on the processing of vocal and musical sounds.

    Science.gov (United States)

    Rigoulot, S; Pell, M D; Armony, J L

    2015-04-02

    Previous functional magnetic resonance imaging (fMRI) studies have suggested that different cerebral regions preferentially process human voice and music. Yet, little is known on the temporal course of the brain processes that decode the category of sounds and how the expertise in one sound category can impact these processes. To address this question, we recorded the electroencephalogram (EEG) of 15 musicians and 18 non-musicians while they were listening to short musical excerpts (piano and violin) and vocal stimuli (speech and non-linguistic vocalizations). The task of the participants was to detect noise targets embedded within the stream of sounds. Event-related potentials revealed an early differentiation of sound category, within the first 100 ms after the onset of the sound, with mostly increased responses to musical sounds. Importantly, this effect was modulated by the musical background of participants, as musicians were more responsive to music sounds than non-musicians, consistent with the notion that musical training increases sensitivity to music. In late temporal windows, brain responses were enhanced in response to vocal stimuli, but musicians were still more responsive to music. These results shed new light on the temporal course of neural dynamics of auditory processing and reveal how it is impacted by the stimulus category and the expertise of participants. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  8. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  9. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2009-09-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  10. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound......-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound...... from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  11. Record Club

    CERN Multimedia

    Record Club

    2011-01-01

    http://cern.ch/Record.Club November  Selections Just in time for the holiday season, we have added a number of new CDs and DVDs into the Club. You will find the full lists at http://cern.ch/record.club; select the "Discs of the Month" button on the left side on the left panel of the web page and then Nov 2011. New films include the all 5 episodes of Fast and Furious, many of the most famous films starring Jean-Paul Belmondo and those of Louis de Funes and some more recent films such as The Lincoln Lawyer and, according to some critics, Woody Allen’s best film for years – Midnight in Paris. For the younger generation there is Cars 2 and Kung Fu Panda 2. New CDs include the latest releases by Adele, Coldplay and the Red Hot Chili Peppers. We have also added the new Duets II CD featuring Tony Bennett singing with some of today’s pop stars including Lady Gaga, Amy Winehouse and Willy Nelson. The Club is now open every Monday, Wednesday and Friday ...

  12. Record Club

    CERN Multimedia

    Record Club

    2011-01-01

    http://cern.ch/Record.Club June Selections We have put a significant number of new CDs and DVDs into the Club You will find the full lists at http://cern.ch/record.club and select the «Discs of the Month» button on the left side on the left panel of the web page and then June 2011. New films include the latest Action, Suspense and Science Fiction film hits, general drama movies including the Oscar-winning The King’s Speech, comedies including both chapter of Bridget Jones’s Diary, seven films for children and a musical. Other highlights include the latest Harry Potter release and some movies from the past you may have missed including the first in the Terminator series. New CDs include the latest releases by Michel Sardou, Mylene Farmer, Jennifer Lopez, Zucchero and Britney Spears. There is also a hits collection from NRJ. Don’t forget that the Club is now open every Monday, Wednesday and Friday lunchtimes from 12h30 to 13h00 in Restaurant 2, Building 504. (C...

  13. Record club

    CERN Document Server

    Record club

    2010-01-01

    Hello everyone, here are the 24 new DVDs for July, available for a few days now, not forgetting the 5 pop music CDs. Discover the saga of the terrorist Carlos, the life of Gainsbourg and the adventures of Lucky Luke; get a fright with Paranormal Activity and escape to Pandora in the skin of Avatar. All the new releases can be discovered directly at the club. For the complete list, as well as the rest of the Record Club collection, please visit our website: http://cern.ch/crc. All the latest additions are in the "Discs of the Month" section. Reminder: the club is open on Mondays, Wednesdays and Fridays from 12:30 to 13:00 in Restaurant No. 2, Building 504. See you soon, dear Record Clubbers.

  14. Record Club

    CERN Multimedia

    Record Club

    2011-01-01

    http://cern.ch/Record.Club Summer 2011 new releases: the CD and DVD rental club has just added a large number of discs for summer 2011. Among them are Le Discours d'un Roi (The King's Speech), winner of the 2011 Oscar for Best Picture, and Harry Potter and the Deathly Hallows (Part 1). No fewer than 48 new DVDs and 10 new CDs are available for rental, covering every genre. For the complete list, please visit our site http://cern.ch/record.club and see Disc Catalogue, Discs of the Month. The club is open every Monday, Wednesday and Friday from 12:30 to 13:00 in the Restaurant No. 2 building (see URL: http://www.cern.ch/map/building?bno=504). See you very soon.

  15. Analysis of sound data streamed over the network

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2013-01-01

    Full Text Available In this paper we inspect the difference between an original sound recording and the signal captured after streaming this original recording over a network loaded with heavy traffic. Several kinds of failures occur in the captured recording because of network congestion. We try to find a method for evaluating the correctness of streamed audio. Usually there are metrics based on human perception of a signal, such as “signal is clear, without audible failures”, “signal has some failures but is understandable”, or “signal is inarticulate”. These approaches need to be statistically evaluated on a broad set of respondents, which is time and resource consuming. We therefore propose metrics based on signal properties that allow us to compare the original and captured recordings. In this paper we use the Dynamic Time Warping algorithm (Müller, 2007), commonly used for time series comparison. Some other time series exploration approaches can be found in (Fejfar, 2011) and (Fejfar, 2012). The data were acquired in our network laboratory, simulating network traffic by downloading files and streaming audio and video simultaneously. Our former experiment inspected Quality of Service (QoS) and its impact on failures of the received audio data stream. This experiment focuses on the comparison of sound recordings rather than on the network mechanism. We focus, in this paper, on a real-time audio stream such as a telephone call, where it is not possible to stream audio in advance to a “pool”. Instead it is necessary to achieve as small a delay as possible (between speaker voice recording and listener voice replay). We are using the RTP protocol for streaming audio.
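
    Dynamic Time Warping itself is compact enough to sketch directly; the implementation below is a plain textbook version with an absolute-difference cost, applied to two short invented sequences rather than the experiment's recordings.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

original = np.array([0.0, 0.2, 0.9, 1.0, 0.4, 0.1])
captured = np.array([0.0, 0.1, 0.3, 0.9, 0.9, 0.5, 0.1])   # delayed/distorted version
print(f"DTW distance: {dtw_distance(original, captured):.3f}")
```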

  16. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding

    DEFF Research Database (Denmark)

    Tervaniemi, Mari; Huotilainen, Minna; Brattico, Elvira

    2014-01-01

    Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory......-related mismatch negativity (MMN) and attention-related P3a responses recorded from non-musicians and Finnish Folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In Folk musicians, the MMN was enlarged for mistuned sounds when compared...... with non-musicians. This is taken to reflect their familiarity with pitch information which is in key position in Finnish folk music when compared with e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies......

  17. Four odontocete species change hearing levels when warned of impending loud sound.

    Science.gov (United States)

    Nachtigall, Paul E; Supin, Alexander Ya; Pacini, Aude F; Kastelein, Ronald A

    2018-03-01

    Hearing sensitivity change was investigated when a warning sound preceded a loud sound in the false killer whale (Pseudorca crassidens), the bottlenose dolphin (Tursiops truncatus), the beluga whale (Delphinapterus leucas) and the harbor porpoise (Phocoena phocoena). Hearing sensitivity was measured using pip-train test stimuli and auditory evoked potential recording. When the test/warning stimuli preceded a loud sound, hearing thresholds before the loud sound increased relative to the baseline by 13 to 17 dB. Experiments with multiple frequencies of exposure and shift provided evidence of different amounts of hearing change depending on frequency, indicating that the hearing sensation level changes were not likely due to a simple stapedial reflex. © 2017 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  18. Sound velocity of tantalum under shock compression in the 18–142 GPa range

    Energy Technology Data Exchange (ETDEWEB)

    Xi, Feng, E-mail: xifeng@caep.cn; Jin, Ke; Cai, Lingcang, E-mail: cai-lingcang@aliyun.com; Geng, Huayun; Tan, Ye; Li, Jun [National Key Laboratory of Shock Waves and Detonation Physics, Institute of Fluid Physics, CAEP, P.O. Box 919-102 Mianyang, Sichuan 621999 (China)

    2015-05-14

    Dynamic compression experiments on tantalum (Ta) within a shock pressure range of 18–142 GPa were conducted, driven by explosives, a two-stage light gas gun, and a powder gun, respectively. The time-resolved Ta/LiF (lithium fluoride) interface velocity profiles were recorded with a displacement interferometer system for any reflector. Sound velocities of Ta were obtained from peak-state time-duration measurements using the step-sample technique and the direct-reverse impact technique. The uncertainty of the measured sound velocities was analyzed carefully, which suggests that the symmetrical impact method with step samples is more accurate for sound velocity measurement, and that the most important parameter in this type of experiment is an accurate sample/window particle velocity profile, especially an accurate peak-state time duration. From these carefully analyzed sound velocity data, no evidence of a phase transition was found up to the shock melting pressure of Ta.

  19. Performance evaluation of heart sound cancellation in FPGA hardware implementation for electronic stethoscope.

    Science.gov (United States)

    Chao, Chun-Tang; Maneetien, Nopadon; Wang, Chi-Jo; Chiou, Juing-Shian

    2014-01-01

    This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera cyclone II-EP2C70F89 shows that the proposed ALE used 45% resources of the chip. Experiments with the proposed prototype were made using DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives.

  20. Performance Evaluation of Heart Sound Cancellation in FPGA Hardware Implementation for Electronic Stethoscope

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2014-01-01

    Full Text Available This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs. The adaptive line enhancer (ALE was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera cyclone II–EP2C70F89 shows that the proposed ALE used 45% resources of the chip. Experiments with the proposed prototype were made using DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives.
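
    The two records above describe an adaptive line enhancer realized in FPGA hardware. As a rough floating-point illustration of the underlying signal processing only (not the authors' FPGA design), the sketch below implements an LMS-based ALE: a delayed copy of the input is filtered to predict the quasi-periodic heart-sound component, and the prediction error retains the broadband breath sound. The delay, filter length, and step size are assumed values.

```python
import numpy as np

def adaptive_line_enhancer(x, delay=32, taps=64, mu=0.005):
    """LMS-based ALE: predict the correlated (heart-sound) part of x from a delayed
    copy of itself; the error signal approximates the uncorrelated breath sound."""
    x = np.asarray(x, dtype=float)
    w = np.zeros(taps)
    enhanced = np.zeros_like(x)   # predicted quasi-periodic component
    residual = np.zeros_like(x)   # breath-sound estimate
    for n in range(delay + taps, len(x)):
        ref = x[n - delay - taps:n - delay][::-1]  # delayed reference window
        y = np.dot(w, ref)
        e = x[n] - y
        w += 2 * mu * e * ref                      # LMS weight update
        enhanced[n], residual[n] = y, e
    return residual, enhanced
```

    In a hardware realization such as the one described in the record, the same update would typically be implemented in fixed-point arithmetic to meet the real-time constraints mentioned by the authors.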

  1. Can road traffic mask sound from wind turbines? Response to wind turbine sound at different levels of road traffic sound

    International Nuclear Information System (INIS)

    Pedersen, Eja; Berg, Frits van den; Bakker, Roel; Bouma, Jelte

    2010-01-01

    Wind turbines are favoured in the switch-over to renewable energy. Suitable sites for further developments could be difficult to find as the sound emitted from the rotor blades calls for a sufficient distance to residents to avoid negative effects. The aim of this study was to explore if road traffic sound could mask wind turbine sound or, in contrast, increase annoyance due to wind turbine noise. Annoyance of road traffic and wind turbine noise was measured in the WINDFARMperception survey in the Netherlands in 2007 (n=725) and related to calculated levels of sound. The presence of road traffic sound did not in general decrease annoyance with wind turbine noise, except when levels of wind turbine sound were moderate (35-40 dB(A) Lden) and the road traffic sound level exceeded that level by at least 20 dB(A). Annoyance with both noises was intercorrelated but this correlation was probably due to the influence of individual factors. Furthermore, visibility and attitude towards wind turbines were significantly related to noise annoyance of modern wind turbines. The results can be used for the selection of suitable sites, possibly favouring already noise-exposed areas if wind turbine sound levels are sufficiently low.

  2. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

    This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situated-ness of sound. Existing discourses on “spatial sound” privile...

  3. Mississippi Sound Remote Sensing Study

    Science.gov (United States)

    Atwell, B. H.

    1973-01-01

    The Mississippi Sound Remote Sensing Study was initiated as part of the research program of the NASA Earth Resources Laboratory. The objective of this study is development of remote sensing techniques to study near-shore marine waters. Included within this general objective are the following: (1) evaluate existing techniques and instruments used for remote measurement of parameters of interest within these waters; (2) develop methods for interpretation of state-of-the-art remote sensing data which are most meaningful to an understanding of processes taking place within near-shore waters; (3) define hardware development requirements and/or system specifications; (4) develop a system combining data from remote and surface measurements which will most efficiently assess conditions in near-shore waters; (5) conduct projects in coordination with appropriate operating agencies to demonstrate applicability of this research to environmental and economic problems.

  4. Floquet topological insulators for sound

    Science.gov (United States)

    Fleury, Romain; Khanikaev, Alexander B.; Alù, Andrea

    2016-06-01

    The unique conduction properties of condensed matter systems with topological order have recently inspired a quest for the similar effects in classical wave phenomena. Acoustic topological insulators, in particular, hold the promise to revolutionize our ability to control sound, allowing for large isolation in the bulk and broadband one-way transport along their edges, with topological immunity against structural defects and disorder. So far, these fascinating properties have been obtained relying on moving media, which may introduce noise and absorption losses, hindering the practical potential of topological acoustics. Here we overcome these limitations by modulating in time the acoustic properties of a lattice of resonators, introducing the concept of acoustic Floquet topological insulators. We show that acoustic waves provide a fertile ground to apply the anomalous physics of Floquet topological insulators, and demonstrate their relevance for a wide range of acoustic applications, including broadband acoustic isolation and topologically protected, nonreciprocal acoustic emitters.

  5. Sound Synthesis and Evaluation of Interactive Footsteps and Environmental Sounds Rendering for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-01-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based ...... a soundscape significantly improves the recognition of the simulated environment....

  6. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.

  7. The 2011 marine heat wave in Cockburn Sound, southwest Australia

    Directory of Open Access Journals (Sweden)

    T. H. Rose

    2012-07-01

    Full Text Available Over 2000 km of Western Australian coastline experienced a significant marine heat wave in February and March 2011. Seawater temperature anomalies of +2–4 °C were recorded at a number of locations, and satellite-derived SSTs (sea surface temperatures were the highest on record. Here, we present seawater temperatures from southwestern Australia and describe, in detail, the marine climatology of Cockburn Sound, a large, multiple-use coastal embayment. We compared temperature and dissolved oxygen levels in 2011 with data from routine monitoring conducted from 2002–2010. A significant warming event, 2–4 °C in magnitude, persisted for > 8 weeks, and seawater temperatures at 10 to 20 m depth were significantly higher than those recorded in the previous 9 yr. Dissolved oxygen levels were depressed at most monitoring sites, being ~ 2 mg l−1 lower than usual in early March 2011. Ecological responses to short-term extreme events are poorly understood, but evidence from elsewhere along the Western Australian coastline suggests that the heat wave was associated with high rates of coral bleaching; fish, invertebrate and macroalgae mortalities; and algal blooms. However, there is a paucity of historical information on ecologically-sensitive habitats and taxa in Cockburn Sound, so that formal examinations of biological responses to the heat wave were not possible. The 2011 heat wave provided insights into conditions that may become more prevalent in Cockburn Sound, and elsewhere, if the intensity and frequency of short-term extreme events increases as predicted.

  8. Summary Record

    International Nuclear Information System (INIS)

    2008-01-01

    The third workshop for the OECD/NRC Benchmark based on NUPEC BWR Full-size Fine-mesh Bundle Tests (BFBT-3) was held from 26 to 27 April 2006 in Pisa, Italy. This international benchmark encourages advancement in this un-investigated field of two-phase flow theory with very important relevance to the nuclear reactors' safety margins evaluation. Considering the immaturity of the theoretical approach, the benchmark specification is being designed so that it systematically assesses and compares the participants' numerical models for the prediction of detailed void distributions and critical powers. Furthermore, the following points were kept in mind while establishing the benchmark specification: As concerns the numerical model of void distributions, no sound theoretical approach applicable to a wide range of geometrical and operating conditions has been developed. In the past decade, experimental and computational technologies have tremendously improved through the study of the two-phase flow structure. Over the next decade, it can be expected that mechanistic approaches will be more widely applied to the complicated two-phase fluid phenomena inside fuel bundles. The development of truly mechanistic models for critical power prediction is currently underway. These models must include elementary processes such as void distributions, droplet deposit, liquid film entrainment, etc. The BFBT benchmark is composed of two parts (phases), each part consisting of different exercises: - Phase I Void Distribution Benchmark: Exercise 1 (I-1): Steady-state sub-channel grade benchmark; Exercise 2 (I-2): Steady-state microscopic grade benchmark; Exercise 3 (I-3): Transient macroscopic grade benchmark; Exercise 4 (I-4): Uncertainty analysis of the steady state sub-channel benchmark. - Phase II Critical Power Benchmark: Exercise 0 (II-0): Pressure drop benchmark; Exercise 1 (II-1): Steady-state benchmark; Exercise 2 (II-2): Transient benchmark; Exercise 3 (II-3): Uncertainty Analysis

  9. Summary record

    International Nuclear Information System (INIS)

    2009-01-01

    The Sixth workshop for the OECD/NRC Benchmark based on NUPEC BWR Full-size Fine-mesh Bundle Tests (BFBT-6) was held on April 27-28, 2009 in University Park / State College, PA, USA. This international benchmark encourages advancement in the un-investigated fields of two-phase flow theory with very important relevance to the nuclear reactors' safety margins evaluation. Considering the immaturity of the theoretical approach, the benchmark specification is being designed so that it systematically assesses and compares the participants' numerical models on the prediction of detailed void distributions and critical powers. Furthermore, the following points were kept in mind while establishing the benchmark specification: As concerns the numerical model of void distributions, no sound theoretical approach that can be applied to a wide range of geometrical and operating conditions has been developed. In the past decade, experimental and computational technologies have tremendously improved through the study of the two-phase flow structure. Over the next decade, it can be expected that mechanistic approaches will be more widely applied to the complicated two-phase fluid phenomena inside fuel bundles. The development of truly mechanistic models for critical power prediction is currently underway. These models must include elementary processes such as void distributions, droplet deposit, liquid film entrainment, etc. The BFBT benchmark is made up of two parts (phases), each part consisting of different exercises: Phase I - Void Distribution Benchmark: Exercise 1 (I-1) - Steady-state sub-channel grade benchmark; Exercise 2 (I-2) - Steady-state microscopic grade benchmark; Exercise 3 (I-3) - Transient macroscopic grade benchmark; Exercise 4 (I-4) - Uncertainty analysis of the steady state sub-channel benchmark. Phase II - Critical Power Benchmark: Exercise 0 (II-0) - Pressure drop benchmark; Exercise 1 (II-1) - Steady-state benchmark; Exercise 2 (II-2) - Transient benchmark

  10. Summary Record

    International Nuclear Information System (INIS)

    2007-01-01

    The fourth workshop for the OECD/NRC Benchmark based on NUPEC BWR Full-size Fine-mesh Bundle Tests (BFBT-4) was held on 8 and 9 May 2007 at NEA Headquarters, Issy-les-Moulineaux, France. This international benchmark encourages advancement in this un-investigated field of two-phase flow theory with very important relevance to the nuclear reactors' safety margins evaluation. Considering the immaturity of the theoretical approach, the benchmark specification is being designed so that it systematically assesses and compares the participants' numerical models on the prediction of detailed void distributions and critical powers. Furthermore, the following points were kept in mind while establishing the benchmark specification: As concerns the numerical model of void distributions, no sound theoretical approach that can be applied to a wide range of geometrical and operating conditions has been developed. In the past decade, experimental and computational technologies have tremendously improved through the study of the two-phase flow structure. Over the next decade, it can be expected that mechanistic approaches will be more widely applied to the complicated two-phase fluid phenomena inside fuel bundles. The development of truly mechanistic models for critical power prediction is currently underway. These models must include elementary processes such as void distributions, droplet deposit, liquid film entrainment, etc. The BFBT benchmark is made up of two parts (phases), each part consisting of different exercises: - Phase I Void Distribution Benchmark: Exercise 1 (I-1): Steady-state sub-channel grade benchmark; Exercise 2 (I-2): Steady-state microscopic grade benchmark; Exercise 3 (I-3): Transient macroscopic grade benchmark; Exercise 4 (I-4): Uncertainty analysis of the steady state sub-channel benchmark. - Phase II Critical Power Benchmark: Exercise 0 (II-0): Pressure drop benchmark; Exercise 1 (II-1): Steady-state benchmark; Exercise 2 (II-2): Transient benchmark

  11. Summary Record

    International Nuclear Information System (INIS)

    2008-01-01

    The fifth workshop for the OECD/NRC Benchmark based on NUPEC BWR Full-size Fine-mesh Bundle Tests (BFBT-5) was held on 31 March and 1 April 2008 in Garching, Germany. This international benchmark encourages advancement in the un-investigated fields of two-phase flow theory with very important relevance to the nuclear reactors' safety margins evaluation. Considering the immaturity of the theoretical approach, the benchmark specification is being designed so that it systematically assesses and compares the participants' numerical models on the prediction of detailed void distributions and critical powers. Furthermore, the following points were kept in mind while establishing the benchmark specification: As concerns the numerical model of void distributions, no sound theoretical approach that can be applied to a wide range of geometrical and operating conditions has been developed. In the past decade, experimental and computational technologies have tremendously improved through the study of the two-phase flow structure. Over the next decade, it can be expected that mechanistic approaches will be more widely applied to the complicated two-phase fluid phenomena inside fuel bundles. The development of truly mechanistic models for critical power prediction is currently underway. These models must include elementary processes such as void distributions, droplet deposit, liquid film entrainment, etc. The BFBT benchmark is made up of two parts (phases), each part consisting of different exercises: - Phase I - Void Distribution Benchmark: Exercise 1 (I-1) - Steady-state sub-channel grade benchmark; Exercise 2 (I-2) - Steady-state microscopic grade benchmark; Exercise 3 (I-3) - Transient macroscopic grade benchmark; Exercise 4 (I-4) - Uncertainty analysis of the steady state sub-channel benchmark. - Phase II - Critical Power Benchmark: Exercise 0 (II-0) - Pressure drop benchmark; Exercise 1 (II-1) - Steady-state benchmark; Exercise 2 (II-2) - Transient benchmark; Exercise 3

  12. RECORD CLUB

    CERN Multimedia

    Record Club

    2010-01-01

    DVD James Bond – Series Complete To all Record Club Members, to start the new year, we have taken advantage of a special offer to add copies of all the James Bond movies to date, from the very first - Dr. No - to the latest - Quantum of Solace. No matter which of the successive 007s you prefer (Sean Connery, George Lazenby, Roger Moore, Timothy Dalton, Pierce Brosnan or Daniel Craig), they are all there. Or perhaps you have a favourite Bond Girl, or even perhaps a favourite villain. Take your pick. You can find the full selection listed on the club web site http://cern.ch/crc; use the panel on the left of the page “Discs of the Month” and select Jan 2010. We remind you that we are open on Mondays, Wednesdays and Fridays from 12:30 to 13:00 in Restaurant 2 (Bldg 504).

  13. Record dynamics

    DEFF Research Database (Denmark)

    Robe, Dominic M.; Boettcher, Stefan; Sibani, Paolo

    2016-01-01

    When quenched rapidly beyond their glass transition, colloidal suspensions fall out of equilibrium. The pace of their dynamics then slows down with the system age, i.e., with the time elapsed after the quench. This breaking of time translational invariance is associated with dynamical observables...... which depend on two time-arguments. The phenomenology is shared by a broad class of aging systems and calls for an equally broad theoretical description. The key idea is that, independent of microscopic details, aging systems progress through rare intermittent structural relaxations that are de......-facto irreversible and become increasingly harder to achieve. Thus, a progression of record-sized dynamical barriers are traversed in the approach to equilibration. Accordingly, the statistics of the events is closely described by a log-Poisson process. Originally developed for relaxation in spin glasses...

  14. Record breakers

    CERN Multimedia

    Antonella Del Rosso

    2012-01-01

    In the sixties, CERN’s Fellows were but a handful of about 50 young experimentalists present on site to complete their training. Today, their number has increased to a record-breaking 500. They come from many different fields and are spread across CERN’s different activity areas.   “Diversifying the Fellowship programme has been the key theme in recent years,” comments James Purvis, Head of the Recruitment, Programmes and Monitoring group in the HR Department. “In particular, the 2005 five-yearly review introduced the notion of ‘senior’ and ‘junior’ Fellowships, broadening the target audience to include those with Bachelor-level qualifications.” Diversification made CERN’s Fellowship programme attractive to a wider audience but the number of Fellows on site could not have increased so much without the support of EU-funded projects, which were instrumental in the growth of the programme. ...

  15. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, accuracy, as well as the F1 measure were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. Average F1 scores of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for
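
    The record reports sensitivity, positive predictive value, and F1 within a tolerance window. The sketch below is a minimal, simplified illustration of that kind of tolerance-window scoring for detected event times against reference annotations; the greedy matching rule and the 60 ms tolerance are assumptions, not the challenge's official scoring code.

```python
def score_segmentation(detected, reference, tol=0.06):
    """Match detected event times (s) to reference times within +/- tol seconds and
    report sensitivity, positive predictive value (PPV) and the F1 measure."""
    ref_used = [False] * len(reference)
    tp = 0
    for d in detected:
        for i, r in enumerate(reference):
            if not ref_used[i] and abs(d - r) <= tol:
                ref_used[i] = True
                tp += 1
                break
    fp = len(detected) - tp
    fn = len(reference) - tp
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = 2 * sens * ppv / (sens + ppv) if (sens + ppv) else 0.0
    return sens, ppv, f1

# Illustrative call: widening tol generally raises F1, as the abstract reports.
print(score_segmentation([0.11, 0.52, 0.95], [0.10, 0.50, 1.00, 1.45], tol=0.06))
```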

  16. Say what? Coral reef sounds as indicators of community assemblages and reef conditions

    Science.gov (United States)

    Mooney, T. A.; Kaplan, M. B.

    2016-02-01

    Coral reefs host some of the highest diversity of life on the planet. Unfortunately, reef health and biodiversity are declining or threatened as a result of climate change and human influences. Tracking these changes is necessary for effective resource management, yet estimating marine biodiversity and tracking trends in ecosystem health is a challenging and expensive task, especially in many pristine reefs which are remote and difficult to access. Many fishes, mammals and invertebrates make sound. These sounds are reflective of a number of vital biological processes and are a cue for settling reef larvae. Biological sounds may be a means to quantify ecosystem health and biodiversity, however the relationship between coral reef soundscapes and the actual taxa present remains largely unknown. This study presents a comparative evaluation of the soundscape of multiple reefs, naturally differing in benthic cover and fish diversity, in the U.S. Virgin Islands National Park. Using multiple recorders per reef we characterized spatio-temporal variation in biological sound production within and among reefs. Analyses of sounds recorded over 4 summer months indicated diel trends in both fish and snapping shrimp acoustic frequency bands with crepuscular peaks at all reefs. There were small but statistically significant acoustic differences among sites on a given reef raising the possibility of potentially localized acoustic habitats. The strength of diel trends in lower, fish-frequency bands was correlated with coral cover and fish density, yet no such relationship was found with shrimp sounds, suggesting that fish sounds may be of higher relevance to tracking certain coral reef conditions. These findings indicate that, in spite of considerable variability within reef soundscapes, diel trends in low-frequency sound production reflect reef community assemblages. Further, monitoring soundscapes may be an efficient means of establishing and monitoring reef conditions.

  17. Variability of road traffic noise recorded by stationary monitoring stations

    Science.gov (United States)

    Bąkowski, Andrzej; Radziszewski, Leszek

    2017-11-01

    The paper presents the results of an analysis of the equivalent sound level recorded by two road traffic noise monitoring stations. The stations were located in Kielce (an example of a medium-size town in Poland) at the roads out of the town in the direction of Kraków and Warszawa. The measurements were carried out by stationary stations monitoring the noise and traffic of motor vehicles. The RMS values based on the A-weighted sound level were recorded every 1 s in the buffer and the results were registered every 1 min over the period of investigation. The registered data were the basis for calculating the equivalent sound level for three time intervals: from 6:00 to 18:00, from 18:00 to 22:00 and from 22:00 to 6:00. The analysis included the values of the equivalent sound level recorded for different days of the week, split into 24-hour periods, nights, days and evenings. The data analysed included recordings from 2013. The coefficient of variation and positional variation were proposed for the comparative analysis of the scatter in the obtained data. The investigations indicated that the recorded data varied depending on the traffic routes. The differences concerned the values of the coefficients of variation of the equivalent sound levels.
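
    As a minimal sketch of the described processing, assuming the station logs one A-weighted level per second, the equivalent level Leq for the three daily intervals named in the abstract and a coefficient of variation per interval could be computed as below. The function names and the use of the ordinary (non-positional) coefficient of variation are illustrative assumptions.

```python
import numpy as np

def leq(levels_db):
    """Equivalent continuous sound level of a series of short-term LAeq values (dB)."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

def interval_stats(seconds_of_day, levels_db):
    """Split 1-s A-weighted levels into day/evening/night intervals and report Leq and CV."""
    t = np.asarray(seconds_of_day) % 86400
    L = np.asarray(levels_db, dtype=float)
    bands = {"day (6:00-18:00)":     (t >= 6 * 3600) & (t < 18 * 3600),
             "evening (18:00-22:00)": (t >= 18 * 3600) & (t < 22 * 3600)}
    bands["night (22:00-6:00)"] = ~(bands["day (6:00-18:00)"] | bands["evening (18:00-22:00)"])
    out = {}
    for name, mask in bands.items():
        x = L[mask]
        out[name] = {"Leq_dB": leq(x), "CV": np.std(x) / np.mean(x)}
    return out
```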

  18. Characterizing large river sounds: Providing context for understanding the environmental effects of noise produced by hydrokinetic turbines.

    Science.gov (United States)

    Bevelhimer, Mark S; Deng, Z Daniel; Scherelis, Constantin

    2016-01-01

    Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step toward understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with a 60 hp outboard motor to an 18-unit barge train being pushed upstream by a tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances, using both spherical and cylindrical sound attenuation functions, suggests that the spherical model results more closely approximate the observed sound attenuation.
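
    The spherical versus cylindrical comparison rests on the standard geometric spreading-loss expressions (20·log10 and 10·log10 of the distance ratio, respectively). The sketch below applies both to an illustrative source level; the numbers are examples, not the study's data.

```python
import math

def level_at_distance(source_level_db, r, r_ref=1.0, model="spherical"):
    """Received level after geometric spreading from the reference distance r_ref to r (metres)."""
    factor = 20.0 if model == "spherical" else 10.0  # cylindrical spreading
    return source_level_db - factor * math.log10(r / r_ref)

# Same 120 dB source level (re 1 m), received at 100 m under each model:
print(level_at_distance(120, 100, model="spherical"))    # 80 dB
print(level_at_distance(120, 100, model="cylindrical"))  # 100 dB
```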

  19. Time measurements with a mobile device using sound

    Science.gov (United States)

    Wisman, Raymond F.; Spahn, Gabriel; Forinash, Kyle

    2018-05-01

    Data collection is a fundamental skill in science education, one that students generally practice in a controlled setting using equipment only available in the classroom laboratory. However, using smartphones with their built-in sensors and often free apps, many fundamental experiments can be performed outside the laboratory. Taking advantage of these tools often requires creative approaches to data collection and the exploration of alternative strategies for experimental procedures. As examples, we present several experiments using smartphones and apps that record and analyze sound to measure a variety of physical properties.

  20. Verifying generalized soundness for workflow nets

    NARCIS (Netherlands)

    Hee, van K.M.; Oanea, O.I.; Sidorova, N.; Voorhoeve, M.; Virbitskaite, I.; Voronkov, A.

    2007-01-01

    We improve the decision procedure from [10] for the problem of generalized soundness of workflow nets. A workflow net is generalized sound iff every marking reachable from an initial marking with k tokens on the initial place terminates properly, i.e. it can reach a marking with k tokens on the

  1. Directional sound radiation from substation transformers

    International Nuclear Information System (INIS)

    Maybee, N.

    2009-01-01

    This paper presented the results of a study in which acoustical measurements at two substations were analyzed to investigate the directional behaviour of typical arrays having 2 or 3 transformers. Substation transformers produce a characteristic humming sound that is caused primarily by vibration of the core at twice the frequency of the power supply. The humming noise radiates predominantly from the tank enclosing the core. The main components of the sound are harmonics of 120 Hz. Sound pressure level data were obtained for various directions and distances from the arrays, ranging from 0.5 m to over 100 m. The measured sound pressure levels of the transformer tones displayed substantial positive and negative excursions from the calculated average values for many distances and directions. The results support the concept that the directional effects are associated with constructive and destructive interference of tonal sound waves emanating from different parts of the array. Significant variations in the directional sound pattern can occur in the near field of a single transformer or an array, and the extent of the near field is significantly larger than the scale of the array. Based on typical dimensions for substation sites, the distance to the far field may be much beyond the substation boundary and beyond typical setbacks to the closest dwellings. As such, the directional sound radiation produced by transformer arrays introduces additional uncertainty in the prediction of substation sound levels at dwellings within a few hundred meters of a substation site. 4 refs., 4 figs.

  2. 7 CFR 29.2550 - Sound.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sound. 29.2550 Section 29.2550 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing...-Cured Tobacco (u.s. Types 22, 23, and Foreign Type 96) § 29.2550 Sound. Free of damage. [37 FR 13626...

  3. 7 CFR 29.3546 - Sound.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sound. 29.3546 Section 29.3546 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Type 95) § 29.3546 Sound. Free of damage. [30 FR 9207, July 23, 1965. Redesignated at 49 FR 16759, Apr...

  4. 7 CFR 29.1058 - Sound.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sound. 29.1058 Section 29.1058 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Type 92) § 29.1058 Sound. Free of damage. [42 FR 21092, Apr. 25, 1977. Redesignated at 47 FR 51721, Nov...

  5. 7 CFR 29.3056 - Sound.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sound. 29.3056 Section 29.3056 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Sound. Free of damage. [24 FR 8771, Oct. 29, 1959. Redesignated at 47 FR 51722, Nov. 17, 1982, and at 49...

  6. Environmental Sound Training in Cochlear Implant Users

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Kuvadia, Sejal; Gygi, Brian

    2015-01-01

    Purpose: The study investigated the effect of a short computer-based environmental sound training regimen on the perception of environmental sounds and speech in experienced cochlear implant (CI) patients. Method: Fourteen CI patients with the average of 5 years of CI experience participated. The protocol consisted of 2 pretests, 1 week apart,…

  7. 7 CFR 29.6036 - Sound.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sound. 29.6036 Section 29.6036 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Definitions § 29.6036 Sound. Free of damage. (See Rule 4.) ...

  8. 7 CFR 29.2298 - Sound.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sound. 29.2298 Section 29.2298 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Official Standard Grades for Virginia Fire-Cured Tobacco (u.s. Type 21) § 29.2298 Sound...

  9. 33 CFR 117.309 - Nassau Sound.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Nassau Sound. 117.309 Section 117.309 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.309 Nassau Sound. The draw of the Fernandina Port...

  10. Scorescapes : on sound, environment and sonic consciousness

    NARCIS (Netherlands)

    Harris, Yolande

    2011-01-01

    This dissertation explores sound, its image and its role in relating humans and our technologies to the environment. It investigates two related questions: How does sound mediate our relationship to environment? And how can contemporary multidisciplinary art practices articulate and explore this

  11. The Impact of Sound Structure on Morphology

    DEFF Research Database (Denmark)

    Laaha, Sabine; Kjærbæk, Laila; Basbøll, Hans

    2011-01-01

    This study examines the impact of sound structure on children’s acquisition of noun plural morphology, focussing on stem change. For this purpose, a three-level classification of stem change properties according to sound structure is presented, with increasing opacity of the plural stem: no change...

  12. Detecting change in stochastic sound sequences.

    Directory of Open Access Journals (Sweden)

    Benjamin Skerritt-Davis

    2018-05-01

    Full Text Available Our ability to parse our acoustic environment relies on the brain's capacity to extract statistical regularities from surrounding sounds. Previous work in regularity extraction has predominantly focused on the brain's sensitivity to predictable patterns in sound sequences. However, natural sound environments are rarely completely predictable, often containing some level of randomness, yet the brain is able to effectively interpret its surroundings by extracting useful information from stochastic sounds. It has been previously shown that the brain is sensitive to the marginal lower-order statistics of sound sequences (i.e., mean and variance. In this work, we investigate the brain's sensitivity to higher-order statistics describing temporal dependencies between sound events through a series of change detection experiments, where listeners are asked to detect changes in randomness in the pitch of tone sequences. Behavioral data indicate listeners collect statistical estimates to process incoming sounds, and a perceptual model based on Bayesian inference shows a capacity in the brain to track higher-order statistics. Further analysis of individual subjects' behavior indicates an important role of perceptual constraints in listeners' ability to track these sensory statistics with high fidelity. In addition, the inference model facilitates analysis of neural electroencephalography (EEG responses, anchoring the analysis relative to the statistics of each stochastic stimulus. This reveals both a deviance response and a change-related disruption in phase of the stimulus-locked response that follow the higher-order statistics. These results shed light on the brain's ability to process stochastic sound sequences.
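
    As a toy illustration of sequential statistical tracking only (a much simpler model than the Bayesian inference model described in the record, and tracking only lower-order statistics), the sketch below keeps a running Gaussian estimate of a pitch sequence and flags tones whose surprisal under that estimate jumps, which is one generic way a change point in tone statistics can be marked. The forgetting factor and the example sequence are assumptions.

```python
import numpy as np

def surprisal_trace(pitches, alpha=0.1, eps=1e-6):
    """Track a running Gaussian estimate of the pitch sequence and return the
    surprisal (-log predictive density) of each new tone under that estimate."""
    pitches = np.asarray(pitches, dtype=float)
    mu, var = pitches[0], 1.0
    out = []
    for x in pitches[1:]:
        s = 0.5 * np.log(2 * np.pi * (var + eps)) + (x - mu) ** 2 / (2 * (var + eps))
        out.append(s)
        mu = (1 - alpha) * mu + alpha * x                 # update running mean
        var = (1 - alpha) * var + alpha * (x - mu) ** 2   # update running variance
    return np.array(out)

# A jump in surprisal marks a candidate change point in the tone statistics.
seq = np.concatenate([np.random.normal(60, 1, 50), np.random.normal(60, 6, 50)])
change_idx = int(np.argmax(surprisal_trace(seq)))
```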

  13. Sound Levels in East Texas Schools.

    Science.gov (United States)

    Turner, Aaron Lynn

    A survey of sound levels was taken in several Texas schools to determine the amount of noise and sound present by size of class, type of activity, location of building, and the presence of air conditioning and large amounts of glass. The data indicate that class size and relative amounts of glass have no significant bearing on the production of…

  14. Sound-symbolism boosts novel word learning

    NARCIS (Netherlands)

    Lockwood, G.F.; Dingemanse, M.; Hagoort, P.

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally

  15. Suppressive competition: how sounds may cheat sight.

    Science.gov (United States)

    Kayser, Christoph; Remedios, Ryan

    2012-02-23

    In this issue of Neuron, Iurilli et al. (2012) demonstrate that auditory cortex activation directly engages local GABAergic circuits in V1 to induce sound-driven hyperpolarizations in layer 2/3 and layer 6 pyramidal neurons. Thereby, sounds can directly suppress V1 activity and visual driven behavior. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. ISEE : An Intuitive Sound Editing Environment

    NARCIS (Netherlands)

    Vertegaal, R.P.H.; Bonis, E.

    1994-01-01

    This article presents ISEE, an intuitive sound editing environment, as a general sound synthesis model based on expert auditory perception and cognition of musical instruments. It discusses the backgrounds of current synthesizer user interface design and related timbre space research. Of the three

  17. Digital servo control of random sound fields

    Science.gov (United States)

    Nakich, R. B.

    1973-01-01

    It is necessary to place a number of sensors at different positions in the sound field to determine the actual sound intensities to which the test object is subjected. It is then possible to determine whether the specification is being met adequately or exceeded. Since the excitation is of a random nature, the signals are essentially coherent and it is impossible to obtain a true average.

  18. Wide-Screen Cinema and Stereophonic Sound.

    Science.gov (United States)

    Wysotsky, Michael Z.

    Developments in the techniques of wide screen cinema and stereophonic sound throughout the world are detailed in this book. Particular attention is paid to progress in the Soviet Union in these fields. Special emphasis is placed on the Soviet view of stereophonic sound as a vital adjunct in the search for enhanced realism as opposed to the…

  19. Sound insulation requirements in the Nordic countries

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    All Nordic countries have sound insulation requirements for housing and sound classification schemes originating from a common INSTA‐proposal in the mid 90’s, but unfortunately being increasingly diversified since then. The present situation impedes development and create barriers for trade and e...

  20. Is 1/f sound more effective than simple resting in reducing stress response?

    Science.gov (United States)

    Oh, Eun-Joo; Cho, Il-Young; Park, Soon-Kwon

    2014-01-01

    It has been previously demonstrated that listening to 1/f sound effectively reduces stress. However, these findings have been inconsistent and further study on the relationship between 1/f sound and the stress response is consequently necessary. The present study examined whether sound with 1/f properties (1/f sound) affects stress-induced electroencephalogram (EEG) changes. Twenty-six subjects who voluntarily participated in the study were randomly assigned to the experimental or control group. Data from four participants were excluded because of EEG artifacts. A mental arithmetic task was used as a stressor. Participants in the experiment group listened to 1/f sound for 5 minutes and 33 seconds, while participants in the control group sat quietly for the same duration. EEG recordings were obtained at various points throughout the experiment. After the experiment, participants completed a questionnaire on the affective impact of the 1/f sound. The results indicated that the mental arithmetic task effectively induced a stress response measurable by EEG. Relative theta power at all electrode sites was significantly lower than baseline in both the control and experimental group. Relative alpha power was significantly lower, and relative beta power was significantly higher in the T3 and T4 areas. Secondly, 1/f sound and simple resting affected task-associated EEG changes in a similar manner. Finally, participants reported in the questionnaire that they experienced a positive feeling in response to the 1/f sound. Our results suggest that a commercialized 1/f sound product is not more effective than simple resting in alleviating the physiological stress response.

  1. An alternative respiratory sounds classification system utilizing artificial neural networks

    Directory of Open Access Journals (Sweden)

    Rami J Oweis

    2015-04-01

    Full Text Available Background: Computerized lung sound analysis involves recording lung sound via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. Methods: This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFIS) toolboxes. The methods have been applied to 10 different respiratory sounds for classification. Results: The ANN was superior to the ANFIS system and returned superior performance parameters. Its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters showed superiority to many recent approaches. Conclusions: The promising proposed method is an efficient, fast tool for the intended purpose as manifested in the performance parameters, specifically, accuracy, specificity, and sensitivity. Furthermore, it may be added that utilizing the autocorrelation function in the feature extraction in such applications results in enhanced performance and avoids undesired computation complexities compared to other techniques.
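
    A minimal sketch of the kind of pipeline the record outlines is shown below: autocorrelation-based features fed to a small feed-forward network. The number of lags, the network size, and the use of scikit-learn's MLPClassifier (as a stand-in for the authors' MATLAB ANN toolbox) are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # assumed stand-in for the MATLAB ANN toolbox

def autocorr_features(signal, n_lags=20):
    """Normalised autocorrelation at the first n_lags lags, used as the feature vector."""
    x = np.asarray(signal, dtype=float)
    x = x - np.mean(x)
    full = np.correlate(x, x, mode="full")[len(x) - 1:]
    return full[1:n_lags + 1] / (full[0] + 1e-12)

# train_sounds: list of recorded respiratory sound arrays, train_labels: class labels (hypothetical data).
# features = np.array([autocorr_features(s) for s in train_sounds])
# clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(features, train_labels)
```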

  2. Operating room sound level hazards for patients and physicians.

    Science.gov (United States)

    Fritsch, Michael H; Chacko, Chris E; Patterson, Emily B

    2010-07-01

    Exposure to certain new surgical instruments and operating room devices during procedures could cause hearing damage to patients and personnel. Surgical instruments and related equipment generate significant sound levels during routine usage. Both patients and physicians are exposed to these levels during operative cases, many of which can last for hours. The noise loads during cases are cumulative. Occupational Safety and Health Administration (OSHA) and National Institute for Occupational Safety and Health (NIOSH) standards are inconsistent in their appraisals of potential damage. Implications of the newer power instruments are not widely recognized. Bruel and Kjaer sound meter spectral recordings for 20 major instruments from 5 surgical specialties were obtained at ear level for the patient and the surgeon between 32 Hz and 20 kHz. Routinely used instruments generated sound levels as high as 131 dB. Patient and operator exposures differed. There were unilateral dominant exposures. Many instruments had levels that became hazardous well within the length of an average surgical procedure. The OSHA and NIOSH systems gave contradictory results when applied to individual instruments and types of cases. Background noise, especially in its intermittent form, was also significant. Some patients and personnel have additional predisposing physiologic factors. Instrument noise levels for average-length surgical cases may exceed OSHA and NIOSH recommendations for hearing safety. Specialties such as Otolaryngology, Orthopedics, and Neurosurgery use instruments that regularly exceed limits. General operating room noise also contributes to overall personnel exposures. Innovative countermeasures are suggested.
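
    The disagreement between the OSHA and NIOSH appraisals follows directly from their different criterion levels and exchange rates (OSHA: 90 dBA criterion with a 5 dB exchange rate; NIOSH: 85 dBA with a 3 dB exchange rate). The sketch below evaluates both permissible-duration formulas; the 100 dBA instrument level and 2-hour duration are illustrative, not the study's measurements.

```python
def permissible_hours(level_dba, criterion, exchange_rate):
    """Allowed daily exposure (hours) at a steady level before 100% noise dose is reached."""
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))

def daily_dose_percent(level_dba, hours, criterion, exchange_rate):
    """Percentage of the allowed daily noise dose accumulated by the exposure."""
    return 100.0 * hours / permissible_hours(level_dba, criterion, exchange_rate)

# A hypothetical 100 dBA instrument used for 2 hours of a procedure:
print(daily_dose_percent(100, 2, criterion=90, exchange_rate=5))  # OSHA  -> 100%
print(daily_dose_percent(100, 2, criterion=85, exchange_rate=3))  # NIOSH -> 800%
```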

  3. A Fast Algorithm of Cartographic Sounding Selection

    Institute of Scientific and Technical Information of China (English)

    SUI Haigang; HUA Li; ZHAO Haitao; ZHANG Yongli

    2005-01-01

    An effective strategy and framework that adequately integrate the automated and manual processes for fast cartographic sounding selection are presented. The important submarine topographic features are extracted for the selection of important soundings, and an improved "influence circle" algorithm is introduced for sounding selection. For the automatic configuration of the soundings distribution pattern, a special algorithm considering multiple factors is employed. A semi-automatic method for resolving ambiguous conflicts is described. On the basis of these algorithms and strategies, a system named HGIS for fast cartographic sounding selection was developed and applied at the Chinese Marine Safety Administration Bureau (CMSAB). The application experiments show that the system is effective and reliable. Finally, some conclusions and directions for future work are given.

  4. Physiological phenotyping of dementias using emotional sounds.

    Science.gov (United States)

    Fletcher, Phillip D; Nicholas, Jennifer M; Shakespeare, Timothy J; Downey, Laura E; Golden, Hannah L; Agustus, Jennifer L; Clark, Camilla N; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-06-01

    Emotional behavioral disturbances are hallmarks of many dementias but their pathophysiology is poorly understood. Here we addressed this issue using the paradigm of emotionally salient sounds. Pupil responses and affective valence ratings for nonverbal sounds of varying emotional salience were assessed in patients with behavioral variant frontotemporal dementia (bvFTD) (n = 14), semantic dementia (SD) (n = 10), progressive nonfluent aphasia (PNFA) (n = 12), and AD (n = 10) versus healthy age-matched individuals (n = 26). Referenced to healthy individuals, overall autonomic reactivity to sound was normal in Alzheimer's disease (AD) but reduced in other syndromes. Patients with bvFTD, SD, and AD showed altered coupling between pupillary and affective behavioral responses to emotionally salient sounds. Emotional sounds are a useful model system for analyzing how dementias affect the processing of salient environmental signals, with implications for defining pathophysiological mechanisms and novel biomarker development.

  5. Diffuse sound field: challenges and misconceptions

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho

    2016-01-01

    Diffuse sound field is a popular, yet widely misused concept. Although its definition is relatively well established, acousticians use this term for different meanings. The diffuse sound field is defined by a uniform sound pressure distribution (spatial diffusion or homogeneity) and uniform...... tremendously in different chambers because the chambers are non-diffuse in variously different ways. Therefore, good objective measures that can quantify the degree of diffusion and potentially indicate how to fix such problems in reverberation chambers are needed. Acousticians often blend the concept...... of mixing and diffuse sound field. Acousticians often refer diffuse reflections from surfaces to diffuseness in rooms, and vice versa. Subjective aspects of diffuseness have not been much investigated. Finally, ways to realize a diffuse sound field in a finite space are discussed....

  6. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    Science.gov (United States)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.

  7. Measurement and classification of heart and lung sounds by using LabView for educational use.

    Science.gov (United States)

    Altrabsheh, B

    2010-01-01

    This study presents the design, development and implementation of a simple low-cost method of phonocardiography signal detection. Human heart and lung signals are detected by using a simple microphone through a personal computer; the signals are recorded and analysed using LabView software. Amplitude and frequency analyses are carried out for various phonocardiography pathological cases. Methods for automatic classification of normal and abnormal heart sounds, murmurs and lung sounds are presented. Various cases of heart and lung sound measurement are recorded and analysed. The measurements can be saved for further analysis. The method in this study can be used by doctors as a detection tool aid and may be useful for teaching purposes at medical and nursing schools.

  8. Understanding the Doppler effect by analysing spectrograms of the sound of a passing vehicle

    Science.gov (United States)

    Lubyako, Dmitry; Martinez-Piedra, Gordon; Ushenin, Arthur; Denvir, Patrick; Dunlop, John; Hall, Alex; Le Roux, Gus; van Someren, Laurence; Weinberger, Harvey

    2017-11-01

    The purpose of this paper is to demonstrate how the Doppler effect can be analysed to deduce information about a moving source of sound waves. Specifically, we find the speed of a car and the distance of its closest approach to an observer using sound recordings from smartphones. A key focus of this paper is how this can be achieved in a classroom, both theoretically and experimentally, to deepen students’ understanding of the Doppler effect. Included are our own experimental data (48 sound recordings) to allow others to reproduce the analysis, if they cannot repeat the whole experiment themselves. In addition to its educational purpose, this paper examines the percentage errors in our results. This enabled us to determine sources of error, allowing those conducting similar future investigations to optimize their accuracy.
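
    A minimal sketch of the analysis described in the record, under the usual moving-source Doppler model, is given below: the engine-tone frequency track f(t) read off a spectrogram is fitted to recover the car's speed v and closest-approach distance d. The source frequency f0, the assumed speed of sound, and the use of scipy's curve_fit are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

C = 343.0  # assumed speed of sound in air (m/s)

def doppler_track(t, f0, v, d, t0):
    """Observed frequency of a tone f0 from a source moving at speed v along a straight
    line with closest-approach distance d, passing the observer at time t0."""
    radial = v ** 2 * (t - t0) / np.sqrt(d ** 2 + (v * (t - t0)) ** 2)  # radial velocity (receding > 0)
    return f0 * C / (C + radial)

# times, freqs: frequency track extracted from the spectrogram (hypothetical data).
# (f0, v, d, t0), _ = curve_fit(doppler_track, times, freqs, p0=[400, 15, 5, times.mean()])
```

    The fitted v gives the vehicle speed and d the closest-approach distance; the percentage errors discussed in the abstract would then come from comparing these fitted values with independently measured ones.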

  9. Summary record

    International Nuclear Information System (INIS)

    2007-01-01

    The second workshop for the OECD/NRC Benchmark based on NUPEC BWR Full-size Fine-mesh Bundle Tests (BFBT-2) was held on 27-29 June 2005 in University Park, PA, USA. This international benchmark, based on the NUPEC database, encourages advancement in this un-investigated field of two-phase flow theory with very important relevance to the nuclear reactors' safety margins evaluation. Considering the immaturity of the theoretical approach, the benchmark specification is being designed so that it systematically assesses and compares the participants' numerical models on the prediction of detailed void distributions and critical powers. Furthermore, the following points were kept in mind when establishing the benchmark specification. As concerns the numerical model of void distributions, no sound theoretical approach that can be applied to a wide range of geometrical and operating conditions has been developed. In the past decade, experimental and computational technologies have tremendously improved through the study of the two-phase flow structure. Over the next decade, it can be expected that mechanistic approaches will be more widely applied to the complicated two-phase fluid phenomena inside fuel bundles. The development of truly mechanistic models for critical power prediction is currently underway. These models must include elementary processes such as void distributions, droplet deposit, liquid film entrainment, etc. The BFBT benchmark consists of two parts (phases), each part consisting of different exercises: - Phase 1 Void Distribution Benchmark: Exercise 1: Steady-state sub-channel grade benchmark; Exercise 2: Steady-state microscopic grade benchmark; Exercise 3: Transient macroscopic grade benchmark. - Phase 2 Critical Power Benchmark: Exercise 1: Steady-state benchmark; Exercise 2: Transient benchmark. It should be recognized that the purpose of this benchmark is not only the comparison of currently available macroscopic approaches but above-all the

  10. Record Club

    CERN Document Server

    Record Club

    2012-01-01

      March  Selections By the time this appears, we will have added a number of new CDs and DVDs into the Club. You will find the full lists at http://cern.ch/record.club; select the "Discs of the Month" button on the left panel of the web page and then Mar 2012. New films include recent releases such as Johnny English 2, Bad Teacher, Cowboys vs Aliens, and Super 8. We are also starting to acquire some of the classic films we missed when we initiated the DVD section of the club, such as appeared in a recent Best 100 Films published by a leading UK magazine; this month we have added Spielberg’s Jaws and Scorsese’s Goodfellas. If you have your own ideas on what we are missing, let us know. For children we have no less than 8 Tin-Tin DVDs. And if you like fast moving pop music, try the Beyonce concert DVD. New CDs include the latest releases from Paul McCartney, Rihanna and Amy Winehouse. There is a best of Mylene Farmer, a compilation from the NRJ 201...

  11. 33 CFR 334.410 - Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Albemarle Sound, Pamlico Sound... AND RESTRICTED AREA REGULATIONS § 334.410 Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations. (a) Target areas—(1) North Landing River (Currituck Sound...

  12. 77 FR 37318 - Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort...

    Science.gov (United States)

    2012-06-21

    ...-AA00 Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort... Coast Guard will enforce a Safety Zone for the Sound of Independence event in the Santa Rosa Sound, Fort... during the Sound of Independence. During the enforcement period, entry into, transiting or anchoring in...

  13. The sound and the fury

    OpenAIRE

    Mynett, Mark

    2010-01-01

    AS THE COST OF DIGITAL audio workstations (DAWs) and recording equipment has come down over the years, it’s become possible for musicians at all levels of income to produce their own songs. Unfortunately, this hasn’t guaranteed that everyone’s projects will meet with excellent results. Money still matters when it comes to hardware, software and the recording environment, as do the expertise and talent of the performers and producers.

  14. Aspirating and Nonaspirating Swallow Sounds in Children: A Pilot Study.

    Science.gov (United States)

    Frakking, Thuy; Chang, Anne; O'Grady, Kerry; David, Michael; Weir, Kelly

    2016-12-01

    Cervical auscultation (CA) may be used to complement feeding/swallowing evaluations when assessing for aspiration. There are no published pediatric studies that compare the properties of sounds between aspirating and nonaspirating swallows. The aims were to establish acoustic and perceptual profiles of aspirating and nonaspirating swallow sounds and to determine whether a difference exists between these 2 swallowing types. Aspiration sound clips were obtained from recordings using CA simultaneously undertaken with videofluoroscopic swallow study. Aspiration was determined using the Penetration-Aspiration Scale. The presence of perceptual swallow/breath parameters was rated by 2 speech pathologists who were blinded to the type of swallow. Acoustic data between groups were compared using Mann Whitney U-tests, while perceptual differences were determined by a test of 2 proportions. Combinations of perceptual parameters of 50 swallows (27 aspiration, 23 no aspiration) from 47 children (57% male) were statistically analyzed using the area under the receiver operating characteristic curve (aROC), sensitivity, specificity, and positive and negative predictive values to determine predictors of aspirating swallows. The combination of post-swallow presence of wet breathing and wheeze and absence of GRS and normal breathing was the best predictor of aspiration (aROC = 0.82, 95% CI, 0.70-0.94). There were no significant differences between these 2 swallow types for peak frequency, duration, and peak amplitude. Our pilot study has shown that certain characteristics of swallow sounds obtained using CA may be useful in the prediction of aspiration. However, further research comparing the acoustic swallowing sound profiles of normal children to children with dysphagia (who are aspirating) on a larger scale is required. © The Author(s) 2016.
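
    The summary statistics reported above (aROC, sensitivity, specificity and predictive values) can be reproduced for any candidate predictor with standard tools. A small illustrative sketch, using invented labels and a made-up composite score of perceptual parameters rather than the study's data:

        import numpy as np
        from sklearn.metrics import confusion_matrix, roc_auc_score

        # Hypothetical data: 1 = aspirating swallow, 0 = non-aspirating swallow, and a
        # composite score counting post-swallow perceptual parameters (e.g. wet breathing,
        # wheeze present, normal breathing absent).
        y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
        score  = np.array([3, 2, 3, 1, 0, 2, 1, 0, 3, 1])

        auc = roc_auc_score(y_true, score)            # area under the ROC curve
        y_pred = (score >= 2).astype(int)             # example decision threshold
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

        print("aROC = %.2f" % auc)
        print("Sensitivity = %.2f, Specificity = %.2f" % (tp / (tp + fn), tn / (tn + fp)))
        print("PPV = %.2f, NPV = %.2f" % (tp / (tp + fp), tn / (tn + fn)))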

  15. Music and Sound in Time Processing of Children with ADHD.

    Science.gov (United States)

    Carrer, Luiz Rogério Jorgensen

    2015-01-01

    ADHD involves cognitive and behavioral aspects, with impairments in many areas of children's and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions, can be of great help for studying time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from typically developing children in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants aged 6-14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups of 12 children each. Data were collected through a musical keyboard using Logic Audio Software 9.0 on a computer that recorded the participant's performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. (1) The performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), the ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group, the perceived duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks suggests that music can, in some way, positively modulate the symptoms of inattention in ADHD.

  16. Lung sound analysis helps localize airway inflammation in patients with bronchial asthma

    Directory of Open Access Journals (Sweden)

    Shimoda T

    2017-03-01

    Full Text Available Terufumi Shimoda,1 Yasushi Obase,2 Yukio Nagasaka,3 Hiroshi Nakano,1 Akiko Ishimatsu,1 Reiko Kishikawa,1 Tomoaki Iwanaga1 1Clinical Research Center, Fukuoka National Hospital, Fukuoka, 2Second Department of Internal Medicine, School of Medicine, Nagasaki University, Nagasaki, 3Kyoto Respiratory Center, Otowa Hospital, Kyoto, Japan Purpose: Airway inflammation can be detected by lung sound analysis (LSA) at a single point in the posterior lower lung field. We performed LSA at 7 points to examine whether the technique could identify the location of airway inflammation in patients with asthma. Patients and methods: Breath sounds were recorded at 7 points on the body surface of 22 asthmatic subjects. Inspiration sound pressure level (ISPL), expiration sound pressure level (ESPL), and the expiration-to-inspiration sound pressure ratio (E/I) were calculated in 6 frequency bands. The data were analyzed for potential correlation with spirometry, airway hyperresponsiveness (PC20), and fractional exhaled nitric oxide (FeNO). Results: The E/I data in the frequency range of 100–400 Hz (E/I low frequency [LF], E/I mid frequency [MF]) were better correlated with the spirometry, PC20, and FeNO values than were the ISPL or ESPL data. The left anterior chest and left posterior lower recording positions were associated with the best correlations (forced expiratory volume in 1 second/forced vital capacity: r=–0.55 and r=–0.58; logPC20: r=–0.46 and r=–0.45; and FeNO: r=0.42 and r=0.46, respectively). The majority of asthmatic subjects with FeNO ≥70 ppb exhibited high E/I MF levels in all lung fields (excluding the trachea) and V50%pred <80%, suggesting inflammation throughout the airway. Asthmatic subjects with FeNO <70 ppb showed high or low E/I MF levels depending on the recording position, indicating uneven airway inflammation. Conclusion: E/I LF and E/I MF are more useful LSA parameters for evaluating airway inflammation in bronchial asthma; 7-point lung
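
    The central LSA quantity above, the expiration-to-inspiration sound pressure ratio in a frequency band, can be sketched as a band-limited level difference. This is only an illustration, not the authors' implementation; the band edges, segment boundaries, calibration and file name are assumptions.

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import butter, filtfilt

        def band_spl(x, fs, lo, hi, p_ref=20e-6):
            """Sound pressure level (dB) of signal x restricted to the band [lo, hi] Hz."""
            b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            y = filtfilt(b, a, x)
            return 10.0 * np.log10(np.mean(y ** 2) / p_ref ** 2)

        fs, x = wavfile.read("breath_sound.wav")      # hypothetical calibrated recording (Pa)
        x = x.astype(float)
        if x.ndim > 1:
            x = x[:, 0]

        # Hypothetical segmentation into one inspiration and one expiration phase (samples).
        insp = x[0:int(1.5 * fs)]
        expi = x[int(1.5 * fs):int(3.0 * fs)]

        ispl = band_spl(insp, fs, 200, 400)           # inspiration SPL, mid-frequency band
        espl = band_spl(expi, fs, 200, 400)           # expiration SPL, same band
        print("E/I (mid-frequency band) = %.1f dB" % (espl - ispl))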

  17. IFLA General Conference, 1984. Management and Technology Division. Section on Information Technology and Joint Meeting of the Round Table Audiovisual Media, the International Association for Sound Archives, and the International Association for Music Libraries. Papers.

    Science.gov (United States)

    International Federation of Library Associations, The Hague (Netherlands).

    Six papers on information technology, the development of information systems for Third World countries, handling of sound recordings, and library automation were presented at the 1984 IFLA conference. They include: (1) "Handling, Storage and Preservation of Sound Recordings under Tropical and Subtropical Climatic Conditions" (Dietrich…

  18. Audio-visual interactions in product sound design

    NARCIS (Netherlands)

    Özcan, E.; Van Egmond, R.

    2010-01-01

    Consistent product experience requires congruity between product properties such as visual appearance and sound. Therefore, for designing appropriate product sounds by manipulating their spectral-temporal structure, product sounds should preferably not be considered in isolation but as an integral

  19. Using therapeutic sound with progressive audiologic tinnitus management.

    Science.gov (United States)

    Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A

    2008-09-01

    Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound-broadly categorized as environmental sound, music, and speech-resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).

  20. Soft computing based feature selection for environmental sound classification

    NARCIS (Netherlands)

    Shakoor, A.; May, T.M.; Van Schijndel, N.H.

    2010-01-01

    Environmental sound classification has a wide range of applications, like hearing aids, mobile communication devices, portable media players, and auditory protection devices. Sound classification systems typically extract features from the input sound. Using too many features increases complexity

  1. Sound waves in hadronic matter

    Science.gov (United States)

    Wilk, Grzegorz; Włodarczyk, Zbigniew

    2018-01-01

    We argue that recent high energy CERN LHC experiments on transverse momenta distributions of produced particles provide us with new, so far unnoticed and not fully appreciated, information on the underlying production processes. To this end we concentrate on the small (but persistent) log-periodic oscillations decorating the observed p_T spectra and visible in the measured ratios R = σ_data(p_T) / σ_fit(p_T). Because such spectra are described by quasi-power-like formulas characterised by two parameters, the power index n and the scale parameter T (usually identified with temperature T), the observed log-periodic behaviour of the ratios R can originate either from suitable modifications of n or T (or both, but such a possibility is not discussed). In the first case n becomes a complex number and this can be related to scale invariance in the system; in the second, the scale parameter T itself exhibits log-periodic oscillations which can be interpreted as the presence of some kind of sound waves forming in the collision system during the collision process, the wave number of which has a so-called self-similar solution of the second kind. Because the first case was already widely discussed, we concentrate on the second one and on its possible experimental consequences.

  2. Non-Wovens as Sound Reducers

    Science.gov (United States)

    Belakova, D.; Seile, A.; Kukle, S.; Plamus, T.

    2018-04-01

    Within the present study, the effect of hemp (40 wt%) and polylactide (60 wt%) non-woven surface density, thickness and number of fibre web layers on the sound absorption coefficient and the sound transmission loss in the frequency range from 50 to 5000 Hz is analysed. The sound insulation properties of the experimental samples have been determined, compared to those of materials in practical use, and the possible use of the material has been defined. Non-woven materials are ideally suited for use in acoustic insulation products because the arrangement of fibres produces a porous material structure, which leads to a greater interaction between sound waves and the fibre structure. Of the tested samples (A, B and D), non-woven variant B exceeded the surface density of sample A by a factor of 1.22 and that of sample D by a factor of 1.15. By placing non-wovens one above the other in 2 layers, it is possible to increase the absorption coefficient of the material, which, depending on the frequency, corresponds to C, D, and E sound absorption classes. Sample A demonstrates the best sound absorption of all three samples in the frequency range from 250 to 2000 Hz. In the test frequency range from 50 to 5000 Hz, the sound transmission loss varies from 0.76 (Sample D at 63 Hz) to 3.90 (Sample B at 5000 Hz).

  3. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    Heart sound is one of the most important physiological signals. However, the acquisition of heart sound signals can be disturbed by many external factors. The heart sound is a weak signal, and even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. As a result, removing the noise mixed with the heart sound is a key step. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB is presented. The denoising procedure first uses the processing functions of MATLAB to transform noisy heart sound signals into the wavelet domain through multi-level wavelet decomposition. Soft thresholding is then applied to the detail coefficients using wavelet threshold denoising to eliminate noise, so that signal denoising is significantly improved. The denoised signals are obtained by stepwise coefficient reconstruction from the processed detail coefficients. Lastly, 50 Hz power-frequency and 35 Hz mechanical and electrical interference signals are eliminated using a notch filter.
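
    The processing chain described above (multi-level wavelet decomposition, soft thresholding of the detail coefficients, reconstruction, and notch filtering of the 50 Hz interference) can be sketched in Python with PyWavelets and SciPy instead of MATLAB. This is a minimal illustration; the wavelet family, decomposition level, threshold rule and file name are assumptions, not the paper's settings.

        import numpy as np
        import pywt
        from scipy.io import wavfile
        from scipy.signal import filtfilt, iirnotch

        fs, x = wavfile.read("heart_sound_noisy.wav")   # hypothetical noisy recording
        x = x.astype(float)
        if x.ndim > 1:
            x = x[:, 0]

        # Multi-level wavelet decomposition.
        coeffs = pywt.wavedec(x, "db6", level=5)

        # Soft-threshold the detail coefficients (universal threshold estimated from the
        # finest-scale details); the approximation coefficients are kept unchanged.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(x)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]

        # Stepwise reconstruction of the denoised signal.
        y = pywt.waverec(coeffs, "db6")[: len(x)]

        # Remove 50 Hz power-line interference with a notch filter.
        b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)
        y = filtfilt(b, a, y)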

  4. Misconceptions About Sound Among Engineering Students

    Science.gov (United States)

    Pejuan, Arcadi; Bohigas, Xavier; Jaén, Xavier; Periago, Cristina

    2012-12-01

    Our first objective was to detect misconceptions about the microscopic nature of sound among senior university students enrolled in different engineering programmes (from chemistry to telecommunications). We sought to determine how these misconceptions are expressed (qualitative aspect) and, only very secondarily, to gain a general idea of the extent to which they are held (quantitative aspect). Our second objective was to explore other misconceptions about wave aspects of sound. We have also considered the degree of consistency in the model of sound used by each student. Forty students answered a questionnaire including open-ended questions. Based on their free, spontaneous answers, the main results were as follows: a large majority of students answered most of the questions regarding the microscopic model of sound according to the scientifically accepted model; however, only a small number answered consistently. The main model misconception found was the notion that sound is propagated through the travelling of air particles, even in solids. Misconceptions and mental-model inconsistencies tended to depend on the engineering programme in which the student was enrolled. However, students in general were inconsistent also in applying their model of sound to individual sound properties. The main conclusion is that our students have not truly internalised the scientifically accepted model that they have allegedly learnt. This implies a need to design learning activities that take these findings into account in order to be truly efficient.

  5. Effects of temperature on sound production and auditory abilities in the Striped Raphael catfish Platydoras armatulus (Family Doradidae).

    Directory of Open Access Journals (Sweden)

    Sandra Papes

    Full Text Available Sound production and hearing sensitivity of ectothermic animals are affected by the ambient temperature. This is the first study investigating the influence of temperature on both sound production and hearing abilities in a fish species, namely the neotropical Striped Raphael catfish Platydoras armatulus. Doradid catfishes produce stridulation sounds by rubbing the pectoral spines in the shoulder girdle and drumming sounds by an elastic spring mechanism which vibrates the swimbladder. Eight fish were acclimated for at least three weeks to 22°, then to 30° and again to 22°C. Sounds were recorded in distress situations when fish were hand-held. The stridulation sounds became shorter at the higher temperature, whereas pulse number, maximum pulse period and sound pressure level did not change with temperature. The dominant frequency increased when the temperature was raised to 30°C and the minimum pulse period became longer when the temperature decreased again. The fundamental frequency of drumming sounds increased at the higher temperature. Using the auditory evoked potential (AEP) recording technique, the hearing thresholds were tested at six different frequencies from 0.1 to 4 kHz. The temporal resolution was determined by analyzing the minimum resolvable click period (0.3-5 ms). The hearing sensitivity was higher at the higher temperature and the differences were more pronounced at higher frequencies. In general, latencies of AEPs in response to single clicks became shorter at the higher temperature, whereas temporal resolution in response to double-clicks did not change. These data indicate that sound characteristics as well as hearing abilities are affected by temperature in fishes. Constraints imposed on hearing sensitivity at different temperatures cannot be compensated for even by longer acclimation periods. These changes in sound production and detection suggest that acoustic orientation and communication are affected by temperature changes in

  6. The Influence of Visual Cues on Sound Externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Background: The externalization of virtual sounds reproduced via binaural headphone-based auralization systems has been reported to be less robust when the listening environment differs from the room in which binaural room impulse responses (BRIRs) were recorded. It has been debated whether... Methods: Eighteen naïve listeners rated the externalization of virtual stimuli in terms of perceived distance, azimuthal localization, and compactness in three rooms: 1) a standard IEC listening room, 2) a small reverberant room, and 3) a large dry room. Before testing, individual BRIRs were recorded in room 1 while listeners wore both earplugs and blindfolds. Half of the listeners were then blindfolded during testing but were provided auditory awareness of the room via a controlled noise source (condition A). The other half could see the room but were shielded from room-related acoustic input and tested...

  7. Frictional Sound Analysis by Simulating the Human Arm Movement

    Directory of Open Access Journals (Sweden)

    Yosouf Khaldon

    2017-03-01

    Full Text Available Fabric noise generated by fabric-to-fabric friction is considered one of the auditory disturbances that can have an impact on the quality of some textile products. For this reason, an instrument has been developed to analyse this phenomenon. The instrument is designed to simulate the relative movement of a human arm when walking. In order to understand the nature of this relative motion, films of the upper half of the human body were taken. These films helped to define the parameters required for the movement simulation: the movement trajectory, the movement velocity, the arm pressure applied on the lateral part of the trunk and the friction area. After the instrument was built, a set of soundtracks of the noise generated by fabric-to-fabric friction was recorded. The recordings were processed with dedicated software to extract the sound parameters, and the acoustic imprints of the fabrics were obtained.

  8. A level switch with a sound tube

    OpenAIRE

    赤池, 誠規

    2017-01-01

    Level switches are sensors that provide an electrical contact output at a specific liquid, powder or bulk material level. Most traditional level switches are not suitable for harsh environments. The level switch in this study connects a loudspeaker to the top end of a sound tube. When liquid, powder or bulk material closes the bottom end of the sound tube, the level switch turns on. The level switch is suitable for harsh environments and easy to install. The aim of this study is to propose a level switch with a sound tube...

  9. Urban Noise and Strategies of Sound Mapping

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    project from the Copenhagen Municipality initiated in 2006, as a starting point to discuss the politics of urban sound. It points out an important challenge for the methodology of urban sonic environments: namely that sound as a senso-motoric register may be poorly evaluated through concepts of noise...... practices as a kind of social interaction – a method that may supplement the engineer’s quantitative sound measurements and the landscape architect’s qualitative descriptors this article outlines a few approaches to a theory of acoustic territoriality and suggests alternative ways of mapping, analyzing...

  10. Interactively Evolving Compositional Sound Synthesis Networks

    DEFF Research Database (Denmark)

    Jónsson, Björn Þór; Hoover, Amy K.; Risi, Sebastian

    2015-01-01

    While the success of electronic music often relies on the uniqueness and quality of selected timbres, many musicians struggle with complicated and expensive equipment and techniques to create their desired sounds. Instead, this paper presents a technique for producing novel timbres that are evolved... the space of potential sounds that can be generated through such compositional sound synthesis networks (CSSNs). To study the effect of evolution on subjective appreciation, participants in a listener study ranked evolved timbres by personal preference, resulting in preferences skewed toward the first...

  11. Sound Performance – Experience and Event

    DEFF Research Database (Denmark)

    Holmboe, Rasmus

    In itself – and as an artistic material – sound is always already process. It involves the listener in a situation that is both filled with elusive presence and one that evokes rooted memory. At the same time sound is bodily, social and historical. It propagates between individuals and objects, it creates... The present paper draws on examples from my ongoing PhD-project, which is connected to Museum of Contemporary Art in Roskilde, Denmark, where I curate a sub-programme at ACTS 2014 – a festival for performative arts. The aim is to investigate how sound performance can be presented and represented - in real...

  12. Separating underwater ambient noise from flow noise recorded on stereo acoustic tags attached to marine mammals

    NARCIS (Netherlands)

    Benda-Beckmann, A.M. von; Wensveen, P.J.; Samarra, F.I.P.; Beerens, S.P.; Miller, P.J.O.

    2016-01-01

    Sound-recording acoustic tags attached to marine animals are commonly used in behavioural studies. Measuring ambient noise is of interest to efforts to understand responses of marine mammals to anthropogenic underwater sound, or to assess their communication space. Noise of water flowing around the

  13. Mapping Phonetic Features for Voice-Driven Sound Synthesis

    Science.gov (United States)

    Janer, Jordi; Maestre, Esteban

    In applications where the human voice controls the synthesis of musical instruments sounds, phonetics convey musical information that might be related to the sound of the imitated musical instrument. Our initial hypothesis is that phonetics are user- and instrument-dependent, but they remain constant for a single subject and instrument. We propose a user-adapted system, where mappings from voice features to synthesis parameters depend on how subjects sing musical articulations, i.e. note to note transitions. The system consists of two components. First, a voice signal segmentation module that automatically determines note-to-note transitions. Second, a classifier that determines the type of musical articulation for each transition based on a set of phonetic features. For validating our hypothesis, we run an experiment where subjects imitated real instrument recordings with their voice. Performance recordings consisted of short phrases of saxophone and violin performed in three grades of musical articulation labeled as: staccato, normal, legato. The results of a supervised training classifier (user-dependent) are compared to a classifier based on heuristic rules (user-independent). Finally, from the previous results we show how to control the articulation in a sample-concatenation synthesizer by selecting the most appropriate samples.
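
    As a rough illustration of the second component described above, a supervised, user-dependent classifier that maps per-transition phonetic features to an articulation label (staccato, normal, legato) could look like the sketch below. The feature names and training values are hypothetical placeholders, not those of the paper.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Hypothetical phonetic features per note-to-note transition:
        # [transition duration (s), voiced fraction, spectral-flux peak].
        X_train = np.array([
            [0.05, 0.1, 0.9],   # abrupt, mostly unvoiced gap    -> staccato
            [0.12, 0.5, 0.5],   # intermediate transition        -> normal
            [0.30, 0.9, 0.2],   # long, fully voiced transition  -> legato
            [0.06, 0.2, 0.8],
            [0.15, 0.6, 0.4],
            [0.28, 0.8, 0.1],
        ])
        y_train = ["staccato", "normal", "legato", "staccato", "normal", "legato"]

        # User-dependent classifier trained on one subject's sung articulations.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X_train, y_train)

        print(clf.predict([[0.10, 0.55, 0.45]]))    # expected to be close to "normal"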

  14. Long-Lasting Sound-Evoked Afterdischarge in the Auditory Midbrain.

    Science.gov (United States)

    Ono, Munenori; Bishop, Deborah C; Oliver, Douglas L

    2016-02-12

    Different forms of plasticity are known to play a critical role in the processing of information about sound. Here, we report a novel neural plastic response in the inferior colliculus, an auditory center in the midbrain of the auditory pathway. A vigorous, long-lasting sound-evoked afterdischarge (LSA) is seen in a subpopulation of both glutamatergic and GABAergic neurons in the central nucleus of the inferior colliculus of normal hearing mice. These neurons were identified with single unit recordings and optogenetics in vivo. The LSA can continue for up to several minutes after the offset of the sound. LSA is induced by long-lasting, or repetitive short-duration, innocuous sounds. Neurons with LSA showed less adaptation than the neurons without LSA. The mechanisms that cause this neural behavior are unknown but may be a function of intrinsic mechanisms or the microcircuitry of the inferior colliculus. Since LSA produces long-lasting firing in the absence of sound, it may be relevant to temporary or chronic tinnitus or to some other aftereffect of long-duration sound.

  15. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding

    Directory of Open Access Journals (Sweden)

    Mari Tervaniemi

    2014-07-01

    Full Text Available Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related MMN and attention-related P3a responses recorded from non-musicians and Finnish Folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In Folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information, which is in key position in Finnish folk music when compared with e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies differentiated the groups for all sound changes. Thus, the melody paradigm offers a fast and cost-effective means for determining the auditory profile for music-sound encoding and also, importantly, for probing the effects of musical expertise on it.

  16. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding.

    Science.gov (United States)

    Tervaniemi, Mari; Huotilainen, Minna; Brattico, Elvira

    2014-01-01

    Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related mismatch negativity (MMN) and attention-related P3a responses recorded from non-musicians and Finnish Folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In Folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information, which is in key position in Finnish folk music when compared with e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies differentiated the groups for all sound changes. Thus, the melody paradigm offers a fast and cost-effective means for determining the auditory profile for music-sound encoding and also, importantly, for probing the effects of musical expertise on it.

  17. Underwater sound from vessel traffic reduces the effective communication range in Atlantic cod and haddock.

    Science.gov (United States)

    Stanley, Jenni A; Van Parijs, Sofie M; Hatch, Leila T

    2017-11-07

    Stellwagen Bank National Marine Sanctuary is located in Massachusetts Bay off the densely populated northeast coast of the United States; consequently, the marine inhabitants of the area are exposed to elevated levels of anthropogenic underwater sound, particularly due to commercial shipping. The current study investigated the alteration of estimated effective communication spaces at three spawning locations for populations of the commercially and ecologically important fishes, Atlantic cod (Gadus morhua) and haddock (Melanogrammus aeglefinus). Both the ambient sound pressure levels and the effective vocalization radii, estimated through spherical spreading models, fluctuated dramatically during the three-month recording periods. Increases in sound pressure level appeared to be largely driven by large vessel activity, and accordingly exhibited a significant positive correlation with the number of Automatic Identification System tracked vessels at two of the three sites. The near-constant high levels of low-frequency sound and the consequential reduction in the communication space observed at these recording sites during times of high vocalization activity raise significant concerns that communication between conspecifics may be compromised during critical biological periods. This study takes the first steps in evaluating these animals' communication spaces and the alteration of these spaces due to anthropogenic underwater sound.
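
    The effective vocalization radius used in this study follows from a spherical spreading model, in which transmission loss grows as 20*log10(r). A minimal sketch of that estimate, with invented example levels rather than the study's measurements:

        import numpy as np

        def effective_radius(source_level, noise_level, detection_threshold=0.0):
            """Range (m) at which a call is just detectable, assuming spherical spreading:
            SL - 20*log10(r) = NL + detection_threshold."""
            excess = source_level - noise_level - detection_threshold
            return 10.0 ** (excess / 20.0)

        # Hypothetical levels in dB re 1 uPa: call source level and ambient noise
        # in the call's frequency band.
        sl = 125.0
        for nl in (95.0, 105.0, 115.0):     # rising vessel noise shrinks the radius
            print("Noise %.0f dB -> radius %.0f m" % (nl, effective_radius(sl, nl)))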

  18. Artificial neural networks for breathing and snoring episode detection in sleep sounds

    International Nuclear Information System (INIS)

    Emoto, Takahiro; Akutagawa, Masatake; Kinouchi, Yohsuke; Abeyratne, Udantha R; Chen, Yongjian; Kawata, Ikuji

    2012-01-01

    Obstructive sleep apnea (OSA) is a serious disorder characterized by intermittent events of upper airway collapse during sleep. Snoring is the most common nocturnal symptom of OSA. Almost all OSA patients snore, but not all snorers have the disease. Recently, researchers have attempted to develop automated snore analysis technology for the purpose of OSA diagnosis. These technologies commonly require, as the first step, the automated identification of snore/breathing episodes (SBE) in sleep sound recordings. Snore intensity may occupy a wide dynamic range (>95 dB) spanning from the barely audible to loud sounds. Low-intensity SBE sounds are sometimes seen buried within the background noise floor, even in high-fidelity sound recordings made within a sleep laboratory. The complexity of SBE sounds makes it a challenging task to develop automated snore segmentation algorithms, especially in the presence of background noise. In this paper, we propose a fundamentally novel approach based on artificial neural network (ANN) technology to detect SBEs. Working on clinical data, we show that the proposed method can detect SBE at a sensitivity and specificity exceeding 0.892 and 0.874 respectively, even when the signal is completely buried in background noise (SNR <0 dB). We compare the performance of the proposed technology with those of the existing methods (short-term energy, zero-crossing rates) and illustrate that the proposed method vastly outperforms conventional techniques. (paper)
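
    For comparison with the conventional techniques mentioned above, the two baseline features, short-term energy and zero-crossing rate, computed on overlapping frames of a sleep sound recording, can be sketched as follows (frame length, hop size, threshold and file name are placeholder assumptions):

        import numpy as np
        from scipy.io import wavfile

        fs, x = wavfile.read("sleep_sound.wav")       # hypothetical overnight recording
        x = x.astype(float)
        if x.ndim > 1:
            x = x[:, 0]
        x = x / (np.max(np.abs(x)) + 1e-12)

        frame = int(0.050 * fs)                       # 50 ms frames
        hop = frame // 2                              # 50 % overlap

        energies, zcrs = [], []
        for start in range(0, len(x) - frame, hop):
            w = x[start:start + frame]
            energies.append(np.mean(w ** 2))                         # short-term energy
            zcrs.append(np.mean(np.abs(np.diff(np.sign(w))) > 0))    # zero-crossing rate

        # Crude episode detector: frames whose energy exceeds an adaptive threshold.
        energies = np.array(energies)
        candidates = energies > 5.0 * np.median(energies)
        print("Candidate snore/breathing frames: %d of %d" % (candidates.sum(), len(energies)))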

  19. Specially Designed Sound-Boxes Used by Students to Perform School-Lab Sensor–Based Experiments, to Understand Sound Phenomena

    Directory of Open Access Journals (Sweden)

    Stefanos Parskeuopoulos

    2011-02-01

    Full Text Available The research presented herein investigates and records students’ perceptions relating to sound phenomena and their improvement during a specialised laboratory practice utilizing ICT and a simple experimental apparatus especially designed for teaching. This school-lab apparatus and its operation are also described herein. A total of 71 first- and second-grade vocational-school students, aged 16 to 20, participated in the research. They were divided into groups of 4-5 students, each of which worked for 6 hours in order to complete all assigned activities. Data collection was carried out through personal interviews as well as questionnaires distributed before and after the instructional intervention. The results show that students’ active involvement with the simple teaching apparatus, through which the effects of sound waves are visible, helps them comprehend sound phenomena. It also considerably altered their initial misconceptions about sound propagation. The results are presented diagrammatically herein, while some important observations are made relating to the teaching and learning of scientific concepts concerning sound.

  20. Differences between Outdoor and Indoor Sound Levels for Open, Tilted, and Closed Windows.

    Science.gov (United States)

    Locher, Barbara; Piquerez, André; Habermacher, Manuel; Ragettli, Martina; Röösli, Martin; Brink, Mark; Cajochen, Christian; Vienneau, Danielle; Foraster, Maria; Müller, Uwe; Wunderli, Jean Marc

    2018-01-18

    Noise exposure prediction models for health effect studies normally estimate free field exposure levels outside. However, to assess the noise exposure inside dwellings, an estimate of indoor sound levels is necessary. To date, little field data is available about the difference between indoor and outdoor noise levels and factors affecting the damping of outside noise. This is a major cause of uncertainty in indoor noise exposure prediction and may lead to exposure misclassification in health assessments. This study aims to determine sound level differences between the indoors and the outdoors for different window positions and how this sound damping is related to building characteristics. For this purpose, measurements were carried out at home in a sample of 102 Swiss residents exposed to road traffic noise. Sound pressure level recordings were performed outdoors and indoors, in the living room and in the bedroom. Three scenarios-of open, tilted, and closed windows-were recorded for three minutes each. For each situation, data on additional parameters such as the orientation towards the source, floor, and room, as well as sound insulation characteristics were collected. On that basis, linear regression models were established. The median outdoor-indoor sound level differences were of 10 dB(A) for open, 16 dB(A) for tilted, and 28 dB(A) for closed windows. For open and tilted windows, the most relevant parameters affecting the outdoor-indoor differences were the position of the window, the type and volume of the room, and the age of the building. For closed windows, the relevant parameters were the sound level outside, the material of the window frame, the existence of window gaskets, and the number of windows.
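
    The linear regression models mentioned above relate the outdoor-indoor level difference to window position and building characteristics. A schematic illustration of such a model (the predictors, coding and values below are invented for the example and do not reproduce the study's models):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Hypothetical observations: [window tilted? (0/1), room volume (m3), building age (years)]
        # -> measured outdoor-indoor level difference for open/tilted windows, in dB(A).
        X = np.array([
            [0, 35, 10],
            [0, 50, 40],
            [1, 35, 10],
            [1, 50, 40],
            [0, 70, 70],
            [1, 70, 70],
        ])
        y = np.array([9.5, 11.0, 15.0, 17.0, 12.0, 18.0])

        model = LinearRegression().fit(X, y)
        print("Coefficients:", model.coef_, "Intercept:", model.intercept_)

        # Predicted damping for a tilted window in a 60 m3 room of a 30-year-old building.
        print("Predicted difference: %.1f dB(A)" % model.predict([[1, 60, 30]])[0])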

  1. Differences between Outdoor and Indoor Sound Levels for Open, Tilted, and Closed Windows

    Science.gov (United States)

    Locher, Barbara; Piquerez, André; Habermacher, Manuel; Ragettli, Martina; Cajochen, Christian; Vienneau, Danielle; Foraster, Maria; Müller, Uwe; Wunderli, Jean Marc

    2018-01-01

    Noise exposure prediction models for health effect studies normally estimate free field exposure levels outside. However, to assess the noise exposure inside dwellings, an estimate of indoor sound levels is necessary. To date, little field data is available about the difference between indoor and outdoor noise levels and factors affecting the damping of outside noise. This is a major cause of uncertainty in indoor noise exposure prediction and may lead to exposure misclassification in health assessments. This study aims to determine sound level differences between the indoors and the outdoors for different window positions and how this sound damping is related to building characteristics. For this purpose, measurements were carried out at home in a sample of 102 Swiss residents exposed to road traffic noise. Sound pressure level recordings were performed outdoors and indoors, in the living room and in the bedroom. Three scenarios—of open, tilted, and closed windows—were recorded for three minutes each. For each situation, data on additional parameters such as the orientation towards the source, floor, and room, as well as sound insulation characteristics were collected. On that basis, linear regression models were established. The median outdoor–indoor sound level differences were of 10 dB(A) for open, 16 dB(A) for tilted, and 28 dB(A) for closed windows. For open and tilted windows, the most relevant parameters affecting the outdoor–indoor differences were the position of the window, the type and volume of the room, and the age of the building. For closed windows, the relevant parameters were the sound level outside, the material of the window frame, the existence of window gaskets, and the number of windows. PMID:29346318

  2. The insider's guide to home recording record music and get paid

    CERN Document Server

    Tarquin, Brian

    2015-01-01

    The last decade has seen an explosion in the number of home-recording studios. With the mass availability of sophisticated technology, there has never been a better time to do it yourself and make a profit. Take a studio journey with Brian Tarquin, the multiple-Emmy-award winning recording artist and producer, as he leads you through the complete recording process, and shows you how to perfect your sound using home equipment. He guides you through the steps to increase your creative freedom, and offers numerous tips to improve the effectiveness of your workflow. Topics covered in this book incl

  3. Sound Wave Energy Resulting from the Impact of Water Drops on the Soil Surface.

    Directory of Open Access Journals (Sweden)

    Magdalena Ryżak

    Full Text Available The splashing of water drops on a soil surface is the first step of water erosion. There have been many investigations into splashing; most are based on recording and analysing images taken with high-speed cameras, or measuring the mass of the soil moved by splashing. Here, we present a new aspect of the splash phenomenon's characterization: the measurement of the sound pressure level and the sound energy of the wave that propagates in the air. The measurements were carried out for 10 consecutive water drop impacts on the soil surface. Three soils were tested (Endogleyic Umbrisol, Fluvic Endogleyic Cambisol and Haplic Chernozem) with four initial moisture levels (pressure heads: 0.1 kPa, 1 kPa, 3.16 kPa and 16 kPa). We found that the values of the sound pressure and sound wave energy were dependent on the particle size distribution of the soil, less dependent on the initial pressure head, and practically the same for subsequent water drops (from the first to the tenth drop). The highest sound pressure level (and the greatest variability) was for Endogleyic Umbrisol, which had the highest sand fraction content. The sound pressure for this soil increased from 29 dB to 42 dB with successive drops falling on the sample. The smallest (and the lowest variability) was for Fluvic Endogleyic Cambisol, which had the highest clay fraction. For all experiments the sound pressure level ranged from ~27 to ~42 dB and the energy emitted in the form of sound waves was within the range of 0.14 μJ to 5.26 μJ. This was from 0.03 to 1.07% of the energy of the incident drops.

  4. Behavioural Response Thresholds in New Zealand Crab Megalopae to Ambient Underwater Sound

    Science.gov (United States)

    Stanley, Jenni A.; Radford, Craig A.; Jeffs, Andrew G.

    2011-01-01

    A small number of studies have demonstrated that settlement stage decapod crustaceans are able to detect and exhibit swimming, settlement and metamorphosis responses to ambient underwater sound emanating from coastal reefs. However, the intensity of the acoustic cue required to initiate the settlement and metamorphosis response, and therefore the potential range over which this acoustic cue may operate, is not known. The current study determined the behavioural response thresholds of four species of New Zealand brachyuran crab megalopae by exposing them to different intensity levels of broadcast reef sound recorded from their preferred settlement habitat and from an unfavourable settlement habitat. Megalopae of the rocky-reef crab, Leptograpsus variegatus, exhibited the lowest behavioural response threshold (highest sensitivity), with a significant reduction in time to metamorphosis (TTM) when exposed to underwater reef sound with an intensity of 90 dB re 1 µPa and greater (100, 126 and 135 dB re 1 µPa). Megalopae of the mud crab, Austrohelice crassa, which settle in soft sediment habitats, exhibited no response to any of the underwater reef sound levels. All reef associated species exposed to sound levels from an unfavourable settlement habitat showed no significant change in TTM, even at intensities that were similar to their preferred reef sound for which reductions in TTM were observed. These results indicated that megalopae were able to discern and respond selectively to habitat-specific acoustic cues. The settlement and metamorphosis behavioural response thresholds to levels of underwater reef sound determined in the current study of four species of crabs, enables preliminary estimation of the spatial range at which an acoustic settlement cue may be operating, from 5 m to 40 km depending on the species. Overall, these results indicate that underwater sound is likely to play a major role in influencing the spatial patterns of settlement of coastal crab

  5. Sound Wave Energy Resulting from the Impact of Water Drops on the Soil Surface.

    Science.gov (United States)

    Ryżak, Magdalena; Bieganowski, Andrzej; Korbiel, Tomasz

    2016-01-01

    The splashing of water drops on a soil surface is the first step of water erosion. There have been many investigations into splashing; most are based on recording and analysing images taken with high-speed cameras, or measuring the mass of the soil moved by splashing. Here, we present a new aspect of the splash phenomenon's characterization: the measurement of the sound pressure level and the sound energy of the wave that propagates in the air. The measurements were carried out for 10 consecutive water drop impacts on the soil surface. Three soils were tested (Endogleyic Umbrisol, Fluvic Endogleyic Cambisol and Haplic Chernozem) with four initial moisture levels (pressure heads: 0.1 kPa, 1 kPa, 3.16 kPa and 16 kPa). We found that the values of the sound pressure and sound wave energy were dependent on the particle size distribution of the soil, less dependent on the initial pressure head, and practically the same for subsequent water drops (from the first to the tenth drop). The highest sound pressure level (and the greatest variability) was for Endogleyic Umbrisol, which had the highest sand fraction content. The sound pressure for this soil increased from 29 dB to 42 dB with successive drops falling on the sample. The smallest (and the lowest variability) was for Fluvic Endogleyic Cambisol, which had the highest clay fraction. For all experiments the sound pressure level ranged from ~27 to ~42 dB and the energy emitted in the form of sound waves was within the range of 0.14 μJ to 5.26 μJ. This was from 0.03 to 1.07% of the energy of the incident drops.
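
    The basic quantity reported in both versions of this record, the sound pressure level of each drop impact, is obtained from the recorded pressure waveform as 20*log10(p_rms/p_ref), with p_ref = 20 µPa in air. A minimal sketch (the file name and the counts-to-pascal calibration factor are hypothetical):

        import numpy as np
        from scipy.io import wavfile

        P_REF = 20e-6                                 # reference pressure in air (Pa)

        fs, x = wavfile.read("drop_impact.wav")       # hypothetical recording of one impact
        if x.ndim > 1:
            x = x[:, 0]
        pressure = x.astype(float) * 1e-3             # hypothetical calibration: counts -> Pa

        p_rms = np.sqrt(np.mean(pressure ** 2))
        spl = 20.0 * np.log10(p_rms / P_REF)
        print("Sound pressure level: %.1f dB" % spl)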

  6. Effects of musical expertise on oscillatory brain activity in response to emotional sounds.

    Science.gov (United States)

    Nolden, Sophie; Rigoulot, Simon; Jolicoeur, Pierre; Armony, Jorge L

    2017-08-01

    Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories, as they found that musicians process emotional musical and vocal sounds more efficiently than non-musicians. However, the neural correlates of these modulations, especially their time course, are not very well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and expertise of participants. The electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG-oscillatory activity in the theta, alpha, beta, and gamma bands was quantified, and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in the theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and the emotional content of sounds in frontal alpha. The results reflect musicians' expertise in the recognition of emotion-conveying music, which seems to also generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on the processing of musical and vocal sounds. Copyright © 2017 Elsevier Ltd. All rights reserved.
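
    The oscillatory amplitudes quantified above come from band-limiting the EEG into the classical frequency bands. A minimal sketch of extracting band amplitude with a zero-phase Butterworth filter and the Hilbert envelope (the band edges follow common conventions, and the data array is a random placeholder, not the study's recordings):

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}

        def band_amplitude(eeg, fs, lo, hi):
            """Mean instantaneous amplitude of one EEG channel in the band [lo, hi] Hz."""
            b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, eeg)
            return np.mean(np.abs(hilbert(filtered)))

        fs = 500                                    # hypothetical sampling rate (Hz)
        eeg = np.random.randn(10 * fs)              # placeholder for one channel, 10 s

        for name, (lo, hi) in BANDS.items():
            print("%s amplitude: %.3f (a.u.)" % (name, band_amplitude(eeg, fs, lo, hi)))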

  7. On the use of binaural recordings for dynamic binaural reproduction

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Christensen, Flemming

    2011-01-01

    Binaural recordings are considered applicable only for static binaural reproduction. That is, playback of binaural recordings can only reproduce the sound field captured for the fixed position and orientation of the recording head. However, given some conditions it is possible to use binaural...... recordings for the reproduction of binaural signals that change according to the listener actions, i.e. dynamic binaural reproduction. Here we examine the conditions that allow for such dynamic recording/playback configuration and discuss advantages and disadvantages. Analysis and discussion focus on two...

  8. AFSC/ABL: Salisbury Sound sponge recovery

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — In 1995, an area of the seafloor near Salisbury Sound was trawled to identify immediate effects on large, erect sponges and sea whips. Video transects were made in...

  9. Prince William Sound, Alaska ESI: HYDRO (Hydrology)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set comprises the Environmental Sensitivity Index (ESI) data for Prince William Sound, Alaska. ESI data characterize estuarine environments and wildlife by...

  10. Sound transit climate risk reduction project.

    Science.gov (United States)

    2013-09-01

    The Climate Risk Reduction Project assessed how climate change may affect Sound Transit commuter rail, light rail, and express bus services. The project identified potential climate change impacts on agency operations, assets, and long-term plannin...

  11. Boundary effects on sound propagation in superfluids

    International Nuclear Information System (INIS)

    Jensen, H.H.; Smith, H.; Woelfle, P.

    1983-01-01

    The attenuation of fourth sound propagating in a superfluid confined within a channel is determined on a microscopic basis, taking into account the scattering of the quasiparticles from the walls. The Q value of a fourth-sound resonance is shown to be inversely proportional to the stationary flow of thermal excitations through the channel due to an external force. Our theoretical estimates of Q are compared with experimentally observed values for ³He. The transition between first and fourth sound is studied in detail on the basis of two-fluid hydrodynamics, including the slip of the normal component at the walls. The slip is shown to have a strong influence on the velocity and attenuation in the transition region between first and fourth sound, offering a means to examine the interaction of quasiparticles with a solid surface.

  12. Dissipation in vibrating superleak second sound transducers

    International Nuclear Information System (INIS)

    Giordano, N.

    1985-01-01

    We have performed an experimental study of the generation and detection of second sound in ⁴He using vibrating superleak second sound transducers. At temperatures well below T_λ and for low driving amplitudes, the magnitude of the generated second sound wave is proportional to the drive amplitude. However, near T_λ and for high drive amplitudes this is no longer the case; instead, the second sound amplitude saturates. In this regime we also find that overtones of the drive frequency are generated. Our results suggest that this behavior is due to critical velocity effects in the pores of the superleak in the generator transducer. This type of measurement may prove to be a useful way in which to study critical velocity effects in confined geometries.

  13. Heart sounds: are you listening? Part 1.

    Science.gov (United States)

    Reimer-Kent, Jocelyn

    2013-01-01

    All nurses should have an understanding of heart sounds and be proficient in cardiac auscultation. Unfortunately, this skill is not part of many nursing school curricula, nor is it necessarily a required skill for employment. Yet, being able to listen to and accurately describe heart sounds has tangible benefits to the patient, as it is an integral part of a complete cardiac assessment. In this two-part article, I will review the fundamentals of cardiac auscultation, how cardiac anatomy and physiology relate to heart sounds, and describe the various heart sounds. Whether you are a beginner or a seasoned nurse, it is never too early or too late to add this important diagnostic skill to your assessment tool kit.

  14. Acoustic quality and sound insulation between dwellings

    DEFF Research Database (Denmark)

    Rindel, Jens Holger

    1998-01-01

    During the years there have been several large field investigations in different countries with the aim to find a relationship between sound insulation between dwellings and the subjective degree of annoyance. This paper presents an overview of the results, and the difficulties in comparing the different findings are discussed. It is tried to establish dose-response relationships between airborne sound insulation or impact sound pressure level according to ISO 717 and the percentage of people being annoyed by noise from neighbours. The slopes of the dose-response curves vary from one investigation to another, however, several of the results show a slope around 4 % per dB. The results may be used to evaluate the acoustic quality level of a certain set of sound insulation requirements, or they may be used as a basis for specifying the desired acoustic quality of future buildings.

  15. Acoustic quality and sound insulation between dwellings

    DEFF Research Database (Denmark)

    Rindel, Jens Holger

    1999-01-01

    During the years there have been several large field investigations in different countries with the aim to find a relationship between sound insulation between dwellings and the subjective degree of annoyance. This paper presents an overview of the results, and the difficulties in comparing the different findings are discussed. It is tried to establish dose-response relationships between airborne sound insulation or impact sound pressure level according to ISO 717 and the percentage of people being annoyed by noise from neighbours. The slopes of the dose-response curves vary from one investigation to another, however, several of the results show a slope around 4 % per dB. The results may be used to evaluate the acoustic quality level of a certain set of sound insulation requirements, or they may be used as a basis for specifying the desired acoustic quality of future buildings.

  16. Quantifying sound quality in loudspeaker reproduction

    NARCIS (Netherlands)

    Beerends, John G.; van Nieuwenhuizen, Kevin; van den Broek, E.L.

    2016-01-01

    We present PREQUEL: Perceptual Reproduction Quality Evaluation for Loudspeakers. Instead of quantifying the loudspeaker system itself, PREQUEL quantifies the loudspeakers' overall perceived sound quality by assessing their acoustic output using a set of music signals. This approach introduces a

  17. Ionospheric Oblique Incidence Soundings by Satellites

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The oblique incidence sweep-frequency ionospheric sounding technique uses the same principle of operation as the vertical incidence sounder. The primary difference...

  18. Evolutionary Sound Synthesis Controlled by Gestural Data

    Directory of Open Access Journals (Sweden)

    Jose Fornari

    2011-05-01

    Full Text Available This article focuses on the interdisciplinary research involving Computer Music and Generative Visual Art. We describe the implementation of two interactive artistic systems based on principles of Gestural Data (WILSON, 2002) retrieval and self-organization (MORONI, 2003), to control an Evolutionary Sound Synthesis method (ESSynth). The first implementation uses, as gestural data, image mapping of handmade drawings. The second one uses gestural data from dynamic body movements of dance. The resulting computer output is generated by an interactive system implemented in Pure Data (PD). This system uses principles of Evolutionary Computation (EC), which yields the generation of a synthetic adaptive population of sound objects. Considering that music could be seen as “organized sound”, the contribution of our study is to develop a system that aims to generate "self-organized sound" – a method that uses evolutionary computation to bridge between gesture, sound and music.
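
    The evolutionary core of such a system, a population of synthesis parameter vectors that is repeatedly evaluated, selected and mutated, can be sketched independently of Pure Data. The fitness function below is a stand-in (distance of a parameter vector from a gesture-derived target), not the ESSynth method itself:

        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(individual, target):
            # Stand-in fitness: closeness of the synthesis parameters to a gesture-derived target.
            return -np.linalg.norm(individual - target)

        target = rng.random(4)                      # e.g. mapped from gestural data
        population = rng.random((16, 4))            # 16 sound objects, 4 synthesis parameters

        for generation in range(50):
            scores = np.array([fitness(ind, target) for ind in population])
            parents = population[np.argsort(scores)[-8:]]               # keep the best half
            children = parents + rng.normal(0.0, 0.05, parents.shape)   # mutated copies
            population = np.vstack([parents, children])

        best = max(population, key=lambda ind: fitness(ind, target))
        print("Best parameter vector:", np.round(best, 3))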

  19. Prince William Sound, Alaska ESI: INVERT (Invertebrates)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set comprises the Environmental Sensitivity Index (ESI) data for Prince William Sound, Alaska. ESI data characterize estuarine environments and wildlife by...

  20. Brief report: sound output of infant humidifiers.

    Science.gov (United States)

    Royer, Allison K; Wilson, Paul F; Royer, Mark C; Miyamoto, Richard T

    2015-06-01

    The sound pressure levels (SPLs) of common infant humidifiers were determined to identify the likely sound exposure to infants and young children. This primary investigative research study was completed at a tertiary-level academic medical center otolaryngology and audiology laboratory. Five commercially available humidifiers were obtained from brick-and-mortar infant supply stores. Sound levels were measured at 20-, 100-, and 150-cm distances at all available humidifier settings. Two of 5 (40%) humidifiers tested had SPL readings greater than the recommended hospital infant nursery levels (50 dB) at distances up to 100 cm. In this preliminary study, it was demonstrated that humidifiers marketed for infant nurseries may produce appreciably high decibel levels. Further characterization of the effect of humidifier design on SPLs and further elucidation of ambient sound levels associated with hearing risk are necessary before definitive conclusions and recommendations can be made. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.