WorldWideScience

Sample records for soundness test

  1. Design and Calibration Tests of an Active Sound Intensity Probe

    Directory of Open Access Journals (Sweden)

    Thomas Kletschkowski

    2008-01-01

    Full Text Available The paper presents an active sound intensity probe that can be used for sound source localization in standing wave fields. The probe consists of a sound hard tube that is terminated by a loudspeaker and an integrated pair of microphones. The microphones are used to decompose the standing wave field inside the tube into its incident and reflected part. The latter is cancelled by an adaptive controller that calculates proper driving signals for the loudspeaker. If the open end of the actively controlled tube is placed close to a vibrating surface, the radiated sound intensity can be determined by measuring the cross spectral density between the two microphones. A one-dimensional free field can be realized effectively, as first experiments performed on a simplified test bed have shown. Further tests proved that a prototype of the novel sound intensity probe can be calibrated.
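
    A minimal sketch (not the authors' implementation) of the two-microphone p-p method mentioned above: the active sound intensity is estimated from the imaginary part of the cross-spectral density of two closely spaced microphones. The sampling rate, microphone spacing and air density below are assumed values for illustration.

```python
import numpy as np
from scipy.signal import csd

fs = 48_000          # Hz, assumed sampling rate
dr = 0.012           # m, assumed microphone spacing
rho = 1.204          # kg/m^3, density of air at 20 degC

def intensity_spectrum(p1, p2):
    """Return frequencies and the active intensity spectrum from two mic signals."""
    f, G12 = csd(p1, p2, fs=fs, nperseg=4096)
    omega = 2 * np.pi * np.maximum(f, 1e-12)        # avoid division by zero at DC
    return f, -np.imag(G12) / (rho * omega * dr)    # standard p-p intensity estimate

# Synthetic example: a 200 Hz wave reaching the second microphone slightly later.
t = np.arange(fs) / fs
p1 = np.sin(2 * np.pi * 200 * t)
p2 = np.sin(2 * np.pi * 200 * (t - dr / 343.0))     # delayed by propagation over dr
f, I = intensity_spectrum(p1, p2)
print(f[1:][np.argmax(np.abs(I[1:]))])              # dominant component near 200 Hz
```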

  2. Correlation between hypoosmotic swelling test and breeding soundness evaluation of adult Nelore bulls

    Directory of Open Access Journals (Sweden)

    Tamires Miranda Neto

    2011-07-01

    Full Text Available This study aimed at evaluating the relationship between physical and morphological semen features and the hypoosmotic swelling (HOS) test in raw semen of adult Nelore bulls classified as sound and unsound for breeding. Two hundred and six Nelore bulls aged 3-10 years were subjected to breeding soundness examination. After physical and morphological semen examination, the HOS test was performed. After the breeding soundness examination, 94.2% of the bulls were classified as sound for breeding. There was no difference in average scrotal circumference between bulls classified as sound and unsound for breeding (P>0.05), but there were differences in all physical and morphological semen aspects between bulls classified as sound and unsound for breeding (P<0.05). There was no difference in the mean percentage of spermatozoa reactive to the HOS test between sound (38.4±17.9) and unsound animals (39.5±16.4; P>0.05), and no Pearson correlation between the HOS test and the other variables. According to these results, the HOS test cannot be used alone to predict the reproductive potential of adult Nelore bulls.

  3. Sound lateralization test in adolescent blind individuals.

    Science.gov (United States)

    Yabe, Takao; Kaga, Kimitaka

    2005-06-21

    Blind individuals need to compensate for the lack of visual information with other sensory inputs. In particular, auditory inputs are crucial to such individuals. To investigate whether blind individuals localize sound in space better than sighted individuals, we tested the auditory ability of adolescent blind individuals using a sound lateralization method. The interaural time difference discrimination thresholds of blind individuals were statistically significantly shorter than those of blind individuals with residual vision and of controls. These findings suggest that blind individuals have better auditory spatial ability than individuals with visual cues; therefore, some perceptual compensation has occurred in the former.

  4. Soundness confirmation through cold test of the system equipment of HTTR

    International Nuclear Information System (INIS)

    Ono, Masato; Shinohara, Masanori; Iigaki, Kazuhiko; Tochio, Daisuke; Nakagawa, Shigeaki; Shimazaki, Yosuke

    2014-01-01

    The HTTR was built at the Oarai Research and Development Center of the Japan Atomic Energy Agency for the purpose of establishing and upgrading the technology infrastructure for high-temperature gas-cooled reactors. It is currently used for safety demonstration tests intended to demonstrate the inherent safety of high-temperature gas-cooled reactors. After the Great East Japan Earthquake, a confirmation test was conducted to survey the soundness of facilities and equipment, and it confirmed that the soundness of the equipment was maintained. In the two years since that confirmation test, however, it had not been confirmed whether the function of dynamic equipment and soundness properties such as the airtightness of pipes and vessels were maintained, given possible damage or deterioration caused by aftershocks during those two years or by aging. To confirm the soundness of these facilities, operation under cold conditions was conducted, and the plant data obtained were compared with the confirmation test data to evaluate the presence of any abnormality. In addition, to confirm through the cold test any damage due to aftershocks and degradation due to aging, the confirmation test data were taken as the reference for comparison, and the plant data at machine start-up and during normal operation were evaluated for abnormalities. (A.O.)

  5. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Through several audio and audio-video tests we compared Foley and real sounds originating from an identical action. The main purpose was to evaluate whether sound effects...

  6. Sound preference test in animal models of addicts and phobias.

    Science.gov (United States)

    Soga, Ryo; Shiramatsu, Tomoyo I; Kanzaki, Ryohei; Takahashi, Hirokazu

    2016-08-01

    Biased or excessively strong preference for a particular object is often problematic, resulting in addiction or phobia. In animal models, alternative forced-choice tasks have routinely been used, but such preference tests are far from the daily situations that addicts or phobics face. In the present study, we developed a behavioral assay to evaluate sound preference in rodents. In the assay, several sounds were presented according to the position of freely moving rats, and sound preference was quantified from their behavior. A particular tone was paired with microstimulation of the ventral tegmental area (VTA), which plays a central role in reward processing, to increase sound preference. The behavior of the rats was logged during six days of classical conditioning. Several behavioral indices suggest that the rats searched for the conditioned sound. Thus, our data demonstrate that quantitative evaluation of preference with this behavioral assay is feasible.

  7. Can joint sound assess soft and hard endpoints of the Lachman test?: A preliminary study.

    Science.gov (United States)

    Hattori, Koji; Ogawa, Munehiro; Tanaka, Kazunori; Matsuya, Ayako; Uematsu, Kota; Tanaka, Yasuhito

    2016-05-12

    The Lachman test is considered a reliable physical examination for anterior cruciate ligament (ACL) injury. Patients with a damaged ACL demonstrate a soft endpoint feel. However, examiners judge soft and hard endpoints subjectively. The purpose of our study was to confirm objective performance of the Lachman test using joint auscultation. Human and porcine knee joints were examined. Knee joint sound during the Lachman test (Lachman sound) was analyzed by fast Fourier transformation. As quantitative indices of the Lachman sound, the peak sound, defined as the maximum relative amplitude (acoustic pressure), and its frequency were used. The mean Lachman peak sound for healthy volunteer knees was 86.9 ± 12.9 Hz in frequency and -40 ± 2.5 dB in acoustic pressure. The mean Lachman peak sound for intact porcine knees was 84.1 ± 9.4 Hz and -40.5 ± 1.7 dB. Porcine knees with ACL deficiency had a soft endpoint feel during the Lachman test. The Lachman peak sounds of porcine knees with ACL deficiency were dispersed into four distinct groups, with center frequencies of around 40, 160, 450, and 1600 Hz. The Lachman peak sound was capable of assessing soft and hard endpoints of the Lachman test objectively.
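
    A sketch of the kind of spectral analysis described above (not the authors' code): take the FFT of a joint-sound recording, locate the spectral peak, and report its frequency together with a relative amplitude in dB. Here the dB level is normalized to the spectral maximum, which is an assumption for illustration.

```python
import numpy as np

def lachman_peak(signal, fs):
    """Return (peak frequency in Hz, relative level in dB) of a joint-sound recording."""
    signal = signal - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    level_db = 20 * np.log10(spectrum / np.max(spectrum) + 1e-12)  # 0 dB = spectral peak
    k = np.argmax(spectrum)
    return freqs[k], level_db[k]

# Synthetic example: an 87 Hz damped burst roughly mimicking the reported healthy-knee peak.
fs = 4000
t = np.arange(0, 0.5, 1.0 / fs)
burst = np.sin(2 * np.pi * 87 * t) * np.exp(-10 * t)
print(lachman_peak(burst, fs))   # peak near 87 Hz
```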

  8. Multidimensional Approach to the Development of a Mandarin Chinese-Oriented Sound Test

    Science.gov (United States)

    Hung, Yu-Chen; Lin, Chun-Yi; Tsai, Li-Chiun; Lee, Ya-Jung

    2016-01-01

    Purpose: Because the Ling six-sound test is based on American English phonemes, it can yield unreliable results when administered to non-English speakers. In this study, we aimed specifically to improve the diagnostic palette for Mandarin Chinese users by developing an adapted version of the Ling six-sound test. Method: To determine the set of…

  9. Psychometric characteristics of single-word tests of children's speech sound production.

    Science.gov (United States)

    Flipsen, Peter; Ogiela, Diane A

    2015-04-01

    Our understanding of test construction has improved since the now-classic review by McCauley and Swisher (1984). The current review article examines the psychometric characteristics of current single-word tests of speech sound production in an attempt to determine whether our tests have improved since then. It also provides a resource that clinicians may use to help them make test selection decisions for their particular client populations. Ten tests published since 1990 were reviewed to determine whether they met the 10 criteria set out by McCauley and Swisher (1984), as well as 7 additional criteria. All of the tests reviewed met at least 3 of McCauley and Swisher's (1984) original criteria, and 9 of 10 tests met at least 5 of them. Most of the tests met some of the additional criteria as well. The state of the art for single-word tests of speech sound production in children appears to have improved in the last 30 years. There remains, however, room for improvement.

  10. Monitoring of surface chemical and underground nuclear explosions with help of ionospheric radio-sounding above test site

    International Nuclear Information System (INIS)

    Krasnov, V.M.; Drobzheva, Ya.V.

    2000-01-01

    We describe the basic principles, advantages and disadvantages of the ionospheric method for monitoring surface chemical and underground nuclear explosions. The ionosphere acts as 'an apparatus' for infrasound measurements immediately above the test site, and remote radio sounding of the ionosphere makes it possible to obtain that information and thus to carry out inspection of the test site. The main disadvantage of the ionospheric method is the necessity of sounding the ionosphere with radio waves. (author)

  11. Testing Cosmology with Cosmic Sound Waves

    CERN Document Server

    Corasaniti, Pier Stefano

    2008-01-01

    WMAP observations have accurately determined the position of the first two peaks and dips in the CMB temperature power spectrum. These encode information on the ratio of the distance to the last scattering surface to the sound horizon at decoupling. However, pre-recombination processes can contaminate this distance information. In order to assess the amplitude of these effects, we use the WMAP data and evaluate the relative differences of the CMB peak and dip multipoles. We find that the position of the first peak is largely displaced with respect to the expected position of the sound horizon scale at decoupling. In contrast, the relative spacings of the higher extrema are statistically consistent with those expected from perfect harmonic oscillations. This provides evidence for a scale-dependent phase shift of the CMB oscillations which is caused by gravitational driving forces affecting the propagation of sound waves before recombination. By accounting for these effects we have performed a MCMC likelihood...

  12. Time-domain electromagnetic soundings at the Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Frischknecht, F.C.; Raab, P.V.

    1984-01-01

    Structural discontinuities and variations in the resistivity of near-surface rocks often seriously distort dc resistivity and frequency-domain electromagnetic (FDEM) depth sounding curves. Reliable interpretation of such curves using one-dimensional (1-D) models is difficult or impossible. Short-offset time-domain electromagnetic (TDEM) sounding methods offer a number of advantages over other common geoelectrical sounding methods when working in laterally heterogeneous areas. In order to test the TDEM method in a geologically complex region, measurements were made on the east flank of Yucca Mountain at the Nevada Test Site (NTS). Coincident, offset coincident, single, and central loop configurations with square transmitting loops, either 305 or 152 m on a side, were used. Measured transient voltages were transformed into apparent resistivity values and then inverted in terms of 1-D models. Good fits to all of the offset coincident and single loop data were obtained using three-layer models. In most of the area, two well-defined interfaces were mapped, one which corresponds closely to a contact between stratigraphic units at a depth of about 400 m and another which corresponds to a transition from relatively unaltered to altered volcanic rocks at a depth of about 1000 m. In comparison with the results of a dipole-dipole resistivity survey, the results of the TDEM survey emphasize changes in the geoelectrical section with depth. Nonetheless, discontinuities in the layering mapped with the TDEM method delineated major faults or fault zones along the survey traverse. 5 refs., 10 figs., 1 tab

  13. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  14. 40 CFR 205.54-1 - Low speed sound emission test procedures.

    Science.gov (United States)

    2010-07-01

    40 CFR 205.54-1 (Title 40, Protection of Environment, Vol. 24, revised 2010-07-01): Low speed sound emission test procedures. Environmental Protection Agency (Continued), Noise Abatement Programs, Transportation Equipment Noise Emission Controls, Medium and Heavy Trucks, § 205.54-1.

  15. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.
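
    A minimal sketch of a four-channel noise vocoder of the kind used to spectrally degrade the stimuli above. The band edges, filter orders and 50 Hz envelope cutoff are assumptions for illustration, not the parameters of the study.

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def noise_vocode(x, fs, edges=(100, 600, 1500, 3500, 7000)):
    """Replace the fine structure in each band with noise while keeping its envelope."""
    out = np.zeros(len(x), dtype=float)
    rng = np.random.default_rng(0)
    env_lp = butter(2, 50, btype="lowpass", fs=fs, output="sos")      # envelope smoother
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = sosfiltfilt(env_lp, np.abs(sosfilt(band, x)))           # channel envelope
        carrier = sosfilt(band, rng.standard_normal(len(x)))          # band-limited noise
        out += env * carrier                                          # re-impose envelope
    return out / (np.max(np.abs(out)) + 1e-12)

# Usage (assumed 16 kHz speech samples): degraded = noise_vocode(speech, 16_000)
```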

  16. Digital servo control of random sound test excitation. [in reverberant acoustic chamber

    Science.gov (United States)

    Nakich, R. B. (Inventor)

    1974-01-01

    A digital servocontrol system for random noise excitation of a test object in a reverberant acoustic chamber employs a plurality of sensors spaced in the sound field to produce signals in separate channels which are decorrelated and averaged. The average signal is divided into a plurality of adjacent frequency bands cyclically sampled by a time division multiplex system, converted into digital form, and compared to a predetermined spectrum value stored in digital form. The results of the comparisons are used to control a time-shared up-down counter to develop gain control signals for the respective frequency bands in the spectrum of random sound energy picked up by the microphones.
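
    A toy sketch of the band-by-band servo logic described above (all values assumed): each cycle, the averaged level in a frequency band is compared with its stored target spectrum and a per-band gain is stepped up or down, as the time-shared up-down counter would do.

```python
import numpy as np

target_db = np.array([120.0, 122.0, 124.0, 123.0, 121.0])  # assumed target spectrum
step_db = 0.5                                              # counter step size per cycle

def servo_update(gain_db, measured_db):
    """Return the updated per-band gains after one multiplex cycle."""
    return gain_db + step_db * np.sign(target_db - measured_db)

gains = np.zeros_like(target_db)
gains = servo_update(gains, np.array([118.0, 124.0, 124.0, 120.0, 119.0]))
print(gains)   # [ 0.5 -0.5  0.   0.5  0.5]
```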

  17. Examination of optimum test conditions for a 3-point bending and cutting test to evaluate sound emission of wafer during deformation

    Directory of Open Access Journals (Sweden)

    Erdem Carsanba

    2018-04-01

    Full Text Available The purpose of this study was to investigate the optimum test conditions for acoustical-mechanical measurement of wafers analysed by an Acoustic Envelope Detector attached to a Texture Analyser. Force-displacement and acoustic signals were recorded simultaneously using two different methods (3-point bending and cutting tests). In order to study the acoustical-mechanical behaviour of wafers, the parameters “maximum sound pressure”, “total count peaks” and “mean sound value” were used, and the optimal microphone position and test speed were examined. With the microphone at a 45° angle and 1 cm distance, and at a low test speed of 0.5 mm/s, wafers of different quality could be distinguished best. The angle of the microphone did not have a significant effect on the acoustic results, and the number of peaks of the force and acoustic signals decreased with increasing distance and test speed.
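
    A sketch of how the acoustic indices named above could be computed from an acoustic-envelope trace recorded during a texture test. The peak-detection threshold and the input data are assumptions for illustration, not the study's settings.

```python
import numpy as np
from scipy.signal import find_peaks

def acoustic_indices(envelope_db, min_height_db=20.0):
    """Compute simple summary indices from an acoustic envelope trace (in dB)."""
    peaks, _ = find_peaks(envelope_db, height=min_height_db)
    return {
        "max_sound_pressure_db": float(np.max(envelope_db)),
        "total_count_peaks": int(len(peaks)),
        "mean_sound_value_db": float(np.mean(envelope_db)),
    }

# Usage: indices = acoustic_indices(recorded_envelope_db)
```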

  18. Anticipated Effectiveness of Active Noise Control in Propeller Aircraft Interiors as Determined by Sound Quality Tests

    Science.gov (United States)

    Powell, Clemans A.; Sullivan, Brenda M.

    2004-01-01

    Two experiments were conducted, using sound quality engineering practices, to determine the subjective effectiveness of hypothetical active noise control systems in a range of propeller aircraft. The two tests differed by the type of judgments made by the subjects: pair comparisons in the first test and numerical category scaling in the second. Although the results of the two tests were in general agreement that the hypothetical active control measures improved the interior noise environments, the pair comparison method appears to be more sensitive to subtle changes in the characteristics of the sounds which are related to passenger preference.

  19. Sound field control for a low-frequency test facility

    DEFF Research Database (Denmark)

    Pedersen, Christian Sejer; Møller, Henrik

    2013-01-01

    The two largest problems in controlling the reproduction of low-frequency sound for psychoacoustic experiments are the effect of the room due to standing waves and the relatively large sound pressure levels needed. Anechoic rooms are limited downward in frequency and distortion may be a problem even at moderate levels, while pressure-field playback can give higher sound pressures but is limited upward in frequency. A new solution that addresses both problems has been implemented in the laboratory of Acoustics at Aalborg University. The solution uses one wall with 20 loudspeakers to generate a plane wave that is actively absorbed when it reaches the 20 loudspeakers on the opposing wall. This gives a homogeneous sound field in the majority of the room with a flat frequency response in the frequency range 2-300 Hz. The lowest frequencies are limited to sound pressure levels in the order of 95 dB. If larger levels...

  20. Non-contact test of coating by means of laser-induced ultrasonic excitation and holographic sound representation

    International Nuclear Information System (INIS)

    Crostack, H.A.; Pohl, K.Y.; Radtke, U.

    1991-01-01

    In order to circumvent the problems of introducing and picking up sound, which occur in conventional ultrasonic testing, a completely non-contact test process was developed. The ultrasonic surface wave required for the test is generated without contact by absorption of laser beams. The recording of the ultrasound is also performed without contact, using a holographic interferometry technique that permits a large-scale representation of the sound field. Using the example of MCrAlY and ZrO2 layers, the suitability of the process for testing thermally sprayed coatings on metal substrates is demonstrated. The possibilities and limits of the process for the detection and description of delamination and cracks are shown. (orig.) [de]

  1. Hearing Tests on Mobile Devices: Evaluation of the Reference Sound Level by Means of Biological Calibration.

    Science.gov (United States)

    Masalski, Marcin; Kipiński, Lech; Grysiński, Tomasz; Kręcicki, Tomasz

    2016-05-30

    Hearing tests carried out in a home setting by means of mobile devices require prior calibration of the reference sound level. Mobile devices with bundled headphones create the possibility of applying a predefined level for a particular model as an alternative to calibrating each device separately. The objective of this study was to determine the reference sound level for sets composed of a mobile device and bundled headphones. Reference sound levels for Android-based mobile devices were determined using an open-access mobile phone app by means of biological calibration, that is, in relation to the normal-hearing threshold. The examinations were conducted in 2 groups: an uncontrolled and a controlled one. In the uncontrolled group, fully automated self-measurements were carried out in home conditions by 18- to 35-year-old subjects without prior hearing problems, recruited online. Calibration was conducted as a preliminary step in preparation for further examination. In the controlled group, audiologist-assisted examinations were performed in a sound booth on normal-hearing subjects verified through pure-tone audiometry, recruited offline from among the workers and patients of the clinic. In both groups, the reference sound levels were determined on the subject's mobile device using Bekesy audiometry. The reference sound levels were compared between the groups, and intramodel and intermodel analyses were carried out as well. In the uncontrolled group, 8988 calibrations were conducted on 8620 different devices representing 2040 models. In the controlled group, 158 calibrations (test and retest) were conducted on 79 devices representing 50 models. Result analysis was performed for the 10 most frequently used models in both groups. The difference in reference sound levels between the uncontrolled and controlled groups was 1.50 dB (SD 4.42). The mean SD of the reference sound level determined for devices within the same model was 4.03 dB (95% CI 3

  2. Selecting participants for listening tests of multi-channel reproduced sound

    DEFF Research Database (Denmark)

    Wickelmaier, Florian Maria; Choisel, Sylvain

    2005-01-01

    A selection procedure was devised in order to select listeners for experiments in which their main task will be to judge multichannel reproduced sound. Ninety-one participants filled in a web-based questionnaire. Seventy-eight of them took part in an assessment of their hearing thresholds, their spatial hearing, and their verbal production abilities. The listeners displayed large individual differences in their performance. Forty subjects were selected based on the test results. The self-assessed listening habits and experience in the web-based questionnaire could not predict the results of the selection procedure. Further, the hearing thresholds did not correlate with the spatial-hearing test. This leads to the conclusion that task-specific performance tests might be the preferable means of selecting a listening panel.

  3. Selecting participants for listening tests of multi-channel reproduced sound

    DEFF Research Database (Denmark)

    Wickelmaier, Florian; Choisel, Sylvain

    2005-01-01

    A selection procedure was devised in order to select listeners for experiments in which their main task will be to judge multi-channel reproduced sound. 91 participants filled in a web-based questionnaire. 78 of them took part in an assessment of their hearing thresholds, their spatial hearing, and their verbal production abilities. The listeners displayed large individual differences in their performance. 40 subjects were selected based on the test results. The self-assessed listening habits and experience in the web questionnaire could not predict the results of the selection procedure. Further, the hearing thresholds did not correlate with the spatial-hearing test. This leads to the conclusion that task-specific performance tests might be the preferable means of selecting a listening panel.

  4. Designing, Modeling, Constructing, and Testing a Flat Panel Speaker and Sound Diffuser for a Simulator

    Science.gov (United States)

    Dillon, Christina

    2013-01-01

    The goal of this project was to design, model, build, and test a flat panel speaker and frame for a spherical dome structure being made into a simulator. The simulator will be a test bed for evaluating an immersive environment for human interfaces. This project focused on the loudspeakers and a sound diffuser for the dome; the rest of the team worked on an Ambisonics 3D sound system, a video projection system, and a multi-direction treadmill to create the most realistic scene possible. The main programs utilized in this project were Pro-E and COMSOL. Pro-E was used for creating detailed figures for the fabrication of a frame that held a flat panel loudspeaker. The loudspeaker was made from a thin sheet of Plexiglas and 4 acoustic exciters. COMSOL, a multiphysics finite element analysis simulator, was used to model and evaluate all stages of the loudspeaker, frame, and sound diffuser. Acoustical test measurements were used to create polar plots from the working prototype, which were then compared to the COMSOL simulations to select the optimal design for the dome. The final goal of the project was to install the flat panel loudspeaker design, in addition to a sound diffuser, on the wall of the dome. After running tests in COMSOL on various speaker configurations, including a warped Plexiglas version, the optimal speaker design included a flat piece of Plexiglas with a rounded frame to match the curvature of the dome. Eight of these loudspeakers will be mounted into an inch and a half of high-performance acoustic insulation (Thinsulate) that will cover the inside of the dome. The following technical paper discusses these projects and explains the engineering processes used, the knowledge gained, and the projected future goals of this project.

  5. Earth Observing System (EOS)/ Advanced Microwave Sounding Unit-A (AMSU-A): Special Test Equipment. Software Requirements

    Science.gov (United States)

    Schwantje, Robert

    1995-01-01

    This document defines the functional, performance, and interface requirements for the Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) Special Test Equipment (STE) software used in the test and integration of the instruments.

  6. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals), and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals' frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations), as well as the directional properties of the emitted signal. Many of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of the emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...
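
    A small numeric illustration of the size-versus-wavelength point made above: efficient radiation requires the emitter to be not much smaller than the wavelength, which follows directly from the relation wavelength = c / frequency (speed of sound in air assumed to be 343 m/s).

```python
def wavelength_m(frequency_hz: float, c: float = 343.0) -> float:
    """Wavelength in air for a given frequency (c = speed of sound in m/s)."""
    return c / frequency_hz

print(wavelength_m(100))     # 3.43 m  - hard for a small animal to radiate efficiently
print(wavelength_m(10_000))  # 0.0343 m - easy for a small emitter
```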

  7. Perception of environmental sounds by experienced cochlear implant patients

    Science.gov (United States)

    Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan

    2011-01-01

    Objectives: Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well-being. Perception of environmental sounds, as acoustically and semantically complex stimuli, may also involve some factors common to the processing of speech. However, very limited research has investigated the ability of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Design: Seventeen experienced postlingually deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception, and the role of working memory and some basic auditory abilities, were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern and temporal order for tones tests) and a backward digit recall test. Results: The results indicated a substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants and r = 0.48 for vowels. HINT and

  8. HESTIA Commodities Exchange Pallet and Sounding Rocket Test Stand

    Science.gov (United States)

    Chaparro, Javier

    2013-01-01

    During my Spring 2016 internship, my two major contributions were the design of the Commodities Exchange Pallet and the design of a test stand for a 100 pounds-thrust sounding rocket. The Commodities Exchange Pallet is a prototype developed for the Human Exploration Spacecraft Testbed for Integration and Advancement (HESTIA) program. Under the HESTIA initiative, the Commodities Exchange Pallet was developed as a method for demonstrating multi-system integration through the transportation of In-Situ Resource Utilization produced oxygen and water to a human habitat. Ultimately, this prototype's performance will allow for future evaluation of integration, which may lead to the development of a flight-capable pallet for future deep-space exploration missions. For HESTIA, my main task was to design the Commodities Exchange Pallet system to be used for completing an integration demonstration. Under the guidance of my mentor, I designed both the structural frame and the fluid delivery system for the commodities pallet. The fluid delivery system includes a liquid-oxygen to gaseous-oxygen system, a water delivery system, and a carbon-dioxide compressor system. The structural frame is designed to meet safety and transportation requirements, as well as to interface with the ER division's Portable Utility Pallet. The commodities pallet structure also includes independent instrumentation oxygen/water panels for operation and system monitoring. My major accomplishments for the commodities exchange pallet were the completion of the fluid delivery system and structural frame designs. In addition, parts selection was completed in order to expedite construction of the prototype, scheduled to begin in May of 2016. Once the commodities pallet is assembled and tested, it is expected to complete a fully integrated transfer demonstration with the ISRU unit and the Environmental Control and Life Support System test chamber in September of 2016. In addition to the development of

  9. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    While recording lung sound (LS) signals from the chest wall of a subject, there is always a heart sound (HS) signal interfering with them. This obscures the features of the lung sound signals and creates confusion about pathological states, if any, of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference from the desired lung sound signals. In this method, the mixed signal is split into several components. Some of these components contain larger proportions of interfering signals such as heart sounds and environmental noise, and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart and lung sounds. The proposed method is found to be superior in terms of time-domain, frequency-domain, and time-frequency-domain representations and also in a listening test performed by a pulmonologist.
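
    A sketch of the general idea (not the authors' exact algorithm): decompose the mixed recording into intrinsic mode functions, drop the components dominated by low-frequency heart-sound energy, and rebuild the lung sound from the rest. The 150 Hz cutoff and the mean-frequency selection criterion are assumptions for illustration; the PyEMD package (pip name "EMD-signal") is used only as an example decomposition tool.

```python
import numpy as np
from PyEMD import EMD   # example EMD implementation

def remove_heart_sound(mixed, fs, cutoff_hz=150.0):
    """Keep only IMFs whose spectral centroid lies above the assumed heart-sound band."""
    imfs = EMD()(mixed)
    keep = []
    for imf in imfs:
        spec = np.abs(np.fft.rfft(imf)) ** 2
        freqs = np.fft.rfftfreq(len(imf), d=1.0 / fs)
        mean_freq = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
        if mean_freq > cutoff_hz:        # discard low-frequency (heart-dominated) modes
            keep.append(imf)
    return np.sum(keep, axis=0) if keep else np.zeros_like(mixed)
```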

  10. A quick test of the WEP enabled by a sounding rocket

    Energy Technology Data Exchange (ETDEWEB)

    Reasenberg, Robert D; Patla, Biju R; Phillips, James D; Popescu, Eugeniu E; Rocco, Emanuele; Thapa, Rajesh [Smithsonian Astrophysical Observatory, Harvard-Smithsonian Center for Astrophysics, 60 Garden St, Cambridge, MA 02138 (United States); Lorenzini, Enrico C, E-mail: reasenberg@cfa.harvard.edu [Faculty of Engineering, Universita di Padova, Padova I-35122 (Italy)

    2011-05-07

    We describe SR-POEM, a Galilean test of the weak equivalence principle (WEP), which is to be conducted during the free-fall portion of a sounding rocket flight. This test of a single pair of substances is aimed at a measurement uncertainty of σ(η) < 10⁻¹⁶ after averaging the results of eight separate drops, each of 40 s duration. The WEP measurement is made with a set of four laser gauges that are expected to achieve 0.1 pm/√Hz. We address the two sources of systematic error that are currently of greatest concern: magnetic force and electrostatic (patch effect) force on the test mass assemblies. The discovery of a violation (η ≠ 0) would have profound implications for physics, astrophysics and cosmology.
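
    A back-of-envelope illustration of the averaging quoted above, assuming the eight drops are independent with equal variance: the uncertainty of the mean shrinks by the square root of the number of drops, so each individual drop may be roughly sqrt(8) times noisier than the final target.

```python
import math

n_drops = 8
sigma_target = 1e-16                               # stated goal for the averaged result
sigma_per_drop = sigma_target * math.sqrt(n_drops)
print(f"allowed single-drop uncertainty: {sigma_per_drop:.1e}")   # ~2.8e-16
```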

  11. The influence of ski helmets on sound perception and sound localisation on the ski slope

    Directory of Open Access Journals (Sweden)

    Lana Ružić

    2015-04-01

    Full Text Available Objectives: The aim of the study was to investigate whether a ski helmet interferes with sound localization and the time of sound perception in the frontal plane. Material and Methods: Twenty-three participants (age 30.7±10.2) were tested on the slope in 2 conditions, with and without a ski helmet, using 6 different spatially distributed sound stimuli per condition. Each subject had to react as soon as possible upon hearing a sound and to signal the correct side of its arrival. Results: The results showed a significant difference in the ability to localize the specific ski sounds: 72.5±15.6% correct answers without a helmet vs. 61.3±16.2% with a helmet (p < 0.01). However, performance on this test did not depend on whether the participants were used to wearing a helmet (p = 0.89). In identifying the time at which the sound was first perceived, the results were also in favor of the subjects not wearing a helmet. The subjects reported hearing the ski sound cues at 73.4±5.56 m without a helmet vs. 60.29±6.34 m with a helmet (p < 0.001). In that case the results did depend on previous helmet use (p < 0.05), meaning that regular use of helmets might help to diminish the attenuation of sound identification that occurs because of the helmets. Conclusions: Ski helmets might limit the ability of a skier to localize the direction of sounds of danger and might delay the moment at which a sound is first heard.
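
    A sketch of the paired comparison implied by the design above (each participant is tested both with and without a helmet). The spike of synthetic data below is generated from the means and SDs quoted in the abstract purely for illustration; it is not the study's data, and the study's actual statistical procedure is not specified here.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
no_helmet = rng.normal(72.5, 15.6, size=23)   # % correct, illustrative values
helmet = rng.normal(61.3, 16.2, size=23)      # % correct, illustrative values
t, p = ttest_rel(no_helmet, helmet)           # paired test: same subjects, two conditions
print(f"t = {t:.2f}, p = {p:.3f}")
```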

  12. Non-Wovens as Sound Reducers

    Science.gov (United States)

    Belakova, D.; Seile, A.; Kukle, S.; Plamus, T.

    2018-04-01

    Within the present study, the effect of hemp (40 wt%) and polylactide (60 wt%) non-woven surface density, thickness and number of fibre web layers on the sound absorption coefficient and the sound transmission loss in the frequency range from 50 to 5000 Hz is analysed. The sound insulation properties of the experimental samples have been determined, compared to materials in practical use, and the possible uses of the material have been defined. Non-woven materials are ideally suited for use in acoustic insulation products because the arrangement of fibres produces a porous material structure, which leads to a greater interaction between sound waves and the fibre structure. Of all the tested samples (A, B and D), the non-woven variant B exceeded the surface density of sample A by 1.22 times and that of sample D by 1.15 times. By placing non-wovens one above the other in 2 layers, it is possible to increase the absorption coefficient of the material, which, depending on the frequency, corresponds to sound absorption classes C, D, and E. Sample A demonstrates the best sound absorption of all three samples in the frequency range from 250 to 2000 Hz. In the test frequency range from 50 to 5000 Hz, the sound transmission loss varies from 0.76 dB (sample D at 63 Hz) to 3.90 dB (sample B at 5000 Hz).

  13. Interpretation of time-domain electromagnetic soundings in the Calico Hills area, Nevada Test Site, Nye County, Nevada

    Science.gov (United States)

    Kauahikaua, J.

    A controlled source, time domain electromagnetic (TDEM) sounding survey was conducted in the Calico Hills area of the Nevada Test Site (NTS). The geoelectric structure was determined as an aid in the evaluation of the site for possible future storage of spent nuclear fuel or high level nuclear waste. The data were initially interpreted with a simple scheme that produces an apparent resistivity versus depth curve from the vertical magnetic field data. These curves are qualitatively interpreted much like standard Schlumberger resistivity sounding curves. Final interpretation made use of a layered earth Marquardt inversion computer program. The results combined with those from a set of Schlumberger soundings in the area show that there is a moderately resistive basement at a depth no greater than 800 meters. The basement resistivity is greater than 100 ohm meters.

  14. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12-158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc.: Order granting long-term authority to import/export natural gas from/to...

  15. Underwater Sound Levels at a Wave Energy Device Testing Facility in Falmouth Bay, UK.

    Science.gov (United States)

    Garrett, Joanne K; Witt, Matthew J; Johanning, Lars

    2016-01-01

    Passive acoustic monitoring devices were deployed at FaBTest, a marine renewable energy device testing facility in Falmouth Bay, UK, during trials of a wave energy device. The area supports considerable commercial shipping and recreational boating along with diverse marine fauna. Noise monitoring occurred during (1) a baseline period, (2) installation activity, (3) the device in situ with inactive power status, and (4) the device in situ with active power status. This paper discusses the preliminary findings of the sound recording at FaBTest during these different activity periods of a wave energy device trial.

  16. Monitoring of aquifer pump tests with Magnetic Resonance Sounding (MRS): a synthetic case study

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Auken, E.; Bauer-Gottwein, Peter

    2011-01-01

    Magnetic Resonance Sounding (MRS) can provide valuable data to constrain and calibrate groundwater flow and transport models. With this non-invasive geophysical technique, measurements of water content and hydraulic conductivity can be obtained. We developed a hydrogeophysical forward method, which calculates the MRS signal generated by an aquifer pump test. A synthetic MRS dataset was subsequently used to determine the hydrogeological parameters in an inverse parameter estimation approach. This was done for a virtual pump test with a partially and a fully penetrating well. With the MRS data we were...

  17. Teaching Acoustic Properties of Materials in Secondary School: Testing Sound Insulators

    Science.gov (United States)

    Hernandez, M. I.; Couso, D.; Pinto, R.

    2011-01-01

    Teaching the acoustic properties of materials is a good way to teach physics concepts, extending them into the technological arena related to materials science. This article describes an innovative approach for teaching sound and acoustics in combination with sound insulating materials in secondary school (15-16-year-old students). Concerning the…

  18. Letter-Sound Knowledge: Exploring Gender Differences in Children When They Start School Regarding Knowledge of Large Letters, Small Letters, Sound Large Letters, and Sound Small Letters

    Directory of Open Access Journals (Sweden)

    Hermundur Sigmundsson

    2017-09-01

    Full Text Available This study explored whether there is a gender difference in letter-sound knowledge when children start school. 485 children aged 5–6 years completed an assessment of letter-sound knowledge, i.e., large letters; sound of large letters; small letters; sound of small letters. The findings indicate a significant difference between girls and boys in all four factors tested in this study, in favor of the girls. There is still no clear explanation for the basis of the presumed gender difference in letter-sound knowledge. That the findings have their origin in neuro-biological factors cannot be excluded; however, the fact that girls have probably been exposed to more language experience/stimulation than boys lends support to explanations derived from environmental aspects.

  19. Interpretation of time-domain electromagnetic soundings in the Calico Hills area, Nevada Test Site, Nye County, Nevada

    International Nuclear Information System (INIS)

    Kauahikaua, J.

    1981-01-01

    A controlled source, time-domain electromagnetic (TDEM) sounding survey was conducted in the Calico Hills area of the Nevada Test Site (NTS). The goal of this survey was the determination of the geoelectric structure as an aid in the evaluation of the site for possible future storage of spent nuclear fuel or high-level nuclear waste. The data were initially interpreted with a simple scheme that produces an apparent resistivity versus depth curve from the vertical magnetic field data. These curves can be qualitatively interpreted much like standard Schlumberger resistivity sounding curves. Final interpretation made use of a layered-earth Marquardt inversion computer program (Kauahikaua, 1980). The results combined with those from a set of Schlumberger soundings in the area show that there is a moderately resistive basement at a depth no greater than 800 meters. The basement resistivity is greater than 100 ohm-meters
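
    A generic illustration of a Marquardt-style (Levenberg-Marquardt) inversion like the layered-earth program referenced above: fit a parametric forward model to observed apparent-resistivity data by iterative least squares. The forward model below is a toy two-resistivity decay curve, not a real TDEM kernel, and all numbers are assumed for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(params, t):
    """Toy forward model: smooth transition between early- and late-time resistivity."""
    rho_early, rho_late, tau = params
    return rho_late + (rho_early - rho_late) * np.exp(-t / tau)

t_obs = np.logspace(-4, -1, 30)                                   # observation times, s
truth = forward([300.0, 50.0, 3e-3], t_obs)
rho_obs = truth * (1 + 0.02 * np.random.default_rng(0).standard_normal(30))  # 2 % noise

fit = least_squares(lambda p: forward(p, t_obs) - rho_obs,
                    x0=[200.0, 100.0, 1e-3], method="lm")         # Levenberg-Marquardt
print(fit.x)   # recovered parameters, close to [300, 50, 3e-3]
```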

  20. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single-unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), the experiments addressed whether the form of modulation depends on the temporal structure of the electrical stimulus. Following long-duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) in response to free-field noise stimuli decreased by 32%, but recovered over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms electrical pulse 25 ms before each noise stimulus caused faster and more varied forms of modulation: the modulation was shorter lived, and its effect varied between different acoustic stimuli, including different male calls, suggesting that modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  1. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice- and sound-specificity effects. In Experiment 1, we examined two conditions where integrality is high: the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: a change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.
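
    A sketch of the kind of manipulation described for Exp. 1b, under assumed parameters (Hilbert envelope smoothed at 30 Hz): the background sound's intensity envelope is shaped by the envelope of the spoken word so that the two become perceptually integral. This is an illustrative reconstruction, not the authors' stimulus-generation code.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def modulate_with_word_envelope(word, background, fs, env_cutoff_hz=30.0):
    """Return the background sound amplitude-modulated by the word's intensity envelope."""
    sos = butter(2, env_cutoff_hz, btype="lowpass", fs=fs, output="sos")
    env = sosfiltfilt(sos, np.abs(hilbert(word)))   # smoothed word envelope
    env = env / (np.max(env) + 1e-12)               # normalize to [0, 1]
    n = min(len(word), len(background))
    return background[:n] * env[:n]                 # envelope-matched background sound
```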

  2. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults (science teachers, parents wanting to help with homework, home-schoolers) seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: one of the coolest activities is whacking a spinning metal rod...

  3. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students’ pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds onto L2 sounds happened especially for eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. Those patterning cases emerged mostly due to the difference in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  4. ANALYZING THE PATTERN OF L1 SOUNDS ON L2 SOUNDS PRODUCED BY JAVANESE STUDENTS OF STKIP PGRI JOMBANG

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students’ pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds onto L2 sounds happened especially for eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. Those patterning cases emerged mostly due to the difference in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  5. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.
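
    A quick illustration of the frequency relations and musical-scale construction mentioned above: in equal temperament, each semitone multiplies the frequency by 2^(1/12), with A4 = 440 Hz taken as the reference.

```python
def note_frequency(semitones_from_a4: int) -> float:
    """Equal-temperament frequency relative to A4 = 440 Hz."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

print(round(note_frequency(3), 2))    # C5 ≈ 523.25 Hz (3 semitones above A4)
print(round(note_frequency(-9), 2))   # C4 ≈ 261.63 Hz (9 semitones below A4)
```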

  6. Sound absorption coefficient of coal bottom ash concrete for railway application

    Science.gov (United States)

    Ramzi Hannan, N. I. R.; Shahidan, S.; Maarof, Z.; Ali, N.; Abdullah, S. R.; Ibrahim, M. H. Wan

    2017-11-01

    A porous concrete is able to reduce sound waves that pass through it. When a sound wave strikes a material, a portion of the sound energy is reflected back, another portion is absorbed by the material, and the rest is transmitted. The larger the portion of the sound energy that is absorbed, the more the noise level can be reduced. This study investigates the sound absorption coefficient of coal bottom ash (CBA) concrete compared to that of normal concrete by carrying out impedance tube tests. Hence, this paper presents the results of the impedance tube tests of the CBA concrete and normal concrete.
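
    A sketch of how an impedance-tube measurement yields the absorption coefficient, assuming the two-microphone transfer-function method (ISO 10534-2 style); the abstract does not specify the exact procedure, and the microphone spacing and distance below are assumed geometry values.

```python
import numpy as np

def absorption_coefficient(H12, f, s=0.05, x1=0.10, c=343.0):
    """H12: measured transfer function p2/p1 (complex), f: frequencies in Hz,
    s: microphone spacing (m), x1: distance from sample to the farther mic (m)."""
    k = 2 * np.pi * f / c
    H_i = np.exp(-1j * k * s)                              # incident-wave transfer function
    H_r = np.exp(1j * k * s)                               # reflected-wave transfer function
    R = (H12 - H_i) / (H_r - H12) * np.exp(2j * k * x1)    # complex reflection factor
    return 1.0 - np.abs(R) ** 2                            # sound absorption coefficient

# Usage: alpha = absorption_coefficient(measured_H12, freqs)
```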

  7. Noise and noise disturbances from wind power plants - Tests with interactive control of sound parameters for more comfortable and less perceptible sounds

    International Nuclear Information System (INIS)

    Persson-Waye, K.; Oehrstroem, E.; Bjoerkman, M.; Agge, A.

    2001-12-01

    In experimental pilot studies, a methodology was worked out for interactively varying sound parameters in wind power plant noise. In the tests, 24 persons varied the center frequency of different band-widths, the frequency of a sine tone and the amplitude modulation of a sine tone in order to create as comfortable a sound as possible. The variations were based on the noise from the two wind turbines, Bonus and Wind World, and were performed at a constant dBA level. The results showed that the majority preferred a low-frequency tone (94 Hz and 115 Hz for Wind World and Bonus, respectively). The mean of the most comfortable amplitude modulation varied between 18 and 22 Hz, depending on the fundamental frequency. The mean of the center frequency for the different band-widths varied from 785 to 1104 Hz. In order to study the influence of wind velocity on the acoustic character of the noise, a long-time measurement program was performed. A remotely controlled system was developed, in which wind velocity, wind direction, temperature and humidity are registered simultaneously with the noise. Long-time registrations were performed for four different wind turbines.
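
    A sketch of the kind of stimulus the listeners shaped interactively: a low-frequency tone (for example 94 Hz) amplitude-modulated at about 18-22 Hz. The modulation depth, duration and sampling rate are assumptions for illustration.

```python
import numpy as np

def am_tone(f_carrier=94.0, f_mod=20.0, depth=0.5, fs=8000, dur=2.0):
    """Generate an amplitude-modulated sine tone."""
    t = np.arange(0, dur, 1.0 / fs)
    envelope = 1.0 + depth * np.sin(2 * np.pi * f_mod * t)
    return envelope * np.sin(2 * np.pi * f_carrier * t)

signal = am_tone()   # 94 Hz tone modulated at 20 Hz
```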

  8. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can
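
    The dissertation's pipeline starts with frame-based acoustic feature extraction (a frame of roughly 50 milliseconds of raw audio summarized as a feature vector) before the DBN segmentation step. The exact features are not listed in this record, so the sketch below only illustrates the general idea with two simple per-frame features; the function name and feature choice are assumptions.

        import numpy as np

        def frame_features(signal, sr, frame_ms=50.0):
            # Split the audio into fixed-length frames and compute a small feature vector
            # per frame: log energy and spectral centroid (stand-ins for richer features).
            frame_len = int(sr * frame_ms / 1000.0)
            n_frames = len(signal) // frame_len
            freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
            feats = np.zeros((n_frames, 2))
            for i in range(n_frames):
                frame = signal[i * frame_len:(i + 1) * frame_len]
                spectrum = np.abs(np.fft.rfft(frame))
                energy = np.sum(frame ** 2) + 1e-12
                centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
                feats[i] = (np.log(energy), centroid)
            return feats

    A segmentation front end like the DBN described above would then monitor changes in such feature sequences to decide where sound events begin and end.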

  9. Design and preliminary test results at Mach 5 of an axisymmetric slotted sound shield. [for supersonic wind tunnels (noise reduction in wind tunnel nozzles)

    Science.gov (United States)

    Beckwith, I. E.; Spokowski, A. J.; Harvey, W. D.; Stainback, P. C.

    1975-01-01

    The basic theory and sound attenuation mechanisms, the design procedures, and preliminary experimental results are presented for a small axisymmetric sound shield for supersonic wind tunnels. The shield consists of an array of small-diameter rods aligned nearly parallel to the entrance flow, with small gaps between the rods for boundary layer suction. Results show that at the lowest test Reynolds number (based on rod diameter) of 52,000 the noise shield reduced the test section noise by about 60 percent (or 8 dB attenuation), but no attenuation was measured for the higher range of test Reynolds numbers from 73,000 to 190,000. These results are below expectations based on data reported elsewhere for a flat sound shield model. The smaller attenuation from the present tests is attributed to insufficient suction at the gaps to prevent feedback of vacuum manifold noise into the shielded test flow and to insufficient suction to prevent transition of the rod boundary layers to turbulent flow at the higher Reynolds numbers. Schlieren photographs of the flow are shown.

  10. Repeatability and reproducibility of in situ measurements of sound reflection and airborne sound insulation index of noise barriers

    NARCIS (Netherlands)

    Garai, M.; Schoen, E.; Behler, G.; Bragado, B.; Chudalla, M.; Conter, M.; Defrance, J.; Demizieux, P.; Glorieux, C.; Guidorzi, P.

    2014-01-01

    In Europe, in situ measurements of sound reflection and airborne sound insulation of noise barriers are usually done according to CEN/TS 1793-5. This method has been improved substantially during the EU funded QUIESST collaborative project. Within the same framework, an inter-laboratory test has

  11. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with a soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as quadratic, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
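
    The harmonic product spectrum mentioned above is a simple pitch estimator: the magnitude spectrum is multiplied by downsampled copies of itself so that a peak emerges at the fundamental frequency. A minimal sketch of that idea follows; the windowing, number of harmonics and function name are assumptions, not details from the paper.

        import numpy as np

        def hps_pitch(frame, sr, n_harmonics=4):
            # Harmonic product spectrum: multiply the spectrum by copies of itself
            # compressed by factors 2..n_harmonics; the surviving peak marks the pitch.
            spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
            hps = spectrum.copy()
            for h in range(2, n_harmonics + 1):
                decimated = spectrum[::h]
                hps[:len(decimated)] *= decimated
            peak_bin = np.argmax(hps[1:]) + 1          # skip the DC bin
            return peak_bin * sr / len(frame)          # pitch estimate in Hz

    A pitch error measure could then compare the measured spectrum with an ideal harmonic series at the estimated pitch, which is roughly the kind of feature the classifier above builds on.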

  12. Light aircraft sound transmission studies - Noise reduction model

    Science.gov (United States)

    Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.

    1987-01-01

    Experimental tests conducted on the fuselage of a single-engine Piper Cherokee light aircraft suggest that the cabin interior noise can be reduced by increasing the transmission loss of the dominant sound transmission paths and/or by increasing the cabin interior sound absorption. The validity of using a simple room equation model to predict the cabin interior sound-pressure level for different fuselage and exterior sound field conditions is also examined. The room equation model is based on the sound power flow balance for the cabin space and utilizes the measured transmitted sound intensity data. The room equation model predictions were considered good enough to be used for preliminary acoustical design studies.
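
    One common form of such a power-balance ("room equation") estimate relates the measured transmitted sound intensity to the reverberant sound-pressure level in the receiving space. The sketch below shows that textbook relation only; the purely reverberant-field assumption, function name and variable names are mine, not the authors'.

        import numpy as np

        def cabin_spl(transmitted_intensity, panel_area, cabin_absorption,
                      rho_c=415.0, p_ref=2e-5):
            # transmitted_intensity : measured sound intensity through the panel, W/m^2
            # panel_area            : area of the transmitting panel, m^2
            # cabin_absorption      : total absorption area A = sum(alpha_i * S_i), m^2
            # rho_c                 : characteristic impedance of air, kg/(m^2 s)
            power_in = transmitted_intensity * panel_area           # sound power entering the cabin
            p_squared = 4.0 * power_in * rho_c / cabin_absorption   # mean-square reverberant pressure
            return 10.0 * np.log10(p_squared / p_ref ** 2)          # interior SPL in dB re 20 uPa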

  13. Construction Of Critical Thinking Skills Test Instrument Related The Concept On Sound Wave

    Science.gov (United States)

    Mabruroh, F.; Suhandi, A.

    2017-02-01

    This study aimed to construct test instrument of critical thinking skills of high school students related the concept on sound wave. This research using a mixed methods with sequential exploratory design, consists of: 1) a preliminary study; 2) design and review of test instruments. The form of test instruments in essay questions, consist of 18 questions that was divided into 5 indicators and 8 sub-indicators of the critical thinking skills expressed by Ennis, with questions that are qualitative and contextual. Phases of preliminary study include: a) policy studies; b) survey to the school; c) and literature studies. Phases of the design and review of test instruments consist of two steps, namely a draft design of test instruments include: a) analysis of the depth of teaching materials; b) the selection of indicators and sub-indicators of critical thinking skills; c) analysis of indicators and sub-indicators of critical thinking skills; d) implementation of indicators and sub-indicators of critical thinking skills; and e) making the descriptions about the test instrument. In the next phase of the review test instruments, consist of: a) writing about the test instrument; b) validity test by experts; and c) revision of test instruments based on the validator.

  14. Tinnitus (Phantom Sound): Risk coming for future

    Directory of Open Access Journals (Sweden)

    Suresh Rewar

    2015-01-01

    Full Text Available The word 'tinnitus' comes from the Latin word tinnire, meaning “to ring” or “a ringing.” Tinnitus is the perception of sound in the absence of any corresponding external sound. Tinnitus can take the form of continuous buzzing, hissing, or ringing, or a combination of these or other characteristics. Tinnitus affects 10% to 25% of the adult population. Tinnitus is classified into objective and subjective categories. Subjective tinnitus consists of meaningless sounds that are not associated with a physical sound and can be heard only by the person who has the tinnitus. Objective tinnitus is the result of a sound that can also be heard by the physician. Tinnitus is not a disease in itself but a common symptom, and because it involves the perception of sound or sounds, it is commonly associated with the hearing system. In fact, various parts of the hearing system, including the inner ear, are often responsible for this symptom. Tinnitus can severely affect patients, leading to sleep disturbances, concentration problems, fatigue, depression, anxiety disorders, and sometimes even suicide. The evaluation of tinnitus always begins with a thorough history and physical examination, with further testing performed when indicated. Diagnostic testing should include audiography and speech discrimination testing; when indicated, computed tomography angiography or magnetic resonance angiography should be performed. All patients with tinnitus can benefit from patient education and preventive measures, and oftentimes the physician's reassurance and assistance with the psychologic aftereffects of tinnitus can be the therapy most valuable to the patient. There are no specific medications for the treatment of tinnitus. Sedatives and some other medications may prove helpful in the early stages. The ultimate goal of neuro-imaging is to identify subtypes of tinnitus in order to better inform treatment strategies.

  15. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  16. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  17. Evaluative conditioning induces changes in sound valence

    Directory of Open Access Journals (Sweden)

    Anna C. Bolders

    2012-04-01

    Full Text Available Evaluative Conditioning (EC) has hardly been tested in the auditory domain, but it is a potentially valuable research tool. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruence effects on an affective priming task (APT) for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US or whether extinction occurs. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results warrant the use of EC to study processing of short environmental sounds with acquired valence, even if this requires repeated stimulus presentations. This paves the way for studying processing of affective environmental sounds while effectively controlling low-level stimulus properties.

  18. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Performance Verification Report: Initial Comprehensive Performance Test Report, P/N 1331200-2-IT, S/N 105/A2

    Science.gov (United States)

    Platt, R.

    1999-01-01

    This is the Performance Verification Report, Initial Comprehensive Performance Test Report, P/N 1331200-2-IT, S/N 105/A2, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A). The specification establishes the requirements for the Comprehensive Performance Test (CPT) and Limited Performance Test (LPT) of the Advanced Microwave Sounding, Unit-A2 (AMSU-A2), referred to herein as the unit. The unit is defined on Drawing 1331200. 1.2 Test procedure sequence. The sequence in which the several phases of this test procedure shall take place is shown in Figure 1, but the sequence can be in any order.

  19. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Engineering Test Report: Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A1, S/N 109

    Science.gov (United States)

    Valdez, A.

    2000-01-01

    This is the Engineering Test Report, Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A1, S/N 109, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).

  20. Neogene and Quaternary geology of a stratigraphic test hole on Horn Island, Mississippi Sound

    Science.gov (United States)

    Gohn, Gregory S.; Brewster-Wingard, G. Lynn; Cronin, Thomas M.; Edwards, Lucy E.; Gibson, Thomas G.; Rubin, Meyer; Willard, Debra A.

    1996-01-01

    During April and May, 1991, the U.S. Geological Survey (USGS) drilled a 510-ft-deep, continuously cored, stratigraphic test hole on Horn Island, Mississippi Sound, as part of a field study of the Neogene and Quaternary geology of the Mississippi coastal area. The USGS drilled two new holes at the Horn Island site. The first hole was continuously cored to a depth of 510 ft; coring stopped at this depth due to mechanical problems. To facilitate geophysical logging, an unsampled second hole was drilled to a depth of 519 ft at the same location.

  1. Sound Synthesis of Objects Swinging through Air Using Physical Models

    Directory of Open Access Journals (Sweden)

    Rod Selfridge

    2017-11-01

    Full Text Available A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.

  2. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Engineering Test Report: Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A1, S/N 108 2

    Science.gov (United States)

    Valdez, A.

    2000-01-01

    This is the Engineering Test Report, Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A1, S/N 108, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).

  3. New perspectives on mechanisms of sound generation in songbirds

    DEFF Research Database (Denmark)

    Goller, Franz; Larsen, Ole Næsbye

    2002-01-01

    The physical mechanisms of sound generation in the vocal organ, the syrinx, of songbirds have been investigated mostly with indirect methods. Recent direct endoscopic observation identified vibrations of the labia as the principal sound source. This model suggests sound generation in a pulse-tone mechanism similar to human phonation, with the labia forming a pneumatic valve. The classical avian model proposed that vibrations of the thin medial tympaniform membranes are the primary sound-generating mechanism. As a direct test of these two hypotheses we ablated the medial tympaniform membranes in two … atmosphere) as well as direct (labial vibration during tonal sound) measurements of syringeal vibrations support a vibration-based sound-generating mechanism even for tonal sounds.

  4. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Engineering Test Report: Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A2, S/N 108, 08

    Science.gov (United States)

    Valdez, A.

    2000-01-01

    This is the Engineering Test Report, Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A2, S/N 108, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).

  5. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite-amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time-domain solution of the KZK equation.

  6. Segmentation of heart sound recordings by a duration-dependent hidden Markov model

    International Nuclear Information System (INIS)

    Schmidt, S E; Graff, C; Toft, E; Struijk, J J; Holst-Hansen, C

    2010-01-01

    Digital stethoscopes offer new opportunities for computerized analysis of heart sounds. Segmentation of heart sound recordings into periods related to the first and second heart sounds (S1 and S2) is fundamental in the analysis process. However, segmentation of heart sounds recorded with handheld stethoscopes in clinical environments is often complicated by background noise. A duration-dependent hidden Markov model (DHMM) is proposed for robust segmentation of heart sounds. The DHMM identifies the most likely sequence of physiological heart sounds based on the duration of the events, the amplitude of the signal envelope and a predefined model structure. The DHMM was developed and tested with heart sounds recorded bedside with a commercially available handheld stethoscope from a population of patients referred for coronary angiography. The DHMM identified 890 S1 and S2 sounds out of 901, which corresponds to 98.8% (CI: 97.8–99.3%) sensitivity in 73 test patients, and 13 misplaced sounds out of 903 identified sounds, which corresponds to 98.6% (CI: 97.6–99.1%) positive predictive value. These results indicate that the DHMM is an appropriate model of the heart cycle and suitable for segmentation of clinically recorded heart sounds.
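
    The observation sequence such a model works on is typically a smoothed envelope of the band-limited phonocardiogram; the DHMM then assigns heart-sound states to it. The sketch below shows only that envelope front end, with band edges, filter orders and names chosen as plausible assumptions rather than the paper's actual settings.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def heart_sound_envelope(pcg, fs, band=(25.0, 400.0), smooth_hz=20.0):
            # Band-pass the phonocardiogram to the range where S1/S2 energy lies,
            # then take a smoothed Hilbert envelope as the segmentation observation.
            b, a = butter(4, [band[0] / (fs / 2.0), band[1] / (fs / 2.0)], btype="band")
            filtered = filtfilt(b, a, pcg)
            envelope = np.abs(hilbert(filtered))
            b2, a2 = butter(2, smooth_hz / (fs / 2.0))
            return filtfilt(b2, a2, envelope)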

  7. PREFACE: Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated to write the papers by the jet noise emitted by the newly commercialized jet-engined airplanes of that time. The technology of aerodynamic sound is ultimately directed at environmental problems, and the theory should therefore always be applied to newly emerging public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University first organized it. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays and is expected to reform the theoretical model of the generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research; they apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the

  8. Digital servo control of random sound fields

    Science.gov (United States)

    Nakich, R. B.

    1973-01-01

    It is necessary to place a number of sensors at different positions in the sound field to determine the actual sound intensities to which the test object is subjected. It is then possible to determine whether the specification is being met adequately or exceeded. Since the excitation is of a random nature, the signals are essentially coherent and it is impossible to obtain a true average.

  9. Physics of thermo-acoustic sound generation

    Science.gov (United States)

    Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.

    2013-09-01

    We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.

  10. Monitoring of aquifer pump tests with Magnetic Resonance Sounding (MRS)

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Auken, Esben; Bauer-Gottwein, Peter

    2009-01-01

    Magnetic Resonance Sounding (MRS) can provide valuable data to constrain and calibrate groundwater flow and transport models. With this non-invasive geophysical technique, field measurements of water content and hydraulic conductivities can be obtained. We developed a hydrogeophysical forward

  11. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology.

  12. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Engineering Test Report: AMSU-A2 METSAT Instrument (S/N 108) Acceptance Level Vibration Tests of Dec 1999/Jan 2000 (S/O 784077, OC-454)

    Science.gov (United States)

    Heffner, R.

    2000-01-01

    This is the Engineering Test Report, AMSU-A2 METSAT Instrument (S/N 108) Acceptance Level Vibration Test of Dec 1999/Jan 2000 (S/O 784077, OC-454), for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).

  13. Acoustic analysis of swallowing sounds: a new technique for assessing dysphagia.

    Science.gov (United States)

    Santamato, Andrea; Panza, Francesco; Solfrizzi, Vincenzo; Russo, Anna; Frisardi, Vincenza; Megna, Marisa; Ranieri, Maurizio; Fiore, Pietro

    2009-07-01

    To perform acoustic analysis of swallowing sounds, using a microphone and a notebook computer system, in healthy subjects and in patients with dysphagia affected by neurological diseases, testing the positive/negative predictive value of a pathological pattern of swallowing sounds for penetration/aspiration. Diagnostic test study, prospective, not blinded, with penetration/aspiration evaluated by fibreoptic endoscopy of swallowing as the criterion standard. Data from a previously recorded database of normal swallowing sounds for 60 healthy subjects, classified by gender, age, and bolus consistency, were compared with those of 15 patients with dysphagia from a university hospital referral centre who were affected by various neurological diseases. The mean duration of the swallowing sounds and post-swallowing apnoea were recorded. Penetration/aspiration was verified by fibreoptic endoscopy of swallowing in all patients with dysphagia. The mean duration of swallowing sounds for a liquid bolus of 10 ml water was significantly different between patients with dysphagia and healthy subjects. We also described patterns of swallowing sounds and tested the negative/positive predictive values of post-swallowing apnoea for penetration/aspiration verified by fibreoptic endoscopy of swallowing (sensitivity 0.67 (95% confidence interval 0.24-0.94); specificity 1.00 (95% confidence interval 0.56-1.00)). The proposed technique for recording and measuring swallowing sounds could be incorporated into the bedside evaluation, but it should not replace the use of more diagnostic and valuable measures.
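
    For readers less familiar with the test statistics quoted above, the short helper below computes sensitivity, specificity and the predictive values from a 2x2 table of outcomes; the counts in the usage line are invented purely for illustration and are not the study's data.

        def diagnostic_metrics(tp, fp, tn, fn):
            # tp/fp/tn/fn: true/false positives and negatives against the reference test
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            ppv = tp / (tp + fp) if (tp + fp) else float("nan")   # positive predictive value
            npv = tn / (tn + fn) if (tn + fn) else float("nan")   # negative predictive value
            return sensitivity, specificity, ppv, npv

        # Made-up example: 6 aspirators detected, 3 missed, 7 non-aspirators correctly cleared.
        print(diagnostic_metrics(tp=6, fp=0, tn=7, fn=3))   # ~0.67 sensitivity, 1.00 specificity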

  14. Review of sound card photogates

    International Nuclear Information System (INIS)

    Gingl, Zoltan; Mingesz, Robert; Mellar, Janos; Makra, Peter

    2011-01-01

    Photogates are probably the most commonly used electronic instruments to aid experiments in the field of mechanics. Although they are offered by many manufacturers, they can be too expensive to be used widely in all classrooms, in multiple experiments, or even for experimentation at home. Today all computers have a sound card - an interface for analogue signals. It is possible to make very simple yet highly accurate photogates for cents, while much more sophisticated solutions are also available at a still very low cost. In our paper we show in detail several experimentally tested ways of implementing sound card photogates, and we also provide full-featured, free, open-source photogate software as a much more efficient experimentation tool than the sound recording programs usually used. Further information is provided on a dedicated web page, www.noise.physx.u-szeged.hu/edudev.
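
    A sound-card photogate ultimately reduces to timing the edges of the recorded signal when the light beam is interrupted. The fragment below is a minimal illustration of that timing step (not the authors' software): it normalises the recorded channel and returns the rising-edge times; the threshold and names are assumptions.

        import numpy as np

        def gate_times(recording, fs, threshold=0.5):
            # Times (in s) at which the photogate signal recorded via the sound card
            # rises above the threshold, i.e. candidate beam-interruption events.
            x = np.abs(recording) / (np.max(np.abs(recording)) + 1e-12)
            above = x > threshold
            rising = np.flatnonzero(~above[:-1] & above[1:]) + 1
            return rising / float(fs)

    With two gates a known distance apart, the speed of a passing object is simply that distance divided by the difference between the two gate times.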

  15. Non-contact test of coating by means of laser-induced ultrasonic excitation and holographic sound representation. Beruehrungslose Pruefung von Beschichtungen mittels laserinduzierter Ultraschallanregung und holographischer Schallabbildung

    Energy Technology Data Exchange (ETDEWEB)

    Crostack, H A; Pohl, K Y [QZ-DO Qualitaetszentrum Dortmund GmbH und Co. KG (Germany); Radtke, U [Dortmund Univ. (Germany). Fachgebiet Qualitaetskontrolle

    1991-01-01

    In order to circumvent the problems of coupling sound in and out, which occur in conventional ultrasonic testing, a completely non-contact test process was developed. The ultrasonic surface wave required for the test is generated without contact by absorption of laser radiation. The recording of the ultrasound is also performed without contact, by a holographic interferometry technique which permits a large-scale imaging of the sound field. Using the example of MCrAlY and ZrO2 layers, the suitability of the process for testing thermally sprayed coatings on metal substrates is demonstrated. The possibilities and limits of the process for the detection and description of delamination and cracks are shown. (orig.).

  16. Improving auscultatory proficiency using computer simulated heart sounds

    Directory of Open Access Journals (Sweden)

    Hanan Salah EL-Deen Mohamed EL-Halawany

    2016-09-01

    Full Text Available This study aimed to examine the effects of 'Heart Sounds', a web-based program, on improving fifth-year medical students' auscultation skills in a medical school in Egypt. This program was designed for medical students to master cardiac auscultation skills in addition to their usual clinical medical courses. Pre- and post-tests were performed to assess students' improvement in auscultation skill. Upon completing the training, students were required to complete a questionnaire to reflect on the learning experience they developed through the 'Heart Sounds' program. Results from pre- and post-tests revealed a significant improvement in students' auscultation skills. In examining male and female students' pre- and post-test results, we found that both male and female students had achieved a remarkable improvement in their auscultation skills. On the other hand, students stated clearly that the learning experience they had with the 'Heart Sounds' program was different from any other traditional way of teaching. They stressed that the program had significantly improved their auscultation skills and enhanced their self-confidence in their ability to practice those skills. It is also recommended that the 'Heart Sounds' program learning experience should be extended by assessing students' practical improvement in real-life situations.

  17. Analysis of acoustic sound signal for ONB measurement

    International Nuclear Information System (INIS)

    Park, S. J.; Kim, H. I.; Han, K. Y.; Chai, H. T.; Park, C.

    2003-01-01

    The onset of nucleate boiling (ONB) was measured in a test fuel bundle composed of several fuel element simulators (FES) by analysing the acoustic sound signals. In order to measure ONB, a hydrophone, a pre-amplifier, and a data acquisition system to acquire and process the acoustic signal were prepared. The acoustic signal generated in the coolant is converted to a current signal through the microphone. When the signal is analyzed in the frequency domain, each sound signal can be identified according to the origin of its sound source. As the power is increased to a certain degree, nucleate boiling starts. The frequent formation and collapse of the void bubbles produce a sound signal. By measuring this sound signal one can pinpoint the ONB. Since the signal characteristics are identical for different mass flow rates, this method is applicable for ascertaining ONB.
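
    Detecting ONB this way amounts to watching the power of the hydrophone signal in the frequency band associated with bubble formation and collapse as the heater power is ramped. The sketch below computes such a band power; the band limits and names are placeholders chosen for illustration, since the record does not state the actual frequencies.

        import numpy as np

        def band_power(signal, fs, band=(5e3, 20e3)):
            # Power of the hydrophone signal inside the chosen frequency band.
            # A sustained rise of this value with increasing heater power marks ONB.
            spectrum = np.abs(np.fft.rfft(signal)) ** 2
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return float(np.sum(spectrum[mask]) / len(signal))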

  18. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

    The method presented in this thesis combines ultrasound techniques with magnetic resonance tomography (MRT). In absorbing media, an ultrasonic wave generates a static force in the direction of sound propagation. At sound intensities of a few W/cm² and sound frequencies in the lower MHz range, this force leads to a tissue displacement in the micrometer range. The displacement depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence from Siemens Healthcare AG was modified so that it (indirectly) measures the tissue displacement, encodes it as grey values, and presents it as a 2D image. By means of the grey values, the course of the sound beam in the tissue can be visualized, so that sound obstacles (changes of the sound impedance) can additionally be detected. From the acquired MRT images, spatial changes of the tissue parameters sound absorption and elasticity can be detected. In this thesis, measurements are presented which show the feasibility and future prospects of this method, especially for breast cancer diagnostics.

  19. Sound level measurements using smartphone "apps": Useful or inaccurate?

    Directory of Open Access Journals (Sweden)

    Daniel R Nast

    2014-01-01

    Full Text Available Many recreational activities are accompanied by loud concurrent sounds and decisions regarding the hearing hazards associated with these activities depend on accurate sound measurements. Sound level meters (SLMs are designed for this purpose, but these are technical instruments that are not typically available in recreational settings and require training to use properly. Mobile technology has made such sound level measurements more feasible for even inexperienced users. Here, we assessed the accuracy of sound level measurements made using five mobile phone applications or "apps" on an Apple iPhone 4S, one of the most widely used mobile phones. Accuracy was assessed by comparing application-based measurements to measurements made using a calibrated SLM. Whereas most apps erred by reporting higher sound levels, one application measured levels within 5 dB of a calibrated SLM across all frequencies tested.
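
    What all of these apps and meters report is, at bottom, a sound pressure level computed from the RMS of the (weighted) microphone signal. The sketch below shows that textbook calculation; the calibration offset against a reference meter, and the function name, are assumptions consistent with but not taken from the study.

        import numpy as np

        def spl_db(samples_pa, calibration_offset_db=0.0, p_ref=2e-5):
            # samples_pa: a block of samples scaled to pascals (or corrected afterwards
            # with an offset determined against a calibrated sound level meter).
            rms = np.sqrt(np.mean(np.square(samples_pa)))
            return 20.0 * np.log10(rms / p_ref) + calibration_offset_db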

  20. Noise detection during heart sound recording using periodicity signatures

    International Nuclear Information System (INIS)

    Kumar, D; Carvalho, P; Paiva, R P; Henriques, J; Antunes, M

    2011-01-01

    Heart sound is a valuable biosignal for the diagnosis of a large set of cardiac diseases. Ambient and physiological noise interference is one of the most common and highly probable incidents during heart sound acquisition. It tends to change the morphological characteristics of the heart sound that may carry important information for heart disease diagnosis. In this paper, we propose a new method, applicable in real time, to detect ambient and internal body noises manifested in heart sound during acquisition. The algorithm is developed on the basis of the periodic nature of heart sounds and physiologically inspired criteria. A small segment of uncontaminated heart sound exhibiting periodicity in time as well as in the time-frequency domain is first detected and applied as a reference signal in discriminating noise from the sound. The proposed technique has been tested with a database of heart sounds collected from 71 subjects with several types of heart disease, with several kinds of noise induced during recording. The achieved average sensitivity and specificity are 95.88% and 97.56%, respectively.

  1. A framework for automatic heart sound analysis without segmentation

    Directory of Open Access Journals (Sweden)

    Tungpimolrut Kanokvate

    2011-02-01

    Full Text Available Background A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method An equal number of cardiac cycles was extracted from heart sounds with different heart rates using information from the envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of the envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with a 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion The proposed method showed promising results and high noise robustness for a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients, and then further evaluating the method based on this new training set.
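
    The cardiac cycle length step described above can be illustrated in a few lines: the autocorrelation of the heart-sound envelope peaks at a lag equal to one cardiac period, so the cycle length can be read off without labelling S1 and S2. The search range and names below are illustrative assumptions, not the authors' exact procedure.

        import numpy as np

        def cardiac_cycle_length(envelope, fs, min_hr=40.0, max_hr=200.0):
            # Lag (in seconds) of the strongest autocorrelation peak of the envelope
            # within the physiological heart-rate range: an estimate of one cardiac cycle.
            env = envelope - np.mean(envelope)
            ac = np.correlate(env, env, mode="full")[len(env) - 1:]
            ac /= ac[0] + 1e-12
            lo = int(fs * 60.0 / max_hr)      # shortest plausible cycle, in samples
            hi = int(fs * 60.0 / min_hr)      # longest plausible cycle, in samples
            lag = lo + int(np.argmax(ac[lo:hi]))
            return lag / float(fs)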

  2. Realtime synthesized sword-sounds in Wii computer games

    DEFF Research Database (Denmark)

    Böttcher, Niels

    This paper presents the current work carried out on an interactive sword fighting game, developed for the Wii controller. The aim of the work is to develop highly interactive action-sound, which is closely mapped to the physical actions of the player. The interactive sword sound is developed using a combination of granular synthesis and subtractive synthesis simulating wind. The aim of the work is to test if more interactive sound can affect the way humans interact physically with their body, when playing games with controllers such as the Wii remote.

  3. From acoustic descriptors to evoked quality of car door sounds.

    Science.gov (United States)

    Bezat, Marie-Céline; Kronland-Martinet, Richard; Roussarie, Vincent; Ystad, Sølvi

    2014-07-01

    This article describes the first part of a study aiming at adapting the mechanical car door construction to the drivers' expectancies in terms of perceived quality of cars deduced from car door sounds. A perceptual cartography of car door sounds is obtained from various listening tests aiming at revealing both ecological and analytical properties linked to evoked car quality. In the first test naive listeners performed absolute evaluations of five ecological properties (i.e., solidity, quality, weight, closure energy, and success of closure). Then experts in the area of automobile doors categorized the sounds according to organic constituents (lock, joints, door panel), in particular whether or not the lock mechanism could be perceived. Further, a sensory panel of naive listeners identified sensory descriptors such as classical descriptors or onomatopoeia that characterize the sounds, hereby providing an analytic description of the sounds. Finally, acoustic descriptors were calculated after decomposition of the signal into a lock and a closure component by the Empirical Mode Decomposition (EMD) method. A statistical relationship between the acoustic descriptors and the perceptual evaluations of the car door sounds could then be obtained through linear regression analysis.

  4. Leading edge effect in laminar boundary layer excitation by sound

    International Nuclear Information System (INIS)

    Leehey, P.; Shapiro, P.

    1980-01-01

    Essentially plane pure tone sound waves were directed downstream over a heavily damped smooth flat plate installed in a low turbulence (0.04%) subsonic wind tunnel. Laminar boundary layer disturbance growth rates were measured with and without sound excitation and compared with numerical results from spatial stability theory. The data indicate that the sound field and Tollmien-Schlichting (T-S) waves coexist with comparable amplitudes when the latter are damped; moreover, the response is linear. Higher early growth rates occur for excitation by sound than by stream turbulence. Theoretical considerations indicate that the boundary layer is receptive to sound excitation primarily at the test plate leading edge. (orig.)

  5. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High-frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For a more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of a laminated structure and handy ...

  6. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    … 1) sounds associated with, or produced by, a physical event or human activity, and 2) sound sources that are common in the environment.

  7. The Process of Optimizing Mechanical Sound Quality in Product Design

    DEFF Research Database (Denmark)

    Eriksen, Kaare; Holst, Thomas

    2011-01-01

    The research field concerning the optimization of product sound quality is relatively unexplored, and may be difficult for designers to operate in. To some degree, sound is a highly subjective parameter, which is normally targeted at sound specialists. This paper describes the theoretical and practical background for managing a process of optimizing the mechanical sound quality of a product design by using simple tools and workshops systematically. The procedure is illustrated by a case study of a computer navigation tool (a computer mouse). The process is divided into 4 phases, which clarify the importance of product sound, define the perceptive demands identified by users, and, finally, suggest mechanical principles for modification of an existing sound design. The optimized mechanical sound design is followed by tests on users of the product in its use context. The result …

  8. Augmenting the Sound Experience at Music Festivals using Mobile Phones

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Stopczynski, Arkadiusz; Larsen, Jan

    2011-01-01

    In this paper we describe experiments carried out at the Nibe music festival in Denmark involving the use of mobile phones to augment the participants' sound experience at the concerts. The experiments involved N=19 test participants that used a mobile phone with a headset playing back sound...... “in-the-wild” experiments augmenting the sound experience at two concerts at this music festival....

  9. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  10. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2009-09-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  11. Free Flight Ground Testing of ADEPT in Advance of the Sounding Rocket One Flight Experiment

    Science.gov (United States)

    Smith, B. P.; Dutta, S.

    2017-01-01

    The Adaptable Deployable Entry and Placement Technology (ADEPT) project will be conducting the first flight test of ADEPT, titled Sounding Rocket One (SR-1), in just two months. The need for this flight test stems from the fact that ADEPT's supersonic dynamic stability has not yet been characterized. The SR-1 flight test will provide critical data describing the flight mechanics of ADEPT in ballistic flight. These data will feed decision making on future ADEPT mission designs. This presentation will describe the SR-1 scientific data products, possible flight test outcomes, and the implications of those outcomes on future ADEPT development. In addition, this presentation will describe free-flight ground testing performed in advance of the flight test. A subsonic flight dynamics test conducted at the Vertical Spin Tunnel located at NASA Langley Research Center provided subsonic flight dynamics data at high and low altitudes for multiple center of mass (CoM) locations. A ballistic range test at the Hypervelocity Free Flight Aerodynamics Facility (HFFAF) located at NASA Ames Research Center provided supersonic flight dynamics data at low supersonic Mach numbers. Execution and outcomes of these tests will be discussed. Finally, a hypothesized trajectory estimate for the SR-1 flight will be presented.

  12. The relationship between target quality and interference in sound zones

    DEFF Research Database (Denmark)

    Baykaner, Khan; Coleman, Phillip; Mason, Russell

    2015-01-01

    Sound zone systems aim to control sound fields in such a way that multiple listeners can enjoy different audio programs within the same room with minimal acoustic interference. Often, there is a trade-off between the acoustic contrast achieved between the zones and the fidelity of the reproduced...... audio program in the target zone. A listening test was conducted to obtain subjective measures of distraction, target quality, and overall quality of listening experience for ecologically valid programs within a sound zoning system. Sound zones were reproduced using acoustic contrast control, planarity...

  13. Frog sound identification using extended k-nearest neighbor classifier

    Science.gov (United States)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

    Frog sound identification based on vocalization is becoming important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest-neighbors and mutual-sharing-of-neighborhood concepts, with the aim of improving classification performance. It makes a prediction based both on which training samples are the nearest neighbors of the testing sample and on which training samples consider the testing sample as their nearest neighbor. In order to evaluate the classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds have been segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted as Mel Frequency Cepstrum Coefficients (MFCC). The experimental results have shown that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers KNN, FKNN, KGNN and MKNN for all cases.
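
    For orientation, the baseline against which EKNN is compared is plain k-nearest-neighbour voting over the MFCC feature vectors; EKNN additionally checks which training samples count the query among their own nearest neighbours before voting. The sketch below shows only the plain KNN baseline, with names and the Euclidean distance choice as assumptions.

        import numpy as np

        def knn_predict(train_feats, train_labels, query, k=5):
            # Plain k-nearest-neighbour vote over feature vectors (e.g. per-call MFCCs).
            distances = np.linalg.norm(train_feats - query, axis=1)
            nearest = np.argsort(distances)[:k]
            labels, counts = np.unique(np.asarray(train_labels)[nearest], return_counts=True)
            return labels[np.argmax(counts)]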

  14. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption indicates the transformation of sound energy into heat. It is, for instance, employed to design the acoustics in rooms. The noise emitted by machinery and plants shall be reduced before arriving at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components at the walls with well-defined absorption characteristics, which are adjusted for corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise intensive environments into the neighbourhood.

  15. Effects of Sound on the Behavior of Wild, Unrestrained Fish Schools.

    Science.gov (United States)

    Roberts, Louise; Cheesman, Samuel; Hawkins, Anthony D

    2016-01-01

    To assess and manage the impact of man-made sounds on fish, we need information on how behavior is affected. Here, wild unrestrained pelagic fish schools were observed under quiet conditions using sonar. Fish were exposed to synthetic piling sounds at different levels using custom-built sound projectors, and behavioral changes were examined. In some cases, the depth of schools changed after noise playback; full dispersal of schools was also evident. The methods we developed for examining the behavior of unrestrained fish to sound exposure have proved successful and may allow further testing of the relationship between responsiveness and sound level.

  16. Improving Robustness against Environmental Sounds for Directing Attention of Social Robots

    DEFF Research Database (Denmark)

    Thomsen, Nicolai Bæk; Tan, Zheng-Hua; Lindberg, Børge

    2015-01-01

    This paper presents a multi-modal system for finding out where to direct the attention of a social robot in a dialog scenario, which is robust against environmental sounds (door slamming, phone ringing, etc.) and short speech segments. The method is based on combining voice activity detection (VAD) and sound source localization (SSL), and furthermore applies post-processing to the SSL output to filter out short sounds. The system is tested against a baseline system in four different real-world experiments, where different sounds are used as interfering sounds. The results are promising and show a clear improvement.

  17. Sound Exposure During Outdoor Music Festivals

    Science.gov (United States)

    Tronstad, Tron V.; Gelderblom, Femke B.

    2016-01-01

    Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals, of which only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure. PMID:27569410

  18. Sound exposure during outdoor music festivals

    Directory of Open Access Journals (Sweden)

    Tron V Tronstad

    2016-01-01

    Full Text Available Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals, of which only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure.
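
    The quantities that such noise dose meters report can be reproduced from a log of short-term A-weighted levels: an equivalent continuous level (LAeq) and a percentage dose relative to a criterion level and duration. The sketch below uses the equal-energy (3 dB exchange rate) convention; the criterion values are illustrative defaults, not the Norwegian or WHO limits.

        import numpy as np

        def leq(levels_db):
            # Equivalent continuous level of a sequence of equally spaced short-term dB readings.
            levels = np.asarray(levels_db, dtype=float)
            return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))

        def noise_dose(levels_db, interval_s, criterion_db=85.0, criterion_h=8.0, exchange_db=3.0):
            # Percentage noise dose: 100 % corresponds to the criterion level sustained
            # for the full criterion duration (equal-energy rule when exchange_db = 3).
            levels = np.asarray(levels_db, dtype=float)
            allowed_s = criterion_h * 3600.0 * 2.0 ** ((criterion_db - levels) / exchange_db)
            return 100.0 * np.sum(interval_s / allowed_s)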

  19. Second sound scattering in superfluid helium

    International Nuclear Information System (INIS)

    Rosgen, T.

    1985-01-01

    Focusing cavities are used to study the scattering of second sound in liquid helium II. The special geometries reduce wall interference effects and allow measurements in very small test volumes. In a first experiment, a double elliptical cavity is used to focus a second sound wave onto a small wire target. A thin-film bolometer measures the side-scattered wave component. The agreement with a theoretical estimate is reasonable, although some problems arise from the small measurement volume and the associated alignment requirements. A second cavity is based on confocal parabolas, thus enabling the use of large planar sensors. A cylindrical heater again produces a focused second sound wave. Three sensors monitor the transmitted wave component as well as the side scatter in two different directions. The side-looking sensors have very high sensitivities due to their large size and resistance. Specially developed cryogenic amplifiers are used to match them to the signal cables. In one case, a second auxiliary heater is used to set up a strong counterflow in the focal region. The second sound wave then scatters from the induced fluid disturbances.

  20. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  1. From the Bob/Kirk effect to the Benoit/Éric effect: Testing the mechanism of name sound symbolism in two languages.

    Science.gov (United States)

    Sidhu, David M; Pexman, Penny M; Saint-Aubin, Jean

    2016-09-01

    Although it is often assumed that language involves an arbitrary relationship between form and meaning, many studies have demonstrated that nonwords like maluma are associated with round shapes, while nonwords like takete are associated with sharp shapes (i.e., the Maluma/Takete effect, Köhler, 1929/1947). The majority of the research on sound symbolism has used nonwords, but Sidhu and Pexman (2015) recently extended this effect to existing labels: real English first names (i.e., the Bob/Kirk effect). In the present research we tested whether the effects of name sound symbolism generalize to French speakers (Experiment 1) and French names (Experiment 2). In addition, we assessed the underlying mechanism of name sound symbolism, investigating the roles of phonology and orthography in the effect. Results showed that name sound symbolism does generalize to French speakers and French names. Further, this robust effect remained the same when names were presented in a curved vs. angular font (Experiment 3), or when the salience of orthographic information was reduced through auditory presentation (Experiment 4). Together these results suggest that the Bob/Kirk effect is pervasive, and that it is based on fundamental features of name phonemes. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Recycling ceramic industry wastes in sound absorbing materials

    Directory of Open Access Journals (Sweden)

    C. Arenas

    2016-10-01

    Full Text Available The scope of this investigation is to develop a material mainly composed (80% w/w) of ceramic wastes that can be applied in the manufacture of road traffic noise reducing devices. The characterization of the product has been carried out with respect to its acoustic, physical and mechanical properties, by measuring the sound absorption coefficient at normal incidence, the open void ratio, density and compressive strength. Since the sound absorbing behavior of a porous material is related to the size of the pores and the thickness of the specimen tested, the influence of the particle grain size of the ceramic waste and of the thickness of the samples tested on the properties of the final product has been analyzed. The results obtained have been compared to a porous concrete made of crushed granite aggregate, a reference commercial material traditionally used in similar applications. Compositions with coarse particles showed greater sound absorption than compositions made with finer particles, besides presenting better sound absorption behavior than the reference porous concrete. Therefore, a ceramic waste-based porous concrete can potentially be recycled in the field of highway noise barriers.

  3. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic sounds, and the presence or absence of an abnormality is judged from the magnitude of the synchronized components. A synchronized-component sampling means can remove resonance sounds and other acoustic sounds generated in association with the rotation, based on the knowledge that the acoustic components generated in the normal state are a kind of resonance sound and are not precisely synchronized with the rotation speed. On the other hand, abnormal sounds of a rotating body are often caused by forcing that accompanies the rotation, so the abnormal sounds can be detected by extracting only the rotation-synchronized components. Since the normal acoustic components currently being generated are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing can be avoided and, as a result, abnormal-sound detection sensitivity is improved. Further, since the occurrence of an abnormal sound is discriminated from the actually detected sounds, other frequency components which are predicted but not actually generated are not removed, which further improves detection sensitivity. (N.H.)
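
    The principle of keeping only rotation-synchronized components can be illustrated with time-synchronous averaging: the microphone signal is cut into one-revolution segments using tachometer pulses, resampled to a common angular grid, and averaged, so that sounds not locked to the rotation tend to cancel. The Python sketch below is a simplified illustration of that principle under assumed inputs (sampling rate and pulse times), not the device's actual signal processing.

        import numpy as np

        def synchronous_average(signal, fs, pulse_times, samples_per_rev=1024):
            """Average rotation-synchronized components over revolutions marked by tachometer pulses."""
            revs = []
            for t0, t1 in zip(pulse_times[:-1], pulse_times[1:]):
                seg = signal[int(t0 * fs):int(t1 * fs)]
                if len(seg) < 2:
                    continue
                # Resample each revolution onto a common angular grid before averaging.
                grid = np.linspace(0, len(seg) - 1, samples_per_rev)
                revs.append(np.interp(grid, np.arange(len(seg)), seg))
            return np.mean(revs, axis=0)  # non-synchronous content tends toward zero

        # Hypothetical use: a microphone signal sampled at fs = 51200 Hz and
        # pulse_times (in seconds) from a once-per-revolution tachometer on the pump shaft.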

  4. Puget Sound Tidal Energy In-Water Testing and Development Project Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Collar, Craig W

    2012-11-16

    Tidal energy represents potential for the generation of renewable, emission free, environmentally benign, and cost effective energy from tidal flows. A successful tidal energy demonstration project in Puget Sound, Washington may enable significant commercial development resulting in important benefits for the northwest region and the nation. This project promoted the United States Department of Energy's Wind and Hydropower Technologies Program's goals of advancing the commercial viability, cost-competitiveness, and market acceptance of marine hydrokinetic systems. The objective of the Puget Sound Tidal Energy Demonstration Project is to conduct in-water testing and evaluation of tidal energy technology as a first step toward potential construction of a commercial-scale tidal energy power plant. The specific goal of the project phase covered by this award was to conduct all activities necessary to complete engineering design and obtain construction approvals for a pilot demonstration plant in the Admiralty Inlet region of the Puget Sound. Public Utility District No. 1 of Snohomish County (The District) accomplished the objectives of this award through four tasks: Detailed Admiralty Inlet Site Studies, Plant Design and Construction Planning, Environmental and Regulatory Activities, and Management and Reporting. Pre-Installation studies completed under this award provided invaluable data used for site selection, environmental evaluation and permitting, plant design, and construction planning. However, these data gathering efforts are not only important to the Admiralty Inlet pilot project. Lessons learned, in particular environmental data gathering methods, can be applied to future tidal energy projects in the United States and other parts of the world. The District collaborated extensively with project stakeholders to complete the tasks for this award. This included Federal, State, and local government agencies, tribal governments, environmental groups, and

  5. Sound absorption study on acoustic panel from kapok fiber and egg tray

    Science.gov (United States)

    Kaamin, Masiri; Mahir, Nurul Syazwani Mohd; Kadir, Aslila Abd; Hamid, Nor Baizura; Mokhtar, Mardiha; Ngadiman, Norhayati

    2017-12-01

    Noise is unwanted sound, especially sound that is loud or unpleasant or that causes disruption. The level of noise can be reduced by using sound absorption panels. Panels currently on the market use synthetic fibers that can be harmful to the health of consumers. Awareness of natural fibers from natural materials has drawn the attention of some parties to their use as sound absorbing material. Therefore, this study was conducted to investigate the potential of a sound absorption panel made from egg trays and kapok fibers. The test involved in this study was the impedance tube test, which yields the sound absorption coefficient (SAC). The results showed good sound absorption at low frequencies from 0 Hz up to 900 Hz, where the maximum absorption coefficient was 0.950, while the maximum absorption at high frequencies was 0.799. The noise reduction coefficient (NRC) of the material was 0.57, indicating that the material is highly absorbing. In addition, a reverberation room test was carried out to obtain the reverberation time (RT) in seconds. Overall this panel showed good results at low frequencies between 0 Hz and 1500 Hz. In that frequency range, the maximum reverberation time for the panel was 3.784 seconds, compared with a maximum reverberation time of 5.798 seconds for an empty room. This study indicates that kapok fiber and egg trays have potential as an environmentally friendly and cheap sound absorption panel material at low frequencies.
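
    The noise reduction coefficient quoted above is conventionally the average of the sound absorption coefficients at 250, 500, 1000 and 2000 Hz, rounded to the nearest 0.05 (as in ASTM C423). A minimal sketch of that calculation, using made-up coefficient values, is shown below.

        def noise_reduction_coefficient(sac):
            """NRC: mean of the absorption coefficients at 250, 500, 1000 and 2000 Hz, rounded to 0.05."""
            bands = (250, 500, 1000, 2000)
            mean = sum(sac[f] for f in bands) / len(bands)
            return round(round(mean / 0.05) * 0.05, 2)

        # Hypothetical absorption coefficients measured in an impedance tube.
        sac = {250: 0.62, 500: 0.71, 1000: 0.49, 2000: 0.44}
        print(noise_reduction_coefficient(sac))  # -> 0.55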

  6. Development of a Student-Centered Instrument to Assess Middle School Students' Conceptual Understanding of Sound

    Science.gov (United States)

    Eshach, Haim

    2014-01-01

    This article describes the development and field test of the Sound Concept Inventory Instrument (SCII), designed to measure middle school students' concepts of sound. The instrument was designed based on known students' difficulties in understanding sound and the history of science related to sound and focuses on two main aspects of sound: sound…

  7. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines, however, indexing is text based, not sound based. We will establish a dedicated sound search services with based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be realased June...

  8. [Synchronous playing and acquiring of heart sounds and electrocardiogram based on labVIEW].

    Science.gov (United States)

    Dan, Chunmei; He, Wei; Zhou, Jing; Que, Xiaosheng

    2008-12-01

    This paper describes a comprehensive system that acquires heart sounds and the electrocardiogram (ECG) in parallel and synchronizes their display and playback, so that auscultation and phonocardiogram inspection can be tied together. The hardware system, with a C8051F340 as its core, acquires the heart sound and ECG synchronously and then sends them to their respective indicators. Heart sounds are displayed and played simultaneously by controlling the moments of writing to the indicator and to the sound output device. In clinical testing, heart sounds could be successfully located against the ECG and played in real time.

  9. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  10. Brief report: sound output of infant humidifiers.

    Science.gov (United States)

    Royer, Allison K; Wilson, Paul F; Royer, Mark C; Miyamoto, Richard T

    2015-06-01

    The sound pressure levels (SPLs) of common infant humidifiers were determined to identify the likely sound exposure to infants and young children. This primary investigative research study was completed at a tertiary-level academic medical center otolaryngology and audiology laboratory. Five commercially available humidifiers were obtained from brick-and-mortar infant supply stores. Sound levels were measured at 20-, 100-, and 150-cm distances at all available humidifier settings. Two of 5 (40%) humidifiers tested had SPL readings greater than the recommended hospital infant nursery levels (50 dB) at distances up to 100 cm. In this preliminary study, it was demonstrated that humidifiers marketed for infant nurseries may produce appreciably high decibel levels. Further characterization of the effect of humidifier design on SPLs and further elucidation of ambient sound levels associated with hearing risk are necessary before definitive conclusions and recommendations can be made. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.

  11. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  12. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Full Text Available Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  13. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

    This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situated-ness of sound. Existing discourses on “spatial sound” privile...

  14. Bubbles that Change the Speed of Sound

    Science.gov (United States)

    Planinšič, Gorazd; Etkina, Eugenia

    2012-11-01

    The influence of bubbles on sound has long attracted the attention of physicists. In his 1920 book Sir William Bragg described sound absorption caused by foam in a glass of beer tapped by a spoon. Frank S. Crawford described and analyzed the change in the pitch of sound in a similar experiment and named the phenomenon the "hot chocolate effect."2 In this paper we describe a simple and robust experiment that allows an easy audio and visual demonstration of the same effect (unfortunately without the chocolate) and offers several possibilities for student investigations. In addition to the demonstration of the above effect, the experiments described below provide an excellent opportunity for students to devise and test explanations with simple equipment.

  15. Basic live sound reinforcement a practical guide for starting live audio

    CERN Document Server

    Biederman, Raven

    2013-01-01

    Access and interpret manufacturer spec information, find shortcuts for plotting measure and test equations, and learn how to begin your journey towards becoming a live sound professional. Land and perform your first live sound gigs with this guide that gives you just the right amount of information. Don't get bogged down in details intended for complex and expensive equipment and Madison Square Garden-sized venues. Basic Live Sound Reinforcement is a handbook for audio engineers and live sound enthusiasts performing in small venues from one-mike coffee shops to clubs. With their combined ye

  16. Transformer sound level caused by core magnetostriction and winding stress displacement variation

    Directory of Open Access Journals (Sweden)

    Chang-Hung Hsu

    2017-05-01

    Full Text Available Magnetostriction caused by the excitation of the magnetic core and by the current conducted in the windings wired to the core has a significant impact on a power transformer. This paper presents the sound of factory transformers measured during no-load tests before on-site delivery. This paper also discusses the winding characteristics obtained from transformer full-load tests. Simulations and measurements were performed for several transformers with capacities ranging from 15 to 60 MVA and voltages from 132 kV (high voltage) to 33 kV (low voltage). This study compares the sound levels of the transformers in the no-load test (core/magnetostriction) and the full-load test (winding/displacement ε). The difference between the simulated and the measured sound levels is about 3 dB. The results show that the sound level depends on several parameters, including winding displacement, capacity, and the mass of the core and windings. Comparative results for the magnetic induction of the cores and the electromagnetic force of the windings under no-load and full-load conditions are examined.

  17. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  18. Pervasive Sound Sensing: A Weakly Supervised Training Approach.

    Science.gov (United States)

    Kelly, Daniel; Caulfield, Brian

    2016-01-01

    Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and time consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework, to identify a target sound, and a bag trimming algorithm, which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one and results show that classifiers, trained using the automatically segmented training sets, were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving an average F-measure of 0.969 and 0.87 for two weakly supervised datasets.
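
    One heavily simplified way to picture the bag-trimming step is to score sliding windows of a weakly labeled clip by their spectral similarity to a known target-sound example and keep only the best-matching windows for training. The Python sketch below uses librosa MFCCs and cosine similarity purely as an illustration of that idea under those assumptions; it is not the diverse-density algorithm used in the paper.

        import numpy as np
        import librosa

        def trim_weak_clip(clip, target, sr, win_s=1.0, hop_s=0.5, keep=3):
            """Return start times (s) of the windows of `clip` most similar to the `target` sound."""
            def embed(y):
                m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
                return m.mean(axis=1)

            t_vec = embed(target)
            win, hop = int(win_s * sr), int(hop_s * sr)
            scores = []
            for start in range(0, max(1, len(clip) - win), hop):
                w_vec = embed(clip[start:start + win])
                cos = np.dot(t_vec, w_vec) / (np.linalg.norm(t_vec) * np.linalg.norm(w_vec) + 1e-9)
                scores.append((cos, start / sr))
            return [t for _, t in sorted(scores, reverse=True)[:keep]]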

  19. Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter

    Science.gov (United States)

    Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.

    2017-04-01

    The development of science and technology in the field of healthcare increasingly provides convenience in diagnosing respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and then stored in a sound file. These breath sounds are analyzed by health practitioners to diagnose symptoms of disease or illness. However, the breath sounds are not free from interference signals. Therefore, a noise filter or interference reduction system is required so that the breath sound components that carry the information signal can be clarified. In this study, we designed a wavelet transform based filter. The filter designed in this study uses a Daubechies wavelet with four wavelet transform coefficients. Based on tests with ten types of breath sound data, the largest SNR, 74.3685 dB, was obtained for bronchial sounds.
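
    A wavelet-based noise filter of the kind described can be sketched with PyWavelets: decompose the breath sound with a Daubechies wavelet, soft-threshold the detail coefficients, and reconstruct. The wavelet name ('db4'), the decomposition level, and the universal threshold are assumptions for illustration, not the exact settings used in the study.

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet="db4", level=4):
            """Soft-threshold wavelet detail coefficients to suppress noise in a breath sound."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            # Universal threshold estimated from the finest-scale detail coefficients.
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
            denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(denoised, wavelet)[: len(signal)]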

  20. Sound at the zoo: Using animal monitoring, sound measurement, and noise reduction in zoo animal management.

    Science.gov (United States)

    Orban, David A; Soltis, Joseph; Perkins, Lori; Mellen, Jill D

    2017-05-01

    A clear need for evidence-based animal management in zoos and aquariums has been expressed by industry leaders. Here, we show how individual animal welfare monitoring can be combined with measurement of environmental conditions to inform science-based animal management decisions. Over the last several years, Disney's Animal Kingdom® has been undergoing significant construction and exhibit renovation, warranting institution-wide animal welfare monitoring. Animal care and science staff developed a model that tracked animal keepers' daily assessments of an animal's physical health, behavior, and responses to husbandry activity; these data were matched to different external stimuli and environmental conditions, including sound levels. A case study of a female giant anteater and her environment is presented to illustrate how this process worked. Associated with this case, several sound-reducing barriers were tested for efficacy in mitigating sound. Integrating daily animal welfare assessment with environmental monitoring can lead to a better understanding of animals and their sensory environment and positively impact animal welfare. © 2017 Wiley Periodicals, Inc.

  1. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the every-day understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  2. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  3. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    properties can be modified by sound absorption, refraction, and interference from multipaths caused by reflections. The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms to influence...... communication sounds for airborne acoustics and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals....

  4. A neurally inspired musical instrument classification system based upon the sound onset.

    Science.gov (United States)

    Newton, Michael J; Smith, Leslie S

    2012-06-01

    Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
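
    The cepstral reference strategy mentioned above can be approximated with a simple MFCC-plus-classifier baseline: average the MFCCs over each tone (or over its onset only) and train a standard classifier. The sketch below shows this with librosa and scikit-learn; the use of an SVC, the 80 ms onset window, and the parameter choices are assumptions, not the paper's exact reference implementation.

        import numpy as np
        import librosa
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def tone_features(path, sr=22050, onset_only=False):
            """Mean MFCC vector of a tone, optionally restricted to the first ~80 ms (the onset)."""
            y, sr = librosa.load(path, sr=sr)
            if onset_only:
                y = y[: int(0.08 * sr)]
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
            return mfcc.mean(axis=1)

        # Hypothetical usage with lists of tone file paths and instrument-category labels:
        # X = np.vstack([tone_features(p) for p in paths]); y = labels
        # clf = make_pipeline(StandardScaler(), SVC()).fit(X, y)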

  5. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related...... to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy...... of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences....

  6. Xinyinqin: a computer-based heart sound simulator.

    Science.gov (United States)

    Zhan, X X; Pei, J H; Xiao, Y H

    1995-01-01

    "Xinyinqin" is the Chinese phoneticized name of the Heart Sound Simulator (HSS). The "qin" in "Xinyinqin" is the Chinese name of a category of musical instruments, which means that the operation of HSS is very convenient--like playing an electric piano with the keys. HSS is connected to the GAME I/O of an Apple microcomputer. The generation of sound is controlled by a program. Xinyinqin is used as a teaching aid of Diagnostics. It has been applied in teaching for three years. In this demonstration we will introduce the following functions of HSS: 1) The main program has two modules. The first one is the heart auscultation training module. HSS can output a heart sound selected by the student. Another program module is used to test the student's learning condition. The computer can randomly simulate a certain heart sound and ask the student to name it. The computer gives the student's answer an assessment: "correct" or "incorrect." When the answer is incorrect, the computer will output that heart sound again for the student to listen to; this process is repeated until she correctly identifies it. 2) The program is convenient to use and easy to control. By pressing the S key, it is able to output a slow heart rate until the student can clearly identify the rhythm. The heart rate, like the actual rate of a patient, can then be restored by hitting any key. By pressing the SPACE BAR, the heart sound output can be stopped to allow the teacher to explain something to the student. The teacher can resume playing the heart sound again by hitting any key; she can also change the content of the training by hitting RETURN key. In the future, we plan to simulate more heart sounds and incorporate relevant graphs.

  7. Four odontocete species change hearing levels when warned of impending loud sound.

    Science.gov (United States)

    Nachtigall, Paul E; Supin, Alexander Ya; Pacini, Aude F; Kastelein, Ronald A

    2018-03-01

    Hearing sensitivity change was investigated when a warning sound preceded a loud sound in the false killer whale (Pseudorca crassidens), the bottlenose dolphin (Tursiops truncatus), the beluga whale (Delphinapterus leucas) and the harbor porpoise (Phocoena phocoena). Hearing sensitivity was measured using pip-train test stimuli and auditory evoked potential recording. When the test/warning stimuli preceded a loud sound, hearing thresholds before the loud sound increased relative to the baseline by 13 to 17 dB. Experiments with multiple frequencies of exposure and shift provided evidence of different amounts of hearing change depending on frequency, indicating that the hearing sensation level changes were not likely due to a simple stapedial reflex. © 2017 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  8. Multichannel sound reinforcement systems at work in a learning environment

    Science.gov (United States)

    Malek, John; Campbell, Colin

    2003-04-01

    Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.

  9. Effect of thermal-treatment sequence on sound absorbing and mechanical properties of porous sound-absorbing/thermal-insulating composites

    Directory of Open Access Journals (Sweden)

    Huang Chen-Hung

    2016-01-01

    Full Text Available Owing to recent rapid commercial and industrial development, mechanical equipment has proliferated in factories, and its operation generates noise that disturbs life at home. Beyond factory noise, noise from neighborhoods, transportation equipment and jobsite construction also degrades the quality of life. This study addresses the preparation technique and property evaluation of porous sound-absorbing/thermal-insulating composites. Hollow three-dimensional crimped PET fibers blended with low-melting PET fibers were fabricated into hollow PET/low-melting PET nonwovens through opening, blending, carding, lapping and needle-bonding processes. The hollow PET/low-melting PET nonwovens were then laminated into sound-absorbing/thermal-insulating composites, varying the sequence of needle-bonding and thermal treatment. The optimal thermal-treatment sequence was determined from tensile strength, tearing strength, sound absorption coefficient and thermal conductivity coefficient tests of the porous composites.

  10. Phonaesthemes and sound symbolism in Swedish brand names

    Directory of Open Access Journals (Sweden)

    Åsa Abelin

    2015-01-01

    Full Text Available This study examines the prevalence of sound symbolism in Swedish brand names. A general principle of brand name design is that effective names should be distinctive, recognizable, easy to pronounce and meaningful. Much money is invested in designing powerful brand names, where the emotional impact of the names on consumers is also relevant and it is important to avoid negative connotations. Customers prefer brand names, which say something about the product, as this reduces product uncertainty (Klink, 2001). Therefore, consumers might prefer sound symbolic names. It has been shown that people associate the sounds of the nonsense words maluma and takete with round and angular shapes, respectively. By extension, more complex shapes and textures might activate words containing certain sounds. This study focuses on semantic dimensions expected to be relevant to product names, such as mobility, consistency, texture and shape. These dimensions are related to the senses of sight, hearing and touch and are also interesting from a cognitive linguistic perspective. Cross-modal assessment and priming experiments with pictures and written words were performed and the results analysed in relation to brand name databases and to sound symbolic sound combinations in Swedish (Abelin, 1999). The results show that brand names virtually never contain pejorative, i.e. depreciatory, consonant clusters, and that certain sounds and sound combinations are overrepresented in certain content categories. Assessment tests show correlations between pictured objects and phoneme combinations in newly created words (non-words). The priming experiment shows that object images prime newly created words as expected, based on the presence of compatible consonant clusters.

  11. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2015-01-01

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearing-impaired listener, but can only partially reveal the mechanisms...... that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing-impaired perception for sound textures – temporally...

  12. Memory for product sounds: the effect of sound and label type.

    Science.gov (United States)

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between auditory, visual, and the semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative for memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinders the memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that the memory performances for product sounds are task-dependent.

  13. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    Title 33 (Navigation and Navigable Waters), § 167.1702, revised as of 2010-07-01: In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  14. Enhancement of acoustical performance of hollow tube sound absorber

    International Nuclear Information System (INIS)

    Putra, Azma; Khair, Fazlin Abd; Nor, Mohd Jailani Mohd

    2016-01-01

    This paper presents acoustical performance of hollow structures utilizing the recycled lollipop sticks as acoustic absorbers. The hollow cross section of the structures is arranged facing the sound incidence. The effects of different length of the sticks and air gap on the acoustical performance are studied. The absorption coefficient was measured using impedance tube method. Here it is found that improvement on the sound absorption performance is achieved by introducing natural kapok fiber inserted into the void between the hollow structures. Results reveal that by inserting the kapok fibers, both the absorption bandwidth and the absorption coefficient increase. For test sample backed by a rigid surface, best performance of sound absorption is obtained for fibers inserted at the front and back sides of the absorber. And for the case of test sample with air gap, this is achieved for fibers introduced only at the back side of the absorber.

  15. Enhancement of acoustical performance of hollow tube sound absorber

    Energy Technology Data Exchange (ETDEWEB)

    Putra, Azma, E-mail: azma.putra@utem.edu.my; Khair, Fazlin Abd, E-mail: fazlinabdkhair@student.utem.edu.my; Nor, Mohd Jailani Mohd, E-mail: jai@utem.edu.my [Centre for Advanced Research on Energy, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, Durian Tunggal Melaka 76100 Malaysia (Malaysia)

    2016-03-29

    This paper presents acoustical performance of hollow structures utilizing the recycled lollipop sticks as acoustic absorbers. The hollow cross section of the structures is arranged facing the sound incidence. The effects of different length of the sticks and air gap on the acoustical performance are studied. The absorption coefficient was measured using impedance tube method. Here it is found that improvement on the sound absorption performance is achieved by introducing natural kapok fiber inserted into the void between the hollow structures. Results reveal that by inserting the kapok fibers, both the absorption bandwidth and the absorption coefficient increase. For test sample backed by a rigid surface, best performance of sound absorption is obtained for fibers inserted at the front and back sides of the absorber. And for the case of test sample with air gap, this is achieved for fibers introduced only at the back side of the absorber.

  16. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  17. Color improves ‘visual’ acuity via sound

    Directory of Open Access Journals (Sweden)

    Shelly eLevy-Tzedek

    2014-11-01

    Full Text Available Visual-to-auditory sensory substitution devices (SSDs) convey visual information via sound, with the primary goal of making visual information accessible to blind and visually impaired individuals. We developed the EyeMusic SSD, which transforms shape, location and color information into musical notes. We tested the 'visual' acuity of 23 individuals (13 blind and 10 blindfolded sighted) on the Snellen tumbling-E test, with the EyeMusic. Participants were asked to determine the orientation of the letter ‘E’. The test was repeated twice: in one test, the letter ‘E’ was drawn with a single color (white), and in the other test, with two colors (red and white). In the latter case, the vertical line in the letter, when upright, was drawn in red, with the three horizontal lines drawn in white. We found no significant differences in performance between the blind and the sighted groups. We found a significant effect of the added color on the ‘visual’ acuity. The highest acuity participants reached in the monochromatic test was 20/800, whereas with the added color, acuity doubled to 20/400. We conclude that color improves 'visual' acuity via sound.

  18. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: First, the filtered responses should generate...... an acoustic separation between the control regions. Secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...

  19. Can road traffic mask sound from wind turbines? Response to wind turbine sound at different levels of road traffic sound

    International Nuclear Information System (INIS)

    Pedersen, Eja; Berg, Frits van den; Bakker, Roel; Bouma, Jelte

    2010-01-01

    Wind turbines are favoured in the switch-over to renewable energy. Suitable sites for further developments could be difficult to find, as the sound emitted from the rotor blades calls for a sufficient distance to residents to avoid negative effects. The aim of this study was to explore whether road traffic sound could mask wind turbine sound or, in contrast, increase annoyance due to wind turbine noise. Annoyance with road traffic and wind turbine noise was measured in the WINDFARMperception survey in the Netherlands in 2007 (n=725) and related to calculated sound levels. The presence of road traffic sound did not in general decrease annoyance with wind turbine noise, except when levels of wind turbine sound were moderate (35-40 dB(A) Lden) and the road traffic sound level exceeded that level by at least 20 dB(A). Annoyance with both noises was intercorrelated, but this correlation was probably due to the influence of individual factors. Furthermore, visibility of and attitude towards wind turbines were significantly related to noise annoyance from modern wind turbines. The results can be used for the selection of suitable sites, possibly favouring already noise-exposed areas if wind turbine sound levels are sufficiently low.

  20. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Performance Verification Reports: Final Comprehensive Performance Test Report, P/N: 1356006-1, S.N: 202/A2

    Science.gov (United States)

    Platt, R.

    1998-01-01

    This is the Performance Verification Report. The process specification establishes the requirements for the comprehensive performance test (CPT) and limited performance test (LPT) of the Earth Observing System Advanced Microwave Sounding Unit-A2 (EOS/AMSU-A2), referred to as the unit. The unit is defined on drawing 1356006.

  1. Structure-borne sound structural vibrations and sound radiation at audio frequencies

    CERN Document Server

    Cremer, L; Petersson, Björn AT

    2005-01-01

    "Structure-Borne Sound" is a thorough introduction to structural vibrations with emphasis on audio frequencies and the associated radiation of sound. The book presents in-depth discussions of fundamental principles and basic problems, in order to enable the reader to understand and solve his own problems. It includes chapters dealing with measurement and generation of vibrations and sound, various types of structural wave motion, structural damping and its effects, impedances and vibration responses of the important types of structures, as well as with attenuation of vibrations, and sound radi

  2. Externalization versus Internalization of Sound in Normal-hearing and Hearing-impaired Listeners

    DEFF Research Database (Denmark)

    Ohl, Björn; Laugesen, Søren; Buchholz, Jörg

    2010-01-01

    The externalization of sound, i.e. the perception of auditory events as being located outside of the head, is a natural phenomenon for normal-hearing listeners, when perceiving sound coming from a distant physical sound source. It is potentially useful for hearing in background noise......, but the relevant cues might be distorted by a hearing impairment and also by the processing of the incoming sound through hearing aids. In this project, two intuitive tests in natural real-life surroundings were developed, which capture the limits of the perception of externalization. For this purpose...

  3. Heart sounds analysis using probability assessment.

    Science.gov (United States)

    Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P

    2017-07-31

    This paper describes a method for automated discrimination of heart sounds recordings according to the Physionet Challenge 2016. The goal was to decide if the recording refers to normal or abnormal heart sounds or if it is not possible to decide (i.e. 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.
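
    The S1/S2 detection step described above (amplitude envelopes in the 15-90 Hz band) can be sketched with SciPy: band-pass filter the recording, take the Hilbert envelope, and pick peaks. The filter order, the minimum peak spacing, and the prominence threshold below are illustrative assumptions, not the authors' exact parameters.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert, find_peaks

        def heart_sound_peaks(x, fs):
            """Locate candidate S1/S2 events from the 15-90 Hz amplitude envelope."""
            b, a = butter(4, [15, 90], btype="bandpass", fs=fs)
            env = np.abs(hilbert(filtfilt(b, a, x)))
            # Require at least 200 ms between events and a modest prominence.
            peaks, _ = find_peaks(env, distance=int(0.2 * fs), prominence=0.2 * env.max())
            return peaks / fs  # event times in seconds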

  4. Classification of Real and Imagined Sounds in Early Visual Cortex

    Directory of Open Access Journals (Sweden)

    Petra Vetter

    2011-10-01

    Full Text Available Early visual cortex has been thought to be mainly involved in the detection of low-level visual features. Here we show that complex natural sounds can be decoded from early visual cortex activity, in the absence of visual stimulation, both when sounds are actually presented and when they are merely imagined. Blindfolded subjects listened to three complex natural sounds (bird singing, people talking, traffic noise; Exp. 1) or received word cues (“forest”, “people”, “traffic”; Exp. 2) to imagine the associated scene. fMRI BOLD activation patterns from retinotopically defined early visual areas were fed into a multivariate pattern classification algorithm (a linear support vector machine). Actual sounds were discriminated above chance in V2 and V3, and imagined sounds were decoded in V1. Cross-classification, i.e., training the classifier on real sounds and testing it on imagined sounds and vice versa, was also successful. Two further experiments showed that an orthogonal working memory task does not interfere with sound classification in early visual cortex (Exp. 3); however, an orthogonal visuo-spatial imagery task does (Exp. 4). These results demonstrate that early visual cortex activity contains content-specific information from hearing and from imagery, challenging the view of a strict modality-specific function of early visual cortex.
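
    The cross-classification analysis (train on real sounds, test on imagined sounds, and vice versa) maps naturally onto a linear SVM over voxel patterns. The sketch below shows the general scheme with scikit-learn; the array names and shapes are assumptions, not the study's actual pipeline.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        def cross_classify(X_train, y_train, X_test, y_test):
            """Train a linear SVM on one condition's voxel patterns and test on the other."""
            clf = make_pipeline(StandardScaler(), LinearSVC())
            clf.fit(X_train, y_train)
            return clf.score(X_test, y_test)  # accuracy; chance is 1/3 for three sound categories

        # Hypothetical usage: X_real, X_imag are (n_trials, n_voxels) patterns from V1-V3,
        # y_real, y_imag are labels in {bird, people, traffic}.
        # acc_real_to_imag = cross_classify(X_real, y_real, X_imag, y_imag)
        # acc_imag_to_real = cross_classify(X_imag, y_imag, X_real, y_real)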

  5. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  6. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  7. Sound quality assessment of wood for xylophone bars.

    Science.gov (United States)

    Aramaki, Mitsuko; Baillères, Henri; Brancheriau, Loïc; Kronland-Martinet, Richard; Ystad, Sølvi

    2007-04-01

    Xylophone sounds produced by striking wooden bars with a mallet are strongly influenced by the mechanical properties of the wood species chosen by the xylophone maker. In this paper, we address the relationship between the sound quality based on the timbre attribute of impacted wooden bars and the physical parameters characterizing wood species. For this, a methodology is proposed that associates an analysis-synthesis process and a perceptual classification test. Sounds generated by impacting 59 wooden bars of different species but with the same geometry were recorded and classified by a renowned instrument maker. The sounds were further digitally processed and adjusted to the same pitch before being once again classified. The processing is based on a physical model ensuring the main characteristics of the wood are preserved during the sound transformation. Statistical analysis of both classifications showed the influence of the pitch in the xylophone maker judgement and pointed out the importance of two timbre descriptors: the frequency-dependent damping and the spectral bandwidth. These descriptors are linked with physical and anatomical characteristics of wood species, providing new clues in the choice of attractive wood species from a musical point of view.
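
    The frequency-dependent damping descriptor mentioned above is often estimated by band-pass filtering the recorded tone around each partial, taking the logarithm of its amplitude envelope, and fitting the linear decay. The sketch below is one plausible way to do this with SciPy; the relative bandwidth and the Hilbert-envelope approach are assumptions rather than the authors' exact analysis-synthesis procedure.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def band_damping(x, fs, f_partial, rel_bw=0.05):
            """Exponential decay rate (1/s) of one partial, from the slope of its log envelope."""
            lo, hi = f_partial * (1 - rel_bw), f_partial * (1 + rel_bw)
            b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
            env = np.abs(hilbert(filtfilt(b, a, x)))
            t = np.arange(len(env)) / fs
            slope, _ = np.polyfit(t, np.log(env + 1e-12), 1)
            return -slope  # larger values mean faster decay (more damping)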

  8. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    Science.gov (United States)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to

  9. The development of infants' use of property-poor sounds to individuate objects.

    Science.gov (United States)

    Wilcox, Teresa; Smith, Tracy R

    2010-12-01

    There is evidence that infants as young as 4.5 months use property-rich but not property-poor sounds as the basis for individuating objects (Wilcox, Woods, Tuggy, & Napoli, 2006). The current research sought to identify the age at which infants demonstrate the capacity to use property-poor sounds. Using the task of Wilcox et al., infants aged 7 and 9 months were tested. The results revealed that 9- but not 7-month-olds demonstrated sensitivity to property-poor sounds (electronic tones) in an object individuation task. Additional results confirmed that the younger infants were sensitive to property-rich sounds (rattle sounds). These are the first positive results obtained with property-poor sounds in infants and lay the foundation for future research to identify the underlying basis for the developmental hierarchy favoring property-rich over property-poor sounds and possible mechanisms for change. Copyright © 2010 Elsevier Inc. All rights reserved.

  10. Snoring classified: The Munich-Passau Snore Sound Corpus.

    Science.gov (United States)

    Janott, Christoph; Schmitt, Maximilian; Zhang, Yue; Qian, Kun; Pandit, Vedhas; Zhang, Zixing; Heiser, Clemens; Hohenhorst, Winfried; Herzog, Michael; Hemmert, Werner; Schuller, Björn

    2018-03-01

    Snoring can be excited in different locations within the upper airways during sleep. It was hypothesised that the excitation locations are correlated with distinct acoustic characteristics of the snoring noise. To verify this hypothesis, a database of snore sounds is developed, labelled with the location of sound excitation. Video and audio recordings taken during drug induced sleep endoscopy (DISE) examinations from three medical centres have been semi-automatically screened for snore events, which subsequently have been classified by ENT experts into four classes based on the VOTE classification. The resulting dataset containing 828 snore events from 219 subjects has been split into Train, Development, and Test sets. An SVM classifier has been trained using low level descriptors (LLDs) related to energy, spectral features, mel frequency cepstral coefficients (MFCC), formants, voicing, harmonic-to-noise ratio (HNR), spectral harmonicity, pitch, and microprosodic features. An unweighted average recall (UAR) of 55.8% could be achieved using the full set of LLDs including formants. Best performing subset is the MFCC-related set of LLDs. A strong difference in performance could be observed between the permutations of train, development, and test partition, which may be caused by the relatively low number of subjects included in the smaller classes of the strongly unbalanced data set. A database of snoring sounds is presented which are classified according to their sound excitation location based on objective criteria and verifiable video material. With the database, it could be demonstrated that machine classifiers can distinguish different excitation location of snoring sounds in the upper airway based on acoustic parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
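
    Unweighted average recall (UAR), the metric quoted above, is the recall averaged over classes without weighting by class size, which matters for a strongly unbalanced set like this one. A minimal sketch with scikit-learn, using made-up VOTE-class labels:

        from sklearn.metrics import recall_score

        # Hypothetical VOTE-class predictions (V, O, T, E) for a handful of snore events.
        y_true = ["V", "V", "O", "T", "E", "E", "V", "O"]
        y_pred = ["V", "O", "O", "T", "E", "V", "V", "O"]

        uar = recall_score(y_true, y_pred, average="macro")  # macro-average == unweighted average recall
        print(f"UAR = {uar:.3f}")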

  11. A further test of relevance of ASEL and CSEL in the determination of the rating sound level for shooting sounds

    NARCIS (Netherlands)

    Vos, J.

    1998-01-01

    In a previous study on the annoyance caused by shooting sounds [Proceedings Internoise '96, Vol. 5, 2231-2236], it was shown that an almost perfect prediction of the annoyance, as rated indoors with the windows closed, was obtained on the basis of the weighted sum of the outdoor A-weighted and

  12. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear-but our uses for light and sound go far beyond simply seeing a photo or hearing a song. A concentrated beam of light, lasers are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  13. Sound localization under perturbed binaural hearing.

    NARCIS (Netherlands)

    Wanrooij, M.M. van; Opstal, A.J. van

    2007-01-01

    This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband

  14. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with the sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material revealing the form...... geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form....

  15. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  16. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  17. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  18. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  19. Sound field separation with cross measurement surfaces.

    Directory of Open Access Journals (Sweden)

    Jin Mao

    Full Text Available With conventional near-field acoustical holography, it is impossible to identify sound pressure when the coherent sound sources are located on the same side of the array. This paper proposes a solution, using cross measurement surfaces to separate the sources based on the equivalent source method. Each equivalent source surface is built in the center of the corresponding original source with a spherical surface. According to the different transfer matrices between equivalent sources and points on holographic surfaces, the weighting of each equivalent source from coherent sources can be obtained. Numerical and experimental studies have been performed to test the method. For the sound pressure including noise after separation in the experiment, the calculation accuracy can be improved by reconstructing the pressure with Tikhonov regularization and the L-curve method. On the whole, a single source can be effectively separated from coherent sources using cross measurement.
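    The reconstruction step mentioned above, solving for the equivalent source strengths with Tikhonov regularization and choosing the regularization parameter via the L-curve, amounts to a regularized least-squares problem. The sketch below uses random stand-in data and a crude product-of-norms surrogate for the L-curve corner search; the matrix sizes, noise level and parameter grid are assumptions, not values from the study.

```python
# Tikhonov-regularized inversion of an equivalent-source transfer matrix (sketch).
import numpy as np

rng = np.random.default_rng(1)
M, E = 64, 32                                   # field points, equivalent sources (assumed)
G = rng.normal(size=(M, E)) + 1j * rng.normal(size=(M, E))   # stand-in transfer matrix
q_true = rng.normal(size=E)
p = G @ q_true + 0.05 * rng.normal(size=M)      # noisy "measured" pressures

def tikhonov(G, p, lam):
    """Minimize ||G q - p||^2 + lam^2 ||q||^2 for the source strengths q."""
    A = G.conj().T @ G + (lam ** 2) * np.eye(G.shape[1])
    return np.linalg.solve(A, G.conj().T @ p)

lams = np.logspace(-4, 1, 30)
sols = [tikhonov(G, p, lam) for lam in lams]
res = np.array([np.linalg.norm(G @ q - p) for q in sols])    # residual norms
qn = np.array([np.linalg.norm(q) for q in sols])             # solution norms
best = int(np.argmin(res * qn))   # crude stand-in for locating the L-curve corner
print("chosen lambda:", lams[best])
```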

  20. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Performance Verification Report: Final Comprehensive Performance Test Report, P/N 1331720-2TST, S/N 105/A1

    Science.gov (United States)

    Platt, R.

    1999-01-01

    This is the Performance Verification Report, Final Comprehensive Performance Test (CPT) Report, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A). This specification establishes the requirements for the CPT and Limited Performance Test (LPT) of the AMSU-1A, referred to herein as the unit. The sequence in which the several phases of this test procedure shall take place is shown.

  1. Sound absorption effects in a rectangular enclosure with the foamed aluminum sheet absorber

    International Nuclear Information System (INIS)

    Oh, Jae Eung; Chung, Jin Tai; Kim, Sang Hun; Chung, Kyung Ryul

    1998-01-01

    For the purpose of finding out the optimal thickness of sound absorber and the sound absorption effects due to the selected thickness at an interested frequency range, the analytical study identifies the interior and exterior sound field characteristics of a rectangular enclosure with foamed aluminum lining and the experimental verification is performed with random noise input. By using a two-microphone impedance tube, we measure experimentally the absorption coefficient and the impedance of simple sound absorbing materials. Measured acoustical parameters of the test samples are applied to the theoretical analysis to predict sound pressure field in the cavity. The sound absorption effects from measurements are compared to predicted ones in both cases with and without foamed aluminum lining in the cavity of the rectangular enclosure

  2. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity...... and initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. A European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs...... 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24...

  3. Heat of combustion, sound speed and component fluctuations in natural gas

    International Nuclear Information System (INIS)

    Burstein, L.; Ingman, D.

    1998-01-01

    The heat of combustion and sound speed of natural gas were studied as a function of random fluctuation of the gas fractions. A method of sound speed determination was developed and used for over 50,000 possible variants of component concentrations in four- and five- component mixtures. A test on binary (methane-ethane) and multicomponent (Gulf Coast) gas mixtures under standard pressure and moderate temperatures shows satisfactory predictability of sound speed on the basis of the binary virial coefficients, sound speeds and heat capacities of the pure components. Uncertainty in the obtained values does not exceed that of the pure component data. The results of comparison between two natural gas mixtures - with and without nonflammable components - are reported
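    The record relates the sound speed of a gas mixture to pure-component data via binary virial coefficients. A much cruder ideal-gas approximation, c = sqrt(gamma R T / M) with mole-fraction-averaged heat capacity and molar mass, already illustrates how composition enters. The sketch below uses that approximation with rounded textbook component values and an arbitrary example composition; it is not the virial-based method of the paper.

```python
# Ideal-gas estimate of the sound speed of a gas mixture (simplified sketch,
# not the virial-coefficient method of the record).
import numpy as np

R = 8.314  # J/(mol K)

# mole fraction x, molar mass M (kg/mol), ideal-gas cp (J/(mol K)) near 25 degC;
# composition and rounded property values are illustrative assumptions.
components = {
    "CH4":  (0.90, 0.01604, 35.7),
    "C2H6": (0.05, 0.03007, 52.5),
    "N2":   (0.05, 0.02802, 29.1),
}

x = np.array([v[0] for v in components.values()])
M = np.array([v[1] for v in components.values()])
cp = np.array([v[2] for v in components.values()])

cp_mix = np.sum(x * cp)       # mole-fraction-weighted heat capacity
cv_mix = cp_mix - R           # ideal-gas relation cp - cv = R
gamma = cp_mix / cv_mix
M_mix = np.sum(x * M)

T = 293.15                    # K
c = np.sqrt(gamma * R * T / M_mix)
print(f"ideal-gas sound speed ~ {c:.0f} m/s")
```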

  4. Remembering that big things sound big: Sound symbolism and associative memory.

    Science.gov (United States)

    Preziosi, Melissa A; Coane, Jennifer H

    2017-01-01

    According to sound symbolism theory, individual sounds or clusters of sounds can convey meaning. To examine the role of sound symbolic effects on processing and memory for nonwords, we developed a novel set of 100 nonwords to convey largeness (nonwords containing plosive consonants and back vowels) and smallness (nonwords containing fricative consonants and front vowels). In Experiments 1A and 1B, participants rated the size of the 100 nonwords and provided definitions to them as if they were products. Nonwords composed of fricative/front vowels were rated as smaller than those composed of plosive/back vowels. In Experiment 2, participants studied sound symbolic congruent and incongruent nonword and participant-generated definition pairings. Definitions paired with nonwords that matched the size and participant-generated meanings were recalled better than those that did not match. When the participant-generated definitions were re-paired with other nonwords, this mnemonic advantage was reduced, although still reliable. In a final free association study, the possibility that plosive/back vowel and fricative/front vowel nonwords elicit sound symbolic size effects due to mediation from word neighbors was ruled out. Together, these results suggest that definitions that are sound symbolically congruent with a nonword are more memorable than incongruent definition-nonword pairings. This work has implications for the creation of brand names and how to create brand names that not only convey desired product characteristics, but also are memorable for consumers.

  5. An Anthropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, to a talk about anthropology of sound, sound studies, musical canons and ideology.

  6. Path length entropy analysis of diastolic heart sounds.

    Science.gov (United States)

    Griffel, Benjamin; Zia, Mohammad K; Fridman, Vladamir; Saponieri, Cesare; Semmlow, John L

    2013-09-01

    Early detection of coronary artery disease (CAD) using the acoustic approach, a noninvasive and cost-effective method, would greatly improve the outcome of CAD patients. To detect CAD, we analyze diastolic sounds for possible CAD murmurs. We observed diastolic sounds to exhibit 1/f structure and developed a new method, path length entropy (PLE) and a scaled version (SPLE), to characterize this structure to improve CAD detection. We compare SPLE results to Hurst exponent, Sample entropy and Multiscale entropy for distinguishing between normal and CAD patients. SPLE achieved a sensitivity-specificity of 80%-81%, the best of the tested methods. However, PLE and SPLE are not sufficient to prove nonlinearity, and evaluation using surrogate data suggests that our cardiovascular sound recordings do not contain significant nonlinear properties. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. The influence of (central) auditory processing disorder in speech sound disorders.

    Science.gov (United States)

    Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein

    2016-01-01

    Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and targeting of speech therapy in children with speech sound disorders. To study phonological measures and (central) auditory processing of children with speech sound disorder. Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to their (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. The comparison among the tests evaluated between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective to indicate the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  8. Sound absorption and morphology characteristic of porous concrete paving blocks

    Science.gov (United States)

    Halim, N. H. Abd; Nor, H. Md; Ramadhansyah, P. J.; Mohamed, A.; Hassan, N. Abdul; Ibrahim, M. H. Wan; Ramli, N. I.; Nazri, F. Mohamed

    2017-11-01

    In this study, sound absorption and morphology characteristics of Porous Concrete Paving Blocks (PCPB) at different sizes of coarse aggregate are presented. Three different sizes of coarse aggregate were used: passing 10 mm retained 5 mm (as Control), passing 8 mm retained 5 mm (8 - 5) and passing 10 mm retained 8 mm (10 - 8). The sound absorption test was conducted in an impedance tube at different frequencies. It was found that the size of coarse aggregate affects the level of absorption of the specimens. PCPB 10 - 8 resulted in the highest sound absorption compared with the other blocks. On the other hand, the microstructural morphology of the PCPB gives a clearer view of the existing micro-cracks and voids inside the specimens, which affect the sound absorption results.

  9. The frequency range of TMJ sounds.

    Science.gov (United States)

    Widmalm, S E; Williams, W J; Djurdjanovic, D; McKay, D C

    2003-04-01

    There are conflicting opinions about the frequency range of temporomandibular joint (TMJ) sounds. Some authors claim that the upper limit is about 650 Hz. The aim was to test the hypothesis that TMJ sounds may contain frequencies well above 650 Hz but that significant amounts of their energy are lost if the vibrations are recorded using contact sensors and/or travel far through the head tissues. Time-frequency distributions of 172 TMJ clickings (three subjects) were compared between recordings with one microphone in the ear canal and a skin contact transducer above the clicking joint, and between recordings from two microphones, one in each ear canal. The energy peaks of the clickings recorded with a microphone in the ear canal on the clicking side were often well above 650 Hz and always in a significantly higher range (117-1922 Hz) than in the skin contact sensor recordings (upper limit 375 Hz) or in microphone recordings from the opposite ear canal (range 141-703 Hz). Future studies are required to establish normative values for the frequency range of TMJ sounds, but need methods also capable of recording the high frequency vibrations.

  10. Experimental investigation of sound absorption of acoustic wedges for anechoic chambers

    Science.gov (United States)

    Belyaev, I. V.; Golubev, A. Yu.; Zverev, A. Ya.; Makashov, S. Yu.; Palchikovskiy, V. V.; Sobolev, A. F.; Chernykh, V. V.

    2015-09-01

    The results of measuring the sound absorption by acoustic wedges, which were performed in AC-3 and AC-11 reverberation chambers at the Central Aerohydrodynamic Institute (TsAGI), are presented. Wedges of different densities manufactured from superfine basaltic and thin mineral fibers were investigated. The results of tests of these wedges were compared to the sound absorption of wedges of the operating AC-2 anechoic facility at TsAGI. It is shown that basaltic-fiber wedges have better sound-absorption characteristics than the investigated analogs and can be recommended for facing anechoic facilities under construction.

  11. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound......-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound...... from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  12. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound

  13. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    Full Text Available The relationship between meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages we find commonalities among sound shapes for words referring to same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  14. Context effects on processing widely deviant sounds in newborn infants

    Directory of Open Access Journals (Sweden)

    Gábor Péter Háden

    2013-09-01

    Full Text Available Detecting and orienting towards sounds carrying new information is a crucial feature of the human brain that supports adaptation to the environment. Rare, acoustically widely deviant sounds presented amongst frequent tones elicit large event related brain potentials (ERPs) in neonates. Here we tested whether these discriminative ERP responses reflect only the activation of fresh afferent neuronal populations (i.e., neuronal circuits not affected by the tones) or whether they also index the processing of contextual mismatch between the rare and the frequent sounds. In two separate experiments, we presented sleeping newborns with 150 different environmental sounds and the same number of white noise bursts. Both sounds served either as deviants in an oddball paradigm with a tone as the frequent standard stimulus (Novel/Noise deviant), or as the standard stimulus with the tone as deviant (Novel/Noise standard), or they were delivered alone with the same timing as the deviants in the oddball condition (Novel/Noise alone). Whereas the ERP responses to noise deviants were similar to those elicited by the same sound presented alone, the responses elicited by environmental sounds in the corresponding conditions morphologically differed from each other. Thus, whereas the ERP response to the noise sounds can be explained by the different refractory state of stimulus-specific neuronal populations, the ERP response to environmental sounds indicated context-sensitive processing. These results provide evidence for an innate tendency towards context-dependent auditory processing as well as a basis for the different developmental trajectories of processing acoustical deviance and contextual novelty.

  15. Producing of Impedance Tube for Measurement of Acoustic Absorption Coefficient of Some Sound Absorber Materials

    Directory of Open Access Journals (Sweden)

    R. Golmohammadi

    2008-04-01

    Full Text Available Introduction & Objective: Noise is one of the most important harmful agents in the work environment. In spite of industrial improvements, exposure to noise above the permissible limit remains one of the main occupational health problems. In Iran, exact data on the absorption coefficients of acoustic materials are not available, and Iranian manufacturers have no laboratory for measuring the sound absorbance of their products; therefore the use of sound absorbers for noise control in industrial and non-industrial constructions is limited. The goal of this study was to design an impedance tube based on the pressure method for measurement of the sound absorption coefficient of acoustic materials. Materials & Methods: A measuring system and a relatively simple calculation method for the sound absorption coefficient, based on available equipment and following ISO 10534-1, were designed. The measuring system consisted of a heavy asbestos tube, a pure-tone sound generator and a calibrated sound level meter, and was used to measure some commonly used sound absorber materials. Results: The sound absorption coefficients of 23 types of acoustic materials available in Iran were tested. The reliability of the results was tested by repeating each measurement three times. Results showed that the standard deviation of the sound absorption coefficients of the studied materials was smaller than . Conclusion: The present study provided the technology needed for designing and producing an impedance tube for determining the absorption coefficients of acoustical materials in Iran.
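    With the pressure (standing-wave) method of ISO 10534-1 that the study follows, the normal-incidence absorption coefficient follows from the ratio of the maximum and minimum sound pressures along the tube: the standing wave ratio s = p_max/p_min gives the reflection coefficient magnitude |r| = (s - 1)/(s + 1), and alpha = 1 - |r|^2. A minimal sketch with made-up example level readings:

```python
# Absorption coefficient from the standing wave ratio (pressure method, sketch).
def absorption_from_swr(p_max, p_min):
    """Normal-incidence absorption coefficient from max/min tube pressures."""
    s = p_max / p_min                 # standing wave ratio
    r = (s - 1.0) / (s + 1.0)         # magnitude of the reflection coefficient
    return 1.0 - r ** 2               # alpha = 1 - |r|^2

# Example readings (invented): 94 dB at the pressure maximum, 82 dB at the minimum.
L_max, L_min = 94.0, 82.0
p_max = 20e-6 * 10 ** (L_max / 20)    # convert dB re 20 uPa to pascals
p_min = 20e-6 * 10 ** (L_min / 20)
print(f"alpha ~ {absorption_from_swr(p_max, p_min):.2f}")
```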

  16. Sound propagation in elongated superfluid fermionic clouds

    International Nuclear Information System (INIS)

    Capuzzi, P.; Vignolo, P.; Federici, F.; Tosi, M. P.

    2006-01-01

    We use hydrodynamic equations to study sound propagation in a superfluid Fermi gas at zero temperature inside a strongly elongated cigar-shaped trap, with main attention to the transition from the BCS to the unitary regime. First, we treat the role of the radial density profile in the limit of a cylindrical geometry and then evaluate numerically the effect of the axial confinement in a configuration in which a hole is present in the gas density at the center of the trap. We find that in a strongly elongated trap the speed of sound in both the BCS and the unitary regime differs by a factor √(3/5) from that in a homogeneous three-dimensional superfluid. The predictions of the theory could be tested by measurements of sound-wave propagation in a setup such as that exploited by Andrews et al. [Phys. Rev. Lett. 79, 553 (1997)] for an atomic Bose-Einstein condensate

  17. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    Full Text Available In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  18. Frequency shifting approach towards textual transcription of heartbeat sounds.

    Science.gov (United States)

    Arvin, Farshad; Doraisamy, Shyamala; Safar Khorasani, Ehsan

    2011-10-04

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format which can be stored in very small memory in comparison with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over a long duration of time. This paper proposes a frequency shifting method in order to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.

  19. Frequency shifting approach towards textual transcription of heartbeat sounds

    Directory of Open Access Journals (Sweden)

    Safar Khorasani Ehsan

    2011-10-01

    Full Text Available Abstract Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format which can be stored in very small memory in comparison with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over a long duration of time. This paper proposes a frequency shifting method in order to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.
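    The core idea of the two records above, shifting low-frequency heart sound content upward so that it can be handled in the music domain, can be approximated very simply by resampling: a signal resampled to fewer points and played back at the original rate has all of its frequencies multiplied by the resampling factor. The sketch below is only that crude approximation applied to a synthetic stand-in signal; it is not the transcription method of the paper.

```python
# Crude frequency shifting of a heart-sound-like signal by resampling (sketch).
import numpy as np
from scipy.signal import resample

def shift_up(x, factor):
    """Scale all frequencies by `factor` (and shorten the signal accordingly)."""
    return resample(x, int(len(x) / factor))

fs = 4000                                   # sampling rate (assumed)
t = np.arange(0, 2.0, 1 / fs)
# Synthetic stand-in: short low-frequency bursts around 50 Hz.
heart = np.sin(2 * np.pi * 50 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)

shifted = shift_up(heart, factor=8)         # ~50 Hz content moves to ~400 Hz
```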

  20. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

    Few studies have dealt with students’ preconceptions of sounds. The current research employs Reiner et al. (2000) substance schema to reveal new insights about students’ difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students’ thinking, but also examines how students use the schema’s properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more global coherent consistencies among the properties that the students use to explain the sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students’ conceptions of sound do not seem to fit Reiner et al.’s schema in all respects. Our results also indicate that students’ conceptualization of sound lack internal consistency. Analyzing our results with respect to local and global coherence, we found students’ conception of sound is close to diSessa’s “loosely connected, fragmented collection of ideas.” The notion that sound is perceived only as a “sort of a material,” we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion concerning the implications of the results for instruction.

  1. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

    The question whether there is a natural connection between sound and meaning or if they are related only by convention has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming more and more clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441 For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.

  2. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

    Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking, with several examples shown. Various aspects of musical sound spectra such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis, "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment", are covered. Methods for timbre morphing are given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.

  3. National Report on the NASA Sounding Rocket and Balloon Programs

    Science.gov (United States)

    Eberspeaker, Philip; Fairbrother, Debora

    2013-01-01

    The U. S. National Aeronautics and Space Administration (NASA) Sounding Rockets and Balloon Programs conduct a total of 30 to 40 missions per year in support of the NASA scientific community and other users. The NASA Sounding Rockets Program supports the science community by integrating their experiments into the sounding rocket payloads, and providing both the rocket vehicle and launch operations services. Activities since 2011 have included two flights from Andoya Rocket Range, more than eight flights from White Sands Missile Range, approximately sixteen flights from Wallops Flight Facility, two flights from Poker Flat Research Range, and four flights from Kwajalein Atoll. Other activities included the final developmental flight of the Terrier-Improved Malemute launch vehicle, a test flight of the Talos-Terrier-Oriole launch vehicle, and a host of smaller activities to improve program support capabilities. Several operational missions have utilized the new Terrier-Malemute vehicle. The NASA Sounding Rockets Program is currently engaged in the development of a new sustainer motor known as the Peregrine. The Peregrine development effort will involve one static firing and three flight tests with a target completion date of August 2014. The NASA Balloon Program supported numerous scientific and developmental missions since its last report. The program conducted flights from the U.S., Sweden, Australia, and Antarctica utilizing standard and experimental vehicles. Of particular note are the successful test flights of the Wallops Arc Second Pointer (WASP), the successful demonstration of a medium-size Super Pressure Balloon (SPB), and most recently, three simultaneous missions aloft over Antarctica. NASA continues its successful incremental design qualification program and will support a science mission aboard WASP in late 2013 and a science mission aboard the SPB in early 2015. NASA has also embarked on an intra-agency collaboration to launch a rocket from a balloon to

  4. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to discuss analytically spatial aspects of recorded sound William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  5. Design, development and test of the gearbox condition monitoring system using sound signal processing

    Directory of Open Access Journals (Sweden)

    M Zamani

    2016-09-01

    format and MATLAB R2014a software was used for data processing. Data processing: signal processing in the frequency domain was used in order to reveal the defects. Fast Fourier Transform: the Fast Fourier Transform (FFT) is of great importance for electronic equipment, especially analyzers. In this application the number of samples is chosen as a power of two (2^N), which reduces the computation volume significantly. Determination of the gearwheel defect type using frequency spectrum analysis: defects were generated synthetically in the gearwheels. Each defect type was introduced in a separate gearwheel so that it could be investigated more precisely, and one gearwheel was kept as a control. In addition, the sound of all of the gearwheels in healthy condition was recorded. Results and Discussion: Comparison of the processed acoustic signals from the gearbox gearwheels in healthy and faulty conditions revealed the gear meshing (involvement) frequency, its harmonics and the changes resulting from the defects. The gearwheel defect detection tests showed that, at speeds of 1496, 1050 and 749 rpm, the investigated defects are recognizable by comparing the frequency spectra of the signals obtained in healthy and faulty conditions, on the basis of the gear meshing frequency, its harmonics and the sideband spectra. The frequency spectra at a pinion speed of 1496 rpm showed the single-tooth-fracture defect at gear meshing frequencies of 489, 350 and 249 Hz, respectively, which became apparent as an increase in amplitude at those frequencies. A worn-tooth defect was clearly identifiable as sidebands spaced equally around the gear meshing frequency in the spectra at pinion speeds of 1496 and 1050 rpm, but became somewhat harder to detect at lower speeds. Investigation of the frequency spectrum of the acoustic signal from the gearwheels is indicative of the ability of this method
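    The frequency-domain analysis described above, a power-of-two record length, an FFT, and inspection of the gear meshing frequency and the sidebands spaced at the shaft rotation frequency, can be sketched as follows. The signal is synthetic: the 489 Hz meshing frequency is taken from the record, but the shaft frequency, amplitudes and sampling rate are assumptions for illustration.

```python
# FFT spectrum of a synthetic gearbox sound with mesh frequency and sidebands (sketch).
import numpy as np

fs = 44100                       # sampling rate of the microphone signal (assumed)
n = 2 ** 15                      # power-of-two record length for the FFT
t = np.arange(n) / fs

f_mesh, f_shaft = 489.0, 24.9    # mesh frequency from the record; 1496 rpm ~ 24.9 Hz shaft
x = (np.sin(2 * np.pi * f_mesh * t)
     + 0.2 * np.sin(2 * np.pi * (f_mesh + f_shaft) * t)    # sidebands as produced by
     + 0.2 * np.sin(2 * np.pi * (f_mesh - f_shaft) * t)    # a localized tooth defect
     + 0.1 * np.random.default_rng(2).normal(size=n))

spectrum = np.abs(np.fft.rfft(x * np.hanning(n))) / n
freqs = np.fft.rfftfreq(n, 1 / fs)
print(f"strongest component: {freqs[np.argmax(spectrum)]:.1f} Hz")
```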

  6. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

    Science.gov (United States)

    Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert

    2005-12-01

    A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
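    A toy version of the feature-plus-classifier idea discussed above, two crude descriptors (spectral centroid and 4 Hz envelope modulation) and a minimum-distance classifier assigning a signal to the nearest class mean, is sketched below. The feature choices, block length and synthetic training data are assumptions; the actual system uses a richer auditory-scene-analysis feature set and more sophisticated classifiers.

```python
# Minimum-distance sound classification on two crude features (illustrative sketch).
import numpy as np

def features(x, fs):
    """Spectral centroid and relative 4-Hz envelope modulation of one signal block."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    centroid = np.sum(freqs * spec) / np.sum(spec)
    env = np.abs(x) - np.abs(x).mean()
    env_spec = np.abs(np.fft.rfft(env))
    env_freqs = np.fft.rfftfreq(len(env), 1 / fs)
    mod4 = env_spec[np.argmin(np.abs(env_freqs - 4.0))] / (env_spec.sum() + 1e-12)
    return np.array([centroid, mod4])

def classify(train_X, train_y, x):
    """Minimum-distance classifier: assign x to the nearest class mean."""
    classes = np.unique(train_y)
    means = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(means - x, axis=1))]

# Synthetic two-class demo: white noise vs. an amplitude-modulated tone.
rng = np.random.default_rng(5)
fs, n = 16000, 16000
noise = [rng.normal(size=n) for _ in range(10)]
tone = np.sin(2 * np.pi * 440 * np.arange(n) / fs)
music = [tone * (1 + 0.5 * np.sin(2 * np.pi * 4 * np.arange(n) / fs)) for _ in range(10)]
X = np.array([features(s, fs) for s in noise + music])
y = np.array(["noise"] * 10 + ["music"] * 10)
print(classify(X, y, features(music[0], fs)))
```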

  7. Introduction to the Special Issue on Sounding Rockets and Instrumentation

    OpenAIRE

    Christe, Steven; Zeiger, Ben; Pfaff, Rob; Garcia, Michael

    2016-01-01

    Rocket technology, originally developed for military applications, has provided a low-cost observing platform to carry critical and rapid-response scientific investigations for over 70 years. Even with the development of launch vehicles that could put satellites into orbit, high altitude sounding rockets have remained relevant. In addition to science observations, sounding rockets provide a unique technology test platform and a valuable training ground for scientists and engineers. Most impor...

  8. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetical and practical possibilities of sounds (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio, as well as identifies significant aesthetic differences between film sounds and sounds in video game projects. It is an attempt to determine techniques of art analysis for approaches to the study of video games, including the aesthetics of their sounds. The article offers a range of research methods, considering video game scoring as a contemporary creative practice.

  9. Intelligent Systems Approaches to Product Sound Quality Analysis

    Science.gov (United States)

    Pietila, Glenn M.

    As a product market becomes more competitive, consumers become more discriminating in the way in which they differentiate between engineered products. The consumer often makes a purchasing decision based on the sound emitted from the product during operation by using the sound to judge quality or annoyance. Therefore, in recent years, many sound quality analysis tools have been developed to evaluate the consumer preference as it relates to a product sound and to quantify this preference based on objective measurements. This understanding can be used to direct a product design process in order to help differentiate the product from competitive products or to establish an impression on consumers regarding a product's quality or robustness. The sound quality process is typically a statistical tool that is used to model subjective preference, or merit score, based on objective measurements, or metrics. In this way, new product developments can be evaluated in an objective manner without the laborious process of gathering a sample population of consumers for subjective studies each time. The most common model used today is the Multiple Linear Regression (MLR), although recently non-linear Artificial Neural Network (ANN) approaches are gaining popularity. This dissertation will review publicly available published literature and present additional intelligent systems approaches that can be used to improve on the current sound quality process. The focus of this work is to address shortcomings in the current paired comparison approach to sound quality analysis. This research will propose a framework for an adaptive jury analysis approach as an alternative to the current Bradley-Terry model. The adaptive jury framework uses statistical hypothesis testing to focus on sound pairings that are most interesting and is expected to address some of the restrictions required by the Bradley-Terry model. It will also provide a more amicable framework for an intelligent systems approach
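    The Multiple Linear Regression model referred to above, predicting a subjective merit score from objective metrics, can be sketched in a few lines. The metric names, coefficients and jury data below are invented for illustration; a real study would use measured psychoacoustic metrics and ratings collected from an actual jury.

```python
# Multiple Linear Regression of a subjective merit score on objective metrics (sketch).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
# Hypothetical metrics (loudness, sharpness, roughness) for 40 product sounds.
metrics = rng.uniform([2.0, 1.0, 0.5], [10.0, 4.0, 3.0], size=(40, 3))
# Hypothetical jury merit scores loosely driven by the first two metrics.
merit = 9 - 0.5 * metrics[:, 0] - 1.2 * metrics[:, 1] + rng.normal(0, 0.3, 40)

mlr = LinearRegression().fit(metrics, merit)
print("coefficients:", mlr.coef_, "R^2:", mlr.score(metrics, merit))
print("predicted merit for a new prototype:", mlr.predict([[6.0, 2.5, 1.0]])[0])
```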

  10. Developmental Changes in Locating Voice and Sound in Space

    Science.gov (United States)

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7 month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  11. Distraction by novel and pitch-deviant sounds in children

    Directory of Open Access Journals (Sweden)

    Nicole Wetzel

    2016-12-01

    Full Text Available The control of attention is an important part of our executive functions and enables us to focus on relevant information and to ignore irrelevant information. The ability to shield against distraction by task-irrelevant sounds is suggested to mature during school age. The present study investigated the developmental time course of distraction in three groups of children aged 7 – 10 years. Two different types of distractor sounds that have been frequently used in auditory attention research – novel environmental and pitch-deviant sounds – were presented within an oddball paradigm while children performed a visual categorization task. Reaction time measurements revealed decreasing distractor-related impairment with age. Novel environmental sounds impaired performance in the categorization task more than pitch-deviant sounds. The youngest children showed a pronounced decline of novel-related distraction effects throughout the experimental session. Such a significant decline as a result of practice was not observed in the pitch-deviant condition and not in older children. We observed no correlation between cross-modal distraction effects and performance in standardized tests of concentration and visual distraction. Results of the cross-modal distraction paradigm indicate that separate mechanisms underlying the processing of novel environmental and pitch-deviant sounds develop with different time courses and that these mechanisms develop considerably within a few years in middle childhood.

  12. Equivalent threshold sound pressure levels for acoustic test signals of short duration

    DEFF Research Database (Denmark)

    Poulsen, Torben; Daugaard, Carsten

    1998-01-01

    . The measurements were performed with two types of headphones, Telephonics TDH-39 and Sennheiser HDA-200. The sound pressure levels were measured in an IEC 318 ear simulator with Type 1 adapter (a flat plate) and a conical ring. The audiometric methods used in the experiments were the ascending method (ISO 8253...

  13. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade an increased attention...... has been paid to, for instance, a category such as ‘sound art’ together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term as musical sound – a recurring example being ‘noise’....

  14. Beneath sci-fi sound: primer, science fiction sound design, and American independent cinema

    OpenAIRE

    Johnston, Nessa

    2012-01-01

    Primer is a very low budget science-fiction film that deals with the subject of time travel; however, it looks and sounds quite distinctively different from other films associated with the genre. While Hollywood blockbuster sci-fi relies on “sound spectacle” as a key attraction, in contrast Primer sounds “lo-fi” and screen-centred, mixed to two channel stereo rather than the now industry-standard 5.1 surround sound. Although this is partly a consequence of the economics of its production, the...

  15. Similarities between the irrelevant sound effect and the suffix effect.

    Science.gov (United States)

    Hanley, J Richard; Bourgaize, Jake

    2018-03-29

    Although articulatory suppression abolishes the effect of irrelevant sound (ISE) on serial recall when sequences are presented visually, the effect persists with auditory presentation of list items. Two experiments were designed to test the claim that, when articulation is suppressed, the effect of irrelevant sound on the retention of auditory lists resembles a suffix effect. A suffix is a spoken word that immediately follows the final item in a list. Even though participants are told to ignore it, the suffix impairs serial recall of auditory lists. In Experiment 1, the irrelevant sound consisted of instrumental music. The music generated a significant ISE that was abolished by articulatory suppression. It therefore appears that, when articulation is suppressed, irrelevant sound must contain speech for it to have any effect on recall. This is consistent with what is known about the suffix effect. In Experiment 2, the effect of irrelevant sound under articulatory suppression was greater when the irrelevant sound was spoken by the same voice that presented the list items. This outcome is again consistent with the known characteristics of the suffix effect. It therefore appears that, when rehearsal is suppressed, irrelevant sound disrupts the acoustic-perceptual encoding of auditorily presented list items. There is no evidence that the persistence of the ISE under suppression is a result of interference to the representation of list items in a postcategorical phonological store.

  16. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between...... dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity...... is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  17. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  18. Use of Authentic-Speech Technique for Teaching Sound Recognition to EFL Students

    Science.gov (United States)

    Sersen, William J.

    2011-01-01

    The main objective of this research was to test an authentic-speech technique for improving the sound-recognition skills of EFL (English as a foreign language) students at Roi-Et Rajabhat University. The secondary objective was to determine the correlation, if any, between students' self-evaluation of sound-recognition progress and the actual…

  19. Hybrid waste filler filled bio-polymer foam composites for sound absorbent materials

    Science.gov (United States)

    Rus, Anika Zafiah M.; Azahari, M. Shafiq M.; Kormin, Shaharuddin; Soon, Leong Bong; Zaliran, M. Taufiq; Ahraz Sadrina M. F., L.

    2017-09-01

    Sound absorption materials are a major requirement in many industries, and the sound insulation developed should be efficient at reducing sound. It is also important to produce sound absorbing materials economically, so that they are cheaper and user friendly. Thus, in this research, the sound absorbent properties of bio-polymer foam filled with hybrid fillers of wood dust and waste tire rubber were investigated. Waste cooking oil from the crisp industry was converted into a bio-monomer, filled with different proportions of fillers and fabricated into bio-polymer foam composites. Two fabrication methods were applied: the Close Mold Method (CMM) and the Open Mold Method (OMM). A total of four bio-polymer foam composite samples were produced for each method. The hybrid fillers, a mixture of wood dust and waste tire rubber, were added at 2.5%, 5.0%, 7.5% and 10% weight-to-weight ratio with the bio-monomer. The sound absorption of the bio-polymer foam composite samples was tested using the impedance tube test according to ASTM E-1050, and Scanning Electron Microscopy was used to determine the morphology and porosity of the samples. The sound absorption coefficient (α) over the frequency range examined revealed that the foam with 10.0% hybrid fillers shows the highest α of 0.963. The highest hybrid filler loading gave the smallest pore sizes but the most interconnected pores. This also revealed that when a highly porous material is exposed to incident sound waves, the air molecules at the surface of the material and within its pores are forced to vibrate and lose some of their original energy. It is concluded that bio-polymer foam filled with hybrid fillers is suitable for acoustic applications in automotive components such as dashboards, door panels, cushions, etc.

  20. Sex-specific asymmetries in communication sound perception are not related to hand preference in an early primate

    Directory of Open Access Journals (Sweden)

    Scheumann Marina

    2008-01-01

    Full Text Available Abstract Background: Left hemispheric dominance of language processing and handedness, previously thought to be unique to humans, is currently under debate. To gain an insight into the origin of lateralization in primates, we have studied gray mouse lemurs, suggested to represent the most ancestral primate condition. We explored potential functional asymmetries on the behavioral level by applying a combined handedness and auditory perception task. For testing handedness, we used a forced food-grasping task. For testing auditory perception, we adapted the head turn paradigm, originally established for exploring hemispheric specializations in conspecific sound processing in Old World monkeys, and exposed 38 subjects to control sounds and conspecific communication sounds of positive and negative emotional valence. Results: The tested mouse lemur population did not show an asymmetry in hand preference or in orientation towards conspecific communication sounds. However, males, but not females, exhibited a significant right ear-left hemisphere bias when exposed to conspecific communication sounds of negative emotional valence. Orientation asymmetries were not related to hand preference. Conclusion: Our results provide the first evidence for sex-specific asymmetries for conspecific communication sound perception in non-human primates. Furthermore, they suggest that hemispheric dominance for communication sound processing evolved before handedness and independently from each other.

  1. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

    This article is an analysis of two sound art performances that took place June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors of this article suggest exploring the events through approaching them as ‘situations’ (Doherty 2009). With this approach it becomes possible to engage...... and combine theories from several fields. Aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces: Visual, auditory, performative, social, spatial and durational dimensions become......

  2. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  3. Physiological and psychological assessment of sound

    Science.gov (United States)

    Yanagihashi, R.; Ohira, Masayoshi; Kimura, Teiji; Fujiwara, Takayuki

    The psycho-physiological effects of several sound stimulations were investigated to evaluate the relationship between a psychological parameter, such as subjective perception, and a physiological parameter, such as heart rate variability (HRV). Eight female students aged 21-22 years were tested. The electrocardiogram (ECG) and the movement of the chest wall (for estimating respiratory rate) were recorded during three different sound stimulations: (1) music provided by a synthesizer (condition A); (2) bird twitters (condition B); and (3) mechanical sounds (condition C). The percentage powers of the low-frequency (LF: 0.05-0.15 Hz) and high-frequency (HF: 0.15-0.40 Hz) components of the HRV (LF%, HF%) were assessed by frequency analysis of 5-min time-series data obtained from the R-R intervals of the ECG. A quantitative assessment of subjective perception was also obtained with a visual analog scale (VAS). The HF% and the VAS value for comfort in condition C were significantly lower than in A and/or B. The respiratory rate and the VAS value for awakening in C were significantly higher than in A and/or B. There was a significant correlation between the HF% and the VAS values, and between the respiratory rate and the VAS values. These results indicate that mechanical sounds similar to C inhibit the parasympathetic nervous system and promote a feeling that is unpleasant but alert, also suggesting that the HRV reflects subjective perception.
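    The HRV analysis described above, percentage power in the LF and HF bands computed from 5-minute R-R interval series, can be sketched as follows. The interpolation rate, the Welch settings and taking total power as LF+HF are assumptions of this sketch, as is the synthetic R-R series; the record does not specify the exact spectral estimator used.

```python
# LF% and HF% of heart rate variability from R-R intervals (illustrative sketch).
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def lf_hf_percent(rr_s, fs_interp=4.0):
    """LF% and HF% from a series of R-R intervals given in seconds."""
    t = np.cumsum(rr_s)
    grid = np.arange(t[0], t[-1], 1.0 / fs_interp)
    tachogram = interp1d(t, rr_s, kind="cubic")(grid)    # evenly resampled R-R series
    f, pxx = welch(tachogram - tachogram.mean(), fs=fs_interp, nperseg=256)
    lf = pxx[(f >= 0.05) & (f < 0.15)].sum()
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum()
    total = lf + hf                                      # total power taken as LF + HF here
    return 100 * lf / total, 100 * hf / total

# Synthetic 5-minute recording: ~0.8 s beats with respiratory (0.25 Hz) modulation.
rng = np.random.default_rng(4)
beats = 375
rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.25 * 0.8 * np.arange(beats)) + 0.01 * rng.normal(size=beats)
lf_pct, hf_pct = lf_hf_percent(rr)
print(f"LF% = {lf_pct:.1f}, HF% = {hf_pct:.1f}")
```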

  4. The Changing Role of Sound-Symbolism for Small Versus Large Vocabularies.

    Science.gov (United States)

    Brand, James; Monaghan, Padraic; Walker, Peter

    2017-12-12

    Natural language contains many examples of sound-symbolism, where the form of the word carries information about its meaning. Such systematicity is more prevalent in the words children acquire first, but arbitrariness dominates during later vocabulary development. Furthermore, systematicity appears to promote learning category distinctions, which may become more important as the vocabulary grows. In this study, we tested the relative costs and benefits of sound-symbolism for word learning as vocabulary size varies. Participants learned form-meaning mappings for words which were either congruent or incongruent with regard to sound-symbolic relations. For the smaller vocabulary, sound-symbolism facilitated learning individual words, whereas for larger vocabularies sound-symbolism supported learning category distinctions. The changing properties of form-meaning mappings according to vocabulary size may reflect the different ways in which language is learned at different stages of development. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  5. Determining the speed of sound in the air by sound wave interference

    Science.gov (United States)

    Silva, Abel A.

    2017-07-01

    Mechanical waves propagate through material media. Sound is an example of a mechanical wave. In fluids like air, sound waves propagate through successive longitudinal perturbations of compression and decompression. Audible sound frequencies for human ears range from 20 to 20 000 Hz. In this study, the speed of sound v in the air is determined using the identification of maxima of interference from two synchronous waves at frequency f. The values of v were corrected to 0 °C. An experimental average value of v_exp = 336 ± 4 m s^-1 was found, which is 1.5% larger than the reference value. The standard deviation of 4 m s^-1 (1.2% of v_exp) is an improved value obtained by use of the central limit theorem. The proposed procedure to determine the speed of sound in the air aims to be an academic activity for physics classes of scientific and technological courses in college.
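    As a worked illustration of the procedure (the frequency, maxima positions, geometry and room temperature below are invented for the example, not the article's data): assuming a geometry in which successive interference maxima are separated by one wavelength, the wavelength follows from the mean spacing of the maxima, v = f·λ, and the result can be referred to 0 °C with v(T) = v(0 °C)·sqrt(1 + T/273.15).

```python
import numpy as np

f = 3000.0                                       # driving frequency in Hz (illustrative value)
maxima_cm = np.array([10.2, 21.7, 33.2, 44.6])   # positions of successive maxima (invented)
lam = np.diff(maxima_cm).mean() / 100.0          # mean spacing = one wavelength, in metres
v_room = f * lam                                 # speed of sound at room temperature

T_room = 24.0                                    # room temperature in deg C (assumed)
v_0C = v_room / np.sqrt(1.0 + T_room / 273.15)   # refer the result to 0 deg C
print(f"v(room) = {v_room:.1f} m/s, v(0 C) = {v_0C:.1f} m/s")
```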

  6. Fluid Sounds

    DEFF Research Database (Denmark)

    Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970s, developed into various established research and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field, embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation…

  7. Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration

    OpenAIRE

    Park, Saebyul; Ban, Seonghoon; Hong, Dae Ryong; Yeo, Woon Seung

    2013-01-01

    SSN (Sound Surfing Network) is a performance system that provides a new musical experience by incorporating mobile phone-based spatial sound control into collaborative music performance. SSN enables both the performer and the audience to manipulate the spatial distribution of sound using the smartphones of the audience as a distributed speaker system. Proposing a new perspective on the social aspect of music appreciation, SSN will provide a new possibility for mobile music performances in the context of in...

  8. Sound Exposure of Symphony Orchestra Musicians

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Pedersen, Ellen Raben; Juhl, Peter Møller

    2011-01-01

    Background: Assessment of sound exposure by noise dosimetry can be challenging, especially when measuring the exposure of classical orchestra musicians, where sound originates from many different instruments. A new measurement method of bilateral sound exposure of classical musicians was developed and used to characterize sound exposure of the left and right ear simultaneously in two different symphony orchestras. Objectives: To measure binaural sound exposure of professional classical musicians and to identify possible exposure risk factors of specific musicians. Methods: Sound exposure was measured… Musicians were exposed up to LAeq8h of 92 dB, and a majority of musicians were exposed to sound levels exceeding … dBA; their left ear was exposed 4.6 dB more than the right ear. Percussionists were exposed to high sound peaks >115 dBC, but less continuous sound exposure was observed in this group.

  9. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    Science.gov (United States)

    Wolf, Gail Marie

    2016-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…

  10. A taste for words and sounds: a case of lexical-gustatory and sound-gustatory synesthesia

    Directory of Open Access Journals (Sweden)

    Olympia eColizoli

    2013-10-01

    Full Text Available Gustatory forms of synesthesia involve the automatic and consistent experience of tastes that are triggered by non-taste related inducers. We present a case of lexical-gustatory and sound-gustatory synesthesia within one individual, SC. Most words and a subset of nonlinguistic sounds induce the experience of taste, smell and physical sensations for SC. SC’s lexical-gustatory associations were significantly more consistent than those of a group of controls. We tested for effects of presentation modality (visual vs. auditory, taste-related congruency, and synesthetic inducer-concurrent direction using a priming task. SC’s performance did not differ significantly from a trained control group. We used functional magnetic resonance imaging to investigate the neural correlates of SC’s synesthetic experiences by comparing her brain activation to the literature on brain networks related to language, music and sound processing, in addition to synesthesia. Words that induced a strong taste were contrasted to words that induced weak-to-no tastes (tasty vs. tasteless words. Brain activation was also measured during passive listening to music and environmental sounds. Brain activation patterns showed evidence that two regions are implicated in SC’s synesthetic experience of taste and smell: the left anterior insula and left superior parietal lobe. Anterior insula activation may reflect the synesthetic taste experience. The superior parietal lobe is proposed to be involved in binding sensory information across sub-types of synesthetes. We conclude that SC’s synesthesia is genuine and reflected in her brain activation. The type of inducer (visual-lexical, auditory-lexical, and non-lexical auditory stimuli could be differentiated based on patterns of brain activity.

  11. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

    The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using both physical and digital models at three distinct scales. The results suggest hyperboloid geometry, while difficult to fabricate, facilitates sound scattering.

  12. Development of sound absorption measuring system with acoustic chamber; Kogata kyuon koka sokutei sochi no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    Takahira, M.; Noba, M. [Toyota Motor Corp., Aichi (Japan); Matsuoka, H. [Nippon Soken, Inc., Tokyo (Japan)

    1998-05-01

    In order to measure the sound absorption performance needed to develop sound-absorbing materials, a device was developed consisting of a small sound box that allows inexpensive and easy measurement, as an alternative to the reverberation chamber method. To obtain a stable diffuse sound field inside, the sound box has the shape of an asymmetric seven-sided body in which no two sides face each other squarely. The box is sized so that a large number of resonant modes can be excited simultaneously at the target frequency. A commercially available cone speaker with good acoustic output characteristics above 500 Hz is installed on an inner side of the box. The measurement derives the sound absorption rate from the difference of sound pressure levels. To eliminate the need for averaging over multi-point measurements inside the box, an opening was provided in part of the box so that the sound receiving point could be placed outside it. A square test piece is placed on the floor 0.5 m or more away from the speaker in the box. The experiments verified that the sound absorption rate obtained with this device corresponds well with that obtained by the reverberation chamber method, and that the size of the test piece was adequate. 2 refs., 11 figs., 1 tab.
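    One common way to turn such a sound-pressure-level difference into an absorption coefficient (shown here only as an illustrative assumption, not necessarily the exact formulation used by the authors) is the diffuse-field energy balance, in which the steady-state level drops by 10·log10 of the ratio of equivalent absorption areas when the test piece is introduced.

```python
def absorption_coefficient(L_empty_dB, L_sample_dB, A_empty_m2, S_sample_m2):
    """Estimate the absorption coefficient of a test piece from the drop in
    steady-state sound pressure level inside a diffuse-field box.

    Assumes the diffuse-field relation L_empty - L_sample = 10*log10(A_sample / A_empty),
    where A is the total equivalent absorption area of the box."""
    dL = L_empty_dB - L_sample_dB
    A_with_sample = A_empty_m2 * 10 ** (dL / 10.0)
    return (A_with_sample - A_empty_m2) / S_sample_m2

# Illustrative numbers (invented): empty-box absorption area 0.05 m^2,
# 0.3 m x 0.3 m test piece, 4 dB level drop when the sample is inserted.
print(absorption_coefficient(L_empty_dB=94.0, L_sample_dB=90.0,
                             A_empty_m2=0.05, S_sample_m2=0.09))
```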

  13. 77 FR 37318 - Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort...

    Science.gov (United States)

    2012-06-21

    ...-AA00 Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort... Coast Guard will enforce a Safety Zone for the Sound of Independence event in the Santa Rosa Sound, Fort... during the Sound of Independence. During the enforcement period, entry into, transiting or anchoring in...

  14. Arctic Ocean Model Intercomparison Using Sound Speed

    Science.gov (United States)

    Dukhovskoy, D. S.; Johnson, M. A.

    2002-05-01

    The monthly and annual means from three Arctic ocean - sea ice climate model simulations are compared for the period 1979-1997. Sound speed is used to integrate model outputs of temperature and salinity along a section between Barrow and Franz Josef Land. A statistical approach is used to test for differences among the three models for two basic data subsets. We integrated and then analyzed an upper layer between 2 m - 50 m, and also a deep layer from 500 m to the bottom. The deep layer is characterized by low time-variability. No high-frequency signals appear in the deep layer having been filtered out in the upper layer. There is no seasonal signal in the deep layer and the monthly means insignificantly oscillate about the long-period mean. For the deep ocean the long-period mean can be considered quasi-constant, at least within the 19 year period of our analysis. Thus we assumed that the deep ocean would be the best choice for comparing the means of the model outputs. The upper (mixed) layer was chosen to contrast the deep layer dynamics. There are distinct seasonal and interannual signals in the sound speed time series in this layer. The mixed layer is a major link in the ocean - air interaction mechanism. Thus, different mean states of the upper layer in the models might cause different responses in other components of the Arctic climate system. The upper layer also strongly reflects any differences in atmosphere forcing. To compare data from the three models we have used a one-way t-test for the population mean, the Wilcoxon one-sample signed-rank test (when the requirement of normality of tested data is violated), and one-way ANOVA method and F-test to verify our hypothesis that the model outputs have the same mean sound speed. The different statistical approaches have shown that all models have different mean characteristics of the deep and upper layers of the Arctic Ocean.
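    The statistical comparisons described here use standard tests that are available in SciPy. The sketch below runs the same three procedures on synthetic sound-speed series standing in for the model outputs; all numbers are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic annual-mean sound speeds (m/s) for three models over 19 years (illustrative only)
model_a = rng.normal(1462.0, 0.6, 19)
model_b = rng.normal(1462.4, 0.6, 19)
model_c = rng.normal(1461.8, 0.6, 19)

mu0 = 1462.0                                      # hypothesized long-period mean
print(stats.ttest_1samp(model_a, mu0))            # one-sample t-test for the population mean
print(stats.wilcoxon(model_a - mu0))              # signed-rank test when normality is doubtful
print(stats.f_oneway(model_a, model_b, model_c))  # one-way ANOVA / F-test across the models
```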

  15. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
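    Binaural stimuli of the kind used in this experiment are typically produced by convolving the mono source recording with a head-related impulse response for each ear. The sketch below shows that step; the file names and impulse-response arrays are placeholders, not the non-individual HRTF set actually used in the study.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, mono = wavfile.read("wood_hit.wav")        # hypothetical recording of the wood sound
if mono.ndim > 1:
    mono = mono.mean(axis=1)                   # mix to mono if needed
mono = mono.astype(np.float64)
mono /= np.max(np.abs(mono))                   # normalise

# Placeholder head-related impulse responses for the desired azimuth (left/right ear)
hrir_left = np.load("hrir_left_45deg.npy")     # hypothetical file
hrir_right = np.load("hrir_right_45deg.npy")   # hypothetical file

binaural = np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)], axis=1)
binaural /= np.max(np.abs(binaural))           # avoid clipping
wavfile.write("wood_hit_45deg_binaural.wav", fs, (binaural * 32767).astype(np.int16))
```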

  16. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  17. Memory for pictures and sounds: independence of auditory and visual codes.

    Science.gov (United States)

    Thompson, V A; Paivio, A

    1994-09-01

    Three experiments examined the mnemonic independence of auditory and visual nonverbal stimuli in free recall. Stimulus lists consisted of (1) pictures, (2) the corresponding environmental sounds, or (3) picture-sound pairs. In Experiment 1, free recall was tested under three learning conditions: standard intentional, intentional with a rehearsal-inhibiting distracter task, or incidental with the distracter task. In all three groups, recall was best for the picture-sound items. In addition, recall for the picture-sound stimuli appeared to be additive relative to pictures or sounds alone when the distracter task was used. Experiment 2 included two additional groups: In one, two copies of the same picture were shown simultaneously; in the other, two different pictures of the same concept were shown. There was no difference in recall among any of the picture groups; in contrast, recall in the picture-sound condition was greater than recall in either single-modality condition. However, doubling the exposure time in a third experiment resulted in additively higher recall for repeated pictures with different exemplars than ones with identical exemplars. The results are discussed in terms of dual coding theory and alternative conceptions of the memory trace.

  18. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing estates in København, Malmø, Helsingborg and Lund, together with European examples of best practice.

  19. Sounds of silence: How to animate virtual worlds with sound

    Science.gov (United States)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  20. Chaotic dynamics of respiratory sounds

    International Nuclear Information System (INIS)

    Ahlstrom, C.; Johansson, A.; Hult, P.; Ask, P.

    2006-01-01

    There is a growing interest in nonlinear analysis of respiratory sounds (RS), but little has been done to justify the use of nonlinear tools on such data. The aim of this paper is to investigate the stationarity, linearity and chaotic dynamics of recorded RS. Two independent data sets from 8 + 8 healthy subjects were recorded and investigated. The first set consisted of lung sounds (LS) recorded with an electronic stethoscope and the other of tracheal sounds (TS) recorded with a contact accelerometer. Recurrence plot analysis revealed that both LS and TS are quasistationary, with the parts corresponding to inspiratory and expiratory flow plateaus being stationary. Surrogate data tests could not provide statistically sufficient evidence regarding the nonlinearity of the data. The null hypothesis could not be rejected in 4 out of 32 LS cases and in 15 out of 32 TS cases. However, the Lyapunov spectra, the correlation dimension (D2) and the Kaplan-Yorke dimension (DKY) all indicate chaotic behavior. The Lyapunov analysis showed that the sum of the exponents was negative in all cases and that the largest exponent was found to be positive. The results are partly ambiguous, but provide some evidence of chaotic dynamics of RS, both concerning LS and TS. The results motivate continuous use of nonlinear tools for analysing RS data.

  1. Chaotic dynamics of respiratory sounds

    Energy Technology Data Exchange (ETDEWEB)

    Ahlstrom, C. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden) and Biomedical Engineering, Orebro University Hospital, S-70185 Orebro (Sweden)]. E-mail: christer@imt.liu.se; Johansson, A. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden); Hult, P. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden); Biomedical Engineering, Orebro University Hospital, S-70185 Orebro (Sweden); Ask, P. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden); Biomedical Engineering, Orebro University Hospital, S-70185 Orebro (Sweden)

    2006-09-15

    There is a growing interest in nonlinear analysis of respiratory sounds (RS), but little has been done to justify the use of nonlinear tools on such data. The aim of this paper is to investigate the stationarity, linearity and chaotic dynamics of recorded RS. Two independent data sets from 8 + 8 healthy subjects were recorded and investigated. The first set consisted of lung sounds (LS) recorded with an electronic stethoscope and the other of tracheal sounds (TS) recorded with a contact accelerometer. Recurrence plot analysis revealed that both LS and TS are quasistationary, with the parts corresponding to inspiratory and expiratory flow plateaus being stationary. Surrogate data tests could not provide statistically sufficient evidence regarding the nonlinearity of the data. The null hypothesis could not be rejected in 4 out of 32 LS cases and in 15 out of 32 TS cases. However, the Lyapunov spectra, the correlation dimension (D2) and the Kaplan-Yorke dimension (DKY) all indicate chaotic behavior. The Lyapunov analysis showed that the sum of the exponents was negative in all cases and that the largest exponent was found to be positive. The results are partly ambiguous, but provide some evidence of chaotic dynamics of RS, both concerning LS and TS. The results motivate continuous use of nonlinear tools for analysing RS data.
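    A surrogate data test of the kind used in these two records can be sketched with plain NumPy: phase-randomized surrogates preserve the linear (spectral) structure of a recording, so a nonlinear discriminating statistic computed on the original signal that falls outside the surrogate distribution speaks against the linear null hypothesis. The statistic below (time-reversal asymmetry) is a common generic choice and the test signal is synthetic; neither is taken from the paper.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same amplitude spectrum as x but randomized Fourier phases."""
    X = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0                          # keep the DC bin real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                     # keep the Nyquist bin real for even length
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x)) + x.mean()

def time_reversal_asymmetry(x, lag=1):
    """Simple nonlinear statistic; close to zero for linear Gaussian processes."""
    d = x[lag:] - x[:-lag]
    return np.mean(d**3) / np.mean(d**2) ** 1.5

rng = np.random.default_rng(0)
t = np.arange(4000)
signal = (0.013 * t) % 1.0 + 0.05 * rng.standard_normal(t.size)   # synthetic time-asymmetric signal

stat_orig = time_reversal_asymmetry(signal)
stat_surr = [time_reversal_asymmetry(phase_randomized_surrogate(signal, rng))
             for _ in range(99)]
p = (1 + sum(abs(s) >= abs(stat_orig) for s in stat_surr)) / 100.0
print(f"original statistic = {stat_orig:.3f}, surrogate-test p-value = {p:.2f}")
```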

  2. Comparison of snoring sounds between natural and drug-induced sleep recorded using a smartphone.

    Science.gov (United States)

    Koo, Soo Kweon; Kwon, Soon Bok; Moon, Ji Seung; Lee, Sang Hoon; Lee, Ho Byung; Lee, Sang Jun

    2018-08-01

    Snoring is an important clinical feature of obstructive sleep apnea (OSA), and recent studies suggest that the acoustic quality of snoring sounds is markedly different in drug-induced sleep compared with natural sleep. However, considering differences in sound recording methods and analysis parameters, further studies are required. This study explored whether acoustic analysis of drug-induced sleep is useful as a screening test that reflects the characteristics of natural sleep in snoring patients. The snoring sounds of 30 male subjects (mean age=41.8 years) were recorded using a smartphone during natural and induced sleep, with the site of vibration noted during drug-induced sleep endoscopy (DISE); then, we compared the sound intensity (dB), formant frequencies, and spectrograms of snoring sounds. Regarding the intensity of snoring sounds, there were minor differences within the retrolingual level obstruction group, but there was no significant difference between natural and induced sleep at either obstruction site. There was no significant difference in the F1 and F2 formant frequencies of snoring sounds between natural sleep and induced sleep at either obstruction site. Compared with natural sleep, induced sleep was slightly more irregular, with a stronger intensity on the spectrogram, but the spectrograms showed the same pattern at both obstruction sites. Although further studies are required, the spectrograms and formant frequencies of the snoring sounds of induced sleep did not differ significantly from those of natural sleep, and may be used as a screening test that reflects the characteristics of natural sleep according to the obstruction site. Copyright © 2017 Elsevier B.V. All rights reserved.
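    Spectrograms and per-frame levels of the kind compared in this study can be computed directly from smartphone recordings with SciPy; the file name and STFT parameters below are placeholder assumptions, not the authors' settings.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("snore_natural_sleep.wav")     # hypothetical smartphone recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)                          # mix to mono
audio = audio.astype(np.float64) / np.max(np.abs(audio))

f, t, Sxx = spectrogram(audio, fs=fs, window="hann", nperseg=2048, noverlap=1024)
Sxx_dB = 10 * np.log10(Sxx + 1e-12)                     # log-magnitude spectrogram

# Crude intensity measure per frame (dB, arbitrary reference) for comparing recordings
frame_level_dB = 10 * np.log10(Sxx.sum(axis=0) + 1e-12)
print(f"frames: {len(t)}, median frame level: {np.median(frame_level_dB):.1f} dB (arbitrary ref.)")
```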

  3. How Pleasant Sounds Promote and Annoying Sounds Impede Health: A Cognitive Approach

    Directory of Open Access Journals (Sweden)

    Tjeerd C. Andringa

    2013-04-01

    Full Text Available This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of the perceiver can be understood in terms of core affect and motivation. This conceptual basis allows the formulation of a detailed cognitive model describing how sonic content, related to indicators of safety and danger, either allows full freedom over mind-states or forces the activation of a vigilance function with associated arousal. The model leads to a number of detailed predictions that can be used to provide existing soundscape approaches with a solid cognitive science foundation that may lead to novel approaches to soundscape design. These will take into account that louder sounds typically contribute to distal situational awareness while subtle environmental sounds provide proximal situational awareness. The role of safety indicators, mediated by proximal situational awareness and subtle sounds, should become more important in future soundscape research.

  4. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    Science.gov (United States)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.

  5. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...

  6. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is experimentally studied by using the time difference method. It is found that the sound velocities increase with increasing bubble diameter, and asymptotically approach the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for the sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to the scattering of the bubble wall is equivalently described as the effect of an additional length. This simplified description reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that the increase of frequency markedly slows down the sound velocity, whereas the latter does not display a strong dependence on the solution concentration.

  7. Sound absorption of low-temperature reusable surface insulation candidate materials

    Science.gov (United States)

    Johnston, J. D.

    1974-01-01

    Sound absorption data from tests of four candidate low-temperature reusable surface insulation materials are presented. Limitations on the use of the data are discussed, conclusions concerning the effective absorption of the materials are drawn, and the relative significance to Vibration and Acoustic Test Facility test planning of the absorption of each material is assessed.

  8. OMNIDIRECTIONAL SOUND SOURCE

    DEFF Research Database (Denmark)

    1996-01-01

    A sound source comprising a loudspeaker (6) and a hollow coupler (4) with an open inlet which communicates with and is closed by the loudspeaker (6) and an open outlet, said coupler (4) comprising rigid walls which cannot respond to the sound pressures produced by the loudspeaker (6). According...

  9. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods to determine the velocity measurements are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  10. Numerical value biases sound localization

    OpenAIRE

    Golob, Edward J.; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R.

    2017-01-01

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perce...

  11. Reading drift in flow rate sensors caused by steady sound waves

    International Nuclear Information System (INIS)

    Maximiano, Celso; Nieble, Marcio D.; Migliavacca, Sylvana C.P.; Silva, Eduardo R.F.

    1995-01-01

    The use of thermal sensors is very common for the measurement of small gas flows. In this kind of sensor a small bypass tube is heated symmetrically, and the temperature distribution along the tube changes with the mass flow through it. When a standing wave appears in the main tube it causes the pressure to oscillate around its average value. The sensor, connected between two points of the main tube, then indicates not only the main mass flow but also the flow induced by the pressure difference of the sound wave. When the gas flows at low pressure, the instrument therefore indicates a value that does not correspond to the real flow. Tests were carried out by generating a sound wave in the main tube without any mass flow, and the sensor still indicated flow. To solve this problem a wave damper was constructed, installed and tested in the system; it worked satisfactorily, efficiently eliminating the sound wave. (author). 2 refs., 3 figs

  12. Product sounds : Fundamentals and application

    NARCIS (Netherlands)

    Ozcan-Vieira, E.

    2008-01-01

    Products are ubiquitous, so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about

  13. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    Science.gov (United States)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated from an infinite vibrating plate attenuates exponentially with increasing a distance from the surface of the vibrating plate. However, a discontinuity of the sound field exists at the edge of the finite vibrating plate in practice, which broadens the wavenumber spectrum. A sound wave radiates over the evanescent sound field because of broadening of the wavenumber spectrum. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on a window function that is utilized in the field of signal analysis for reducing the broadening of the frequency spectrum. The optimization calculation is necessary for the design of window function suitable for suppressing sound radiation and securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. Therefore, we investigated a suitable method for calculating the sound pressure level at the far field to confirm the variation of the distribution of sound pressure level determined on the basis of the window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinite far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by the method used to calculate the distribution of the sound pressure level at an infinite far field using the wavenumber spectrum on the vibrating plate. According to the result of comparing the distributions of the sound pressure level in the cases with and without the window function, it was confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was
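    The mechanism described here, broadening of the wavenumber spectrum by the sharp edges of a finite plate and its suppression by a tapered particle-velocity distribution, can be illustrated with a plain FFT. The sketch below compares a uniform (rectangular) aperture with a Hann-tapered one; the plate length, sampling and probe wavenumber are arbitrary illustration values, and the Hann taper is only a generic stand-in for the optimized window of the paper.

```python
import numpy as np

L = 0.2                         # plate length in metres (illustrative)
n = 512                         # samples across the plate
zero_pad = 8 * n                # zero padding to resolve the wavenumber spectrum

v_rect = np.ones(n)                                                # uniform velocity (sharp edges)
v_hann = 0.5 * (1 - np.cos(2 * np.pi * np.arange(n) / (n - 1)))    # Hann-tapered velocity

k = 2 * np.pi * np.fft.rfftfreq(zero_pad, d=L / n)                 # wavenumber axis (rad/m)

def wavenumber_spectrum_dB(v):
    V = np.abs(np.fft.rfft(v, n=zero_pad))
    return 20 * np.log10(V / V.max() + 1e-12)

rect_dB = wavenumber_spectrum_dB(v_rect)
hann_dB = wavenumber_spectrum_dB(v_hann)

# High-wavenumber sidelobes correspond to components that radiate to the far field;
# the tapered distribution suppresses them by tens of dB.
k_probe = 200.0                  # rad/m, illustrative probe wavenumber
idx = np.argmin(np.abs(k - k_probe))
print(f"at k = {k_probe:.0f} rad/m: rectangular {rect_dB[idx]:.1f} dB, Hann {hann_dB[idx]:.1f} dB")
```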

  14. Sound localization and word discrimination in reverberant environment in children with developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Wendy Castro-Camacho

    2015-04-01

    Objective: To compare whether localization of sounds and discrimination of words in a reverberant environment differ between children with dyslexia and controls. Method: We studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles across the left and right auditory fields (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions; correct answers were compared. Results: Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test was poor in children with dyslexia at the left angles. In the reverberant test, children with dyslexia exhibited deficiencies at the -45°, -90°, and +45° angles. Conclusion: Children with dyslexia may have problems locating sounds and discriminating words at extreme locations of the horizontal plane in classrooms with reverberation.

  15. Digital Sound Encryption with Logistic Map and Number Theoretic Transform

    Science.gov (United States)

    Satria, Yudi; Gabe Rizky, P. H.; Suryadi, MT

    2018-03-01

    Digital sound security has limitations when encrypting in the frequency domain. A Number Theoretic Transform based on the field GF(2^521 − 1) improves on and solves that problem. The sound encryption algorithm is based on a combination of a chaos function and the Number Theoretic Transform; the chaos function used in this paper is the logistic map. The trials and simulations are conducted using 5 different digital sound test files in WAV format, each simulated at least 100 times. The resulting key stream is random, as verified by 15 NIST randomness tests. The key space formed is very large, more than 10^469. The processing speed of the encryption algorithm is only slightly affected by the Number Theoretic Transform.
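    A logistic-map keystream of the general kind described here can be sketched in a few lines. The parameter values, quantization to bytes and XOR encryption below are illustrative assumptions and not the authors' exact scheme, which additionally applies a Number Theoretic Transform over GF(2^521 − 1).

```python
import numpy as np

def logistic_keystream(n_bytes, x0=0.735, r=3.999999, burn_in=1000):
    """Generate a pseudo-random byte stream from the logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = r * x * (1.0 - x)
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF      # crude quantization of the chaotic orbit
    return out

def xor_encrypt(data: bytes, x0: float, r: float) -> bytes:
    ks = logistic_keystream(len(data), x0=x0, r=r)
    return bytes(np.frombuffer(data, dtype=np.uint8) ^ ks)

# Round trip on a stand-in for raw audio sample bytes
plain = bytes(range(16)) * 4
cipher = xor_encrypt(plain, x0=0.2024, r=3.99)
assert xor_encrypt(cipher, x0=0.2024, r=3.99) == plain    # XOR keystream is its own inverse
```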

  16. Numerical design and testing of a sound source for secondary calibration of microphones using the Boundary Element Method

    DEFF Research Database (Denmark)

    Cutanda Henriquez, Vicente; Juhl, Peter Møller; Barrera Figueroa, Salvador

    2009-01-01

    Secondary calibration of microphones in free field is performed by placing the microphone under calibration in an anechoic chamber with a sound source, and exposing it to a controlled sound field. A calibrated microphone is also measured as a reference. While the two measurements are usually made...... apart to avoid acoustic interaction. As a part of the project Euromet-792, aiming to investigate and improve methods for secondary free-field calibration of microphones, a sound source suitable for simultaneous secondary free-field calibration has been designed using the Boundary Element Method...... of the Danish Fundamental Metrology Institute (DFM). The design and verification of the source are presented in this communication....

  17. 33 CFR 334.410 - Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Albemarle Sound, Pamlico Sound... AND RESTRICTED AREA REGULATIONS § 334.410 Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations. (a) Target areas—(1) North Landing River (Currituck Sound...

  18. Active Noise Control Experiments using Sound Energy Flux

    Science.gov (United States)

    Krause, Uli

    2015-03-01

    This paper reports on the latest results concerning the active noise control approach using net flow of acoustic energy. The test set-up consists of two loudspeakers simulating the engine noise and two smaller loudspeakers which belong to the active noise system. The system is completed by two acceleration sensors and one microphone per loudspeaker. The microphones are located in the near sound field of the loudspeakers. The control algorithm including the update equation of the feed-forward controller is introduced. Numerical simulations are performed with a comparison to a state of the art method minimising the radiated sound power. The proposed approach is experimentally validated.
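    As a point of reference for the energy-flux controller reported here, the sketch below implements the conventional single-channel filtered-x LMS (FxLMS) feed-forward update, which is a common baseline in such experiments; the primary path, secondary path, filter length and step size are invented illustration values, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.standard_normal(n)                     # reference signal (engine-noise stand-in)
primary = np.array([0.0, 0.0, 0.9, 0.4, 0.2])  # primary path P(z) (invented)
secondary = np.array([0.0, 0.8, 0.3])          # secondary path S(z), assumed identified
d = np.convolve(x, primary)[:n]                # disturbance at the error microphone

L, mu = 16, 5e-4
w = np.zeros(L)                                # adaptive feed-forward filter W(z)
xbuf = np.zeros(L)                             # recent reference samples, newest first
fxbuf = np.zeros(L)                            # filtered-reference buffer
ybuf = np.zeros(len(secondary))                # recent controller outputs for S(z)
e = np.zeros(n)

for k in range(n):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = x[k]
    y = w @ xbuf                               # anti-noise sent to the control source
    ybuf = np.roll(ybuf, 1)
    ybuf[0] = y
    e[k] = d[k] + secondary @ ybuf             # residual measured at the error microphone
    fxbuf = np.roll(fxbuf, 1)
    fxbuf[0] = secondary @ xbuf[:len(secondary)]   # reference filtered by the secondary path
    w -= mu * e[k] * fxbuf                     # FxLMS gradient step

print(f"mean-square error: first 1000 samples {np.mean(e[:1000]**2):.3f}, "
      f"last 1000 samples {np.mean(e[-1000:]**2):.3f}")
```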

  19. L-type calcium channels refine the neural population code of sound level

    Science.gov (United States)

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  20. Prototype electronic stethoscope vs. conventional stethoscope for auscultation of heart sounds.

    Science.gov (United States)

    Kelmenson, Daniel A; Heath, Janae K; Ball, Stephanie A; Kaafarani, Haytham M A; Baker, Elisabeth M; Yeh, Daniel D; Bittner, Edward A; Eikermann, Matthias; Lee, Jarone

    2014-08-01

    In an effort to decrease the spread of hospital-acquired infections, many hospitals currently use disposable plastic stethoscopes in patient rooms. As an alternative, this study examines a prototype electronic stethoscope that does not break the isolation barrier between clinician and patient and may also improve the diagnostic accuracy of the stethoscope exam. This study aimed to investigate whether the new prototype electronic stethoscope improved auscultation of heart sounds compared to the standard conventional isolation stethoscope. In a controlled, non-blinded, cross-over study, clinicians were randomized to identify heart sounds with both the prototype electronic stethoscope and a conventional stethoscope. The primary outcome was the score on a 10-question heart sound identification test. In total, 41 clinicians completed the study. Subjects performed significantly better in the identification of heart sounds when using the prototype electronic stethoscope (median = 9 [7-10] vs. 8 [6-9] points, p value < …) and universally favoured the prototype electronic stethoscope. Clinicians using a new prototype electronic stethoscope achieved greater accuracy in identification of heart sounds and also universally favoured the new device, compared to the conventional stethoscope.

  1. Digital sound de-localisation as a game mechanic for novel bodily play

    DEFF Research Database (Denmark)

    Tiab, John; Rantakari, Juho; Halse, Mads Laurberg

    2016-01-01

    This paper describes an exertion gameplay mechanic involving player's partial control of their opponent's sound localization abilities. We developed this concept through designing and testing "The Boy and The Wolf" game. In this game, we combined deprivation of sight with a positional disparity between player bodily movement and sound. This facilitated intense gameplay supporting player creativity and spectator engagement. We use our observations and analysis of our game to offer a set of lessons learnt for designing engaging bodily play using disparity between sound and movement. Moreover, we describe our intended future explorations of this area.

  2. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
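    A minimal D2Q9 BGK lattice Boltzmann sketch of a propagating acoustic pulse is shown below (not the authors' code; the lattice size, relaxation time and pulse amplitude are arbitrary). In lattice units the sound speed is 1/sqrt(3), and the kinematic viscosity, which sets the numerical dissipation discussed above, is nu = (tau − 1/2)/3.

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

nx = ny = 200
tau = 0.55                                  # relaxation time; nu = (tau - 0.5) / 3
X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
rho = 1.0 + 1e-3 * np.exp(-((X - nx // 2)**2 + (Y - ny // 2)**2) / 50.0)  # Gaussian pulse
ux = np.zeros((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(150):
    # macroscopic moments
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision
    f += -(f - equilibrium(rho, ux, uy)) / tau
    # streaming with periodic boundaries
    for i, (cx, cy) in enumerate(c):
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)

p = (rho - 1.0) / 3.0                       # acoustic pressure fluctuation, c_s^2 = 1/3
print(f"peak |p| after 150 steps: {np.abs(p).max():.2e}")
```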

  3. Algorithmic modeling of the irrelevant sound effect (ISE) by the hearing sensation fluctuation strength.

    Science.gov (United States)

    Schlittmeier, Sabine J; Weissgerber, Tobias; Kerber, Stefan; Fastl, Hugo; Hellbrück, Jürgen

    2012-01-01

    Background sounds, such as narration, music with prominent staccato passages, and office noise impair verbal short-term memory even when these sounds are irrelevant. This irrelevant sound effect (ISE) is evoked by so-called changing-state sounds that are characterized by a distinct temporal structure with varying successive auditory-perceptive tokens. However, because of the absence of an appropriate psychoacoustically based instrumental measure, the disturbing impact of a given speech or nonspeech sound could not be predicted until now, but necessitated behavioral testing. Our database for parametric modeling of the ISE included approximately 40 background sounds (e.g., speech, music, tone sequences, office noise, traffic noise) and corresponding performance data that was collected from 70 behavioral measurements of verbal short-term memory. The hearing sensation fluctuation strength was chosen to model the ISE and describes the percept of fluctuations when listening to slowly modulated sounds (f_mod < …). For the background sounds, the algorithm estimated behavioral performance data in 63 of 70 cases within the interquartile ranges. In particular, all real-world sounds were modeled adequately, whereas the algorithm overestimated the (non-)disturbance impact of synthetic steady-state sounds that were constituted by a repeated vowel or tone. Implications of the algorithm's strengths and prediction errors are discussed.

  4. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2015-01-01

    Handbook for Sound Engineers is the most comprehensive reference available for audio engineers, and is a must read for all who work in audio.With contributions from many of the top professionals in the field, including Glen Ballou on interpretation systems, intercoms, assistive listening, and fundamentals and units of measurement, David Miles Huber on MIDI, Bill Whitlock on audio transformers and preamplifiers, Steve Dove on consoles, DAWs, and computers, Pat Brown on fundamentals, gain structures, and test and measurement, Ray Rayburn on virtual systems, digital interfacing, and preamplifiers

  5. Color improves “visual” acuity via sound

    OpenAIRE

    Levy-Tzedek, Shelly; Riemer, Dar; Amedi, Amir

    2014-01-01

    Visual-to-auditory sensory substitution devices (SSDs) convey visual information via sound, with the primary goal of making visual information accessible to blind and visually impaired individuals. We developed the EyeMusic SSD, which transforms shape, location, and color information into musical notes. We tested the “visual” acuity of 23 individuals (13 blind and 10 blindfolded sighted) on the Snellen tumbling-E test, with the EyeMusic. Participants were asked to determine the orientation of...

  6. Brain responses to sound intensity changes dissociate depressed participants and healthy controls.

    Science.gov (United States)

    Ruohonen, Elisa M; Astikainen, Piia

    2017-07-01

    Depression is associated with bias in emotional information processing, but less is known about the processing of neutral sensory stimuli. Of particular interest is the processing of sound intensity, which is suggested to indicate central serotonergic function. We tested whether event-related brain potentials (ERPs) to occasional changes in sound intensity can dissociate first-episode depressed, recurrent depressed and healthy control participants. The first-episode depressed showed larger N1 amplitude to deviant sounds compared to the recurrent depression group and control participants. In addition, both depression groups, but not the control group, showed larger N1 amplitude to deviant than standard sounds. Whether these manifestations of sensory over-excitability in depression are directly related to serotonergic neurotransmission requires further research. The method based on ERPs to sound intensity change is a fast and low-cost way to objectively measure brain activation and holds promise as a future diagnostic tool. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Effects of musical training on sound pattern processing in high-school students.

    Science.gov (United States)

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

    Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited at longer stimulus onset asynchrony (SOA) conditions in musicians than in non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than age-matched peers. Musical training facilitates detection of auditory patterns, allowing the ability to automatically recognize sequential sound patterns over longer time periods than non-musical counterparts.

  8. Sounding out the logo shot

    OpenAIRE

    Nicolai Jørgensgaard Graakjær

    2013-01-01

    This article focuses on how sound in combination with visuals (i.e. ‘branding by’) may possibly affect the signifying potentials (i.e. ‘branding effect’) of products and corporate brands (i.e. ‘branding of’) during logo shots in television commercials (i.e. ‘branding through’). This particular focus adds both to the understanding of sound in television commercials and to the understanding of sound brands. The article firstly presents a typology of sounds. Secondly, this typology is applied...

  9. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  10. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  11. SoleSound

    DEFF Research Database (Denmark)

    Zanotto, Damiano; Turchet, Luca; Boggs, Emily Marie

    2014-01-01

    This paper introduces the design of SoleSound, a wearable system designed to deliver ecological, audio-tactile, underfoot feedback. The device, which primarily targets clinical applications, uses an audio-tactile footstep synthesis engine informed by the readings of pressure and inertial sensors embedded in the footwear to integrate enhanced feedback modalities into the authors' previously developed instrumented footwear. The synthesis models currently implemented in the SoleSound simulate different ground surface interactions. Unlike similar devices, the system presented here is fully portable...

  12. Sound engineering for diesel engines; Sound Engineering an Dieselmotoren

    Energy Technology Data Exchange (ETDEWEB)

    Enderich, A.; Fischer, R. [MAHLE Filtersysteme GmbH, Stuttgart (Germany)

    2006-07-01

    The strong acceptance for vehicles powered by turbo-charged diesel engines encourages several manufacturers to think about sportive diesel concepts. The approach of suppressing unpleasant noise by the application of distinctive insulation steps is not adequate to satisfy sportive needs. The acoustics cannot follow the engine's performance. This report documents, that it is possible to give diesel-powered vehicles a sportive sound characteristic by using an advanced MAHLE motor-sound-system with a pressure-resistant membrane and an integrated load controlled flap. With this the specific acoustic disadvantages of the diesel engine, like the ''diesel knock'' or a rough engine running can be masked. However, by the application of a motor-sound-system you must not negate the original character of the diesel engine concept, but accentuate its strong torque characteristic in the middle engine speed range. (orig.)

  13. Sounding the Alert: Designing an Effective Voice for Earthquake Early Warning

    Science.gov (United States)

    Burkett, E. R.; Given, D. D.

    2015-12-01

    The USGS is working with partners to develop the ShakeAlert Earthquake Early Warning (EEW) system (http://pubs.usgs.gov/fs/2014/3083/) to protect life and property along the U.S. West Coast, where the highest national seismic hazard is concentrated. EEW sends an alert that shaking from an earthquake is on its way (in seconds to tens of seconds) to allow recipients or automated systems to take appropriate actions at their location to protect themselves and/or sensitive equipment. ShakeAlert is transitioning toward a production prototype phase in which test users might begin testing applications of the technology. While a subset of uses will be automated (e.g., opening fire house doors), other applications will alert individuals by radio or cellphone notifications and require behavioral decisions to protect themselves (e.g., "Drop, Cover, Hold On"). The project needs to select and move forward with a consistent alert sound to be widely and quickly recognized as an earthquake alert. In this study we combine EEW science and capabilities with an understanding of human behavior from the social and psychological sciences to provide insight toward the design of effective sounds to help best motivate proper action by alert recipients. We present a review of existing research and literature, compiled as considerations and recommendations for alert sound characteristics optimized for EEW. We do not yet address wording of an audible message about the earthquake (e.g., intensity and timing until arrival of shaking or possible actions), although it will be a future component to accompany the sound. We consider pitch(es), loudness, rhythm, tempo, duration, and harmony. Important behavioral responses to sound to take into account include that people respond to discordant sounds with anxiety, can be calmed by harmony and softness, and are innately alerted by loud and abrupt sounds, although levels high enough to be auditory stressors can negatively impact human judgment.

  14. An open access database for the evaluation of heart sound algorithms.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.

  15. Sonic mediations: body, sound, technology

    NARCIS (Netherlands)

    Birdsall, C.; Enns, A.

    2008-01-01

    Sonic Mediations: Body, Sound, Technology is a collection of original essays that represents an invaluable contribution to the burgeoning field of sound studies. While sound is often posited as having a bridging function, as a passive in-between, this volume invites readers to rethink the concept of

  16. System for actively reducing sound

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2005-01-01

    A system for actively reducing sound from a primary noise source, such as traffic noise, comprising: a loudspeaker connector for connecting to at least one loudspeaker for generating anti-sound for reducing said noisy sound; a microphone connector for connecting to at least a first microphone placed

  17. Spatial avoidance to experimental increase of intermittent and continuous sound in two captive harbour porpoises.

    Science.gov (United States)

    Kok, Annebelle C M; Engelberts, J Pamela; Kastelein, Ronald A; Helder-Hoek, Lean; Van de Voorde, Shirley; Visser, Fleur; Slabbekoorn, Hans

    2018-02-01

    The continuing rise in underwater sound levels in the oceans leads to disturbance of marine life. It is thought that one of the main impacts of sound exposure is the alteration of foraging behaviour of marine species, for example by deterring animals from a prey location, or by distracting them while they are trying to catch prey. So far, only limited knowledge is available on both mechanisms in the same species. The harbour porpoise (Phocoena phocoena) is a relatively small marine mammal that could quickly suffer fitness consequences from a reduction of foraging success. To investigate effects of anthropogenic sound on their foraging efficiency, we tested whether experimentally elevated sound levels would deter two captive harbour porpoises from a noisy pool into a quiet pool (Experiment 1) and reduce their prey-search performance, measured as prey-search time in the noisy pool (Experiment 2). Furthermore, we tested the influence of the temporal structure and amplitude of the sound on the avoidance response of both animals. Both individuals avoided the pool with elevated sound levels, but they did not show a change in search time for prey when trying to find a fish hidden in one of three cages. The combination of temporal structure and SPL caused variable patterns. When the sound was intermittent, increased SPL caused increased avoidance times. When the sound was continuous, avoidance was equal for all SPLs above a threshold of 100 dB re 1 μPa. Hence, we found no evidence for an effect of sound exposure on search efficiency, but sounds of different temporal patterns did cause spatial avoidance with distinct dose-response patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Analysis of the HVAC system's sound quality using the design of experiments

    International Nuclear Information System (INIS)

    Park, Sang Gil; Sim, Hyun Jin; Yoon, Ji Hyun; Jeong, Jae Eun; Choi, Byoung Jae; Oh, Jae Eung

    2009-01-01

    Human hearing is very sensitive to sound, so a subjective index of sound quality is required, and each sound-evaluation situation is characterized by a set of Sound Quality (SQ) metrics. When the level of a single frequency band is substituted, the effect of that substitution across the whole frequency range cannot be seen during SQ evaluation. In this study, the Design of Experiments (DOE) is used to analyze noise from an automotive Heating, Ventilating, and Air Conditioning (HVAC) system. The frequency domain is divided into 12 equal bands, and the level of each band is increased or decreased; the resulting changes are evaluated in terms of the 'loud' and 'sharp' attributes of the analyzed SQ. Using DOE effectively reduces the number of tests required, and the main result identifies, for each band, whether a change in level (an increase or decrease in sound pressure) or no change has the greatest effect on the identifiable characteristics of SQ. This makes it possible to select the objectively relevant frequency bands, and from the results obtained, the sensitivity of SQ to physical level changes in an arbitrary frequency band can be determined.
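
    A minimal sketch of the two-level screening idea described above, assuming each of the 12 frequency bands is either raised or lowered according to a Hadamard-based orthogonal design and that a subjective 'loud' rating is available for each synthesized test signal; the rating function below is a placeholder, not the study's jury data.

```python
# Two-level screening design for 12 frequency bands using a 16-run
# Hadamard (Plackett-Burman style) array, with main effects computed as
# the difference of mean ratings between the +1 and -1 settings.
import numpy as np
from scipy.linalg import hadamard

H = hadamard(16)              # 16-run orthogonal array of +1/-1
design = H[:, 1:13]           # columns 1..12 -> setting per frequency band

def loudness_score(band_settings):
    # Placeholder for the subjective 'loud' rating of the synthesized
    # HVAC noise with the given band increases/decreases.
    rng = np.random.default_rng(abs(hash(band_settings.tobytes())) % 2**32)
    return float(band_settings @ np.linspace(1.0, 0.2, 12) + rng.normal(0, 0.3))

responses = np.array([loudness_score(row) for row in design])

# Main effect of each band = mean response at +1 minus mean response at -1
effects = np.array([responses[design[:, k] == 1].mean()
                    - responses[design[:, k] == -1].mean()
                    for k in range(12)])
print("most influential band:", int(np.abs(effects).argmax()) + 1)
```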

  19. Phylogenetic review of tonal sound production in whales in relation to sociality

    Directory of Open Access Journals (Sweden)

    Agnarsson Ingi

    2007-08-01

    Full Text Available Abstract Background It is widely held that in toothed whales, high frequency tonal sounds called 'whistles' evolved in association with 'sociality' because in delphinids they are used in a social context. Recently, whistles were hypothesized to be an evolutionary innovation of social dolphins (the 'dolphin hypothesis'). However, both 'whistles' and 'sociality' are broad concepts each representing a conglomerate of characters. Many non-delphinids, whether solitary or social, produce tonal sounds that share most of the acoustic characteristics of delphinid whistles. Furthermore, hypotheses of character correlation are best tested in a phylogenetic context, which has hitherto not been done. Here we summarize data from over 300 studies on cetacean tonal sounds and social structure and phylogenetically test existing hypotheses on their co-evolution. Results Whistles are 'complex' tonal sounds of toothed whales that demark a more inclusive clade than the social dolphins. Whistles are also used by some riverine species that live in simple societies, and have been lost twice within the social delphinoids, all observations that are inconsistent with the dolphin hypothesis as stated. However, cetacean tonal sounds and sociality are intertwined: (1) increased tonal sound modulation significantly correlates with group size and social structure; (2) changes in tonal sound complexity are significantly concentrated on social branches. Also, duration and minimum frequency correlate as do group size and mean minimum frequency. Conclusion Studying the evolutionary correlation of broad concepts, rather than that of their component characters, is fraught with difficulty, while limits of available data restrict the detail in which component character correlations can be analyzed in this case. Our results support the hypothesis that sociality influences the evolution of tonal sound complexity. The level of social and whistle complexity are correlated, suggesting that

  20. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    Science.gov (United States)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds which they make. Thus developing a technique to localize sound sources amidst loud noise will support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique which searches for imperceptible sound in loud noise environments. Two speakers simultaneously played a noise of a generator and a voice decreased by 20 dB (= 1/100 of power) from the generator noise at an outdoor space where cicadas were making noise. The sound signal was received by a horizontally set linear microphone array 1.05 m in length and consisting of 15 microphones. The direction and the distance of the voice were computed and the sound of the voice was extracted and played back as an audible sound by array signal processing.
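
    The abstract does not name the exact array-processing algorithm, so the sketch below uses plain delay-and-sum beamforming as one common way to estimate the direction of a weak source with a 15-microphone linear array; the sampling rate and steering grid are assumptions, only the 1.05 m aperture and element count follow the abstract.

```python
# Delay-and-sum beamforming over a linear 15-microphone array:
# for each candidate azimuth, align the channels by the expected
# inter-microphone delays, sum them, and pick the angle of maximum power.
import numpy as np

C = 343.0                                        # speed of sound in air, m/s
FS = 48000                                       # assumed sampling rate, Hz
positions = np.linspace(-0.525, 0.525, 15)       # mic x-coordinates, m (1.05 m aperture)

def estimate_azimuth(signals, angles_deg=np.arange(-90, 91, 1)):
    """signals: array of shape (15, n_samples). Returns the best azimuth in degrees."""
    powers = []
    for ang in np.deg2rad(angles_deg):
        delays = positions * np.sin(ang) / C     # expected delay per microphone, s
        shifts = np.round(delays * FS).astype(int)
        summed = np.zeros(signals.shape[1])
        for channel, s in zip(signals, shifts):
            summed += np.roll(channel, -s)       # align and sum
        powers.append(np.mean(summed ** 2))
    return angles_deg[int(np.argmax(powers))]
```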

  1. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    Sounds in the natural environment form an important class of biologically relevant nonstationary signals. We propose a dynamic spectral measure to characterize the spectral dynamics of such non-stationary sound signals and classify them based on rate of change of spectral dynamics. We categorize sounds with slowly ...

  2. Controlling sound with acoustic metamaterials

    DEFF Research Database (Denmark)

    Cummer, Steven A. ; Christensen, Johan; Alù, Andrea

    2016-01-01

    Acoustic metamaterials can manipulate and control sound waves in ways that are not possible in conventional materials. Metamaterials with zero, or even negative, refractive index for sound offer new possibilities for acoustic imaging and for the control of sound at subwavelength scales....... The combination of transformation acoustics theory and highly anisotropic acoustic metamaterials enables precise control over the deformation of sound fields, which can be used, for example, to hide or cloak objects from incident acoustic energy. Active acoustic metamaterials use external control to create......-scale metamaterial structures and converting laboratory experiments into useful devices. In this Review, we outline the designs and properties of materials with unusual acoustic parameters (for example, negative refractive index), discuss examples of extreme manipulation of sound and, finally, provide an overview...

  3. Sound intensity as a function of sound insulation partition

    OpenAIRE

    Cvetkovic, S.; Prascevic, R.

    1994-01-01

    In modern engineering practice, the sound insulation of partitions is a synthesis of theory and of the experience acquired through field and laboratory measurements. The scientific and research community treats sound insulation in the context of the emission and propagation of acoustic energy in media with different acoustic impedance. In this paper, starting from the essence of the physical concept of intensity as an energy vector, the authors g...

  4. Why 'piss' is ruder than 'pee'? The role of sound in affective meaning making.

    Directory of Open Access Journals (Sweden)

    Arash Aryani

    Full Text Available Most language users agree that some words sound harsh (e.g. grotesque) whereas others sound soft and pleasing (e.g. lagoon). While this prominent feature of human language has always been creatively deployed in art and poetry, it is still largely unknown whether the sound of a word in itself makes any contribution to the word's meaning as perceived and interpreted by the listener. In a large-scale lexicon analysis, we focused on the affective substrates of words' meaning (i.e. affective meaning) and words' sound (i.e. affective sound); both being measured on a two-dimensional space of valence (ranging from pleasant to unpleasant) and arousal (ranging from calm to excited). We tested the hypothesis that the sound of a word possesses affective iconic characteristics that can implicitly influence listeners when evaluating the affective meaning of that word. The results show that a significant portion of the variance in affective meaning ratings of printed words depends on a number of spectral and temporal acoustic features extracted from these words after converting them to their spoken form (study1). In order to test the affective nature of this effect, we independently assessed the affective sound of these words using two different methods: through direct rating (study2a), and through acoustic models that we implemented based on pseudoword materials (study2b). In line with our hypothesis, the estimated contribution of words' sound to ratings of words' affective meaning was indeed associated with the affective sound of these words; with a stronger effect for arousal than for valence. Further analyses revealed crucial phonetic features potentially causing the effect of sound on meaning: For instance, words with short vowels, voiceless consonants, and hissing sibilants (as in 'piss') feel more arousing and negative. Our findings suggest that the process of meaning making is not solely determined by arbitrary mappings between formal aspects of words and

  5. Why 'piss' is ruder than 'pee'? The role of sound in affective meaning making.

    Science.gov (United States)

    Aryani, Arash; Conrad, Markus; Schmidtke, David; Jacobs, Arthur

    2018-01-01

    Most language users agree that some words sound harsh (e.g. grotesque) whereas others sound soft and pleasing (e.g. lagoon). While this prominent feature of human language has always been creatively deployed in art and poetry, it is still largely unknown whether the sound of a word in itself makes any contribution to the word's meaning as perceived and interpreted by the listener. In a large-scale lexicon analysis, we focused on the affective substrates of words' meaning (i.e. affective meaning) and words' sound (i.e. affective sound); both being measured on a two-dimensional space of valence (ranging from pleasant to unpleasant) and arousal (ranging from calm to excited). We tested the hypothesis that the sound of a word possesses affective iconic characteristics that can implicitly influence listeners when evaluating the affective meaning of that word. The results show that a significant portion of the variance in affective meaning ratings of printed words depends on a number of spectral and temporal acoustic features extracted from these words after converting them to their spoken form (study1). In order to test the affective nature of this effect, we independently assessed the affective sound of these words using two different methods: through direct rating (study2a), and through acoustic models that we implemented based on pseudoword materials (study2b). In line with our hypothesis, the estimated contribution of words' sound to ratings of words' affective meaning was indeed associated with the affective sound of these words; with a stronger effect for arousal than for valence. Further analyses revealed crucial phonetic features potentially causing the effect of sound on meaning: For instance, words with short vowels, voiceless consonants, and hissing sibilants (as in 'piss') feel more arousing and negative. Our findings suggest that the process of meaning making is not solely determined by arbitrary mappings between formal aspects of words and concepts they
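
    A minimal sketch of the kind of analysis described in the two records above: regressing affective ratings of printed words on acoustic features extracted from their spoken forms and reporting cross-validated explained variance. The feature matrix, coefficients and data below are synthetic placeholders, not the authors' materials.

```python
# Cross-validated linear regression of affective ratings on acoustic features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# X: one row per word, columns = acoustic features (e.g. spectral centroid,
#    vowel duration, proportion of voiceless consonants, sibilant energy, ...)
# y: mean arousal (or valence) rating of the printed word
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                         # placeholder feature matrix
y = X @ np.array([0.4, -0.2, 0.3, 0.0, 0.1, -0.1]) + rng.normal(0, 1, 1000)

model = LinearRegression()
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("variance in ratings explained by sound features: "
      f"{r2_scores.mean():.2f} (cross-validated R^2)")
```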

  6. 101-SY waste sample speed of sound/rheology testing for sonic probe program

    International Nuclear Information System (INIS)

    Cannon, N.S.

    1994-01-01

    One problem faced in the clean-up operation at Hanford is that a number of radioactive waste storage tanks are experiencing a periodic buildup and release of potentially explosive gases. The best known example is Tank 241-SY-101 (commonly referred to as 101-SY) in which hydrogen gas periodically built up within the waste to the point that increased buoyancy caused a roll-over event, in which the gas was suddenly released in potentially explosive concentrations (if an ignition source were present). The sonic probe concept is to generate acoustic vibrations in the 101-SY tank waste at nominally 100 Hz, with sufficient amplitude to cause the controlled release of hydrogen bubbles trapped in the waste. The sonic probe may provide a potentially cost-effective alternative to large mixer pumps now used for hydrogen mitigation purposes. Two important parameters needed to determine sonic probe effectiveness and design are the speed of sound and yield stress of the tank waste. Tests to determine these parameters in a 240 ml sample of 101-SY waste (obtained near the tank bottom) were performed, and the results are reported

  7. Improving the hospital 'soundscape': a framework to measure individual perceptual response to hospital sounds.

    Science.gov (United States)

    Mackrill, J B; Jennings, P A; Cain, R

    2013-01-01

    Work on the perception of urban soundscapes has generated a number of perceptual models which are proposed as tools to test and evaluate soundscape interventions. However, despite the excessive sound levels and noise within hospital environments, perceptual models have not been developed for these spaces. To address this, a two-stage approach was developed by the authors to create such a model. First, semantics were obtained from listening evaluations which captured the feelings of individuals from hearing hospital sounds. Then, 30 participants rated a range of sound clips representative of a ward soundscape based on these semantics. Principal component analysis extracted a two-dimensional space representing an emotional-cognitive response. The framework enables soundscape interventions to be tested which may improve the perception of these hospital environments.
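
    A small sketch of the dimensionality-reduction step described above, assuming the listening-evaluation ratings are arranged as a matrix of sound clips by semantic scales; the scale names and data are illustrative only, not the study's items.

```python
# Principal component analysis of clip-by-scale ratings, extracting a
# two-dimensional space analogous to the emotional-cognitive response.
import numpy as np
from sklearn.decomposition import PCA

scales = ["calming", "intrusive", "reassuring", "alarming", "mechanical", "human"]
rng = np.random.default_rng(1)
ratings = rng.normal(size=(40, len(scales)))     # 40 ward sound clips (placeholder)

pca = PCA(n_components=2)
scores = pca.fit_transform(ratings)              # clip coordinates in the 2-D space
print("variance explained:", pca.explained_variance_ratio_)
print("loadings of each semantic scale on the two components:")
print(np.round(pca.components_, 2))
```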

  8. Heart Sound Localization and Reduction in Tracheal Sounds by Gabor Time-Frequency Masking

    OpenAIRE

    SAATCI, Esra; Akan, Aydın

    2018-01-01

    Background and aim: Respiratory sounds, i.e. tracheal and lung sounds, have been of great interest due to their diagnostic values as well as the potential of their use in the estimation of the respiratory dynamics (mainly airflow). Thus the aim of the study is to present a new method to filter the heart sound interference from the tracheal sounds. Materials and methods: Tracheal sounds and airflow signals were collected by using an accelerometer from 10 healthy subjects. Tracheal sounds were then pr...

  9. Interactive physically-based sound simulation

    Science.gov (United States)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously-intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation

  10. 27 CFR 9.151 - Puget Sound.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Puget Sound. 9.151 Section... Sound. (a) Name. The name of the viticultural area described in this section is “Puget Sound.” (b) Approved maps. The appropriate maps for determining the boundary of the Puget Sound viticultural area are...

  11. How Pleasant Sounds Promote and Annoying Sounds Impede Health : A Cognitive Approach

    NARCIS (Netherlands)

    Andringa, Tjeerd C.; Lanser, J. Jolie L.

    2013-01-01

    This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of

  12. Of Sound Mind: Mental Distress and Sound in Twentieth-Century Media Culture

    NARCIS (Netherlands)

    Birdsall, C.; Siewert, S.

    2013-01-01

    This article seeks to specify the representation of mental disturbance in sound media during the twentieth century. It engages perspectives on societal and technological change across the twentieth century as crucial for aesthetic strategies developed in radio and sound film production. The analysis

  13. Sounds scary? Lack of habituation following the presentation of novel sounds.

    Directory of Open Access Journals (Sweden)

    Tine A Biedenweg

    Full Text Available BACKGROUND: Animals typically show less habituation to biologically meaningful sounds than to novel signals. We might therefore expect that acoustic deterrents should be based on natural sounds. METHODOLOGY: We investigated responses by western grey kangaroos (Macropus fuliginosus) towards playback of natural sounds (alarm foot stomps and Australian raven (Corvus coronoides) calls) and artificial sounds (faux snake hiss and bull whip crack). We then increased rate of presentation to examine whether animals would habituate. Finally, we varied frequency of playback to investigate optimal rates of delivery. PRINCIPAL FINDINGS: Nine behaviors clustered into five Principal Components. PC factors 1 and 2 (animals alert or looking, or hopping and moving out of area) accounted for 36% of variance. PC factor 3 (eating cessation, taking flight, movement out of area) accounted for 13% of variance. Factors 4 and 5 (relaxing, grooming and walking; 12 and 11% of variation, respectively) discontinued upon playback. The whip crack was most evocative; eating was reduced from 75% of time spent prior to playback to 6% following playback (post alarm stomp: 32%, raven call: 49%, hiss: 75%). Additionally, 24% of individuals took flight and moved out of area (50 m radius) in response to the whip crack (foot stomp: 0%, raven call: 8% and 4%, hiss: 6%). Increasing rate of presentation (12x/min × 2 min) caused 71% of animals to move out of the area. CONCLUSIONS/SIGNIFICANCE: The bull whip crack, an artificial sound, was as effective as the alarm stomp at eliciting aversive behaviors. Kangaroos did not fully habituate despite hearing the signal up to 20x/min. Highest rates of playback did not elicit the greatest responses, suggesting that 'more is not always better'. Ultimately, by utilizing both artificial and biological sounds, predictability may be masked or offset, so that habituation is delayed and more effective deterrents may be produced.

  14. Sound field simulation and acoustic animation in urban squares

    Science.gov (United States)

    Kang, Jian; Meng, Yan

    2005-04-01

    Urban squares are important components of cities, and the acoustic environment is important for their usability. While models and formulae for predicting the sound field in urban squares are important for their soundscape design and improvement, acoustic animation tools would be of great importance for designers as well as for the public participation process, given that below a certain sound level, the soundscape evaluation depends mainly on the type of sounds rather than the loudness. This paper first briefly introduces acoustic simulation models developed for urban squares, as well as empirical formulae derived from a series of simulations. It then presents an acoustic animation tool currently being developed. In urban squares there are multiple dynamic sound sources, so that the computation time becomes a main concern. Nevertheless, the requirements for acoustic animation in urban squares are relatively low compared to auditoria. As a result, it is important to simplify the simulation process and algorithms. Based on a series of subjective tests in a virtual reality environment with various simulation parameters, a fast simulation method with acceptable accuracy has been explored. [Work supported by the European Commission.]

  15. Virtual nature environment with nature sound exposure induce stress recovery by enhanced parasympathetic activity

    DEFF Research Database (Denmark)

    Annerstedt, Matilda; Jönsson, Peter; Wallergård, Mattias

    2013-01-01

    Experimental research on stress recovery in natural environments is limited, as is study of the effect of sounds of nature. After inducing stress by means of a virtual stress test, we explored physiological recovery in two different virtual natural environments (with and without exposure to sounds of nature) and in one control condition. Cardiovascular data and saliva cortisol were collected. Repeated ANOVA measurements indicated parasympathetic activation in the group subjected to sounds of nature in a virtual natural environment, suggesting enhanced stress recovery may occur in such surroundings. The group that recovered in virtual nature without sound and the control group displayed no particular autonomic activation or deactivation. The results demonstrate a potential mechanistic link between nature, the sounds of nature, and stress recovery, and suggest the potential importance of virtual reality...

  16. A Relational Database Model and Tools for Environmental Sound Recognition

    Directory of Open Access Journals (Sweden)

    Yuksel Arslan

    2017-12-01

    Full Text Available Environmental sound recognition (ESR) has become a hot topic in recent years. ESR is mainly based on machine learning (ML), and ML algorithms first require a training database. This database must comprise the sounds to be recognized and other related sounds. An ESR system needs the database during training, testing and in the production stage. In this paper, we present the design and pilot establishment of a database which will assist all researchers who want to establish an ESR system. This database employs a relational database model, which has not been used for this task before. We explain the design and implementation details of the database and the data collection and loading process. We also describe the tools and the graphical user interface developed for a desktop application and for the Web.

  17. Neuromorphic Audio-Visual Sensor Fusion on a Sound-Localising Robot

    Directory of Open Access Journals (Sweden)

    Vincent Yue-Sek Chan

    2012-02-01

    Full Text Available This paper presents the first robotic system featuring audio-visual sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localisation through self-motion and visual feedback, using an adaptive ITD-based sound localisation algorithm. After training, the robot can localise sound sources (white or pink noise) in a reverberant environment with an RMS error of 4 to 5 degrees in azimuth. In the second part of the paper, we investigate the source binding problem. An experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. The results show that this technique can be quite effective, despite its simplicity.
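
    A hedged sketch of the ITD idea underlying such a localisation algorithm: estimate the interaural time difference from the cross-correlation lag between the two channels and convert it to an azimuth. The microphone spacing and sampling rate are assumptions, and the adaptive calibration through self-motion and visual feedback used on the robot is not reproduced here.

```python
# Interaural-time-difference (ITD) azimuth estimate from two channels.
import numpy as np

FS = 48000          # assumed sample rate, Hz
MIC_SEP = 0.15      # assumed inter-microphone distance, m
C = 343.0           # speed of sound, m/s

def itd_azimuth(left, right):
    """Estimate azimuth (degrees) from the lag that maximizes cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)          # lag of maximum correlation, samples
    itd = lag / FS                                    # interaural time difference, s
    sin_theta = np.clip(itd * C / MIC_SEP, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))
```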

  18. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

    Full Text Available OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and frequencies among normal hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between the speaker's axis in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p<0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  19. The effect of sound speed profile on shallow water shipping sound maps

    NARCIS (Netherlands)

    Sertlek, H.Ö.; Binnerts, B.; Ainslie, M.A.

    2016-01-01

    Sound mapping over large areas can be computationally expensive because of the large number of sources and large source-receiver separations involved. In order to facilitate computation, a simplifying assumption sometimes made is to neglect the sound speed gradient in shallow water. The accuracy of

  20. Sound wave transmission (image)

    Science.gov (United States)

    When sounds waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain where they are interpreted by the brain as sound. The hearing mechanisms within the inner ear, can ...

  1. Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation

    Directory of Open Access Journals (Sweden)

    Enrique A Lopez-Poveda

    2013-07-01

    Full Text Available Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects.

  2. Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation

    Science.gov (United States)

    Lopez-Poveda, Enrique A.; Barrios, Pablo

    2013-01-01

    Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176
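
    A simplified sketch of the stochastic-undersampling vocoder described in the two records above: the signal is split into ten bands, each simulated 'afferent' keeps samples of the band signal with a probability that grows with instantaneous magnitude, and the kept samples from all afferents are aggregated. The filter design and the exact sampling rule are assumptions, not the authors' implementation.

```python
# Stochastic-undersampling vocoder sketch: band-filter, probabilistically
# subsample each band several times ('afferents'), aggregate, and sum bands.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000

def band_edges(n_bands=10, lo=100.0, hi=7000.0):
    return np.logspace(np.log10(lo), np.log10(hi), n_bands + 1)

def stochastic_vocoder(x, n_bands=10, samplers_per_band=5, rng=None):
    """x: float waveform sampled at FS. Returns the degraded stimulus."""
    rng = rng or np.random.default_rng(0)
    edges = band_edges(n_bands)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfiltfilt(sos, x)
        # sampling probability follows normalized instantaneous magnitude,
        # so intense waveform features are encoded more reliably than weak ones
        p = np.abs(band) / (np.abs(band).max() + 1e-12)
        aggregate = np.zeros_like(band)
        for _ in range(samplers_per_band):
            keep = rng.random(band.size) < p
            aggregate += np.where(keep, band, 0.0)
        out += aggregate / samplers_per_band
    return out
```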

  3. Sound & The Society

    DEFF Research Database (Denmark)

    Schulze, Holger

    2014-01-01

    How are those sounds you hear right now socially constructed and evaluated, how are they architecturally conceptualized and how dependent on urban planning, industrial developments and political decisions are they really? How is your ability to hear intertwined with social interactions and their ...... and their professional design? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Nina Backmann, Jochen Bonz, Stefan Krebs, Esther Schelander & Holger Schulze

  4. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    1. Introduction  2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium  3. Predicting the Acoustical Properties of Outdoor Ground Surfaces  4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models  5. Predicting Effects of Source Characteristics on Outdoor Sound  6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects  7. Influence of Source Motion on Ground Effect and Diffraction  8. Predicting Effects of Mixed Impedance Ground  9. Predicting the Performance of Outdoor Noise Barriers  10. Predicting Effects of Vegetation, Trees and Turbulence  11. Analytical Approximations including Ground Effect, Refraction and Turbulence  12. Prediction Schemes  13. Predicting Sound in an Urban Environment.

  5. The Use of an Open Field Model to Assess Sound-Induced Fear and Anxiety Associated Behaviors in Labrador Retrievers.

    Science.gov (United States)

    Gruen, Margaret E; Case, Beth C; Foster, Melanie L; Lazarowski, Lucia; Fish, Richard E; Landsberg, Gary; DePuy, Venita; Dorman, David C; Sherman, Barbara L

    2015-01-01

    Previous studies have shown that the playing of thunderstorm recordings during an open-field task elicits fearful or anxious responses in adult beagles. The goal of our study was to apply this open field test to assess sound-induced behaviors in Labrador retrievers drawn from a pool of candidate improvised explosive devices (IED)-detection dogs. Being robust to fear-inducing sounds and recovering quickly is a critical requirement of these military working dogs. This study presented male and female dogs, with 3 minutes of either ambient noise (Days 1, 3 and 5), recorded thunderstorm (Day 2), or gunfire (Day 4) sounds in an open field arena. Behavioral and physiological responses were assessed and compared to control (ambient noise) periods. An observer blinded to sound treatment analyzed video records of the 9-minute daily test sessions. Additional assessments included measurement of distance traveled (activity), heart rate, body temperature, and salivary cortisol concentrations. Overall, there was a decline in distance traveled and heart rate within each day and over the five-day test period, suggesting that dogs habituated to the open field arena. Behavioral postures and expressions were assessed using a standardized rubric to score behaviors linked to canine fear and anxiety. These fear/anxiety scores were used to evaluate changes in behaviors following exposure to a sound stressor. Compared to control periods, there was an overall increase in fear/anxiety scores during thunderstorm and gunfire sound stimuli treatment periods. Fear/anxiety scores were correlated with distance traveled, and heart rate. Fear/anxiety scores in response to thunderstorm and gunfire were correlated. Dogs showed higher fear/anxiety scores during periods after the sound stimuli compared to control periods. In general, candidate IED-detection Labrador retrievers responded to sound stimuli and recovered quickly, although dogs stratified in their response to sound stimuli. Some dogs were

  6. Combined multibeam and bathymetry data from Rhode Island Sound and Block Island Sound: a regional perspective

    Science.gov (United States)

    Poppe, Lawrence J.; McMullen, Katherine Y.; Danforth, William W.; Blankenship, Mark R.; Clos, Andrew R.; Glomb, Kimberly A.; Lewit, Peter G.; Nadeau, Megan A.; Wood, Douglas A.; Parker, Castleton E.

    2014-01-01

    Detailed bathymetric maps of the sea floor in Rhode Island and Block Island Sounds are of great interest to the New York, Rhode Island, and Massachusetts research and management communities because of this area's ecological, recreational, and commercial importance. Geologically interpreted digital terrain models from individual surveys provide important benthic environmental information, yet many applications of this information require a geographically broader perspective. For example, individual surveys are of limited use for the planning and construction of cross-sound infrastructure, such as cables and pipelines, or for the testing of regional circulation models. To address this need, we integrated 14 contiguous multibeam bathymetric datasets that were produced by the National Oceanic and Atmospheric Administration during charting operations into one digital terrain model that covers much of Block Island Sound and extends eastward across Rhode Island Sound. The new dataset, which covers over 1244 square kilometers, is adjusted to mean lower low water, gridded to 4-meter resolution, and provided in Universal Transverse Mercator Zone 19, North American Datum of 1983 and geographic World Geodetic Survey of 1984 projections. This resolution is adequate for sea-floor feature and process interpretation but is small enough to be queried and manipulated with standard Geographic Information System programs and to allow for future growth. Natural features visible in the data include boulder lag deposits of winnowed Pleistocene strata, sand-wave fields, and scour depressions that reflect the strength of oscillating tidal currents and scour by storm-induced waves. Bedform asymmetry allows interpretations of net sediment transport. Anthropogenic features visible in the data include shipwrecks and dredged channels. Together the merged data reveal a larger, more continuous perspective of bathymetric topography than previously available, providing a fundamental framework for

  7. Using K-Nearest Neighbor Classification to Diagnose Abnormal Lung Sounds

    Directory of Open Access Journals (Sweden)

    Chin-Hsing Chen

    2015-06-01

    Full Text Available A reported 30% of people worldwide have abnormal lung sounds, including crackles, rhonchi, and wheezes. To date, the traditional stethoscope remains the most popular tool used by physicians to diagnose such abnormal lung sounds, however, many problems arise with the use of a stethoscope, including the effects of environmental noise, the inability to record and store lung sounds for follow-up or tracking, and the physician’s subjective diagnostic experience. This study has developed a digital stethoscope to help physicians overcome these problems when diagnosing abnormal lung sounds. In this digital system, mel-frequency cepstral coefficients (MFCCs) were used to extract the features of lung sounds, and then the K-means algorithm was used for feature clustering, to reduce the amount of data for computation. Finally, the K-nearest neighbor method was used to classify the lung sounds. The proposed system can also be used for home care: if the percentage of abnormal lung sound frames is > 30% of the whole test signal, the system can automatically warn the user to visit a physician for diagnosis. We also used bend sensors together with an amplification circuit, Bluetooth, and a microcontroller to implement a respiration detector. The respiratory signal extracted by the bend sensors can be transmitted to the computer via Bluetooth to calculate the respiratory cycle, for real-time assessment. If an abnormal status is detected, the device will warn the user automatically. Experimental results indicated that the error in respiratory cycles between measured and actual values was only 6.8%, illustrating the potential of our detector for home care applications.
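
    A hedged sketch of the classification pipeline described above (MFCC features, K-means to compress the frames of each recording, K-nearest-neighbour classification, and the >30% abnormal-frame warning rule). Parameter values, label names and the file-path helpers are placeholders, not the authors' settings.

```python
# MFCC -> K-means compression -> KNN classification -> 30% warning rule.
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def frame_features(path, sr=4000, n_mfcc=13):
    """Load a lung-sound recording and return per-frame MFCC vectors."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, n_mfcc)

def compress(frames, k=8):
    """Reduce a variable number of frames to k cluster centroids."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(frames).cluster_centers_

def train(train_paths, train_labels):
    """train_labels: one label per recording, e.g. 'normal', 'crackle', 'wheeze'."""
    X = np.vstack([compress(frame_features(p)) for p in train_paths])
    y = np.repeat(train_labels, 8)                  # one label per centroid
    return KNeighborsClassifier(n_neighbors=5).fit(X, y)

def should_warn(knn, test_path, threshold=0.30):
    """Warn if more than 30% of the frames are classified as abnormal."""
    labels = knn.predict(frame_features(test_path))
    return np.mean(labels != "normal") > threshold
```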

  8. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as visual phenomena–commercial messages, as for instance banner ads that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents...... an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main...... argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content....

  9. The Aesthetic Experience of Sound

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2005-01-01

    The use of sound in (3D) computer games basically falls into two categories. Sound is used as an element in the design of the set and as a narrative. As set design, sound stages the nature of the environment and brings it to life. As a narrative, it brings us information that we can choose to, or perhaps need, to react on. In an ecological understanding of hearing, our detection of audible information affords us ways of responding to our environment. In my paper I will address both these ways of using sound in relation to computer games. Since a game player is responsible for the unfolding of the game, his exploration of the virtual space laid out before him is pertinent. In this mood of exploration, sound is important and contributes heavily to the aesthetic of the experience.

  10. Principles of underwater sound

    National Research Council Canada - National Science Library

    Urick, Robert J

    1983-01-01

    ... the immediately useful help they need for sonar problem solving. Its coverage is broad, ranging from the basic concepts of sound in the sea to making performance predictions in such applications as depth sounding, fish finding, and submarine detection...

  11. Sounding the field: recent works in sound studies.

    Science.gov (United States)

    Boon, Tim

    2015-09-01

    For sound studies, the publication of a 593-page handbook, not to mention the establishment of at least one society - the European Sound Studies Association - might seem to signify the emergence of a new academic discipline. Certainly, the books under consideration here, alongside many others, testify to an intensification of concern with the aural dimensions of culture. Some of this work comes from HPS and STS, some from musicology and cultural studies. But all of it should concern members of our disciplines, as it represents a long-overdue foregrounding of the aural in how we think about the intersections of science, technology and culture.

  12. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which `acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ , with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
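
    For reference, the standard relations implied by the abstract, with c reinterpreted as the speed of sound and v the speed of a sound-clock chain through the medium (not quoted from the paper):

```latex
% Lorentz factor and the resulting time dilation / length contraction
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
\Delta t_{\text{moving}} = \gamma\,\Delta t_{\text{rest}}, \qquad
L_{\text{moving}} = L_{\text{rest}}/\gamma .
```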

  13. Acoustically sticky topographic metasurfaces for underwater sound absorption.

    Science.gov (United States)

    Lee, Hunki; Jung, Myungki; Kim, Minsoo; Shin, Ryung; Kang, Shinill; Ohm, Won-Suk; Kim, Yong Tae

    2018-03-01

    A class of metasurfaces for underwater sound absorption, based on a design principle that maximizes thermoviscous loss, is presented. When a sound meets a solid surface, it leaves a footprint in the form of thermoviscous boundary layers in which energy loss takes place. Considered to be a nuisance, this acoustic to vorticity/entropy mode conversion and the subsequent loss are often ignored in the existing designs of acoustic metamaterials and metasurfaces. The metasurface created is made of a series of topographic meta-atoms, i.e., intaglios and reliefs engraved directly on the solid object to be concealed. The metasurface is acoustically sticky in that it rather facilitates the conversion of the incident sound to vorticity and entropy modes, hence the thermoviscous loss, leading to the desired anechoic property. A prototype metasurface machined on a brass object is tested for its anechoicity, and shows a multitude of absorption peaks as large as unity in the 2-5 MHz range. Computations also indicate that a topographic metasurface is robust to hydrostatic pressure variation, a quality much sought-after in underwater applications.
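
    The loss mechanism discussed above is set by the thicknesses of the acoustic viscous and thermal boundary layers; the standard expressions (not taken from the paper) are, with ν the kinematic viscosity, α the thermal diffusivity and ω the angular frequency:

```latex
% Viscous and thermal boundary-layer thicknesses at a rigid wall
\delta_{\nu} = \sqrt{\frac{2\nu}{\omega}}, \qquad
\delta_{\mathrm{th}} = \sqrt{\frac{2\alpha}{\omega}} .
```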

  14. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," which is a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  15. Sound Synthesis and Evaluation of Interactive Footsteps and Environmental Sounds Rendering for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-01-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based ...... a soundscape significantly improves the recognition of the simulated environment....

  16. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.

  17. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, accuracy, as well as the F1 measure were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for
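
    A small sketch of the segmentation scoring described above: a detected onset counts as correct if it falls within a tolerance window of a reference annotation, and sensitivity, positive predictivity and F1 follow from the matched counts. The window size and example times are illustrative, not the study's values.

```python
# Tolerance-window scoring of detected heart-sound state onsets against
# reference annotations, using greedy nearest-neighbour matching.
import numpy as np

def score_segmentation(detected, reference, tol_s=0.10):
    """detected, reference: sorted onset times in seconds."""
    detected, reference = np.asarray(detected), np.asarray(reference)
    matched_ref = set()
    tp = 0
    for d in detected:
        idx = int(np.argmin(np.abs(reference - d)))
        if abs(reference[idx] - d) <= tol_s and idx not in matched_ref:
            tp += 1
            matched_ref.add(idx)
    fp = len(detected) - tp
    fn = len(reference) - tp
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * sensitivity * ppv / (sensitivity + ppv) if sensitivity + ppv else 0.0
    return sensitivity, ppv, f1

print(score_segmentation([0.11, 0.92, 1.75], [0.10, 0.90, 1.70, 2.50]))
```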

  18. Using therapeutic sound with progressive audiologic tinnitus management.

    Science.gov (United States)

    Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A

    2008-09-01

    Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound, broadly categorized as environmental sound, music, and speech, resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).

  19. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Elena Geangu

    2015-04-01

    Full Text Available Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds’ processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds.

  20. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine eSignoret

    2011-07-01

    Full Text Available While it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  1. Sound as Popular Culture

    DEFF Research Database (Denmark)

    The wide-ranging texts in this book take as their premise the idea that sound is a subject through which popular culture can be analyzed in an innovative way. From an infant’s gurgles over a baby monitor to the roar of the crowd in a stadium to the sub-bass frequencies produced by sound systems...... in the disco era, sound—not necessarily aestheticized as music—is inextricably part of the many domains of popular culture. Expanding the view taken by many scholars of cultural studies, the contributors consider cultural practices concerning sound not merely as semiotic or signifying processes but as material......, physical, perceptual, and sensory processes that integrate a multitude of cultural traditions and forms of knowledge. The chapters discuss conceptual issues as well as terminologies and research methods; analyze historical and contemporary case studies of listening in various sound cultures; and consider...

  2. Fourth sound in relativistic superfluidity theory

    International Nuclear Information System (INIS)

    Vil'chinskij, S.I.; Fomin, P.I.

    1995-01-01

    The Lorentz-covariant equations describing propagation of the fourth sound in the relativistic theory of superfluidity are derived. The expressions for the velocity of the fourth sound are obtained. The character of oscillation in sound is determined

  3. The science of sound recording

    CERN Document Server

    Kadis, Jay

    2012-01-01

    The Science of Sound Recording will provide you with more than just an introduction to sound and recording, it will allow you to dive right into some of the technical areas that often appear overwhelming to anyone without an electrical engineering or physics background.  The Science of Sound Recording helps you build a basic foundation of scientific principles, explaining how recording really works. Packed with valuable must know information, illustrations and examples of 'worked through' equations this book introduces the theory behind sound recording practices in a logical and prac

  4. Nuclear sound

    International Nuclear Information System (INIS)

    Wambach, J.

    1991-01-01

    Nuclei, like more familiar mechanical systems, undergo simple vibrational motion. Among these vibrations, sound modes are of particular interest since they reveal important information on the effective interactions among the constituents and, through extrapolation, on the bulk behaviour of nuclear and neutron matter. Sound wave propagation in nuclei shows strong quantum effects familiar from other quantum systems. Microscopic theory suggests that the restoring forces are caused by the complex structure of the many-Fermion wavefunction and, in some cases, have no classical analogue. The damping of the vibrational amplitude is strongly influenced by phase coherence among the particles participating in the motion. (author)

  5. Students' Learning of a Generalized Theory of Sound Transmission from a Teaching-Learning Sequence about Sound, Hearing and Health

    Science.gov (United States)

    West, Eva; Wallin, Anita

    2013-04-01

    Learning abstract concepts such as sound often involves an ontological shift because to conceptualize sound transmission as a process of motion demands abandoning sound transmission as a transfer of matter. Thus, for students to be able to grasp and use a generalized model of sound transmission poses great challenges for them. This study involved 199 students aged 10-14. Their views about sound transmission were investigated before and after teaching by comparing their written answers about sound transfer in different media. The teaching was built on a research-based teaching-learning sequence (TLS), which was developed within a framework of design research. The analysis involved interpreting students' underlying theories of sound transmission, including the different conceptual categories that were found in their answers. The results indicated a shift in students' understandings from the use of a theory of matter before the intervention to embracing a theory of process afterwards. The described pattern was found in all groups of students irrespective of age. Thus, teaching about sound and sound transmission is fruitful already at the ages of 10-11. However, the older the students, the more advanced is their understanding of the process of motion. In conclusion, the use of a TLS about sound, hearing and auditory health promotes students' conceptualization of sound transmission as a process in all grades. The results also imply some crucial points in teaching and learning about the scientific content of sound.

  6. Digitizing a sound archive

    DEFF Research Database (Denmark)

    Cone, Louise

    2017-01-01

    In 1990 an artist by the name of William Louis Sørensen was hired by the National Gallery of Denmark to collect important works of art – made from sound. His job was to acquire sound art, but also recordings that captured rare artistic occurrences, music, performances and happenings from both Danish and international artists. His methodology left us with a large collection of unique and inspirational time-based media sound artworks that have, until very recently, been inaccessible. Existing on an array of different media formats, such as open reel tapes, 8-track and 4 track cassettes, VHS...

  7. EXTRACTION OF SPATIAL PARAMETERS FROM CLASSIFIED LIDAR DATA AND AERIAL PHOTOGRAPH FOR SOUND MODELING

    Directory of Open Access Journals (Sweden)

    S. Biswas

    2012-07-01

    Full Text Available Prediction of outdoor sound levels in 3D space is important for noise management, soundscaping etc. Outdoor sound levels can be predicted using sound propagation models which need terrain parameters. The existing practices of incorporating terrain parameters into models are often limited due to inadequate data or inability to determine accurate sound transmission paths through a terrain. This leads to poor accuracy in modelling. LIDAR data and Aerial Photograph (or Satellite Images) provide opportunity to incorporate high resolution data into sound models. To realize this, identification of buildings and other objects and their use for extraction of terrain parameters are fundamental. However, development of a suitable technique, to incorporate terrain parameters from classified LIDAR data and Aerial Photograph, for sound modelling is a challenge. Determination of terrain parameters along various transmission paths of sound from sound source to a receiver becomes very complex in an urban environment due to the presence of varied and complex urban features. This paper presents a technique to identify the principal paths through which sound transmits from source to receiver. Further, the identified principal paths are incorporated inside the sound model for sound prediction. Techniques based on plane cutting and line tracing are developed for determining principal paths and terrain parameters, which use various information, e.g., building corners and edges, triangulated ground, tree points and locations of source and receiver. The techniques developed are validated through a field experiment. Finally efficacy of the proposed technique is demonstrated by developing a noise map for a test site.

  8. Parallel-plate third sound waveguides with fixed and variable plate spacings for the study of fifth sound in superfluid helium

    International Nuclear Information System (INIS)

    Jelatis, G.J.

    1983-01-01

    Third sound in superfluid helium four films has been investigated using two parallel-plate waveguides. These investigations led to the observation of fifth sound, a new mode of sound propagation. Both waveguides consisted of two parallel pieces of vitreous quartz. The sound speed was obtained by measuring the time-of-flight of pulsed third sound over a known distance. Investigations from 1.0-1.7K were possible with the use of superconducting bolometers, which measure the temperature component of the third sound wave. Observations were initially made with a waveguide having a plate separation fixed at five microns. Adiabatic third sound was measured in the geometry. Isothermal third sound was also observed, using the usual, single-substrate technique. Fifth sound speeds, calculated from the two-fluid theory of helium and the speeds of the two forms of third sound, agreed in size and temperature dependence with theoretical predictions. Nevertheless, only equivocal observations of fifth sound were made. As a result, the film-substrate interaction was examined, and estimates of the Kapitza conductance were made. Assuming the dominance of the effects of this conductance over those due to the ECEs led to a new expression for fifth sound. A reanalysis of the initial data was made, which contained no adjustable parameters. The observation of fifth sound was seen to be consistent with the existence of an anomalously low boundary conductance

  9. Taiwanese Middle School Students' Materialistic Concepts of Sound

    Science.gov (United States)

    Eshach, Haim; Lin, Tzu-Chiang; Tsai, Chin-Chung

    2016-01-01

    This study investigated if and to what extent grade 8 and 9 students in Taiwan attributed materialistic properties to sound concepts, and whether they hold scientific views in parallel with materialistic views. Taiwanese middle school students are a special population since their scores in international academic comparison tests such as TIMSS and…

  10. Sound propagation in cities

    NARCIS (Netherlands)

    Salomons, E.; Polinder, H.; Lohman, W.; Zhou, H.; Borst, H.

    2009-01-01

    A new engineering model for sound propagation in cities is presented. The model is based on numerical and experimental studies of sound propagation between street canyons. Multiple reflections in the source canyon and the receiver canyon are taken into account in an efficient way, while weak

  11. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  12. Exploring Noise: Sound Pollution.

    Science.gov (United States)

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  13. Investigations of mode I crack propagation in fibre-reinforced plastics with real time X-ray tests and simultaneous sound emission analysis

    International Nuclear Information System (INIS)

    Brunner, A.; Nordstrom, R.; Flueeler, P.

    1992-01-01

    The described investigation of crack formation and crack propagation in mode I (tensile stress) in fibre-reinforced plastic samples, especially uni-directional carbon fibre reinforced polyether-ether ketone (PEEK) has several aims. On the one hand, the phenomena of crack formation and crack propagation in these materials are to be studied, and on the other hand, the draft standards for these tests are to be checked. It was found that the combination of real time X-ray tests and simultaneous sound emission analysis is excellently suited for the basic examination of crack formation and crack propagation in DCB samples. With the aid of picture processing and analysis of the video representation, consistent crack lengths and resulting G_IC values can be determined. (orig./RHM) [de

  14. Photoacoustic Sounds from Meteors.

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, Richard E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tencer, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sweatt, William C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hogan, Roy E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Spurny, Pavel [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic)

    2015-03-01

    High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer’s ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  15. Urban Sound Interfaces

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2012-01-01

    This paper draws on the theories of Michel de Certeau and Gaston Bachelard to discuss how media architecture, in the form of urban sound interfaces, can help us perceive the complexity of the spaces we inhabit, by exploring the history and the narratives of the places in which we live. In this paper, three sound works are discussed in relation to the iPod, which is considered as a more private way to explore urban environments, and as a way to control the individual perception of urban spaces.

  16. Device for precision measurement of speed of sound in a gas

    Science.gov (United States)

    Kelner, Eric; Minachi, Ali; Owen, Thomas E.; Burzynski, Jr., Marion; Petullo, Steven P.

    2004-11-30

    A sensor for measuring the speed of sound in a gas. The sensor has a helical coil, through which the gas flows before entering an inner chamber. Flow through the coil brings the gas into thermal equilibrium with the test chamber body. After the gas enters the chamber, a transducer produces an ultrasonic pulse, which is reflected from each of two faces of a target. The time difference between the two reflected signals is used to determine the speed of sound in the gas.
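
    The description above implies a simple time-of-flight calculation: the pulse reflects off two target faces a known distance apart, and the delay between the two echoes gives the speed of sound. Below is a minimal sketch of that arithmetic in Python; the face spacing and echo arrival times are made-up illustrative values, not the sensor's specifications.

```python
# Sketch of the two-echo time-of-flight arithmetic described above: the
# pulse reflects off two target faces a known distance apart, and the
# delay between the echoes gives the speed of sound in the gas.
# The face spacing and arrival times below are illustrative values only.

face_spacing_m = 0.010          # assumed distance between the two target faces
t_first_echo_s = 250.0e-6       # assumed arrival time of the near-face echo
t_second_echo_s = 308.0e-6      # assumed arrival time of the far-face echo

# the pulse travels the extra face spacing twice (out and back)
delta_t = t_second_echo_s - t_first_echo_s
speed_of_sound = 2.0 * face_spacing_m / delta_t
print(f"speed of sound = {speed_of_sound:.1f} m/s")   # about 344.8 m/s here
```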

  17. Sound field separation with sound pressure and particle velocity measurements

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-01-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance...
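
    To illustrate the equivalent-source formulation with separate transfer matrices for outgoing and incoming waves, here is a minimal numerical sketch in Python. The free-space monopole model, the geometry, the frequency and the synthetic measurement are all assumptions made for illustration; the weighting scheme mentioned in the abstract is omitted and this is not the authors' implementation.

```python
# Minimal numerical sketch of the equivalent-source separation idea:
# measured pressures are modelled as H_out @ q_out + H_in @ q_in, with
# separate transfer matrices for sources on either side of the array,
# and the outgoing part is then reconstructed alone. Geometry, frequency
# and the monopole model are illustrative assumptions.
import numpy as np

k = 2 * np.pi * 500 / 343.0                      # wavenumber at 500 Hz

def monopole(src, rec):
    """Free-space monopole transfer matrix between source and receiver points."""
    r = np.linalg.norm(rec[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# microphone array in the plane z = 0, equivalent sources on both sides of it
mics = np.array([[x, y, 0.0] for x in np.linspace(-0.2, 0.2, 5)
                             for y in np.linspace(-0.2, 0.2, 5)])
src_out = np.array([[x, y, -0.3] for x in np.linspace(-0.2, 0.2, 3)
                                 for y in np.linspace(-0.2, 0.2, 3)])
src_in = src_out.copy(); src_in[:, 2] = +0.3

H_out, H_in = monopole(src_out, mics), monopole(src_in, mics)

# synthesize a measurement: one true source on each side of the array
p = (monopole(np.array([[0.05, 0.0, -0.5]]), mics) @ [1.0]
     + monopole(np.array([[-0.1, 0.1, 0.6]]), mics) @ [0.5])

# least-squares fit of both equivalent-source sets, then keep the outgoing part
q, *_ = np.linalg.lstsq(np.hstack([H_out, H_in]), p, rcond=None)
p_outgoing = H_out @ q[:len(src_out)]
print(np.round(np.abs(p_outgoing[:5]), 4))
```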

  18. Electromagnetic sounding of the Earth's interior

    CERN Document Server

    Spichak, Viacheslav V

    2015-01-01

    Electromagnetic Sounding of the Earth's Interior 2nd edition provides a comprehensive up-to-date collection of contributions, covering methodological, computational and practical aspects of Electromagnetic sounding of the Earth by different techniques at global, regional and local scales. Moreover, it contains new developments such as the concept of self-consistent tasks of geophysics and 3-D interpretation of the TEM sounding which, so far, have not all been covered by one book. Electromagnetic Sounding of the Earth's Interior 2nd edition consists of three parts: I- EM sounding methods, II- Forward modelling and inversion techniques, and III - Data processing, analysis, modelling and interpretation. The new edition includes brand new chapters on Pulse and frequency electromagnetic sounding for hydrocarbon offshore exploration. Additionally all other chapters have been extensively updated to include new developments. Presents recently developed methodological findings of the earth's study, including seism...

  19. 21 CFR 876.4590 - Interlocking urethral sound.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Interlocking urethral sound. 876.4590 Section 876...) MEDICAL DEVICES GASTROENTEROLOGY-UROLOGY DEVICES Surgical Devices § 876.4590 Interlocking urethral sound. (a) Identification. An interlocking urethral sound is a device that consists of two metal sounds...

  20. Effects of task-switching on neural representations of ambiguous sound input.

    Science.gov (United States)

    Sussman, Elyse S; Bregman, Albert S; Lee, Wei-Wei

    2014-11-01

    The ability to perceive discrete sound streams in the presence of competing sound sources relies on multiple mechanisms that organize the mixture of the auditory input entering the ears. Many studies have focused on mechanisms that contribute to integrating sounds that belong together into one perceptual stream (integration) and segregating those that come from different sound sources (segregation). However, little is known about mechanisms that allow us to perceive individual sound sources within a dynamically changing auditory scene, when the input may be ambiguous, and heard as either integrated or segregated. This study tested the question of whether focusing on one of two possible sound organizations suppressed representation of the alternative organization. We presented listeners with ambiguous input and cued them to switch between tasks that used either the integrated or the segregated percept. Electrophysiological measures indicated which organization was currently maintained in memory. If mutual exclusivity at the neural level was the rule, attention to one of two possible organizations would preclude neural representation of the other. However, significant MMNs were elicited to both the target organization and the unattended, alternative organization, along with the target-related P3b component elicited only to the designated target organization. Results thus indicate that both organizations (integrated and segregated) were simultaneously maintained in memory regardless of which task was performed. Focusing attention to one aspect of the sounds did not abolish the alternative, unattended organization when the stimulus input was ambiguous. In noisy environments, such as walking on a city street, rapid and flexible adaptive processes are needed to help facilitate rapid switching to different sound sources in the environment. Having multiple representations available to the attentive system would allow for such flexibility, needed in everyday situations to

  1. Application of porous material to reduce aerodynamic sound from bluff bodies

    International Nuclear Information System (INIS)

    Sueki, Takeshi; Takaishi, Takehisa; Ikeda, Mitsuru; Arai, Norio

    2010-01-01

    Aerodynamic sound derived from bluff bodies can be considerably reduced by flow control. In this paper, the authors propose a new method in which porous material covers a body surface as one of the flow control methods. From wind tunnel tests on flows around a bare cylinder and a cylinder with porous material, it has been clarified that the application of porous materials is effective in reducing aerodynamic sound. Correlation between aerodynamic sound and aerodynamic force fluctuation, and a surface pressure distribution of cylinders are measured to investigate a mechanism of aerodynamic sound reduction. As a result, the correlation between aerodynamic sound and aerodynamic force fluctuation exists in the flow around the bare cylinder and disappears in the flow around the cylinder with porous material. Moreover, the aerodynamic force fluctuation of the cylinder with porous material is less than that of the bare cylinder. The surface pressure distribution of the cylinder with porous material is quite different from that of the bare cylinder. These facts indicate that aerodynamic sound is reduced by suppressing the motion of vortices because aerodynamic sound is induced by the unstable motion of vortices. In addition, an instantaneous flow field in the wake of the cylinder is measured by application of the PIV technique. Vortices that are shed alternately from the bare cylinder disappear by application of porous material, and the region of zero velocity spreads widely behind the cylinder with porous material. Shear layers between the stationary region and the uniform flow become thin and stable. These results suggest that porous material mainly affects the flow field adjacent to bluff bodies and reduces aerodynamic sound by depriving momentum of the wake and suppressing the unsteady motion of vortices. (invited paper)

  2. Characterization of sound emitted by wind machines used for frost control

    Energy Technology Data Exchange (ETDEWEB)

    Gambino, V.; Gambino, T. [Aercoustics Engineering Ltd., Toronto, ON (Canada); Fraser, H.W. [Ontario Ministry of Agriculture, Food and Rural Affairs, Vineland, ON (Canada)

    2007-07-01

    Wind machines are used in Niagara-on-the-Lake to protect cold-sensitive crops against cold injury during winter's extreme cold temperatures, spring's late frosts and autumn's early frosts. The number of wind machines in Ontario has about doubled annually from only a few in the late 1990's, to more than 425 in 2006. They are not used for generating power. Noise complaints have multiplied as the number of wind machines has increased. The objective of this study was to characterize the sound produced by wind machines; learn why residents are annoyed by wind machine noise; and suggest ways to possibly reduce sound emissions. One part of the study explored acoustic emission characteristics, the sonic differences of units made by different manufacturers, sound propagation properties under typical use atmospheric conditions and low frequency noise impact potential. Tests were conducted with a calibrated Larson Davis 2900B portable spectrum analyzer. Sound was measured with a microphone whose frequency response covered the range 4 Hz to 20 kHz. The study examined and found several unique acoustic properties that are characteristic of wind machines. It was determined that noise from wind machines is due to both aerodynamic and mechanical effects, but aerodynamic sounds were found to be the most significant. It was concluded that full range or broadband sounds manifest themselves as noise components that extend throughout the audible frequency range from the blade-pass frequency to upwards of 1000 Hz. The sound spectrum of a wind machine is full of natural tones and impulses that give it a readily identifiable acoustic character. Atmospheric conditions including temperature, lapse rate, relative humidity, mild winds, gradients and atmospheric turbulence all play a significant role in the long range outdoor propagation of sound from wind machines. 6 refs., 6 figs.

  3. Fish protection at water intakes using a new signal development process and sound system

    International Nuclear Information System (INIS)

    Loeffelman, P.H.; Klinect, D.A.; Van Hassel, J.H.

    1991-01-01

    American Electric Power Company, Inc., is exploring the feasibility of using a patented signal development process and sound system to guide aquatic animals with underwater sound. Sounds from animals such as chinook salmon, steelhead trout, striped bass, freshwater drum, largemouth bass, and gizzard shad can be used to synthesize a new signal to stimulate the animal in the most sensitive portion of its hearing range. AEP's field tests during its research demonstrate that adult chinook salmon, steelhead trout and warmwater fish, and steelhead trout and chinook salmon smolts can be repelled with a properly-tuned system. The signal development process and sound system is designed to be transportable and use animals at the site to incorporate site-specific factors known to affect underwater sound, e.g., bottom shape and type, water current, and temperature. This paper reports that, because the overall goal of this research was to determine the feasibility of using sound to divert fish, it was essential that the approach use a signal development process which could be customized to animals and site conditions at any hydropower plant site

  4. Poetry Pages. Sound Effects.

    Science.gov (United States)

    Fina, Allan de

    1992-01-01

    Explains how elementary teachers can help students understand onomatopoeia, suggesting that they define onomatopoeia, share examples of it, read poems and have students discuss onomatopoeic words, act out common household sounds, write about sound effects, and create choral readings of onomatopoeic poems. Two appropriate poems are included. (SM)

  5. Deflection of resilient materials for reduction of floor impact sound.

    Science.gov (United States)

    Lee, Jung-Yoon; Kim, Jong-Mun

    2014-01-01

    Recently, many residents living in apartment buildings in Korea have been bothered by noise coming from the units above. In order to reduce noise pollution, communities are increasingly imposing bylaws, including the limitation of floor impact sound, minimum thickness of floors, and floor soundproofing solutions. This research effort focused specifically on the deflection of resilient materials in the floor sound insulation systems of apartment houses. The experimental program involved conducting twenty-seven material tests and testing ten sound-insulating floating concrete floor specimens. Two main parameters were considered in the experimental investigation: the seven types of resilient materials and the location of the loading point. The structural behavior of the sound-insulating floating floor was predicted using the Winkler method. The experimental and analytical results indicated that the cracking strength of the floating concrete floor significantly increased with increasing tangent modulus of the resilient material. The deflection of the floating concrete floor loaded at the side of the specimen was much greater than that of the floating concrete floor loaded at the center of the specimen. The Winkler model considering the effect of the modulus of the resilient materials was able to accurately predict the cracking strength of the floating concrete floor.
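
    As a rough illustration of a Winkler-type calculation for a floating slab on a resilient layer, the sketch below evaluates the classical infinite-beam-on-elastic-foundation formulas (characteristic parameter, peak deflection, peak bending moment) for a 1 m strip under a point load. All material values are assumed for illustration, and the formulas are the textbook Hetényi expressions, not necessarily the exact model used in the paper.

```python
# Minimal sketch of a Winkler-type check for a floating slab strip on a
# resilient layer: an infinite beam on an elastic foundation under a
# point load (classical Hetényi formulas). All values are assumptions
# chosen for illustration, not the paper's test parameters.

E = 25e9            # concrete elastic modulus (Pa), assumed
h = 0.05            # slab thickness (m), assumed
b = 1.0             # strip width (m)
k_layer = 20e6      # resilient layer stiffness per unit area (N/m^3), assumed
P = 1.0e3           # point load (N), assumed

I = b * h**3 / 12.0                      # second moment of area of the strip
k = k_layer * b                          # foundation modulus per unit length (N/m^2)
lam = (k / (4.0 * E * I)) ** 0.25        # characteristic parameter (1/m)

deflection = P * lam / (2.0 * k)         # peak deflection under the load
moment = P / (4.0 * lam)                 # peak bending moment under the load
stress = moment * (h / 2.0) / I          # extreme-fibre bending stress

print(f"deflection = {deflection*1e3:.3f} mm, bending stress = {stress/1e6:.2f} MPa")
```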

  6. Mobile sound: media art in hybrid spaces

    OpenAIRE

    Behrendt, Frauke

    2010-01-01

    The thesis explores the relationships between sound and mobility through an examination of sound art. The research engages with the intersection of sound, mobility and art through original empirical work and theoretically through a critical engagement with sound studies. In dialogue with the work of De Certeau, Lefebvre, Huhtamo and Habermas in terms of the poetics of walking, rhythms, media archeology and questions of publicness, I understand sound art as an experimental mobil...

  7. Sound source measurement by using a passive sound insulation and a statistical approach

    Science.gov (United States)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

    This paper describes a measurement technique developed by the authors that allows carrying out acoustic measurements inside noisy environments reducing background noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach allows improving of the sound insulation given only by the passive sound insulation system at low frequency. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case-studies here reported, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lower sound pressure level that can be measured by applying the proposed method and the measurement error related to its application are reported as well.

  8. Evaluation of Sound Quality, Boominess and Boxiness in Small Rooms

    DEFF Research Database (Denmark)

    Weisser, Adam; Rindel, Jens Holger

    2006-01-01

    The acoustics of small rooms has been studied with emphasis on sound quality, boominess and boxiness when the rooms are used for speech or music. Seven rooms with very different characteristics have been used for the study. Subjective listening tests were made using binaural recordings of reproduced speech and music. The test results were compared with a large number of objective acoustic parameters based on the frequency-dependent reverberation times measured in the rooms. This has led to the proposal of three new acoustic parameters, which have shown high correlation with the subjective ratings. The classical bass ratio definitions showed poor correlation with all subjective ratings. The overall sound quality ratings gave different results for speech and music. For speech the preferred mean RT should be as low as possible, whereas for music there was found a preferred range between 0...

  9. Film sound in preservation and presentation

    NARCIS (Netherlands)

    Campanini, S.

    2014-01-01

    What is the nature of film sound? How does it change through time? How can film sound be conceptually defined? To address these issues, this work assumes the perspective of film preservation and presentation practices, describing the preservation of early sound systems, as well as the presentation

  10. Evaluation of multichannel reproduced sound

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian Maria

    2007-01-01

    A study was conducted with the goal of quantifying auditory attributes which underlie listener preference for multichannel reproduced sound. Short musical excerpts were presented in mono, stereo and several multichannel formats to a panel of forty selected listeners. Scaling of auditory attributes......, as well as overall preference, was based on consistency tests of binary paired-comparison judgments and on modeling the choice frequencies using probabilistic choice models. As a result, the preferences of non-expert listeners could be measured reliably at a ratio scale level. Principal components derived...

  11. Categorization of common sounds by cochlear implanted and normal hearing adults.

    Science.gov (United States)

    Collett, E; Marx, M; Gaillard, P; Roby, B; Fraysse, B; Deguine, O; Barone, P

    2016-05-01

    Auditory categorization involves grouping of acoustic events along one or more shared perceptual dimensions which can relate to both semantic and physical attributes. This process involves both high level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI) device. The goal of this study was twofold: I) compare the categorization strategies of CI users and normal hearing listeners (NHL) II) investigate if any characteristics of the raw acoustic signal could explain the results. 16 experienced CI users and 20 NHL were tested using a Free-Sorting Task of 16 common sounds divided into 3 predefined categories of environmental, musical and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) show that CI users followed a similar categorization strategy to that of NHL and were able to discriminate between the three different types of sounds. However results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted the average pitch salience and average autocorrelation peak as being important for the perception and categorization of the sounds. The results therefore show that on a broad level of categorization CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however the perception of individual sounds remains challenging. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Development of the Hawk/Nike Hawk sounding rocket vehicles

    Science.gov (United States)

    Flowers, B. J.

    1976-01-01

    A new sounding rocket family, the Hawk and Nike-Hawk Vehicles, have been developed, flight tested and added to the NASA Sounding Rocket Vehicle Stable. The Hawk is a single-stage vehicle that will carry 35.6 cm diameter payloads weighing 45.5 kg to 91 kg to altitudes of 78 km to 56 km, respectively. The two-stage Nike-Hawk will carry payloads weighing 68 kg to 136 kg to altitudes of 118 km to 113 km, respectively. Both vehicles utilize the XM22E8 Hawk rocket motor which is available in large numbers as a surplus item from the U.S. Army. The Hawk fin and tail can hardware were designed in-house. The Nike tail can and fin hardware are surplus Nike-Ajax booster hardware. Development objectives were to provide a vehicle family with a larger diameter, larger volume payload capability than the Nike-Apache and Nike-Tomahawk vehicles at comparable cost. Both vehicles performed nominally in flight tests.

  13. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
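
    A toy numerical illustration of the opponent-channel idea follows: two broadly tuned channels preferring opposite hemifields are combined by subtraction (here normalized by their sum), yielding a monotonic azimuth code that is unaffected by an overall gain change, mirroring the reported level invariance. The sigmoid tuning curves and gain values are assumptions, not fits to the fMRI data.

```python
# Toy illustration of opponent-channel coding of azimuth: two broadly
# tuned channels prefer opposite hemifields, and their normalized
# difference gives a monotonic, roughly level-invariant code for azimuth.
# The sigmoid tuning curves and gain values are illustrative assumptions.
import numpy as np

def channel(azimuth_deg, preferred_side, gain=1.0, slope=0.05):
    """Broad sigmoid tuning that favours one hemifield (+1 right, -1 left)."""
    return gain / (1.0 + np.exp(-slope * preferred_side * azimuth_deg))

azimuths = np.array([-90, -45, 0, 45, 90], dtype=float)
for gain in (1.0, 2.0):                      # overall level scales both channels
    right = channel(azimuths, +1, gain)
    left = channel(azimuths, -1, gain)
    opponent = (right - left) / (right + left)   # normalized opponent signal
    print(gain, np.round(opponent, 3))           # same code at both levels
```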

  14. A study on the sound quality evaluation model of mechanical air-cleaners

    DEFF Research Database (Denmark)

    Ih, Jeong-Guon; Jang, Su-Won; Jeong, Cheol-Ho

    2009-01-01

    In operating the air-cleaner for a long time, people in a quiet enclosed space expect low sound at low operational levels for a routine cleaning of air. However, in the condition of high operational levels of the cleaner, a powerful yet nonannoying sound is desired, which is connected to a feeling...... of an immediate cleaning of pollutants. In this context, it is important to evaluate and design the air-cleaner noise to satisfy such contradictory expectations from the customers. In this study, a model for evaluating the sound quality of air-cleaners of mechanical type was developed based on objective...... and subjective analyses. Sound signals from various aircleaners were recorded and they were edited by increasing or decreasing the loudness at three wide specific-loudness bands: 20-400 Hz (0-3.8 barks), 400-1250 Hz (3.8-10 barks), and 1.25- 12.5 kHz bands (10-22.8 barks). Subjective tests using the edited...

  15. Experimental investigation of sound absorption properties of perforated date palm fibers panel

    International Nuclear Information System (INIS)

    Elwaleed, A K; Nikabdullah, N; Nor, M J M; Tahir, M F M; Zulkifli, R

    2013-01-01

    This paper presents the sound absorption properties of a perforated panel made from date palm fiber, a natural waste material. A single layer of the date palm fibers was tested in this study for its sound absorption properties. The experimental measurements were carried out using an impedance tube at the acoustic lab, Faculty of Engineering, Universiti Kebangsaan Malaysia. The experiment was conducted for the panel without air gap, with air gap and with perforated plate facing. Three air gap thicknesses of 10 mm, 20 mm and 30 mm were used between the date palm fiber sample and the rigid backing of the impedance tube. The results showed that facing the date palm fiber sample with the perforated plate improved the sound absorption coefficient at the higher and lower frequency ranges. This increase in sound absorption coincided with a reduction in medium frequency absorption. However, this could be improved by using different densities or perforated plate with the date palm fiber panel.

  16. The sound field of a rotating dipole in a plug flow.

    Science.gov (United States)

    Wang, Zhao-Huan; Belyaev, Ivan V; Zhang, Xiao-Zheng; Bi, Chuan-Xing; Faranosov, Georgy A; Dowell, Earl H

    2018-04-01

    An analytical far field solution for a rotating point dipole source in a plug flow is derived. The shear layer of the jet is modelled as an infinitely thin cylindrical vortex sheet and the far field integral is calculated by the stationary phase method. Four numerical tests are performed to validate the derived solution as well as to assess the effects of sound refraction from the shear layer. First, the calculated results using the derived formulations are compared with the known solution for a rotating dipole in a uniform flow to validate the present model in this fundamental test case. After that, the effects of sound refraction for different rotating dipole sources in the plug flow are assessed. Then the refraction effects on different frequency components of the signal at the observer position, as well as the effects of the motion of the source and of the type of source are considered. Finally, the effect of different sound speeds and densities outside and inside the plug flow is investigated. The solution obtained may be of particular interest for propeller and rotor noise measurements in open jet anechoic wind tunnels.

  17. Breaking the Sound Barrier

    Science.gov (United States)

    Brown, Tom; Boehringer, Kim

    2007-01-01

    Students in a fourth-grade class participated in a series of dynamic sound learning centers followed by a dramatic capstone event--an exploration of the amazing Trashcan Whoosh Waves. It's a notoriously difficult subject to teach, but this hands-on, exploratory approach ignited student interest in sound, promoted language acquisition, and built…

  18. Sound therapies for tinnitus management.

    Science.gov (United States)

    Jastreboff, Margaret M

    2007-01-01

    Many people with bothersome (suffering) tinnitus notice that their tinnitus changes in different acoustical surroundings, it is more intrusive in silence and less profound in the sound enriched environments. This observation led to the development of treatment methods for tinnitus utilizing sound. Many of these methods are still under investigation in respect to their specific protocol and effectiveness and only some have been objectively evaluated in clinical trials. This chapter will review therapies for tinnitus using sound stimulation.

  19. Békésy's contributions to our present understanding of sound conduction to the inner ear.

    Science.gov (United States)

    Puria, Sunil; Rosowski, John J

    2012-11-01

    In our daily lives we hear airborne sounds that travel primarily through the external and middle ear to the cochlear sensory epithelium. We also hear sounds that travel to the cochlea via a second sound-conduction route, bone conduction. This second pathway is excited by vibrations of the head and body that result from substrate vibrations, direct application of vibrational stimuli to the head or body, or vibrations induced by airborne sound. The sensation of bone-conducted sound is affected by the presence of the external and middle ear, but is not completely dependent upon their function. Measurements of the differential sensitivity of patients to airborne sound and direct vibration of the head are part of the routine battery of clinical tests used to separate conductive and sensorineural hearing losses. Georg von Békésy designed a careful set of experiments and pioneered many measurement techniques on human cadaver temporal bones, in physical models, and in human subjects to elucidate the basic mechanisms of air- and bone-conducted sound. Looking back one marvels at the sheer number of experiments he performed on sound conduction, mostly by himself without the aid of students or research associates. Békésy's work had a profound impact on the field of middle-ear mechanics and bone conduction fifty years ago when he received his Nobel Prize. Today many of Békésy's ideas continue to be investigated and extended, some have been supported by new evidence, some have been refuted, while others remain to be tested. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Numerical Model on Sound-Solid Coupling in Human Ear and Study on Sound Pressure of Tympanic Membrane

    Directory of Open Access Journals (Sweden)

    Yao Wen-juan

    2011-01-01

    Full Text Available A three-dimensional finite-element model of the whole auditory system, including the external ear, middle ear, and inner ear, was established, and a sound-solid-liquid coupling frequency response analysis of the model was carried out. The correctness of the FE model was verified by comparing the vibration modes of the tympanic membrane and stapes footplate with experimental data. Based on the calculation results of the model, the least squares method was used to fit the distribution of sound pressure in the external auditory canal and to obtain the sound pressure function on the tympanic membrane as it varies with frequency. Using this sound pressure function, the pressure distribution on the tympanic membrane can be derived directly from the sound pressure at the external auditory canal opening. The sound pressure function makes the boundary conditions of the middle ear structure more accurate in mechanical research and improves on the previous boundary treatment, which applied only a uniform pressure to the tympanic membrane.
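
    The least-squares step described here can be pictured with a small sketch: fit a smooth function to computed sound-pressure values so that the pressure at the tympanic membrane can be evaluated at any frequency. The frequency points, gain values and polynomial order below are illustrative assumptions, not output of the finite-element model.

```python
# Toy sketch of a least-squares fit of the kind described above: a smooth
# function is fitted to computed sound-pressure values so the pressure on
# the tympanic membrane can be written as a function of frequency.
# The frequency points and pressure ratios are illustrative, not model output.
import numpy as np

freq_hz = np.array([250, 500, 1000, 2000, 4000, 8000], dtype=float)
p_ratio = np.array([1.02, 1.05, 1.15, 1.60, 2.10, 1.30])   # assumed canal-to-drum gain

# least-squares polynomial fit in log-frequency (order chosen for illustration)
coeffs = np.polyfit(np.log10(freq_hz), p_ratio, deg=3)
pressure_fn = np.poly1d(coeffs)

# evaluate the fitted pressure function at an arbitrary frequency
print(f"gain at 3 kHz = {pressure_fn(np.log10(3000.0)):.2f}")
```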

  1. Sounds in one-dimensional superfluid helium

    International Nuclear Information System (INIS)

    Um, C.I.; Kahng, W.H.; Whang, E.H.; Hong, S.K.; Oh, H.G.; George, T.F.

    1989-01-01

    The temperature variations of first-, second-, and third-sound velocity and attenuation coefficients in one-dimensional superfluid helium are evaluated explicitly for very low temperatures and frequencies (ω_sτ ≪ 1) ..., and the ratio of second sound to first sound becomes unity as the temperature decreases to absolute zero

  2. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    Full Text Available This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  3. Evaluation of a Loudspeaker-Based Virtual Acoustic Environment for Investigating sound-field auditory steady-state responses

    DEFF Research Database (Denmark)

    Zapata-Rodriguez, Valentina; Marbjerg, Gerd Høy; Brunskog, Jonas

    2017-01-01

    Measuring sound-field auditory steady-state responses (ASSR) is a promising new objective clinical procedure for hearing aid fitting validation, particularly for infants who cannot respond to behavioral tests. In practice, room acoustics of non-anechoic test rooms can heavily influence the audito...... tool PARISM (Phased Acoustical Radiosity and Image Source Method) and validated through measurements. This study discusses the limitations of the system and the potential improvements needed for a more realistic sound-field ASSR simulation....

  4. P-sounder: an airborne P-band ice sounding radar

    DEFF Research Database (Denmark)

    Dall, Jørgen; Skou, Niels; Kusk, Anders

    2007-01-01

    is to test new ice sounding techniques, e.g. polarimetry, synthetic aperture processing, and coherent clutter suppression. A system analysis involving ice scattering models confirms that it is feasible to detect the bedrock through 4 km of ice and to detect deep ice layers. The ice sounder design features...

  5. Design of an airborne P-band ice sounding radar

    DEFF Research Database (Denmark)

    Dall, Jørgen; Skou, Niels; Kusk, Anders

    2006-01-01

    is to test new ice sounding techniques, e.g. polarimetry, synthetic aperture processing, and coherent clutter suppression. A system analysis involving ice scattering models confirms that it is feasible to detect the bedrock through 4 km of ice and to detect deep ice layers. The ice sounder design features...

  6. Sound-Symbolism Boosts Novel Word Learning

    Science.gov (United States)

    Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…

  7. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common for a physical system to resonate at a particular frequency that depends on physical parameters which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to utilize standard methods, such as lock-in amplifiers, to improve the signal to noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.
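
    A minimal digital sketch of the two ingredients named in this abstract, lock-in demodulation and automatic gain control, is given below on a synthetic resonator signal. The sample rate, loop gain and the linear drive-to-amplitude model are assumptions made for illustration; the real instrument operates on analog second-sound signals.

```python
# Minimal digital sketch of the two ingredients described above: lock-in
# demodulation (multiply by a reference, average to get the amplitude)
# and automatic gain control (adjust the drive to hold that amplitude
# constant). Signal parameters and loop gains are illustrative assumptions.
import numpy as np

fs, f0 = 50_000.0, 1_000.0             # sample rate and resonance frequency (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
target_amp, drive, agc_gain = 1.0, 0.2, 0.05

block = int(fs / f0) * 10              # demodulate in 10-cycle blocks
for start in range(0, len(t) - block, block):
    tb = t[start:start + block]
    # simulated resonator response: amplitude proportional to drive, plus noise
    signal = 3.0 * drive * np.sin(2 * np.pi * f0 * tb) + 0.01 * np.random.randn(block)
    # lock-in demodulation: project onto in-phase and quadrature references
    i = 2 * np.mean(signal * np.sin(2 * np.pi * f0 * tb))
    q = 2 * np.mean(signal * np.cos(2 * np.pi * f0 * tb))
    amplitude = np.hypot(i, q)
    # automatic gain control: nudge the drive toward the target amplitude
    drive += agc_gain * (target_amp - amplitude)

print(f"final drive {drive:.3f}, final measured amplitude {amplitude:.3f}")
```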

  8. Underwater Sound Propagation from Marine Pile Driving.

    Science.gov (United States)

    Reyff, James A

    2016-01-01

    Pile driving occurs in a variety of nearshore environments that typically have very shallow-water depths. The propagation of pile-driving sound in water is complex, where sound is directly radiated from the pile as well as through the ground substrate. Piles driven in the ground near water bodies can produce considerable underwater sound energy. This paper presents examples of sound propagation through shallow-water environments. Some of these examples illustrate the substantial variation in sound amplitude over time that can be critical to understand when computing an acoustic-based safety zone for aquatic species.

  9. The irrelevant sound phenomenon revisited: what role for working memory capacity?

    Science.gov (United States)

    Beaman, C Philip

    2004-09-01

    High-span individuals (as measured by the operation span [OSPAN] technique) are less likely than low-span individuals to notice their own names in an unattended auditory stream (A. R. A. Conway, N. Cowan, & M. F. Bunting, 2001). The possibility that OSPAN accounts for individual differences in auditory distraction on an immediate recall test was examined. There was no evidence that high-OSPAN participants were more resistant to the disruption caused by irrelevant speech in serial or in free recall. Low-OSPAN participants did, however, make more semantically related intrusion errors from the irrelevant sound stream in a free recall test (Experiment 4). Results suggest that OSPAN mediates semantic components of auditory distraction dissociable from other aspects of the irrelevant sound effect. ((c) 2004 APA, all rights reserved)

  10. Sound topology, duality, coherence and wave-mixing an introduction to the emerging new science of sound

    CERN Document Server

    Deymier, Pierre

    2017-01-01

    This book offers an essential introduction to the notions of sound wave topology, duality, coherence and wave-mixing, which constitute the emerging new science of sound. It includes general principles and specific examples that illuminate new non-conventional forms of sound (sound topology), unconventional quantum-like behavior of phonons (duality), radical linear and nonlinear phenomena associated with loss and its control (coherence), and exquisite effects that emerge from the interaction of sound with other physical and biological waves (wave mixing).  The book provides the reader with the foundations needed to master these complex notions through simple yet meaningful examples. General principles for unraveling and describing the topology of acoustic wave functions in the space of their eigenvalues are presented. These principles are then applied to uncover intrinsic and extrinsic approaches to achieving non-conventional topologies by breaking the time reversal symmetry of acoustic waves. Symmetry brea...

  11. Diffuse sound field: challenges and misconceptions

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho

    2016-01-01

    Diffuse sound field is a popular, yet widely misused concept. Although its definition is relatively well established, acousticians use this term for different meanings. The diffuse sound field is defined by a uniform sound pressure distribution (spatial diffusion or homogeneity) and uniform...... tremendously in different chambers because the chambers are non-diffuse in variously different ways. Therefore, good objective measures that can quantify the degree of diffusion and potentially indicate how to fix such problems in reverberation chambers are needed. Acousticians often blend the concept...... of mixing and diffuse sound field. Acousticians often refer diffuse reflections from surfaces to diffuseness in rooms, and vice versa. Subjective aspects of diffuseness have not been much investigated. Finally, ways to realize a diffuse sound field in a finite space are discussed....

  12. Effectiveness of complementary tests in monitoring therapeutic intervention in speech sound disorders

    Directory of Open Access Journals (Sweden)

    Haydée Fiszbein Wertzner

    2012-12-01

    Full Text Available Therapeutic planning and the therapeutic evolution of children with speech sound disorders are directly related to the initial assessment and to the complementary tests applied. Monitoring the case through regular evaluations adds important information to the diagnostic assessment, strengthening the initial findings regarding the underlying deficit identified in the initial evaluation. This case study therefore verified the effectiveness and efficiency of the revised percentage of consonants correct index (PCC-R) and of complementary tests of speech inconsistency, stimulability and metaphonological skills in monitoring therapeutic intervention in children with speech sound disorders. Three male children participated in the study. At the initial assessment, Case 1 was 6 years and 9 months old, Case 2 was 8 years and 10 months, and Case 3 was 9 years and 7 months. In addition to the specific phonological assessment, complementary tests were applied to help identify the specific underlying deficit in each case: the subjects were assessed for metaphonological skills, speech inconsistency and stimulability. The joint analysis of the data showed that the selected tests were effective and efficient both in complementing the diagnosis and in indicating change in the three cases of children with speech sound disorders.

  13. Aspirating and Nonaspirating Swallow Sounds in Children: A Pilot Study.

    Science.gov (United States)

    Frakking, Thuy; Chang, Anne; O'Grady, Kerry; David, Michael; Weir, Kelly

    2016-12-01

    Cervical auscultation (CA) may be used to complement feeding/swallowing evaluations when assessing for aspiration. There are no published pediatric studies that compare the properties of sounds between aspirating and nonaspirating swallows. To establish acoustic and perceptual profiles of aspirating and nonaspirating swallow sounds and determine if a difference exists between these 2 swallowing types. Aspiration sound clips were obtained from recordings using CA simultaneously undertaken with videofluoroscopic swallow study. Aspiration was determined using the Penetration-Aspiration Scale. The presence of perceptual swallow/breath parameters was rated by 2 speech pathologists who were blinded to the type of swallow. Acoustic data between groups were compared using Mann Whitney U-tests, while perceptual differences were determined by a test of 2 proportions. Combinations of perceptual parameters of 50 swallows (27 aspiration, 23 no aspiration) from 47 children (57% male) were statistically analyzed using area under a receiver operating characteristic (aROC), sensitivity, specificity, and positive and negative predictive values to determine predictors of aspirating swallows. The combination of post-swallow presence of wet breathing and wheeze and absence of GRS and normal breathing was the best predictor of aspiration (aROC = 0.82, 95% CI, 0.70-0.94). There were no significant differences between these 2 swallow types for peak frequency, duration, and peak amplitude. Our pilot study has shown that certain characteristics of swallow obtained using CA may be useful in the prediction of aspiration. However, further research comparing the acoustic swallowing sound profiles of normal children to children with dysphagia (who are aspirating) on a larger scale is required. © The Author(s) 2016.
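
    The evaluation described above (area under the ROC curve, sensitivity, specificity, positive and negative predictive values for a combination of perceptual parameters) can be reproduced for any binary predictor. The Python sketch below uses made-up data and an assumed rule score; it only illustrates how the reported statistics are computed, not the authors' analysis.

        # Illustrative sketch (not the authors' code): evaluating a binary
        # predictor of aspiration with aROC, sensitivity, specificity, PPV and NPV.
        # The ground truth and rule score below are made up for illustration.
        import numpy as np
        from sklearn.metrics import roc_auc_score, confusion_matrix

        rng = np.random.default_rng(0)
        aspirated = rng.integers(0, 2, size=50)               # ground truth (e.g. from VFSS)
        score = aspirated * 0.7 + rng.random(50) * 0.6        # hypothetical rule score
        predicted = (score > 0.65).astype(int)

        auc = roc_auc_score(aspirated, score)
        tn, fp, fn, tp = confusion_matrix(aspirated, predicted).ravel()
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        print(f"aROC={auc:.2f} Se={sensitivity:.2f} Sp={specificity:.2f} "
              f"PPV={ppv:.2f} NPV={npv:.2f}")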

  14. Continuous Re-Exposure to Environmental Sound Cues During Sleep Does Not Improve Memory for Semantically Unrelated Word Pairs.

    Science.gov (United States)

    Donohue, Kelly C; Spencer, Rebecca M C

    2011-06-01

    Two recent studies illustrated that cues present during encoding can enhance recall if re-presented during sleep. This suggests an academic strategy. Such effects have only been demonstrated with spatial learning and cue presentation was isolated to slow wave sleep (SWS). The goal of this study was to examine whether sounds enhance sleep-dependent consolidation of a semantic task if the sounds are re-presented continuously during sleep. Participants encoded a list of word pairs in the evening and recall was probed following an interval with overnight sleep. Participants encoded the pairs with the sound of "the ocean" from a sound machine. The first group slept with this sound; the second group slept with a different sound ("rain"); and the third group slept with no sound. Sleeping with sound had no impact on subsequent recall. Although a null result, this work provides an important test of the implications of context effects on sleep-dependent memory consolidation.

  15. WODA Technical Guidance on Underwater Sound from Dredging.

    Science.gov (United States)

    Thomsen, Frank; Borsani, Fabrizio; Clarke, Douglas; de Jong, Christ; de Wit, Pim; Goethals, Fredrik; Holtkamp, Martine; Martin, Elena San; Spadaro, Philip; van Raalte, Gerard; Victor, George Yesu Vedha; Jensen, Anders

    2016-01-01

    The World Organization of Dredging Associations (WODA) has identified underwater sound as an environmental issue that needs further consideration. A WODA Expert Group on Underwater Sound (WEGUS) prepared a guidance paper in 2013 on dredging sound, including a summary of potential impacts on aquatic biota and advice on underwater sound monitoring procedures. The paper follows a risk-based approach and provides guidance for standardization of acoustic terminology and methods for data collection and analysis. Furthermore, the literature on dredging-related sounds and the effects of dredging sounds on marine life is surveyed and guidance on the management of dredging-related sound risks is provided.

  16. Description and Flight Performance Results of the WASP Sounding Rocket

    Science.gov (United States)

    De Pauw, J. F.; Steffens, L. E.; Yuska, J. A.

    1968-01-01

    A general description of the design and construction of the WASP sounding rocket and of the performance of its first flight are presented. The purpose of the flight test was to place the 862-pound (391-kg) spacecraft above 250 000 feet (76.25 km) on free-fall trajectory for at least 6 minutes in order to study the effect of "weightlessness" on a slosh dynamics experiment. The WASP sounding rocket fulfilled its intended mission requirements. The sounding rocket approximately followed a nominal trajectory. The payload was in free fall above 250 000 feet (76.25 km) for 6.5 minutes and reached an apogee altitude of 134 nautical miles (248 km). Flight data including velocity, altitude, acceleration, roll rate, and angle of attack are discussed and compared to nominal performance calculations. The effect of residual burning of the second stage motor is analyzed. The flight vibration environment is presented and analyzed, including root mean square (RMS) and power spectral density analysis.

  17. Sound Propagation Around Off-Shore Wind Turbines. Long-Range Parabolic Equation Calculations for Baltic Sea Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Johansson, Lisa

    2003-07-01

    Low-frequency, long-range sound propagation over a sea surface has been calculated using a wide-angle Crank-Nicolson parabolic equation method. The model is developed to investigate noise from off-shore wind turbines. The calculations are made using normal meteorological conditions of the Baltic Sea. Special consideration has been given to a wind phenomenon called the low-level jet, with strong winds at rather low altitude. The effects of water waves on sound propagation have been incorporated in the ground boundary condition using a boss model. This way of including roughness in sound propagation models is valid for water wave heights that are small compared to the wavelength of the sound. Nevertheless, since only low-frequency sound is considered, waves up to the mean wave height of the Baltic Sea can be included in this manner. The calculation model has been tested against benchmark cases and agrees well with measurements. The calculations show that channelling of sound occurs at downwind conditions and that the sound propagation tends towards cylindrical spreading. The effects of the water waves are found to be fairly small.

  18. Real-Time Detection of Important Sounds with a Wearable Vibration Based Device for Hearing-Impaired People

    Directory of Open Access Journals (Sweden)

    Mete Yağanoğlu

    2018-04-01

    Full Text Available Hearing-impaired people do not hear indoor and outdoor environment sounds, which are important for them both at home and outside. By means of a wearable device that we have developed, a hearing-impaired person is informed of important sounds through vibrations, thereby understanding what kind of sound it is. Our system, which operates in real time, achieves a success rate of 98% when identifying a doorbell ringing, 99% when identifying an alarm sound, 99% when identifying a phone ringing, 91% for honking, 93% for brake sounds, 96% for dog sounds, 97% for the human voice, and 96% for other sounds, using the audio fingerprint method. An audio fingerprint is a brief summary of an audio file that perceptively summarizes a piece of audio content. In this study, our wearable device was tested 100 times a day for 100 days on five deaf persons and 50 persons with normal hearing whose ears were covered by earphones that provided wind sounds. This study aims to improve the quality of life of deaf persons and to provide them with a more prosperous life. In the questionnaire performed, deaf people rated the clarity of the system at 90%, its usefulness at 97%, and the likelihood of using this device again at 100%.
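
    The audio fingerprint method mentioned above reduces each sound to a compact, comparison-friendly summary. A common textbook variant is a binary fingerprint built from the signs of sub-band energy differences (Haitsma-Kalker style); the sketch below shows that idea in Python. Band limits, frame sizes and the matching rule are assumptions for illustration, not the parameters of the wearable system.

        # A minimal sketch of one common audio-fingerprinting idea (sub-band
        # energy-difference bits), not the authors' implementation. Reference
        # clips would be doorbell, alarm, phone, horn, brake, dog, voice, etc.
        import numpy as np

        def fingerprint(x, fs, n_fft=2048, hop=1024, n_bands=17):
            """Return a binary fingerprint: signs of energy differences across
            neighbouring frequency bands and successive time frames."""
            frames = []
            for start in range(0, len(x) - n_fft, hop):
                spec = np.abs(np.fft.rfft(x[start:start + n_fft] * np.hanning(n_fft)))
                edges = np.logspace(np.log10(300), np.log10(3000), n_bands + 1)
                idx = (edges / (fs / 2) * len(spec)).astype(int)
                frames.append([spec[idx[b]:idx[b + 1]].sum() for b in range(n_bands)])
            e = np.array(frames)                       # shape (frames, bands)
            band_diff = np.diff(e, axis=1)             # differences across bands
            bits = (band_diff[1:] - band_diff[:-1]) > 0  # differences across frames
            return bits                                # (frames-1, bands-1) booleans

        def match(query, reference):
            """Fraction of matching bits over a common-length prefix (higher = better)."""
            n = min(len(query), len(reference))
            return np.mean(query[:n] == reference[:n])

        # Usage idea: classify an incoming sound as the reference clip whose
        # fingerprint gives the highest match score.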

  19. Office noise: Can headphones and masking sound attenuate distraction by background speech?

    Science.gov (United States)

    Jahncke, Helena; Björkeholm, Patrik; Marsh, John E; Odelius, Johan; Sörqvist, Patrik

    2016-11-22

    Background speech is one of the most disturbing noise sources at shared workplaces in terms of both annoyance and performance-related disruption. Therefore, it is important to identify techniques that can efficiently protect performance against distraction. It is also important that the techniques are perceived as satisfactory and are subjectively evaluated as effective in their capacity to reduce distraction. The aim of the current study was to compare three methods of attenuating distraction from background speech: masking a background voice with nature sound through headphones, masking a background voice with other voices through headphones and merely wearing headphones (without masking) as a way to attenuate the background sound. Quiet was deployed as a baseline condition. Thirty students participated in an experiment employing a repeated measures design. Performance (serial short-term memory) was impaired by background speech (1 voice), but this impairment was attenuated when the speech was masked - and in particular when it was masked by nature sound. Furthermore, perceived workload was lowest in the quiet condition and significantly higher in all other sound conditions. Notably, the headphones tested as a sound-attenuating device (i.e. without masking) did not protect against the effects of background speech on performance and subjective work load. Nature sound was the only masking condition that worked as a protector of performance, at least in the context of the serial recall task. However, despite the attenuation of distraction by nature sound, perceived workload was still high - suggesting that it is difficult to find a masker that is both effective and perceived as satisfactory.

  20. Detecting change in stochastic sound sequences.

    Directory of Open Access Journals (Sweden)

    Benjamin Skerritt-Davis

    2018-05-01

    Full Text Available Our ability to parse our acoustic environment relies on the brain's capacity to extract statistical regularities from surrounding sounds. Previous work in regularity extraction has predominantly focused on the brain's sensitivity to predictable patterns in sound sequences. However, natural sound environments are rarely completely predictable, often containing some level of randomness, yet the brain is able to effectively interpret its surroundings by extracting useful information from stochastic sounds. It has been previously shown that the brain is sensitive to the marginal lower-order statistics of sound sequences (i.e., mean and variance). In this work, we investigate the brain's sensitivity to higher-order statistics describing temporal dependencies between sound events through a series of change detection experiments, where listeners are asked to detect changes in randomness in the pitch of tone sequences. Behavioral data indicate listeners collect statistical estimates to process incoming sounds, and a perceptual model based on Bayesian inference shows a capacity in the brain to track higher-order statistics. Further analysis of individual subjects' behavior indicates an important role of perceptual constraints in listeners' ability to track these sensory statistics with high fidelity. In addition, the inference model facilitates analysis of neural electroencephalography (EEG) responses, anchoring the analysis relative to the statistics of each stochastic stimulus. This reveals both a deviance response and a change-related disruption in phase of the stimulus-locked response that follow the higher-order statistics. These results shed light on the brain's ability to process stochastic sound sequences.
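
    As a toy illustration of tracking sequence statistics, the Python sketch below maintains a running Gaussian estimate of tone pitch (mean and variance) and flags a change when the predictive surprise of a new tone exceeds a threshold. This captures only the lower-order part of the idea; the paper's Bayesian model additionally tracks higher-order temporal dependencies, and all thresholds and parameters here are invented.

        # Minimal sketch: sequential tracking of mean and variance of a tone-pitch
        # sequence, flagging a change when predictive surprise is high.
        import numpy as np

        def surprise_detector(pitches, threshold=6.0):
            mu, var, n = pitches[0], 1.0, 1
            changes = []
            for t, x in enumerate(pitches[1:], start=1):
                # predictive surprise = negative log-likelihood under the current Gaussian
                nll = 0.5 * np.log(2 * np.pi * var) + 0.5 * (x - mu) ** 2 / var
                if nll > threshold:
                    changes.append(t)
                    mu, var, n = x, 1.0, 1            # reset the running estimate
                else:                                  # incremental (Welford-style) update
                    n += 1
                    delta = x - mu
                    mu += delta / n
                    var += (delta * (x - mu) - var) / n
                var = max(var, 1e-3)                   # guard against degenerate variance
            return changes

        # Example: pitches drawn from a narrow then a wide distribution.
        rng = np.random.default_rng(1)
        seq = np.concatenate([rng.normal(60, 1, 50), rng.normal(60, 6, 50)])
        print(surprise_detector(seq))   # flagged change points should cluster after index 50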

  1. Directional sound radiation from substation transformers

    International Nuclear Information System (INIS)

    Maybee, N.

    2009-01-01

    This paper presented the results of a study in which acoustical measurements at two substations were analyzed to investigate the directional behaviour of typical arrays having 2 or 3 transformers. Substation transformers produce a characteristic humming sound that is caused primarily by vibration of the core at twice the frequency of the power supply. The humming noise radiates predominantly from the tank enclosing the core. The main components of the sound are harmonics of 120 Hz. Sound pressure level data were obtained for various directions and distances from the arrays, ranging from 0.5 m to over 100 m. The measured sound pressure levels of the transformer tones displayed substantial positive and negative excursions from the calculated average values for many distances and directions. The results support the concept that the directional effects are associated with constructive and destructive interference of tonal sound waves emanating from different parts of the array. Significant variations in the directional sound pattern can occur in the near field of a single transformer or an array, and the extent of the near field is significantly larger than the scale of the array. Based on typical dimensions for substation sites, the distance to the far field may be much beyond the substation boundary and beyond typical setbacks to the closest dwellings. As such, the directional sound radiation produced by transformer arrays introduces additional uncertainty in the prediction of substation sound levels at dwellings within a few hundred meters of a substation site. 4 refs., 4 figs.

  2. A comparison of two different sound intensity measurement principles

    DEFF Research Database (Denmark)

    Jacobsen, Finn; de Bree, Hans-Elias

    2005-01-01

    , and compares the two measurement principles with particular regard to the sources of error in sound power determination. It is shown that the phase calibration of intensity probes that combine different transducers is very critical below 500 Hz if the measurement surface is very close to the source under test...

  3. Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.

    Science.gov (United States)

    Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael

    2014-04-01

    The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.

  4. A Pilot Investigation of Speech Sound Disorder Intervention Delivered by Telehealth to School-Age Children

    Directory of Open Access Journals (Sweden)

    Sue Grogan-Johnson

    2011-05-01

    Full Text Available This article describes a school-based telehealth service delivery model and reports outcomes made by school-age students with speech sound disorders in a rural Ohio school district. Speech therapy using computer-based speech sound intervention materials was provided either by live interactive videoconferencing (telehealth) or conventional side-by-side intervention.  Progress was measured using pre- and post-intervention scores on the Goldman Fristoe Test of Articulation-2 (Goldman & Fristoe, 2002). Students in both service delivery models made significant improvements in speech sound production, with students in the telehealth condition demonstrating greater mastery of their Individual Education Plan (IEP) goals. Live interactive videoconferencing thus appears to be a viable method for delivering intervention for speech sound disorders to children in a rural, public school setting. Keywords:  Telehealth, telerehabilitation, videoconferencing, speech sound disorder, speech therapy, speech-language pathology; E-Helper

  5. On the sound absorption coefficient of porous asphalt pavements for oblique incident sound waves

    NARCIS (Netherlands)

    Bezemer-Krijnen, Marieke; Wijnant, Ysbrand H.; de Boer, Andries; Bekke, Dirk; Davy, J.; Don, Ch.; McMinn, T.; Dowsett, L.; Broner, N.; Burgess, M.

    2014-01-01

    A rolling tyre will radiate noise in all directions. However, conventional measurement techniques for the sound absorption of surfaces only give the absorption coefficient for normal incidence. In this paper, a measurement technique is described with which it is possible to perform in situ sound

  6. Development of the Astrobee F sounding rocket system.

    Science.gov (United States)

    Jenkins, R. B.; Taylor, J. P.; Honecker, H. J., Jr.

    1973-01-01

    The development of the Astrobee F sounding rocket vehicle through the first flight test at NASA-Wallops Station is described. Design and development of a 15 in. diameter, dual thrust, solid propellant motor demonstrating several new technology features provided the basis for the flight vehicle. The 'F' motor test program described demonstrated the following advanced propulsion technology: tandem dual grain configuration, low burning rate HTPB case-bonded propellant, and molded plastic nozzle. The resultant motor integrated into a flight vehicle was successfully flown with extensive diagnostic instrumentation.

  7. Neuroplasticity beyond sounds

    DEFF Research Database (Denmark)

    Reybrouck, Mark; Brattico, Elvira

    2015-01-01

    Capitalizing from neuroscience knowledge on how individuals are affected by the sound environment, we propose to adopt a cybernetic and ecological point of view on the musical aesthetic experience, which includes subprocesses, such as feature extraction and integration, early affective reactions...... and motor actions, style mastering and conceptualization, emotion and proprioception, evaluation and preference. In this perspective, the role of the listener/composer/performer is seen as that of an active "agent" coping in highly individual ways with the sounds. The findings concerning the neural...

  8. Opo lidar sounding of trace atmospheric gases in the 3 - 4 μm spectral range

    Science.gov (United States)

    Romanovskii, Oleg A.; Sadovnikov, Sergey A.; Kharchenko, Olga V.; Yakovlev, Semen V.

    2018-04-01

    The applicability of a KTA crystal-based laser system with optical parametric oscillators (OPO) generation to lidar sounding of the atmosphere in the spectral range 3-4 μm is studied in this work. A technique developed for lidar sounding of trace atmospheric gases (TAG) is based on differential absorption lidar (DIAL) method and differential optical absorption spectroscopy (DOAS). The DIAL-DOAS technique is tested to estimate its efficiency for lidar sounding of atmospheric trace gases. The numerical simulation performed shows that a KTA-based OPO laser is a promising source of radiation for remote DIAL-DOAS sounding of the TAGs under study along surface tropospheric paths. A possibility of using a PD38-03-PR photodiode for the DIAL gas analysis of the atmosphere is shown.

  9. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    Full Text Available Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions however, binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much weaker across frequency channels and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that the real-world sound localization must rely on mechanisms more complex than a mere cue extraction.

  10. Statistics of natural binaural sounds.

    Science.gov (United States)

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions however, binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much weaker across frequency channels and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that the real-world sound localization must rely on mechanisms more complex than a mere cue extraction.
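
    The ICA step described in both records above can be sketched with an off-the-shelf implementation. The Python example below applies FastICA to a synthetic two-channel (left/right ear) mixture; the sources, mixing matrix and noise level are made up and stand in for real binaural recordings.

        # Hedged sketch: Independent Component Analysis of a two-channel
        # (binaural) signal, in the spirit of the analysis described above.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        fs, dur = 16_000, 2.0
        t = np.arange(int(fs * dur)) / fs
        src1 = np.sign(np.sin(2 * np.pi * 3 * t))          # a "clicky" source
        src2 = np.sin(2 * np.pi * 440 * t)                  # a tonal source
        sources = np.c_[src1, src2]

        # Each ear receives a different linear mixture of the two sources.
        mixing = np.array([[1.0, 0.4],
                           [0.3, 1.0]])
        ears = sources @ mixing.T + 0.01 * rng.standard_normal((t.size, 2))

        ica = FastICA(n_components=2, random_state=0)
        recovered = ica.fit_transform(ears)      # estimated independent components
        basis = ica.mixing_                      # learned mixing (basis functions)
        print(basis)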

  11. Moth hearing and sound communication

    DEFF Research Database (Denmark)

    Nakano, Ryo; Takanashi, Takuma; Surlykke, Annemarie

    2015-01-01

    Active echolocation enables bats to orient and hunt the night sky for insects. As a counter-measure against the severe predation pressure many nocturnal insects have evolved ears sensitive to ultrasonic bat calls. In moths bat-detection was the principal purpose of hearing, as evidenced...... by comparable hearing physiology with best sensitivity in the bat echolocation range, 20–60 kHz, across moths in spite of diverse ear morphology. Some eared moths subsequently developed sound-producing organs to warn/startle/jam attacking bats and/or to communicate intraspecifically with sound. Not only...... the sounds for interaction with bats, but also mating signals are within the frequency range where bats echolocate, indicating that sound communication developed after hearing by “sensory exploitation”. Recent findings on moth sound communication reveal that close-range (~ a few cm) communication with low...

  12. Decoding the neural signatures of emotions expressed through sound.

    Science.gov (United States)

    Sachs, Matthew E; Habibi, Assal; Damasio, Antonio; Kaplan, Jonas T

    2018-03-01

    Effective social functioning relies in part on the ability to identify emotions from auditory stimuli and respond appropriately. Previous studies have uncovered brain regions engaged by the affective information conveyed by sound. But some of the acoustical properties of sounds that express certain emotions vary remarkably with the instrument used to produce them, for example the human voice or a violin. Do these brain regions respond in the same way to different emotions regardless of the sound source? To address this question, we had participants (N = 38, 20 females) listen to brief audio excerpts produced by the violin, clarinet, and human voice, each conveying one of three target emotions-happiness, sadness, and fear-while brain activity was measured with fMRI. We used multivoxel pattern analysis to test whether emotion-specific neural responses to the voice could predict emotion-specific neural responses to musical instruments and vice-versa. A whole-brain searchlight analysis revealed that patterns of activity within the primary and secondary auditory cortex, posterior insula, and parietal operculum were predictive of the affective content of sound both within and across instruments. Furthermore, classification accuracy within the anterior insula was correlated with behavioral measures of empathy. The findings suggest that these brain regions carry emotion-specific patterns that generalize across sounds with different acoustical properties. Also, individuals with greater empathic ability have more distinct neural patterns related to perceiving emotions. These results extend previous knowledge regarding how the human brain extracts emotional meaning from auditory stimuli and enables us to understand and connect with others effectively. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. A randomized controlled trial on the beneficial effects of training letter-speech sound integration on reading fluency in children with dyslexia

    NARCIS (Netherlands)

    Fraga González, G.; Žarić, G.; Tijms, J.; Bonte, M.; Blomert, L.; van der Molen, M.W.

    2015-01-01

    A recent account of dyslexia assumes that a failure to develop automated letter-speech sound integration might be responsible for the observed lack of reading fluency. This study uses a pre-test-training-post-test design to evaluate the effects of a training program based on letter-speech sound

  14. The sound and the fury--bees hiss when expecting danger.

    Science.gov (United States)

    Wehmann, Henja-Niniane; Gustav, David; Kirkerud, Nicholas H; Galizia, C Giovanni

    2015-01-01

    Honey bees are important model systems for the investigation of learning and memory and for a better understanding of the neuronal basics of brain function. Honey bees also possess a rich repertoire of tones and sounds, from queen piping and quacking to worker hissing and buzzing. In this study, we tested whether the worker bees' sounds can be used as a measure of learning. We therefore conditioned honey bees aversively to odours in a walking arena and recorded both their sound production and their movement. Bees were presented with two odours, one of which was paired with an electric shock. Initially, the bees did not produce any sound upon odour presentation, but responded to the electric shock with a strong hissing response. After learning, many bees hissed at the presentation of the learned odour, while fewer bees hissed upon presentation of another odour. We also found that hissing and movement away from the conditioned odour are independent behaviours that can co-occur but do not necessarily do so. Our data suggest that hissing can be used as a readout for learning after olfactory conditioning, but that there are large individual differences between bees concerning their hissing reaction. The basis for this variability and the possible ecological relevance of the bees' hissing remain to be investigated.

  15. Field Grow-out of Juvenile American Lobsters in Long Island Sound

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Early benthic stage American lobsters, Homarus americanus, were held in a pilot nursery system in Long Island Sound (LIS) to test field grow-out, as a step toward...

  16. Improved Sound Absorption Performance of Nonwoven Fabric using Fabric Facing and Air Back Cavity

    Directory of Open Access Journals (Sweden)

    Ismail Ahmad Yusuf

    2017-01-01

    Full Text Available This paper presents methods to improve the sound absorption performance of polyethylene-based nonwoven fabric (PNF). The methods are placing a woven fabric in front of the sample and providing an air cavity behind the sample. The samples were experimentally tested in an impedance tube based on ISO 10354-2:2001, whereby two microphones are used and the transfer matrix method is employed. From the results, it can be seen that placing a woven fabric in front of the sample effectively increases sound absorption performance. Moreover, introducing an air cavity gap behind the sample is found to be even more significant in increasing sound absorption.

  17. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

    consumer videos in conjunction with user studies. We model the soundtrack of each video, regardless of its original duration, as a fixed-sized clip-level summary feature. For each concept, an SVM-based classifier is trained according to three distance measures (Kullback-Leibler, Bhattacharyya, and Mahalanobis distance). Detecting the time of occurrence of a local object (for instance, a cheering sound) embedded in a longer soundtrack is useful and important for applications such as search and retrieval in consumer video archives. We finally present a Markov-model based clustering algorithm able to identify and segment consistent sets of temporal frames into regions associated with different ground-truth labels, and at the same time to exclude a set of uninformative frames shared in common from all clips. The labels are provided at the clip level, so this refinement of the time axis represents a variant of Multiple-Instance Learning (MIL). Quantitative evaluation shows that the performance of our proposed approaches tested on the 60h personal audio archives or 1900 YouTube video clips is significantly better than existing algorithms for detecting these useful concepts in real-world personal audio recordings.
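
    The clip-level classification pipeline described above (a fixed-size summary per soundtrack, compared with divergence-style distances and fed to an SVM) can be sketched as follows. The Python example summarizes each clip's frame features as a single Gaussian, uses a symmetric KL divergence between clips, and trains an SVM on the resulting precomputed kernel. The feature dimensions, the kernel width gamma and the toy data are assumptions; the Bhattacharyya and Mahalanobis variants and the Markov-model segmentation mentioned above are not shown.

        # Illustrative sketch (not the thesis code): Gaussian clip summaries,
        # symmetric KL divergence between clips, SVM on a precomputed kernel.
        import numpy as np
        from sklearn.svm import SVC

        def clip_summary(frames):
            """frames: (n_frames, n_dims) features, e.g. MFCCs, for one clip."""
            cov = np.cov(frames, rowvar=False) + 1e-6 * np.eye(frames.shape[1])
            return frames.mean(axis=0), cov

        def sym_kl(a, b):
            (m1, c1), (m2, c2) = a, b
            d = m1.size
            def kl(m1, c1, m2, c2):
                c2i = np.linalg.inv(c2)
                dm = m2 - m1
                return 0.5 * (np.trace(c2i @ c1) + dm @ c2i @ dm - d +
                              np.log(np.linalg.det(c2) / np.linalg.det(c1)))
            return kl(m1, c1, m2, c2) + kl(m2, c2, m1, c1)

        def kernel_matrix(summaries, gamma=0.05):
            n = len(summaries)
            K = np.zeros((n, n))
            for i in range(n):
                for j in range(i, n):
                    K[i, j] = K[j, i] = np.exp(-gamma * sym_kl(summaries[i], summaries[j]))
            return K

        # Hypothetical usage with made-up clips and binary concept labels:
        rng = np.random.default_rng(0)
        clips = [rng.standard_normal((200, 13)) + (i % 2) for i in range(20)]
        labels = [i % 2 for i in range(20)]
        summaries = [clip_summary(c) for c in clips]
        K = kernel_matrix(summaries)
        clf = SVC(kernel="precomputed").fit(K, labels)
        print(clf.score(K, labels))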

  18. Simulation of sound waves using the Lattice Boltzmann Method for fluid flow: Benchmark cases for outdoor sound propagation

    NARCIS (Netherlands)

    Salomons, E.M.; Lohman, W.J.A.; Zhou, H.

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases:
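
    To make the connection between the LBM and acoustics concrete, the sketch below is a minimal D2Q9 BGK lattice Boltzmann loop in Python in which a small density pulse spreads at the lattice speed of sound (1/sqrt(3) lattice units per step). It is a generic textbook illustration under assumed parameters, not the benchmark configuration of the article.

        # Minimal D2Q9 BGK lattice Boltzmann sketch: a small density (pressure)
        # pulse propagating on a periodic grid.
        import numpy as np

        nx, ny, tau, steps = 200, 200, 0.6, 150
        # D2Q9 lattice velocities and weights
        c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)

        def equilibrium(rho, ux, uy):
            cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
            usq = ux**2 + uy**2
            return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

        # initial state: uniform density with a small Gaussian pressure pulse
        x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
        rho = 1.0 + 0.01 * np.exp(-((x - nx/2)**2 + (y - ny/2)**2) / 20.0)
        f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))

        for _ in range(steps):
            rho = f.sum(axis=0)
            ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
            uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
            f += -(f - equilibrium(rho, ux, uy)) / tau          # BGK collision
            for i, (cx, cy) in enumerate(c):                     # streaming (periodic)
                f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)

        # the pulse front now sits roughly steps/sqrt(3) lattice units from the centre
        print(rho.max(), rho.min())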

  19. Beacons of Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2018-01-01

    The chapter discusses expectations and imaginations vis-à-vis the concert hall of the twenty-first century. It outlines some of the central historical implications of western culture’s haven for sounding music. Based on the author’s study of the Icelandic concert-house Harpa, the chapter considers...... how these implications, together with the prime mover’s visions, have been transformed as private investors and politicians took over. The chapter furthermore investigates the objectives regarding musical sound and the far-reaching demands concerning acoustics that modern concert halls are required...

  20. Sound & The Senses

    DEFF Research Database (Denmark)

    Schulze, Holger

    2012-01-01

    How are those sounds you hear right now technically generated and post-produced, how are they aesthetically conceptualized and how culturally dependent are they really? How is your ability to hear intertwined with all the other senses and their cultural, biographical and technological constructio...... over time? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Jonathan Sterne, AGF a.k.a Antye Greie, Jens Gerrit Papenburg & Holger Schulze.

  1. The effect of looming and receding sounds on the perceived in-depth orientation of depth-ambiguous biological motion figures.

    Directory of Open Access Journals (Sweden)

    Ben Schouten

    Full Text Available BACKGROUND: The focus in the research on biological motion perception traditionally has been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps. METHODOLOGY/PRINCIPAL FINDINGS: In Experiment 1 orthographic frontal/back projections of plws were presented either without sound or with sounds of which the intensity level was rising (looming), falling (receding) or stationary. Despite instructions to ignore the sounds and to only report the visually perceived in-depth orientation, plws accompanied with looming sounds were more often judged to be facing the viewer whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual level rather than at the decisional level, in Experiment 2 observers perceptually compared orthographic plws without sound or paired with either looming or receding sounds to plws without sound but with perspective cues making them objectively either facing towards or facing away from the viewer. Judging whether either an orthographic plw or a plw with looming (receding) perspective cues is visually most looming becomes harder (easier) when the orthographic plw is paired with looming sounds. CONCLUSIONS/SIGNIFICANCE: The present results suggest that looming and receding sounds alter the judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds

  2. Consort 1 sounding rocket flight

    Science.gov (United States)

    Wessling, Francis C.; Maybee, George W.

    1989-01-01

    This paper describes a payload of six experiments developed for a 7-min microgravity flight aboard a sounding rocket Consort 1, in order to investigate the effects of low gravity on certain material processes. The experiments in question were designed to test the effect of microgravity on the demixing of aqueous polymer two-phase systems, the electrodeposition process, the production of elastomer-modified epoxy resins, the foam formation process and the characteristics of foam, the material dispersion, and metal sintering. The apparatuses designed for these experiments are examined, and the rocket-payload integration and operations are discussed.

  3. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

    Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time series event generated by the heart's mechanical system. From the analysis of the heart sound S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician to diagnose the heart problem. The results show that most known heart sounds were successfully detected. There are some murmur cases where the detection failed. This can be improved by adding more heuristics, including setting some initial parameters such as the noise threshold accurately and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
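
    A deterministic detector of the kind described above can be approximated with band-pass filtering, an envelope, and peak picking constrained by timing heuristics. The Python sketch below follows that recipe; the frequency band, threshold and timing rules are illustrative assumptions rather than the authors' exact parameters, and the S1/S2 labelling uses only the simple fact that systole (S1 to S2) is shorter than diastole (S2 to the next S1).

        # Hedged sketch of a deterministic (no machine learning) S1/S2 detector:
        # band-pass, Hilbert envelope, peak picking with timing constraints.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert, find_peaks

        def detect_heart_sounds(pcg, fs):
            # Most S1/S2 energy lies roughly in the 25-150 Hz band (assumption).
            b, a = butter(4, [25 / (fs / 2), 150 / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, pcg)
            envelope = np.abs(hilbert(filtered))
            envelope /= envelope.max() + 1e-12
            # Peaks must exceed a noise threshold and be at least ~150 ms apart,
            # since S1 and S2 are never closer than the systolic interval.
            peaks, _ = find_peaks(envelope, height=0.3, distance=int(0.15 * fs))
            # Within a beat, the S1-S2 gap (systole) is shorter than the S2-S1 gap
            # (diastole), which gives a simple deterministic labelling rule.
            labels = []
            gaps = np.diff(peaks)
            for i, p in enumerate(peaks[:-1]):
                labels.append("S1" if gaps[i] < np.median(gaps) else "S2")
            return peaks, labels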

  4. Sound For Animation And Virtual Reality

    Science.gov (United States)

    Hahn, James K.; Docter, Pete; Foster, Scott H.; Mangini, Mark; Myers, Tom; Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)

    1995-01-01

    Sound is an integral part of the experience in computer animation and virtual reality. In this course, we will present some of the important technical issues in sound modeling, rendering, and synchronization as well as the "art" and business of sound that are being applied in animations, feature films, and virtual reality. The central theme is to bring leading researchers and practitioners from various disciplines to share their experiences in this interdisciplinary field. The course will give the participants an understanding of the problems and techniques involved in producing and synchronizing sounds, sound effects, dialogue, and music. The problem spans a number of domains including computer animation and virtual reality. Since sound has been an integral part of animations and films much longer than for computer-related domains, we have much to learn from traditional animation and film production. By bringing leading researchers and practitioners from a wide variety of disciplines, the course seeks to give the audience a rich mixture of experiences. It is expected that the audience will be able to apply what they have learned from this course in their research or production.

  5. Sound induced activity in voice sensitive cortex predicts voice memory ability

    Directory of Open Access Journals (Sweden)

    Rebecca eWatson

    2012-04-01

    Full Text Available The ‘temporal voice areas’ (TVAs; Belin et al., 2000) of the human brain show greater neuronal activity in response to human voices than to other categories of nonvocal sounds. However, a direct link between TVA activity and voice perception behaviour has not yet been established. Here we show that a functional magnetic resonance imaging (fMRI) measure of activity in the TVAs predicts individual performance at a separately administered voice memory test. This relation holds when general sound memory ability is taken into account. These findings provide the first evidence that the TVAs are specifically involved in voice cognition.

  6. Experimental and theoretical sound transmission. [reduction of interior noise in aircraft

    Science.gov (United States)

    Roskam, J.; Muirhead, V. U.; Smith, H. W.; Durenberger, D. W.

    1978-01-01

    The capabilities of the Kansas University Flight Research Center for investigating panel sound transmission as a step toward the reduction of interior noise in general aviation aircraft were discussed. Data obtained on panels with holes, on honeycomb panels, and on various panel treatments at normal incidence were documented. The design of equipment for panel transmission loss tests at nonnormal (slanted) sound incidence was described. A comprehensive theory-based prediction method was developed and shows good agreement with experimental observations in the stiffness-controlled region, the resonance-controlled region, and the mass-law region of panel vibration.

  7. Possibilities of spatial hearing testing in occupational medicine

    Directory of Open Access Journals (Sweden)

    Tomasz Przewoźny

    2016-08-01

    Full Text Available Dysfunctions of the organ of hearing are a significant limitation in the performance of occupations that require its full efficiency (vehicle driving, army, police, fire brigades, mining). Hearing impairment is associated with poorer understanding of speech and disturbed sound localization, which directly affects the worker's orientation in space and his/her assessment of the distance and location of other workers or, even more importantly, of dangerous machines. Testing sound localization abilities is not a standard procedure, even in highly specialized audiological examining rooms. It should be pointed out that the ability to localize sounds, even particularly loud ones, is not directly associated with the condition of the hearing organ, but is rather considered an auditory function of a higher level. Disturbances in sound localization are mainly associated with structural and functional disturbances of the central nervous system and occur also in patients with normal hearing when tested with standard methods. The article presents different theories explaining the phenomenon of sound localization, such as interaural differences in time, interaural differences in sound intensity and monaural spectrum shape, and the anatomical and physiological basis of these processes. It also describes methods of measuring disturbances in sound localization that are used in Poland and around the world, also by the author of this work. The author analyzed accessible reports on sound localization testing in occupational medicine and the possibilities of using such tests in various occupations requiring full fitness of the organ of hearing.

  8. Analysis and Synthesis of Musical Instrument Sounds

    Science.gov (United States)

    Beauchamp, James W.

    For synthesizing a wide variety of musical sounds, it is important to understand which acoustic properties of musical instrument sounds are related to specific perceptual features. Some properties are obvious: Amplitude and fundamental frequency easily control loudness and pitch. Other perceptual features are related to sound spectra and how they vary with time. For example, tonal "brightness" is strongly connected to the centroid or tilt of a spectrum. "Attack impact" (sometimes called "bite" or "attack sharpness") is strongly connected to spectral features during the first 20-100 ms of sound, as well as the rise time of the sound. Tonal "warmth" is connected to spectral features such as "incoherence" or "inharmonicity."
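
    Two of the acoustic properties mentioned above are easy to compute directly: the spectral centroid (a standard correlate of brightness) and the rise time of the amplitude envelope (a correlate of attack). The Python sketch below uses generic textbook definitions; window sizes and thresholds are assumptions for illustration.

        # Generic feature sketch: spectral centroid ("brightness") and rise time ("attack").
        import numpy as np

        def spectral_centroid(x, fs):
            spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
            freqs = np.fft.rfftfreq(len(x), 1 / fs)
            return (freqs * spec).sum() / (spec.sum() + 1e-12)   # Hz

        def rise_time(x, fs, lo=0.1, hi=0.9):
            # crude envelope: running maximum over 5 ms windows
            win = max(1, int(0.005 * fs))
            env = np.array([np.abs(x[i:i + win]).max() for i in range(0, len(x), win)])
            peak = env.max()
            t_lo = np.argmax(env >= lo * peak)
            t_hi = np.argmax(env >= hi * peak)
            return (t_hi - t_lo) * win / fs                       # seconds

        # A brighter tone (more high-harmonic energy) has a higher centroid.
        fs = 44100
        t = np.arange(0, 0.5, 1 / fs)
        dull = np.sin(2 * np.pi * 220 * t)
        bright = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 10))
        print(spectral_centroid(dull, fs), spectral_centroid(bright, fs))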

  9. Speech Abilities in Preschool Children with Speech Sound Disorder with and without Co-Occurring Language Impairment

    Science.gov (United States)

    Macrae, Toby; Tyler, Ann A.

    2014-01-01

    Purpose: The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. Method: In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different…

  10. Audio-visual interactions in product sound design

    NARCIS (Netherlands)

    Özcan, E.; Van Egmond, R.

    2010-01-01

    Consistent product experience requires congruity between product properties such as visual appearance and sound. Therefore, for designing appropriate product sounds by manipulating their spectral-temporal structure, product sounds should preferably not be considered in isolation but as an integral

  11. Impulsive sounds change European seabass swimming patterns: Influence of pulse repetition interval

    International Nuclear Information System (INIS)

    Neo, Y.Y.; Ufkes, E.; Kastelein, R.A.; Winter, H.V.; Cate, C. ten; Slabbekoorn, H.

    2015-01-01

    Highlights: • We exposed impulsive sounds of different repetition intervals to European seabass. • Immediate behavioural changes mirrored previous indoor & outdoor studies. • Repetition intervals influenced the impacts differentially but not the recovery. • Sound temporal patterns may be more important than some standard metrics. - Abstract: Seismic shootings and offshore pile-driving are regularly performed, emitting significant amounts of noise that may negatively affect fish behaviour. The pulse repetition interval (PRI) of these impulsive sounds may vary considerably and influence the behavioural impact and recovery. Here, we tested the effect of four PRIs (0.5–4.0 s) on European seabass swimming patterns in an outdoor basin. At the onset of the sound exposures, the fish swam faster and dived deeper in tighter shoals. PRI affected the immediate and delayed behavioural changes but not the recovery time. Our study highlights that (1) the behavioural changes of captive European seabass were consistent with previous indoor and outdoor studies; (2) PRI could influence behavioural impact differentially, which may have management implications; (3) some acoustic metrics, e.g. SELcum, may have limited predictive power to assess the strength of behavioural impacts of noise. Noise impact assessments need to consider the contribution of sound temporal structure

  12. Microflown based monopole sound sources for reciprocal measurements

    NARCIS (Netherlands)

    Bree, H.E. de; Basten, T.G.H.

    2008-01-01

    Monopole sound sources (i.e. omni-directional sound sources with a known volume velocity) are essential for reciprocal measurements used in vehicle interior panel noise contribution analysis. Until recently, these monopole sound sources used a sound pressure transducer as a reference sensor. A

  13. The production and perception of emotionally expressive walking sounds: similarities between musical performance and everyday motor activity.

    Directory of Open Access Journals (Sweden)

    Bruno L Giordano

    Full Text Available Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.

  14. Pacific and Atlantic herring produce burst pulse sounds.

    Science.gov (United States)

    Wilson, Ben; Batty, Robert S; Dill, Lawrence M

    2004-02-07

    The commercial importance of Pacific and Atlantic herring (Clupea pallasii and Clupea harengus) has ensured that much of their biology has received attention. However, their sound production remains poorly studied. We describe the sounds made by captive wild-caught herring. Pacific herring produce distinctive bursts of pulses, termed Fast Repetitive Tick (FRT) sounds. These trains of broadband pulses (1.7-22 kHz) lasted between 0.6 s and 7.6 s. Most were produced at night; feeding regime did not affect their frequency, and fish produced FRT sounds without direct access to the air. Digestive gas or gulped air transfer to the swim bladder, therefore, do not appear to be responsible for FRT sound generation. Atlantic herring also produce FRT sounds, and video analysis showed an association with bubble expulsion from the anal duct region (i.e. from the gut or swim bladder). To the best of the authors' knowledge, sound production by such means has not previously been described. The function(s) of these sounds are unknown, but as the per capita rates of sound production by fish at higher densities were greater, social mediation appears likely. These sounds may have consequences for our understanding of herring behaviour and the effects of noise pollution.

  15. The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users.

    Science.gov (United States)

    Jones, Heath G; Kan, Alan; Litovsky, Ruth Y

    2016-01-01

    This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.
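
    The core measurement behind this study is the broadband interaural level difference as a function of source azimuth, computed from the left- and right-ear signals obtained through each microphone placement. The Python sketch below shows that computation on synthetic ear signals; the azimuth-dependent gains stand in for measured transfer-function responses and are purely illustrative.

        # Hedged sketch: broadband interaural level differences (ILDs) per azimuth
        # from per-ear signals. The synthetic gains below mimic, very roughly, the
        # head-shadow effect; real values would come from measured transfer functions
        # for ITE, BTE or shoulder microphone placements.
        import numpy as np

        def ild_db(left, right):
            """Broadband ILD in dB (positive = louder at the right ear)."""
            rms_l = np.sqrt(np.mean(left ** 2))
            rms_r = np.sqrt(np.mean(right ** 2))
            return 20 * np.log10(rms_r / (rms_l + 1e-12) + 1e-12)

        rng = np.random.default_rng(0)
        ear_signals = {az: (rng.standard_normal(4800) * 10 ** (-az / 200),
                            rng.standard_normal(4800) * 10 ** (+az / 200))
                       for az in range(-90, 91, 15)}

        ilds = {az: ild_db(l, r) for az, (l, r) in ear_signals.items()}
        for az in sorted(ilds):
            print(f"{az:+4d} deg : {ilds[az]:+5.1f} dB")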

  16. 7 CFR 29.6036 - Sound.

    Science.gov (United States)

    2010-01-01

    7 CFR 29.6036 (Department of Agriculture, Agricultural Marketing Service, Standards, Inspections, Marketing... Inspection, Standards Definitions): Sound. Free of damage. (See Rule 4.)

  17. A terrified-sound stress induced proteomic changes in adult male rat hippocampus.

    Science.gov (United States)

    Yang, Juan; Hu, Lili; Wu, Qiuhua; Liu, Liying; Zhao, Lingyu; Zhao, Xiaoge; Song, Tusheng; Huang, Chen

    2014-04-10

    In this study, we investigated the biochemical mechanisms in the adult rat hippocampus underlying the relationship between a terrified-sound induced psychological stress and spatial learning. Adult male rats were exposed to a terrified-sound stress, and the Morris water maze (MWM) has been used to evaluate changes in spatial learning and memory. The protein expression profile of the hippocampus was examined using two-dimensional gel electrophoresis (2DE), matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, and bioinformatics analysis. The data from the MWM tests suggested that a terrified-sound stress improved spatial learning. The proteomic analysis revealed that the expression of 52 proteins was down-regulated, while that of 35 proteins were up-regulated, in the hippocampus of the stressed rats. We identified and validated six of the most significant differentially expressed proteins that demonstrated the greatest stress-induced changes. Our study provides the first evidence that a terrified-sound stress improves spatial learning in rats, and that the enhanced spatial learning coincides with changes in protein expression in rat hippocampus. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Misconceptions About Sound Among Engineering Students

    Science.gov (United States)

    Pejuan, Arcadi; Bohigas, Xavier; Jaén, Xavier; Periago, Cristina

    2012-12-01

    Our first objective was to detect misconceptions about the microscopic nature of sound among senior university students enrolled in different engineering programmes (from chemistry to telecommunications). We sought to determine how these misconceptions are expressed (qualitative aspect) and, only very secondarily, to gain a general idea of the extent to which they are held (quantitative aspect). Our second objective was to explore other misconceptions about wave aspects of sound. We have also considered the degree of consistency in the model of sound used by each student. Forty students answered a questionnaire including open-ended questions. Based on their free, spontaneous answers, the main results were as follows: a large majority of students answered most of the questions regarding the microscopic model of sound according to the scientifically accepted model; however, only a small number answered consistently. The main model misconception found was the notion that sound is propagated through the travelling of air particles, even in solids. Misconceptions and mental-model inconsistencies tended to depend on the engineering programme in which the student was enrolled. However, students in general were inconsistent also in applying their model of sound to individual sound properties. The main conclusion is that our students have not truly internalised the scientifically accepted model that they have allegedly learnt. This implies a need to design learning activities that take these findings into account in order to be truly efficient.

  19. The Reduction of Vertical Interchannel Crosstalk: The Analysis of Localisation Thresholds for Natural Sound Sources

    Directory of Open Access Journals (Sweden)

    Rory Wallis

    2017-03-01

    Full Text Available In subjective listening tests, natural sound sources were presented to subjects as vertically-oriented phantom images from two layers of loudspeakers, ‘height’ and ‘main’. Subjects were required to reduce the amplitude of the height layer until the position of the resultant sound source matched that of the same source presented from the main layer only (the localisation threshold. Delays of 0, 1 and 10 ms were applied to the height layer with respect to the main, with vertical stereophonic and quadraphonic conditions being tested. The results of the study showed that the localisation thresholds obtained were not significantly affected by sound source or presentation method. Instead, the only variable whose effect was significant was interchannel time difference (ICTD. For ICTD of 0 ms, the median threshold was −9.5 dB, which was significantly lower than the −7 dB found for both 1 and 10 ms. The results of the study have implications both for the recording of sound sources for three-dimensional (3D audio reproduction formats and also for the rendering of 3D images.

  20. Effect of gap detection threshold on consistency of speech in children with speech sound disorder.

    Science.gov (United States)

    Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz

    2017-02-01

The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age who were categorized into three groups of typical speech, consistent speech disorder (CSD) and inconsistent speech disorder (ISD). The phonetic gap detection threshold test used for this study is a validated test comprising six syllables with inter-stimulus intervals between 20 and 300 ms. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and between the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech is a representation of inconsistency in auditory perception, which is caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Experimental analysis of considering the sound pressure distribution pattern at the ear canal entrance as an unrevealed head-related localization clue

    Institute of Scientific and Technical Information of China (English)

    TONG Xin; QI Na; MENG Zihou

    2018-01-01

By analyzing the differences between binaural recording and real listening, it was deduced that there are some unrevealed auditory localization clues, and that the sound pressure distribution pattern at the entrance of the ear canal is probably one of them. The existence of these unrevealed auditory localization clues was demonstrated through a listening test by reductio ad absurdum, and the effective frequency bands of the unrevealed clues were identified and summarized. The results of finite-element-based simulations showed that the pressure distribution at the entrance of the ear canal is non-uniform and that its pattern is related to the direction of the sound source. It was thus shown that the sound pressure distribution pattern at the entrance of the ear canal carries sound source direction information and can be used as an unrevealed localization clue. The frequency bands in which the sound pressure distribution patterns differed significantly between front and back source directions roughly matched the effective frequency bands of the unrevealed localization clues obtained from the listening tests. To some extent, this supports the hypothesis that the sound pressure distribution pattern could be a kind of unrevealed auditory localization clue.

  2. Lung sound intensity in patients with emphysema and in normal subjects at standardised airflows.

    Science.gov (United States)

    Schreur, H J; Sterk, P J; Vanderschoot, J; van Klink, H C; van Vollenhoven, E; Dijkman, J H

    1992-01-01

BACKGROUND: A common auscultatory finding in pulmonary emphysema is a reduction of lung sounds. This might be due to a reduction in the generation of sounds due to the accompanying airflow limitation or to poor transmission of sounds due to destruction of parenchyma. Lung sound intensity was investigated in normal and emphysematous subjects in relation to airflow. METHODS: Eight normal men (45-63 years, FEV1 79-126% predicted) and nine men with severe emphysema (50-70 years, FEV1 14-63% predicted) participated in the study. Emphysema was diagnosed according to pulmonary history, results of lung function tests, and radiographic criteria. All subjects underwent phonopneumography during standardised breathing manoeuvres between 0.5 and 2 l below total lung capacity, with inspiratory and expiratory target airflows of 2 and 1 l/s respectively, during 50 seconds. The synchronous measurements included airflow at the mouth and lung volume changes, and lung sounds at four locations on the right chest wall. For each microphone, airflow-dependent power spectra were computed by using fast Fourier transformation. Lung sound intensity was expressed as log power (in dB) at 200 Hz at inspiratory flow rates of 1 and 2 l/s and at an expiratory flow rate of 1 l/s. RESULTS: Lung sound intensity was well repeatable on two separate days, the intraclass correlation coefficient ranging from 0.77 to 0.94 between the four microphones. The intensity was strongly influenced by microphone location and airflow. There was, however, no significant difference in lung sound intensity at any flow rate between the normal and the emphysema group. CONCLUSION: Airflow-standardised lung sound intensity does not differ between normal and emphysematous subjects. This suggests that the auscultatory finding of diminished breath sounds during the regular physical examination in patients with emphysema is due predominantly to airflow limitation. PMID:1440459
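A minimal sketch of the airflow-standardised intensity measure described above: a Welch power spectrum of the chest-wall microphone signal with the log power read off at the bin nearest 200 Hz. The window settings are assumptions, not the study's values.

```python
import numpy as np
from scipy.signal import welch

def log_power_at(signal, fs, target_hz=200.0):
    """10*log10 of the power spectral density at the bin nearest target_hz.

    `signal`: chest-wall microphone samples; `fs`: sampling rate in Hz.
    The window length is an illustrative choice, not the study's setting.
    """
    freqs, psd = welch(signal, fs=fs, nperseg=2048)
    idx = np.argmin(np.abs(freqs - target_hz))
    return 10.0 * np.log10(psd[idx])
```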

  3. Infra-sound cancellation and mitigation in wind turbines

    Science.gov (United States)

    Boretti, Albert; Ordys, Andrew; Al Zubaidy, Sarim

    2018-03-01

The infra-sound spectra recorded inside homes located even several kilometres away from wind turbine installations are characterized by large pressure fluctuations in the low frequency range. There is a significant body of literature suggesting that inaudible low-frequency sounds are sensed by humans and affect wellbeing through different mechanisms. These mechanisms include amplitude modulation of heard sounds, stimulation of subconscious pathways, endolymphatic hydrops, and possibly the potentiation of noise-induced hearing loss. We suggest the study of active infra-sound cancellation and mitigation to address these low-frequency noise issues. Loudspeakers generate pressure wave components of the same amplitude and frequency as the recorded infra-sound but of opposite phase. They also produce pressure wave components within the audible range that reduce the perception of the residual infra-sound.
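Antiphase cancellation of this kind is usually realised with an adaptive filter. The following is a minimal sketch assuming a single reference microphone and ignoring the acoustic secondary path (a practical controller would use the filtered-x LMS variant); all names and parameters are illustrative, not from the paper.

```python
import numpy as np

def lms_canceller(reference, n_taps=64, mu=1e-4):
    """Toy single-channel LMS canceller.

    Adapts an FIR filter so that the loudspeaker output subtracts the measured
    infra-sound at the error sensor; the acoustic secondary path is ignored.
    """
    w = np.zeros(n_taps)          # adaptive filter weights
    buf = np.zeros(n_taps)        # reference signal history
    residual = np.empty_like(reference, dtype=float)
    for n, x in enumerate(reference):
        buf = np.roll(buf, 1)
        buf[0] = x
        y = w @ buf               # loudspeaker drive (antiphase estimate)
        e = x - y                 # residual pressure at the sensor
        w += mu * e * buf         # LMS weight update
        residual[n] = e
    return residual, w
```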

  4. Neuroanatomic organization of sound memory in humans.

    Science.gov (United States)

    Kraut, Michael A; Pitcock, Jeffery A; Calhoun, Vince; Li, Juan; Freeman, Thomas; Hart, John

    2006-11-01

The neural interface between sensory perception and memory is a central issue in neuroscience, particularly initial memory organization following perceptual analyses. We used functional magnetic resonance imaging to identify anatomic regions extracting initial auditory semantic memory information related to environmental sounds. Two distinct anatomic foci were detected in the right superior temporal gyrus when subjects identified sounds representing either animals or threatening items. Threatening animal stimuli elicited signal changes in both foci, suggesting a distributed neural representation. Our results demonstrate both category- and feature-specific responses to nonverbal sounds in early stages of extracting semantic memory information from these sounds. This organization allows these category-feature detection nodes to extract early semantic memory information for efficient processing of transient sound stimuli. Neural regions selective for threatening sounds are similar to those of nonhuman primates, demonstrating that semantic memory organization for basic biological/survival primitives is present across species.

  5. Sound Equipment Fabrication and Values in Nigerian Theatre ...

    African Journals Online (AJOL)

The main point of this paper is to discover ways of fabricating sound and sound-effects equipment for theatrical productions in Nigeria, which has become essential since most educational theatres cannot afford Western sound and sound-effects equipment. Even when such equipment is available, it is old fashioned compared to the ...

  6. Thinking The City Through Sound

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2011-01-01

In Acoustic Territories: Sound Culture and Everyday Life, Brandon LaBelle sets out to chart an urban topology through sound. Working his way through six acoustic territories (underground, home, sidewalk, street, shopping mall and sky/radio), LaBelle investigates tensions and potentials inherent in mo...

  7. Basic semantics of product sounds

    NARCIS (Netherlands)

    Özcan Vieira, E.; Van Egmond, R.

    2012-01-01

Product experience is a result of sensory and semantic experiences with product properties. In this paper, we focus on the semantic attributes of product sounds and explore the basic components of product sound related semantics using a semantic differential paradigm and factor analysis. With two

  8. Fourth sound of holographic superfluids

    International Nuclear Information System (INIS)

    Yarom, Amos

    2009-01-01

We compute fourth sound for superfluids dual to a charged scalar and a gauge field in an AdS4 background. For holographic superfluids with condensates that have a large scaling dimension (greater than approximately two), we find that fourth sound approaches first sound at low temperatures. For condensates that have a small scaling dimension, it exhibits non-conformal behavior at low temperatures, which may be tied to the non-conformal behavior of the order parameter of the superfluid. We show that by introducing an appropriate scalar potential, conformal invariance can be enforced at low temperatures.

  9. Combined multibeam and LIDAR bathymetry data from eastern Long Island Sound and westernmost Block Island Sound-A regional perspective

    Science.gov (United States)

    Poppe, L.J.; Danforth, W.W.; McMullen, K.Y.; Parker, Castle E.; Doran, E.F.

    2011-01-01

    Detailed bathymetric maps of the sea floor in Long Island Sound are of great interest to the Connecticut and New York research and management communities because of this estuary's ecological, recreational, and commercial importance. The completed, geologically interpreted digital terrain models (DTMs), ranging in area from 12 to 293 square kilometers, provide important benthic environmental information, yet many applications require a geographically broader perspective. For example, individual surveys are of limited use for the planning and construction of cross-sound infrastructure, such as cables and pipelines, or for the testing of regional circulation models. To address this need, we integrated 12 multibeam and 2 LIDAR (Light Detection and Ranging) contiguous bathymetric DTMs, produced by the National Oceanic and Atmospheric Administration during charting operations, into one dataset that covers much of eastern Long Island Sound and extends into westernmost Block Island Sound. The new dataset is adjusted to mean lower low water, is gridded to 4-meter resolution, and is provided in UTM Zone 18 NAD83 and geographic WGS84 projections. This resolution is adequate for sea floor-feature and process interpretation but is small enough to be queried and manipulated with standard Geographic Information System programs and to allow for future growth. Natural features visible in the grid include exposed bedrock outcrops, boulder lag deposits of submerged moraines, sand-wave fields, and scour depressions that reflect the strength of the oscillating and asymmetric tidal currents. Bedform asymmetry allows interpretations of net sediment transport. Anthropogenic artifacts visible in the bathymetric data include a dredged channel, shipwrecks, dredge spoils, mooring anchors, prop-scour depressions, buried cables, and bridge footings. Together the merged data reveal a larger, more continuous perspective of bathymetric topography than previously available, providing a fundamental

  10. Sound Absorption Properties Of Single-Hole Hollow Polyester Fiber Reinforced Hydrogenated Carboxyl Nitrile Rubber Composites

    Directory of Open Access Journals (Sweden)

    Jie Hong

    2017-09-01

    Full Text Available A series of single-hole hollow polyester fiber (SHHPF reinforced hydrogenated carboxyl nitrile rubber (HXNBR composites were fabricated. In this study, the sound absorption property of the HXNBR/SHHPF composite was tested in an impedance tube, the composite morphology was characterized by scanning electron microscope (SEM, and the tensile mechanical property was measured by strength tester. The results demonstrated that a remarkable change in sound absorption can be observed by increasing the SHHPF content from 0% to 40%. In the composite with 40% SHHPF in 1 mm thickness, the sound absorption coefficient reached 0.671 at 2,500 Hz; the effective bandwidth was 1,800-2,500 Hz for sound absorption coefficient larger than 0.2. But the sound absorption property of the composite deteriorated when the SHHPF content increased to 50% in 1 mm thickness. While with 20% SHHPF proportion, the sound absorption property was improved by increasing the thickness of composites from 1 to 5 mm. Compared with the pure HXNBR of the same thickness, the tensile mechanical property of the composite improved significantly by increasing the SHHPF proportion. As a lightweight composite with excellent sound absorption property, the HXNBR/SHHPF composite has potential practical application value in the fields of engineering.

  11. Cross-Modal Associations between Sounds and Drink Tastes/Textures: A Study with Spontaneous Production of Sound-Symbolic Words.

    Science.gov (United States)

    Sakamoto, Maki; Watanabe, Junji

    2016-03-01

    Many languages have a word class whose speech sounds are linked to sensory experiences. Several recent studies have demonstrated cross-modal associations (or correspondences) between sounds and gustatory sensations by asking participants to match predefined sound-symbolic words (e.g., "maluma/takete") with the taste/texture of foods. Here, we further explore cross-modal associations using the spontaneous production of words and semantic ratings of sensations. In the experiment, after drinking liquids, participants were asked to express their taste/texture using Japanese sound-symbolic words, and at the same time, to evaluate it in terms of criteria expressed by adjectives. Because the Japanese language has a large vocabulary of sound-symbolic words, and Japanese people frequently use them to describe taste/texture, analyzing a variety of Japanese sound-symbolic words spontaneously produced to express taste/textures might enable us to explore the mechanism of taste/texture categorization. A hierarchical cluster analysis based on the relationship between linguistic sounds and taste/texture evaluations revealed the structure of sensation categories. The results indicate that an emotional evaluation like pleasant/unpleasant is the primary cluster in gustation. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. A Fast Algorithm of Cartographic Sounding Selection

    Institute of Scientific and Technical Information of China (English)

    SUI Haigang; HUA Li; ZHAO Haitao; ZHANG Yongli

    2005-01-01

An effective strategy and framework that integrate the automated and manual processes for fast cartographic sounding selection are presented. Important submarine topographic features are extracted for the selection of important soundings, and an improved "influence circle" algorithm is introduced for sounding selection. For automatic configuration of the sounding distribution pattern, a special algorithm considering multiple factors is employed. A semi-automatic method for resolving ambiguous conflicts is described. On the basis of these algorithms and strategies, a system named HGIS for fast cartographic sounding selection was developed and applied at the Chinese Marine Safety Administration Bureau (CMSAB). Application experiments show that the system is effective and reliable. Finally, some conclusions and directions for future work are given.
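For orientation, a sketch of the classical "influence circle" idea (not the improved algorithm of the paper) follows; the shallowest-first priority rule and the fixed radius are assumptions.

```python
import math

def select_soundings(soundings, radius):
    """Greedy 'influence circle' selection.

    `soundings`: iterable of (x, y, depth) tuples; `radius`: influence-circle
    radius in the same horizontal units. Shallower soundings (more critical for
    navigation) are considered first; any sounding falling inside the circle of
    an already selected one is suppressed.
    """
    selected = []
    for x, y, depth in sorted(soundings, key=lambda s: s[2]):
        if all(math.hypot(x - sx, y - sy) > radius for sx, sy, _ in selected):
            selected.append((x, y, depth))
    return selected
```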

  13. Effects of Active and Passive Hearing Protection Devices on Sound Source Localization, Speech Recognition, and Tone Detection.

    Directory of Open Access Journals (Sweden)

    Andrew D Brown

Full Text Available Hearing protection devices (HPDs) such as earplugs offer to mitigate noise exposure and reduce the incidence of hearing loss among persons frequently exposed to intense sound. However, distortions of spatial acoustic information and reduced audibility of low-intensity sounds caused by many existing HPDs can make their use untenable in high-risk (e.g., military or law enforcement) environments where auditory situational awareness is imperative. Here we assessed (1) sound source localization accuracy using a head-turning paradigm, (2) speech-in-noise recognition using a modified version of the QuickSIN test, and (3) tone detection thresholds using a two-alternative forced-choice task. Subjects were 10 young normal-hearing males. Four different HPDs were tested (two active, two passive), including two new and previously untested devices. Relative to unoccluded (control) performance, all tested HPDs significantly degraded performance across tasks, although one active HPD slightly improved high-frequency tone detection thresholds and did not degrade speech recognition. Behavioral data were examined with respect to head-related transfer functions measured using a binaural manikin with and without tested HPDs in place. Data reinforce previous reports that HPDs significantly compromise a variety of auditory perceptual facilities, particularly sound localization due to distortions of high-frequency spectral cues that are important for the avoidance of front-back confusions.

  14. Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device.

    Science.gov (United States)

    Hamilton-Fletcher, Giles; Wright, Thomas D; Ward, Jamie

    Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have avoided colour and when they do encode colour, have assigned sounds to colour in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users who had their device either coded in line with or opposite to sound–colour correspondences. Improved colour memory and reduced colour-errors were made by users who had the correspondence-based mappings. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that also featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.

  15. The effect of age on involuntary capture of attention by irrelevant sounds: a test of the frontal hypothesis of aging.

    Science.gov (United States)

    Andrés, Pilar; Parmentier, Fabrice B R; Escera, Carles

    2006-01-01

    The aim of this study was to examine the effects of aging on the involuntary capture of attention by irrelevant sounds (distraction) and the use of these sounds as warning cues (alertness) in an oddball paradigm. We compared the performance of older and younger participants on a well-characterized auditory-visual distraction task. Based on the dissociations observed in aging between attentional processes sustained by the anterior and posterior attentional networks, our prediction was that distraction by irrelevant novel sounds would be stronger in older adults than in young adults while both groups would be equally able to use sound as an alert to prepare for upcoming stimuli. The results confirmed both predictions: there was a larger distraction effect in the older participants, but the alert effect was equivalent in both groups. These results give support to the frontal hypothesis of aging [Raz, N. (2000). Aging of the brain and its impact on cognitive performance: integration of structural and functional finding. In F.I.M. Craik & T.A. Salthouse (Eds.) Handbook of aging and cognition (pp. 1-90). Mahwah, NJ: Erlbaum; West, R. (1996). An application of prefrontal cortex function theory to cognitive aging. Psychological Bulletin, 120, 272-292].

  16. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

The heart sound is one of the most important physiological signals. However, its acquisition can be disturbed by many external factors. Because the heart sound is a weak electrical signal, even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and hence to misdiagnosis. Removing the noise mixed with the heart sound is therefore essential. This paper presents a systematic analysis of heart sound denoising based on MATLAB. The noisy heart sound signals are first transformed into the wavelet domain and decomposed at multiple levels using the wavelet transform. Soft thresholding is then applied to the detail coefficients to suppress noise, which significantly improves the denoising result. The denoised signals are recovered by stepwise reconstruction from the processed coefficients. Finally, 50 Hz power-line interference and 35 Hz mechanical and electrical interference are removed using notch filters.
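As an illustration of the processing chain described in this record, here is a minimal sketch in Python (using PyWavelets and SciPy rather than MATLAB): multi-level wavelet decomposition, soft thresholding of the detail coefficients, reconstruction, and notch filtering at 50 Hz and 35 Hz. The wavelet choice and the universal-threshold rule are assumptions; the original study does not state them.

```python
import numpy as np
import pywt
from scipy.signal import iirnotch, filtfilt

def denoise_heart_sound(x, fs, wavelet="db6", level=5):
    """Wavelet soft-threshold denoising followed by 50 Hz and 35 Hz notch filters.

    The universal threshold estimated from the finest detail level is an
    assumption; the original study does not state its threshold rule.
    """
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise level estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))           # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    y = pywt.waverec(coeffs, wavelet)[: len(x)]
    for f0 in (50.0, 35.0):                               # mains hum and mechanical interference
        b, a = iirnotch(f0, Q=30.0, fs=fs)
        y = filtfilt(b, a, y)
    return y
```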

  17. PULSAR.MAKING VISIBLE THE SOUND OF STARS

    OpenAIRE

    Lega, Ferran

    2015-01-01

[EN] Pulsar, Making Visible the Sound of Stars is a communication based on a sound installation conceived as a site-specific project to show the hidden ability of sound to generate images and patterns in matter, using the acoustic science of cymatics. The objective of this communication is to show how, through abstract and intangible sounds from the celestial orbs of the cosmos (radio waves generated by electromagnetic pulses from the rotation of neutron stars), we can create ar...

  18. Tipping point analysis of a large ocean ambient sound record

    Science.gov (United States)

    Livina, Valerie N.; Harris, Peter; Brower, Albert; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2017-04-01

We study a long (2003-2015) high-resolution (250 Hz) sound pressure record provided by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO) from the hydro-acoustic station Cape Leeuwin (Australia). We transform the hydrophone waveforms into five bands of 10-min-average sound pressure levels (including the third-octave band) and apply tipping point analysis techniques [1-3]. We report the results of the analysis of fluctuations and trends in the data and discuss the Big Data challenges in processing this record, including handling data segments of large size and possible HPC solutions. References: [1] Livina et al, GRL 2007, [2] Livina et al, Climate of the Past 2010, [3] Livina et al, Chaos 2015.
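A minimal sketch of the waveform-to-level transformation mentioned above, assuming a calibrated pressure record and the conventional underwater reference pressure of 1 µPa; band filtering (e.g., into third-octave bands) would precede this step.

```python
import numpy as np

def ten_minute_spl(pressure_pa, fs, p_ref=1e-6):
    """10-min-average sound pressure levels (dB re 1 uPa) from a hydrophone record.

    `pressure_pa`: calibrated pressure samples in pascals; `fs`: sample rate in Hz.
    """
    block = int(600 * fs)                        # samples per 10-minute block
    n_blocks = len(pressure_pa) // block
    blocks = np.asarray(pressure_pa[: n_blocks * block]).reshape(n_blocks, block)
    p_rms = np.sqrt(np.mean(blocks ** 2, axis=1))
    return 20.0 * np.log10(p_rms / p_ref)
```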

  19. 7 CFR 29.2298 - Sound.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sound. 29.2298 Section 29.2298 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Official Standard Grades for Virginia Fire-Cured Tobacco (u.s. Type 21) § 29.2298 Sound...

  20. Cognitive Control of Involuntary Distraction by Deviant Sounds

    Science.gov (United States)

    Parmentier, Fabrice B. R.; Hebrero, Maria

    2013-01-01

    It is well established that a task-irrelevant sound (deviant sound) departing from an otherwise repetitive sequence of sounds (standard sounds) elicits an involuntary capture of attention and orienting response toward the deviant stimulus, resulting in the lengthening of response times in an ongoing task. Some have argued that this type of…

  1. Eliciting Sound Memories.

    Science.gov (United States)

    Harris, Anna

    2015-11-01

    Sensory experiences are often considered triggers of memory, most famously a little French cake dipped in lime blossom tea. Sense memory can also be evoked in public history research through techniques of elicitation. In this article I reflect on different social science methods for eliciting sound memories such as the use of sonic prompts, emplaced interviewing, and sound walks. I include examples from my research on medical listening. The article considers the relevance of this work for the conduct of oral histories, arguing that such methods "break the frame," allowing room for collaborative research connections and insights into the otherwise unarticulatable.

  2. Musical Sounds, Motor Resonance, and Detectable Agency

    Directory of Open Access Journals (Sweden)

    Jacques Launay

    2015-09-01

    Full Text Available This paper discusses the paradox that while human music making evolved and spread in an environment where it could only occur in groups, it is now often apparently an enjoyable asocial phenomenon. Here I argue that music is, by definition, sound that we believe has been in some way organized by a human agent, meaning that listening to any musical sounds can be a social experience. There are a number of distinct mechanisms by which we might associate musical sound with agency. While some of these mechanisms involve learning motor associations with that sound, it is also possible to have a more direct relationship from musical sound to agency, and the relative importance of these potentially independent mechanisms should be further explored. Overall, I conclude that the apparent paradox of solipsistic musical engagement is in fact unproblematic, because the way that we perceive and experience musical sounds is inherently social.

  3. Numerical value biases sound localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R

    2017-12-08

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1-9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.

  4. Hearing abilities and sound reception of broadband sounds in an adult Risso's dolphin (Grampus griseus).

    Science.gov (United States)

    Mooney, T Aran; Yang, Wei-Cheng; Yu, Hsin-Yi; Ketten, Darlene R; Jen, I-Fan

    2015-08-01

    While odontocetes do not have an external pinna that guides sound to the middle ear, they are considered to receive sound through specialized regions of the head and lower jaw. Yet odontocetes differ in the shape of the lower jaw suggesting that hearing pathways may vary between species, potentially influencing hearing directionality and noise impacts. This work measured the audiogram and received sensitivity of a Risso's dolphin (Grampus griseus) in an effort to comparatively examine how this species receives sound. Jaw hearing thresholds were lowest (most sensitive) at two locations along the anterior, midline region of the lower jaw (the lower jaw tip and anterior part of the throat). Responses were similarly low along a more posterior region of the lower mandible, considered the area of best hearing in bottlenose dolphins. Left- and right-side differences were also noted suggesting possible left-right asymmetries in sound reception or differences in ear sensitivities. The results indicate best hearing pathways may vary between the Risso's dolphin and other odontocetes measured. This animal received sound well, supporting a proposed throat pathway. For Risso's dolphins in particular, good ventral hearing would support their acoustic ecology by facilitating echo-detection from their proposed downward oriented echolocation beam.

  5. Visualizing Sound Directivity via Smartphone Sensors

    OpenAIRE

    Hawley, Scott H.; McClain Jr, Robert E.

    2017-01-01

    We present a fast, simple method for automated data acquisition and visualization of sound directivity, made convenient and accessible via a smartphone app, "Polar Pattern Plotter." The app synchronizes measurements of sound volume with the phone's angular orientation obtained from either compass, gyroscope or accelerometer sensors and produces a graph and exportable data file. It is generalizable to various sound sources and receivers via the use of an input-jack-adaptor to supplant the smar...
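For illustration, a minimal sketch of how paired angle/level measurements like those produced by the app could be rendered as a polar directivity plot; this is not the app's own code, and the function name is hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_directivity(angles_deg, levels_db):
    """Plot measured level (dB) against source angle on a polar axis."""
    theta = np.radians(np.asarray(angles_deg))
    ax = plt.subplot(projection="polar")
    ax.plot(theta, np.asarray(levels_db), marker="o")
    ax.set_theta_zero_location("N")   # 0 degrees at the top, compass-style
    ax.set_title("Sound directivity pattern")
    plt.show()
```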

  6. Effects of sounds of locomotion on speech perception

    Directory of Open Access Journals (Sweden)

    Matz Larsson

    2015-01-01

Full Text Available Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in pace or walking out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal ("just follow conversation" or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.

  7. Sound transmission through lightweight double-leaf partitions: theoretical modelling

    Science.gov (United States)

    Wang, J.; Lu, T. J.; Woodhouse, J.; Langley, R. S.; Evans, J.

    2005-09-01

    This paper presents theoretical modelling of the sound transmission loss through double-leaf lightweight partitions stiffened with periodically placed studs. First, by assuming that the effect of the studs can be replaced with elastic springs uniformly distributed between the sheathing panels, a simple smeared model is established. Second, periodic structure theory is used to develop a more accurate model taking account of the discrete placing of the studs. Both models treat incident sound waves in the horizontal plane only, for simplicity. The predictions of the two models are compared, to reveal the physical mechanisms determining sound transmission. The smeared model predicts relatively simple behaviour, in which the only conspicuous features are associated with coincidence effects with the two types of structural wave allowed by the partition model, and internal resonances of the air between the panels. In the periodic model, many more features are evident, associated with the structure of pass- and stop-bands for structural waves in the partition. The models are used to explain the effects of incidence angle and of the various system parameters. The predictions are compared with existing test data for steel plates with wooden stiffeners, and good agreement is obtained.
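The smeared model replaces the studs by a uniformly distributed stiffness between the panels; a closely related textbook quantity is the mass-air-mass resonance of a double leaf, obtained by idealising the air gap as a spring of stiffness rho0*c^2/d per unit area. The sketch below illustrates that simplification only and is not the paper's periodic model.

```python
import math

def mass_air_mass_resonance(m1, m2, d, rho0=1.21, c=343.0):
    """Mass-air-mass resonance frequency (Hz) of a double-leaf partition.

    m1, m2: surface masses of the two leaves (kg/m^2); d: cavity depth (m).
    The air gap is idealised as a spring of stiffness rho0*c^2/d per unit area;
    stud stiffness and damping are ignored in this textbook estimate.
    """
    k_air = rho0 * c ** 2 / d
    return math.sqrt(k_air * (1.0 / m1 + 1.0 / m2)) / (2.0 * math.pi)

# Example: two ~10 kg/m^2 plasterboard leaves with a 100 mm cavity give roughly 85 Hz.
```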

  8. Loss and persistence of implicit memory for sound: evidence from auditory stream segregation context effects.

    Science.gov (United States)

    Snyder, Joel S; Weintraub, David M

    2013-07-01

    An important question is the extent to which declines in memory over time are due to passive loss or active interference from other stimuli. The purpose of the present study was to determine the extent to which implicit memory effects in the perceptual organization of sound sequences are subject to loss and interference. Toward this aim, we took advantage of two recently discovered context effects in the perceptual judgments of sound patterns, one that depends on stimulus features of previous sounds and one that depends on the previous perceptual organization of these sounds. The experiments measured how listeners' perceptual organization of a tone sequence (test) was influenced by the frequency separation, or the perceptual organization, of the two preceding sequences (context1 and context2). The results demonstrated clear evidence for loss of context effects over time but little evidence for interference. However, they also revealed that context effects can be surprisingly persistent. The robust effects of loss, followed by persistence, were similar for the two types of context effects. We discuss whether the same auditory memories might contain information about basic stimulus features of sounds (i.e., frequency separation), as well as the perceptual organization of these sounds.

  9. Sound insulation design of modular construction housing

    OpenAIRE

    Yates, D. J.; Hughes, Lawrence; Campbell, A.

    2007-01-01

    This paper provides an insight into the acoustic issues of modular housing using the Verbus System of construction. The paper briefly summarises the history of the development of Verbus modular housing and the acoustic design considerations of the process. Results are presented from two sound insulation tests conducted during the course of the project. The results are discussed in terms of compliance with Approved Document E1 and increased performance standards such as EcoHomes2.

  10. What's in a Name? Sound Symbolism and Gender in First Names.

    Directory of Open Access Journals (Sweden)

    David M Sidhu

Full Text Available Although the arbitrariness of language has been considered one of its defining features, studies have demonstrated that certain phonemes tend to be associated with certain kinds of meaning. A well-known example is the Bouba/Kiki effect, in which nonwords like bouba are associated with round shapes while nonwords like kiki are associated with sharp shapes. These sound symbolic associations have thus far been limited to nonwords. Here we tested whether or not the Bouba/Kiki effect extends to existing lexical stimuli; in particular, real first names. We found that the roundness/sharpness of the phonemes in first names impacted whether the names were associated with round or sharp shapes in the form of character silhouettes (Experiments 1a and 1b). We also observed an association between femaleness and round shapes, and maleness and sharp shapes. We next investigated whether this association would extend to the features of language and found the proportion of round-sounding phonemes was related to name gender (Analysis of Category Norms). Finally, we investigated whether sound symbolic associations for first names would be observed for other abstract properties; in particular, personality traits (Experiment 2). We found that adjectives previously judged to be either descriptive of a figuratively 'round' or a 'sharp' personality were associated with names containing either round- or sharp-sounding phonemes, respectively. These results demonstrate that sound symbolic associations extend to existing lexical stimuli, providing a new example of non-arbitrary mappings between form and meaning.

  11. Introduction to non-destructive testing of materials: part II

    International Nuclear Information System (INIS)

    Ahmed, M.; Ahmed, B.

    2001-01-01

Ultrasonic waves are mechanical vibrations that require a medium, which functions as a carrier. Ultrasonics are widely used in the non-destructive testing of materials, in which high-frequency sound waves are introduced into the material being inspected. If the frequency of the sound waves is within the range 10 to 20,000 Hz, the sound is audible, i.e. within the range of hearing; above 20,000 Hz, the sound waves are referred to as ultrasound or ultrasonics. Sound waves do not cause any permanent change in a material, although their transient presence is very noticeable. Energy transport through a sound wave is possible only when the constituent particles are connected to each other by elastic forces. Liquids and gases are also suitable media for the transmission of sound; in a vacuum no matter exists and thus no sound transmission is possible. At the end of this article the advantages and limitations of ultrasonic testing are also given. (A.B.)

  12. THE SOUND OF CINEMA: TECHNOLOGY AND CREATIVITY

    Directory of Open Access Journals (Sweden)

    Poznin Vitaly F.

    2017-12-01

Full Text Available Technology is a means of creating any product. In the on-screen arts, however, it is also one of the elements that create the artistic space of a film. Considering the main stages of the development of cinematography, this article explores the influence of sound-recording technology on the creation of the special artistic and physical space of film: the beginning of the use of sound in movies; the mastering of the artistic means of the audiovisual work; the expansion of the spatial characteristics of screen sound; and sound in modern cinema. Today, thanks to new technologies, the sound in cinema forms a specific quasi-realistic landscape, greatly enhancing the impact of the virtual screen images on the viewer.

  13. A taste for words and sounds: a case of lexical-gustatory and sound-gustatory synesthesia

    NARCIS (Netherlands)

    Colizoli, O.; Murre, J.M.J.; Rouw, R.

    2013-01-01

    Gustatory forms of synesthesia involve the automatic and consistent experience of tastes that are triggered by non-taste related inducers. We present a case of lexical-gustatory and sound-gustatory synesthesia within one individual, SC. Most words and a subset of non-linguistic sounds induce the

  14. DESIGN AND APPLICATION OF SENSOR FOR RECORDING SOUNDS OVER HUMAN EYE AND NOSE

    NARCIS (Netherlands)

    JOURNEE, HL; VANBRUGGEN, AC; VANDERMEER, JJ; DEJONGE, AB; MOOIJ, JJA

The recording of sounds over the orbit of the eye has been found to be useful in the detection of intracranial aneurysms. A hydrophone for auscultation over the eye has been developed and tested under controlled conditions. The tests consist of measurements over the eyes in three healthy

  15. The sound and the fury--bees hiss when expecting danger.

    Directory of Open Access Journals (Sweden)

    Henja-Niniane Wehmann

    Full Text Available Honey bees are important model systems for the investigation of learning and memory and for a better understanding of the neuronal basics of brain function. Honey bees also possess a rich repertoire of tones and sounds, from queen piping and quacking to worker hissing and buzzing. In this study, we tested whether the worker bees' sounds can be used as a measure of learning. We therefore conditioned honey bees aversively to odours in a walking arena and recorded both their sound production and their movement. Bees were presented with two odours, one of which was paired with an electric shock. Initially, the bees did not produce any sound upon odour presentation, but responded to the electric shock with a strong hissing response. After learning, many bees hissed at the presentation of the learned odour, while fewer bees hissed upon presentation of another odour. We also found that hissing and movement away from the conditioned odour are independent behaviours that can co-occur but do not necessarily do so. Our data suggest that hissing can be used as a readout for learning after olfactory conditioning, but that there are large individual differences between bees concerning their hissing reaction. The basis for this variability and the possible ecological relevance of the bees' hissing remain to be investigated.

  16. The Sound and the Fury—Bees Hiss when Expecting Danger

    Science.gov (United States)

    Galizia, C. Giovanni

    2015-01-01

    Honey bees are important model systems for the investigation of learning and memory and for a better understanding of the neuronal basics of brain function. Honey bees also possess a rich repertoire of tones and sounds, from queen piping and quacking to worker hissing and buzzing. In this study, we tested whether the worker bees’ sounds can be used as a measure of learning. We therefore conditioned honey bees aversively to odours in a walking arena and recorded both their sound production and their movement. Bees were presented with two odours, one of which was paired with an electric shock. Initially, the bees did not produce any sound upon odour presentation, but responded to the electric shock with a strong hissing response. After learning, many bees hissed at the presentation of the learned odour, while fewer bees hissed upon presentation of another odour. We also found that hissing and movement away from the conditioned odour are independent behaviours that can co-occur but do not necessarily do so. Our data suggest that hissing can be used as a readout for learning after olfactory conditioning, but that there are large individual differences between bees concerning their hissing reaction. The basis for this variability and the possible ecological relevance of the bees’ hissing remain to be investigated. PMID:25747702

  17. 7 CFR 29.2550 - Sound.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sound. 29.2550 Section 29.2550 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing...-Cured Tobacco (u.s. Types 22, 23, and Foreign Type 96) § 29.2550 Sound. Free of damage. [37 FR 13626...

  18. 7 CFR 29.3546 - Sound.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sound. 29.3546 Section 29.3546 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Type 95) § 29.3546 Sound. Free of damage. [30 FR 9207, July 23, 1965. Redesignated at 49 FR 16759, Apr...

  19. 7 CFR 29.1058 - Sound.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sound. 29.1058 Section 29.1058 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Type 92) § 29.1058 Sound. Free of damage. [42 FR 21092, Apr. 25, 1977. Redesignated at 47 FR 51721, Nov...

  20. 7 CFR 29.3056 - Sound.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sound. 29.3056 Section 29.3056 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Sound. Free of damage. [24 FR 8771, Oct. 29, 1959. Redesignated at 47 FR 51722, Nov. 17, 1982, and at 49...

  1. Classification of Normal Subjects and Pulmonary Function Disease Patients using Tracheal Respiratory Sound Detection System

    Energy Technology Data Exchange (ETDEWEB)

Im, Jae Joong; Yi, Young Ju; Jeon, Young Ju [Chonbuk National University (Korea)]

    2000-04-01

A new auscultation system for the detection of breath sounds from the trachea was developed in house. A small microphone (Panasonic pin microphone) was encapsulated in a housing for resonance, and hardware for sound detection was fabricated. Pulmonary function test results were compared with parameters extracted from the frequency spectrum of breath sounds obtained with the developed system. The results showed that the peak frequency and the relative ratio of integrated values between the low (80-400 Hz) and high (400-800 Hz) frequency ranges revealed significant differences. The developed system could be used to distinguish normal subjects from patients with pulmonary disease. (author). 13 refs., 9 figs.
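A minimal sketch of the two spectral features reported above (peak frequency and the ratio of integrated power in the 80-400 Hz and 400-800 Hz bands); the Welch window length is an illustrative assumption.

```python
import numpy as np
from scipy.signal import welch

def tracheal_sound_features(signal, fs):
    """Peak frequency (Hz) and low/high band power ratio of a tracheal breath sound.

    Low band: 80-400 Hz, high band: 400-800 Hz, as in the record above.
    """
    freqs, psd = welch(signal, fs=fs, nperseg=1024)
    peak_freq = freqs[np.argmax(psd)]
    df = freqs[1] - freqs[0]
    low_power = np.sum(psd[(freqs >= 80) & (freqs < 400)]) * df
    high_power = np.sum(psd[(freqs >= 400) & (freqs <= 800)]) * df
    return peak_freq, low_power / high_power
```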

  2. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

Presentation of project results from the Interreg research project Sound Settlements on developing sustainability in non-profit housing estates in København, Malmø, Helsingborg and Lund, together with European examples of best practice...

  3. Improving Sound Systems by Electrical Means

    OpenAIRE

    Schneider, Henrik; Andersen, Michael A. E.; Knott, Arnold

    2015-01-01

The availability and flexibility of audio services on various digital platforms have created a high demand for a large range of sound systems. The fundamental components of sound systems such as docking stations, sound bars and wireless mobile speakers consist of a power supply, amplifiers and transducers. For historical reasons the design of each of these components is commonly handled separately, which limits the full performance potential of such systems. To state some exa...

  4. Juvenile Pacific Salmon in Puget Sound

    National Research Council Canada - National Science Library

    Fresh, Kurt L

    2006-01-01

Puget Sound salmon (genus Oncorhynchus) spawn in freshwater and feed, grow and mature in marine waters. During their transition from freshwater to saltwater, juvenile salmon occupy nearshore ecosystems in Puget Sound...

  5. The low-frequency sound power measuring technique for an underwater source in a non-anechoic tank

    Science.gov (United States)

    Zhang, Yi-Ming; Tang, Rui; Li, Qi; Shang, Da-Jing

    2018-03-01

    In order to determine the radiated sound power of an underwater source below the Schroeder cut-off frequency in a non-anechoic tank, a low-frequency extension measuring technique is proposed. This technique is based on a unique relationship between the transmission characteristics of the enclosed field and those of the free field, which can be obtained as a correction term based on previous measurements of a known simple source. The radiated sound power of an unknown underwater source in the free field can thereby be obtained accurately from measurements in a non-anechoic tank. To verify the validity of the proposed technique, a mathematical model of the enclosed field is established using normal-mode theory, and the relationship between the transmission characteristics of the enclosed and free fields is obtained. The radiated sound power of an underwater transducer source is tested in a glass tank using the proposed low-frequency extension measuring technique. Compared with the free field, the radiated sound power level of the narrowband spectrum deviation is found to be less than 3 dB, and the 1/3 octave spectrum deviation is found to be less than 1 dB. The proposed testing technique can be used not only to extend the low-frequency applications of non-anechoic tanks, but also for measurement of radiated sound power from complicated sources in non-anechoic tanks.
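The bookkeeping behind the correction-term approach described above can be sketched as follows; the symbols and function names are assumptions about the formulation, not the paper's exact notation.

```python
import numpy as np

def tank_correction(ref_level_tank_db, ref_power_free_db):
    """Frequency-dependent correction C(f) = L_tank,ref(f) - L_W,free,ref(f)."""
    return np.asarray(ref_level_tank_db) - np.asarray(ref_power_free_db)

def free_field_power(unknown_level_tank_db, correction_db):
    """Estimated free-field sound power level of the unknown source."""
    return np.asarray(unknown_level_tank_db) - np.asarray(correction_db)

# Usage sketch, with arrays indexed by frequency band:
# C = tank_correction(L_ref_tank, L_W_ref_free)
# L_W_unknown = free_field_power(L_unknown_tank, C)
```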

  6. Mapping symbols to sounds: electrophysiological correlates of the impaired reading process in dyslexia

    Directory of Open Access Journals (Sweden)

Andreas Widmann

    2012-03-01

Full Text Available Dyslexic and control first grade school children were compared in a Symbol-to-Sound matching test based on a nonlinguistic audiovisual training which is known to have a remediating effect on dyslexia. Visual symbol patterns had to be matched with predicted sound patterns. Sounds incongruent with the corresponding visual symbol (thus not matching the prediction) elicited the N2b and P3a event-related potential (ERP) components relative to congruent sounds in control children. Their ERPs resembled the ERP effects previously reported for healthy adults with this paradigm. In dyslexic children, N2b onset latency was delayed and its amplitude significantly reduced over the left hemisphere, whereas P3a was absent. Moreover, N2b amplitudes significantly correlated with reading skills. ERPs to sound changes in a control condition were unaffected. In addition, correctly predicted sounds, that is, sounds that are congruent with the visual symbol, elicited an early induced auditory gamma band response (GBR) reflecting synchronization of brain activity in normal-reading children, as previously observed in healthy adults. However, dyslexic children showed no GBR. This indicates that visual symbolic and auditory sensory information are not integrated into a unitary audiovisual object representation in these children. Finally, incongruent sounds were followed by a later desynchronization of brain activity in the gamma band in both groups. This desynchronization was significantly larger in dyslexic children. Although both groups accomplished the task successfully, remarkable group differences in brain responses suggest that normal-reading children and dyslexic children recruit (partly) different brain mechanisms when solving the task. We propose that abnormal ERPs and GBRs in dyslexic readers indicate a deficit resulting in a widespread impairment in processing and integrating auditory and visual information and contributing to the reading impairment in dyslexia.

  7. The Early Years: Becoming Attuned to Sound

    Science.gov (United States)

    Ashbrook, Peggy

    2014-01-01

    Exploration of making and changing sounds is part of the first-grade performance expectation 1-PS4-1, "Plan and conduct investigations to provide evidence that vibrating materials can make sound and that sound can make materials vibrate" (NGSS Lead States 2013, p. 10; see Internet Resource). Early learning experiences build toward…

  8. A Lexical Analysis of Environmental Sound Categories

    Science.gov (United States)

    Houix, Olivier; Lemaitre, Guillaume; Misdariis, Nicolas; Susini, Patrick; Urdapilleta, Isabel

    2012-01-01

    In this article we report on listener categorization of meaningful environmental sounds. A starting point for this study was the phenomenological taxonomy proposed by Gaver (1993b). In the first experimental study, 15 participants classified 60 environmental sounds and indicated the properties shared by the sounds in each class. In a second…

  9. RASS sound speed profile (SSP) measurements for use in outdoor sound propagation models

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, S G [Physics Department, University of Auckland (New Zealand); Huenerbein, S v; Waddington, D [Research Institute for the Built and Human Environment, University of Salford (United Kingdom)], E-mail: s.vonhunerbein@salford.ac.uk

    2008-05-01

    The performance of outdoor sound propagation models depends to a great extent on meteorological input parameters. In an effort to improve speed and accuracy, model output synthetic sound speed profiles (SSP) are commonly used depending on meteorological classification schemes. In order to use SSP measured by RASS in outdoor sound propagation models, the complex profiles need to be simplified. In this paper we extend an investigation on the spatial and temporal characteristics of the meteorological data set required to yield adequate comparisons between models and field measurements, so that the models can be fairly judged. Vertical SSP from RASS, SODAR wind profiles as well as mast wind and temperature data from a flat terrain site and measured over a period of several months are used to evaluate applicability of the logarithmic approximation for a stability classification scheme proposed by the HARMONOISE working group.

  10. RASS sound speed profile (SSP) measurements for use in outdoor sound propagation models

    International Nuclear Information System (INIS)

    Bradley, S G; Huenerbein, S v; Waddington, D

    2008-01-01

    The performance of outdoor sound propagation models depends to a great extent on meteorological input parameters. In an effort to improve speed and accuracy, model output synthetic sound speed profiles (SSP) are commonly used depending on meteorological classification schemes. In order to use SSP measured by RASS in outdoor sound propagation models, the complex profiles need to be simplified. In this paper we extend an investigation on the spatial and temporal characteristics of the meteorological data set required to yield adequate comparisons between models and field measurements, so that the models can be fairly judged. Vertical SSP from RASS, SODAR wind profiles as well as mast wind and temperature data from a flat terrain site and measured over a period of several months are used to evaluate applicability of the logarithmic approximation for a stability classification scheme proposed by the HARMONOISE working group

  11. Songbirds and humans apply different strategies in a sound sequence discrimination task

    Directory of Open Access Journals (Sweden)

    Yoshimasa eSeki

    2013-07-01

    Full Text Available The abilities of animals and humans to extract rules from sound sequences have previously been compared using observation of spontaneous responses and conditioning techniques. However, the results were inconsistently interpreted across studies possibly due to methodological and/or species differences. Therefore, we examined the strategies for discrimination of sound sequences in Bengalese finches and humans using the same protocol. Birds were trained on a GO/NOGO task to discriminate between two categories of sound stimulus generated based on an AAB or ABB rule. The sound elements used were taken from a variety of male (M) and female (F) calls, such that the sequences could be represented as MMF and MFF. In test sessions, FFM and FMM sequences, which were never presented in the training sessions but conformed to the rule, were presented as probe stimuli. The results suggested two discriminative strategies were being applied: (1) memorizing sound patterns of either GO or NOGO stimuli and generating the appropriate responses for only those sounds; and (2) using the repeated element as a cue. There was no evidence that the birds successfully extracted the abstract rule (i.e., AAB and ABB); MMF-GO subjects did not produce a GO response for FFM and vice versa. Next we examined whether those strategies were also applicable for human participants on the same task. The results and questionnaires revealed that participants extracted the abstract rule, and most of them employed it to discriminate the sequences. This strategy was never observed in bird subjects, although some participants used strategies similar to the birds when responding to the probe stimuli. Our results showed that the human participants applied the abstract rule in the task even without instruction but Bengalese finches did not, thereby reconfirming that humans have to extract abstract rules from sound sequences that is distinct from non-human animals.

  12. Songbirds and humans apply different strategies in a sound sequence discrimination task.

    Science.gov (United States)

    Seki, Yoshimasa; Suzuki, Kenta; Osawa, Ayumi M; Okanoya, Kazuo

    2013-01-01

    The abilities of animals and humans to extract rules from sound sequences have previously been compared using observation of spontaneous responses and conditioning techniques. However, the results were inconsistently interpreted across studies possibly due to methodological and/or species differences. Therefore, we examined the strategies for discrimination of sound sequences in Bengalese finches and humans using the same protocol. Birds were trained on a GO/NOGO task to discriminate between two categories of sound stimulus generated based on an "AAB" or "ABB" rule. The sound elements used were taken from a variety of male (M) and female (F) calls, such that the sequences could be represented as MMF and MFF. In test sessions, FFM and FMM sequences, which were never presented in the training sessions but conformed to the rule, were presented as probe stimuli. The results suggested two discriminative strategies were being applied: (1) memorizing sound patterns of either GO or NOGO stimuli and generating the appropriate responses for only those sounds; and (2) using the repeated element as a cue. There was no evidence that the birds successfully extracted the abstract rule (i.e., AAB and ABB); MMF-GO subjects did not produce a GO response for FFM and vice versa. Next we examined whether those strategies were also applicable for human participants on the same task. The results and questionnaires revealed that participants extracted the abstract rule, and most of them employed it to discriminate the sequences. This strategy was never observed in bird subjects, although some participants used strategies similar to the birds when responding to the probe stimuli. Our results showed that the human participants applied the abstract rule in the task even without instruction but Bengalese finches did not, thereby reconfirming that humans have to extract abstract rules from sound sequences that is distinct from non-human animals.

  13. Variation in effectiveness of a cardiac auscultation training class with a cardiology patient simulator among heart sounds and murmurs.

    Science.gov (United States)

    Kagaya, Yutaka; Tabata, Masao; Arata, Yutaro; Kameoka, Junichi; Ishii, Seiichi

    2017-08-01

    Effectiveness of simulation-based education in cardiac auscultation training is controversial, and may vary among a variety of heart sounds and murmurs. We investigated whether a single auscultation training class using a cardiology patient simulator for medical students provides the competence required for clinical clerkship, and whether students' proficiency after the training differs among heart sounds and murmurs. A total of 324 fourth-year medical students (93-117/year for 3 years) were divided into groups of 6-8 students; each group participated in a three-hour training session using a cardiology patient simulator. After a mini-lecture and facilitated training, each student took two different tests. In the first test, they tried to identify three sounds of Category A (non-split, respiratory split, and abnormally wide split S2s) in random order, after being informed that they were from Category A. They then did the same with sounds of Category B (S3, S4, and S3+S4) and Category C (four heart murmurs). In the second test, they tried to identify only one from each of the three categories in random order without any category information. The overall accuracy rate declined from 80.4% in the first test to 62.0% in the second test, and proficiency differed among heart sounds and murmurs, as examined in this study of auscultation training. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  14. Study on The Effectiveness of Egg Tray and Coir Fibre as A Sound Absorber

    Science.gov (United States)

    Kaamin, Masiri; Farah Atiqah Ahmad, Nor; Ngadiman, Norhayati; Kadir, Aslila Abdul; Razali, Siti Nooraiin Mohd; Mokhtar, Mardiha; Sahat, Suhaila

    2018-03-01

    Sound or noise pollution has become one of the major issues for the community, especially those who live in urban areas, and it affects the activity of human life. This excessive noise is mainly caused by machines, traffic, motor vehicles and also any unwanted sounds coming from outside or even from inside a building, such as loud music. Therefore, the installation of a sound absorption panel is one way to reduce noise pollution inside a building. The selected material must be porous and hollow in order to absorb high-frequency sound. This study was conducted to evaluate the potential of egg tray and coir fibre as a sound absorption panel. Coir fibre has a good coefficient value, which makes it suitable as a sound absorption material that can replace traditional materials such as synthetic and wooden materials. The pyramid shape of the egg tray provides a large surface for uniform sound reflection. The study used a panel of size 1 m x 1 m with a thickness of 6 mm, consisting of an egg tray layer, a coir fibre layer and a fabric wrapping for aesthetic value. A room reverberation test was carried out to find the loss of reverberation time (RT). The results show that reverberation time readings were taken at low frequencies, from 125 Hz to 1600 Hz; within these frequencies, the panel shortened the reverberation time from 5.63 s to 3.60 s. Hence, from this study, it can be concluded that the selected materials have potential as a good sound absorption panel. A comparison is made with previous research that used egg tray and kapok as a sound absorption panel.

  15. Study on The Effectiveness of Egg Tray and Coir Fibre as A Sound Absorber

    Directory of Open Access Journals (Sweden)

    Kaamin Masiri

    2018-01-01

    Full Text Available Sound or noise pollution has become one of the major issues for the community, especially those who live in urban areas, and it affects the activity of human life. This excessive noise is mainly caused by machines, traffic, motor vehicles and also any unwanted sounds coming from outside or even from inside a building, such as loud music. Therefore, the installation of a sound absorption panel is one way to reduce noise pollution inside a building. The selected material must be porous and hollow in order to absorb high-frequency sound. This study was conducted to evaluate the potential of egg tray and coir fibre as a sound absorption panel. Coir fibre has a good coefficient value, which makes it suitable as a sound absorption material that can replace traditional materials such as synthetic and wooden materials. The pyramid shape of the egg tray provides a large surface for uniform sound reflection. The study used a panel of size 1 m x 1 m with a thickness of 6 mm, consisting of an egg tray layer, a coir fibre layer and a fabric wrapping for aesthetic value. A room reverberation test was carried out to find the loss of reverberation time (RT). The results show that reverberation time readings were taken at low frequencies, from 125 Hz to 1600 Hz; within these frequencies, the panel shortened the reverberation time from 5.63 s to 3.60 s. Hence, from this study, it can be concluded that the selected materials have potential as a good sound absorption panel. A comparison is made with previous research that used egg tray and kapok as a sound absorption panel.

  16. Differential presence of anthropogenic compounds dissolved in the marine waters of Puget Sound, WA and Barkley Sound, BC.

    Science.gov (United States)

    Keil, Richard; Salemme, Keri; Forrest, Brittany; Neibauer, Jaqui; Logsdon, Miles

    2011-11-01

    Organic compounds were evaluated in March 2010 at 22 stations in Barkley Sound, Vancouver Island, Canada and at 66 locations in Puget Sound. Of 37 compounds, 15 were xenobiotics, 8 were determined to have an anthropogenic imprint over natural sources, and 13 were presumed to be of natural or mixed origin. The three most frequently detected compounds were salicylic acid, vanillin and thymol. The three most abundant compounds were diethylhexyl phthalate (DEHP), ethyl vanillin and benzaldehyde (∼600 ng L(-1) on average). Concentrations of xenobiotics were 10-100 times higher in Puget Sound relative to Barkley Sound. Three compound couplets are used to illustrate the influence of human activity on marine waters: vanillin and ethyl vanillin, salicylic acid and acetylsalicylic acid, and cinnamaldehyde and cinnamic acid. Ratios indicate that anthropogenic activities are the predominant source of these chemicals in Puget Sound. Published by Elsevier Ltd.

  17. Software development for the analysis of heartbeat sounds with LabVIEW in diagnosis of cardiovascular disease.

    Science.gov (United States)

    Topal, Taner; Polat, Hüseyin; Güler, Inan

    2008-10-01

    In this paper, a time-frequency spectral analysis software package (Heart Sound Analyzer) for the computer-aided analysis of cardiac sounds has been developed with LabVIEW. The software modules reveal important information on cardiovascular disorders and can also assist general physicians in reaching more accurate and reliable diagnoses at early stages. The Heart Sound Analyzer (HSA) software can compensate for the shortage of expert doctors and help them in rural as well as urban clinics and hospitals. HSA has two main blocks: data acquisition and preprocessing, and time-frequency spectral analysis. The heart sounds are first acquired using a modified stethoscope which has an electret microphone in it. Then, the signals are analysed using time-frequency/scale spectral analysis techniques such as the STFT, the Wigner-Ville distribution and wavelet transforms. The HSA modules have been tested with real heart sounds from 35 volunteers and proved to be quite efficient and robust while dealing with a large variety of pathological conditions.
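
    A minimal sketch of the kind of short-time Fourier analysis described above can be written with standard Python tooling; the file name, window length and frequency band below are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch of STFT-based time-frequency analysis of a heart-sound
# recording, assuming a mono WAV file; parameters are illustrative, not
# those used by the Heart Sound Analyzer described above.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

fs, x = wavfile.read("heart_sound.wav")      # hypothetical input file
x = x.astype(np.float64)
x /= np.max(np.abs(x)) + 1e-12               # normalize to [-1, 1]

# Heart sounds are low-frequency; a window of ~50 ms is a common choice.
f, t, Z = stft(x, fs=fs, nperseg=int(0.05 * fs), noverlap=int(0.025 * fs))
power_db = 20 * np.log10(np.abs(Z) + 1e-12)  # spectrogram in dB

# Keep only the band where S1/S2 energy typically lies (roughly < 200 Hz).
band = f <= 200
print(power_db[band].shape)                  # (frequency bins, time frames)
```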

  18. How Iconicity Helps People Learn New Words: Neural Correlates and Individual Differences in Sound-Symbolic Bootstrapping

    Directory of Open Access Journals (Sweden)

    Gwilym Lockwood

    2016-07-01

    Full Text Available Sound symbolism is increasingly understood as involving iconicity, or perceptual analogies and cross-modal correspondences between form and meaning, but the search for its functional and neural correlates is ongoing. Here we study how people learn sound-symbolic words, using behavioural, electrophysiological and individual difference measures. Dutch participants learned Japanese ideophones—lexical sound-symbolic words—with a translation of either the real meaning (in which form and meaning show cross-modal correspondences) or the opposite meaning (in which form and meaning show cross-modal clashes). Participants were significantly better at identifying the words they learned in the real condition, correctly remembering the real word pairing 86.7% of the time, but the opposite word pairing only 71.3% of the time. Analysing event-related potentials (ERPs) during the test round showed that ideophones in the real condition elicited a greater P3 component and late positive complex than ideophones in the opposite condition. In a subsequent forced choice task, participants were asked to guess the real translation from two alternatives. They did this with 73.0% accuracy, well above chance level even for words they had encountered in the opposite condition, showing that people are generally sensitive to the sound-symbolic cues in ideophones. Individual difference measures showed that the ERP effect in the test round of the learning task was greater for participants who were more sensitive to sound symbolism in the forced choice task. The main driver of the difference was a lower amplitude of the P3 component in response to ideophones in the opposite condition, suggesting that people who are more sensitive to sound symbolism may have more difficulty suppressing conflicting cross-modal information. The findings provide new evidence that cross-modal correspondences between sound and meaning facilitate word learning, while cross-modal clashes make word learning harder.

  19. Analysis of financial soundness of manufacturing companies in Indonesia Stock Exchange

    Directory of Open Access Journals (Sweden)

    Widi Hidayat

    2016-07-01

    Full Text Available This study aims to provide issuers, Bapepam and the Indonesian Institute of Accountants with additional information on the content of ratings and on financial soundness indicators that do not harm investors. This is an explanatory and descriptive causal study using quantitative methods, with all companies listed on the Indonesia Stock Exchange (ISE) taken as the sample. The data were analyzed using discriminant statistical analysis tools processed with SPSS. The results showed that the level of financial soundness of the manufacturing industries listed on the ISE is low for most indicators: Current Asset Growth (CAG) is low in 23 companies (62%), Fixed Asset Growth (FAG) in 28 companies (76%), Equity Growth (EqG) in 27 companies (73%), Revenue Growth (RG) in 27 companies (65%) and Net Income Growth (NIG) in 35 companies (95%). Two manufacturing companies have a very high NIG; thus, the average NIG is very high. Seven models of financial soundness were tested based on the growth of corporate finances: CAG, FAG, LG, EqG, RG, ExG and NIG. Only one model, the RG model, is not significant, while the other models are significant, with a significant difference between the growth rates of the sound and unsound corporate finance industry groups.

  20. Sound Localization in Patients With Congenital Unilateral Conductive Hearing Loss With a Transcutaneous Bone Conduction Implant.

    Science.gov (United States)

    Vyskocil, Erich; Liepins, Rudolfs; Kaider, Alexandra; Blineder, Michaela; Hamzavi, Sasan

    2017-03-01

    There is no consensus regarding the benefit of implantable hearing aids in congenital unilateral conductive hearing loss (UCHL). This study aimed to measure sound source localization performance in patients with congenital UCHL and contralateral normal hearing who received a new bone conduction implant. Evaluation of within-subject performance differences for sound source localization in a horizontal plane. Tertiary referral center. Five patients with atresia of the external auditory canal and contralateral normal hearing, implanted with a transcutaneous bone conduction implant at the Medical University of Vienna, were tested. Activated/deactivated implant. Sound source localization test; localization performance quantified using the root mean square (RMS) error. Sound source localization ability was highly variable among individual subjects, with RMS errors ranging from 21 to 40 degrees. Horizontal plane localization performance in aided conditions showed statistically significant improvement compared with the unaided conditions, with RMS errors ranging from 17 to 27 degrees. The mean RMS error decreased significantly, by a factor of 0.71, with the transcutaneous bone conduction implant. Some patients with congenital UCHL might be capable of developing improved horizontal plane localization abilities with the binaural cues provided by this device.
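
    The root mean square (RMS) error used above to quantify localization performance is the square root of the mean squared difference between reported and true source azimuths; a minimal sketch with invented angles:

```python
# Minimal sketch of the RMS localization error used to quantify
# horizontal-plane sound localization; the angles (degrees) are illustrative.
import numpy as np

true_azimuth = np.array([-60, -30, 0, 30, 60])   # loudspeaker positions
response = np.array([-45, -20, 10, 25, 80])      # listener responses

rms_error = np.sqrt(np.mean((response - true_azimuth) ** 2))
print(f"RMS error: {rms_error:.1f} degrees")
```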

  1. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    2010-01-01

    The aim of this article is to shed light on a small part of the research taking place in the textile field. The article describes an ongoing PhD research project on textiles and sound and outlines the project's two main questions: how sound can be shaped by textiles and conversely how textiles can...

  2. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    … specialized regions of the brain analyse different types of sounds [1]. … The left panel of figure 1 shows examples of sound-pressure waveforms from natural sounds, which are shown in the right panels in the spectrographic representation using a 45 Hz analysis window. … Plot of SFM(t) vs. time for different environmental sounds.

  3. Constraints on decay of environmental sound memory in adult rats.

    Science.gov (United States)

    Sakai, Masashi

    2006-11-27

    When adult rats are pretreated with a 48-h-long 'repetitive nonreinforced sound exposure', performance in two-sound discriminative operant conditioning transiently improves. We have already proven that this 'sound exposure-enhanced discrimination' depends upon enhancement of the perceptual capacity of the auditory cortex. This study investigated the principles governing the decay of sound exposure-enhanced discrimination. Sound exposure-enhanced discrimination disappeared within approximately 72 h if animals were deprived of environmental sounds after sound exposure, and this shortened to less than approximately 60 h if they were exposed to environmental sounds in the animal room. Sound deprivation itself exerted no clear effects. These findings suggest that the memory of a passively exposed, behaviorally irrelevant sound signal does not merely decay over its intrinsic lifetime but is also degraded by other incoming signals.

  4. Game Sound from Behind the Sofa

    DEFF Research Database (Denmark)

    Garner, Tom Alexander

    2013-01-01

    The central concern of this thesis is upon the processes by which human beings perceive sound and experience emotions within a computer video gameplay context. The potential of quantitative sound parameters to evoke and modulate emotional experience is explored, working towards the development...... that provide additional support of the hypothetical frameworks: an ecological process of fear, a fear-related model of virtual and real acoustic ecologies, and an embodied virtual acoustic ecology framework. It is intended that this thesis will clearly support more effective and efficient sound design...... practices and also improve awareness of the capacity of sound to generate significant emotional experiences during computer video gameplay. It is further hoped that this thesis will elucidate the potential of biometrics/psychophysiology to allow game designers to better understand the player and to move...

  5. 33 CFR 67.20-10 - Sound signal.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signal. 67.20-10 Section 67... AIDS TO NAVIGATION ON ARTIFICIAL ISLANDS AND FIXED STRUCTURES Class “A” Requirements § 67.20-10 Sound signal. (a) The owner of a Class “A” structure shall: (1) Install a sound signal that has a rated range...

  6. Toward Inverse Control of Physics-Based Sound Synthesis

    Science.gov (United States)

    Pfalz, A.; Berdahl, E.

    2017-05-01

    Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.
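
    As a rough illustration of this setup (not the authors' code), an LSTM that maps a window of synthesizer output features back to the gesture signal that produced it might be sketched as follows; the feature dimensions, network size and training data are placeholders.

```python
# Rough sketch of inverse control: an LSTM maps synthesizer output features
# back to the input gesture signal. Shapes and data are placeholders, not
# the configuration used in the work described above.
import torch
import torch.nn as nn

class InverseController(nn.Module):
    def __init__(self, n_audio_features=64, n_gesture_dims=1, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_audio_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_gesture_dims)

    def forward(self, audio_features):          # (batch, time, features)
        h, _ = self.lstm(audio_features)
        return self.head(h)                     # (batch, time, gesture dims)

model = InverseController()
audio = torch.randn(8, 200, 64)                 # fake training batch
target_gesture = torch.randn(8, 200, 1)
loss = nn.MSELoss()(model(audio), target_gesture)
loss.backward()                                 # one illustrative training step
```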

  7. Opo lidar sounding of trace atmospheric gases in the 3 – 4 μm spectral range

    Directory of Open Access Journals (Sweden)

    Romanovskii Oleg A.

    2018-01-01

    Full Text Available The applicability of a KTA crystal-based laser system with optical parametric oscillator (OPO) generation to lidar sounding of the atmosphere in the spectral range 3–4 μm is studied in this work. A technique developed for lidar sounding of trace atmospheric gases (TAG) is based on the differential absorption lidar (DIAL) method and differential optical absorption spectroscopy (DOAS). The DIAL-DOAS technique is tested to estimate its efficiency for lidar sounding of atmospheric trace gases. The numerical simulation performed shows that a KTA-based OPO laser is a promising source of radiation for remote DIAL-DOAS sounding of the TAGs under study along surface tropospheric paths. A possibility of using a PD38-03-PR photodiode for the DIAL gas analysis of the atmosphere is shown.

  8. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes; Schallstrahlungskontrast in MR-Phasenbildern. Methode zur Darstellung von Elastizitaets-, Schalldaempfungs- und Schallimpedanzaenderungen

    Energy Technology Data Exchange (ETDEWEB)

    Radicke, Marcus

    2009-12-18

    The method presented in this thesis combines ultrasound techniques with magnetic resonance tomography (MRT). An ultrasonic wave generates a static force in the sound-propagation direction in absorbing media. At sound intensities of a few W/cm² and a sound frequency in the lower MHz range, this force leads to a tissue shift in the micrometer range. The tissue shift depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence of Siemens Healthcare AG was modified so that it (indirectly) measures the tissue shift, codes it as grey values, and presents it as a 2D image. By means of the grey values, the sound-beam path in the tissue can be visualized, and sound obstacles (changes of the sound impedance) can additionally be detected. From the acquired MRT images, spatial changes of the tissue parameters sound absorption and elasticity can be detected. In this thesis, measurements are presented which show the feasibility and future prospects of this method, especially for mammary-cancer diagnostics.

  9. A comparison of sound quality judgments for monaural and binaural hearing aid processed stimuli.

    Science.gov (United States)

    Balfour, P B; Hawkins, D B

    1992-10-01

    Fifteen adults with bilaterally symmetrical mild and/or moderate sensorineural hearing loss completed a paired-comparison task designed to elicit sound quality preference judgments for monaural/binaural hearing aid processed signals. Three stimuli (speech-in-quiet, speech-in-noise, and music) were recorded separately in three listening environments (audiometric test booth, living room, and a music/lecture hall) through hearing aids placed on a Knowles Electronics Manikin for Acoustics Research. Judgments were made on eight separate sound quality dimensions (brightness, clarity, fullness, loudness, nearness, overall impression, smoothness, and spaciousness) for each of the three stimuli in three listening environments. Results revealed a distinct binaural preference for all eight sound quality dimensions independent of listening environment. Binaural preferences were strongest for overall impression, fullness, and spaciousness. Stimulus type effect was significant only for fullness and spaciousness, where binaural preferences were strongest for speech-in-quiet. After binaural preference data were obtained, subjects ranked each sound quality dimension with respect to its importance for binaural listening relative to monaural. Clarity was ranked highest in importance and brightness was ranked least important. The key to demonstration of improved binaural hearing aid sound quality may be the use of a paired-comparison format.

  10. Coastal Habitats in Puget Sound: A Research Plan in Support of the Puget Sound Nearshore Partnership

    Science.gov (United States)

    2006-11-01

    Puget Sound nearshore ecosystems encompass the bluffs, beaches, tide flats, estuaries, rocky shores, lagoons, salt marshes, and other shoreline features...investigations of intertidal benthic macroinvertebrate assemblages along Puget Sound and the Strait of Juan de Fuca (e.g., Long and others, 1983

  11. An analysis of collegiate band directors' exposure to sound pressure levels

    Science.gov (United States)

    Roebuck, Nikole Moore

    Noise-induced hearing loss (NIHL) is a significant but unfortunately common occupational hazard. The purpose of the current study was to measure the magnitude of sound pressure levels generated within a collegiate band room and determine whether those sound pressure levels exceed the policy standards and recommendations of the Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH). In addition, reverberation times were measured and analyzed in order to determine the appropriateness of acoustical conditions for the band rehearsal environment. Sound pressure measurements were taken from the rehearsals of seven collegiate marching bands. Single-sample t tests were conducted to compare the sound pressure levels of all bands to the noise exposure standards of OSHA and NIOSH. Multiple regression analyses were conducted in order to determine the effect of the band room's conditions on the sound pressure levels and reverberation times. Time-weighted averages (TWA), noise percentage doses, and peak levels were also collected. The mean Leq for all band directors was 90.5 dBA. The total accumulated noise percentage dose for all band directors was 77.6% of the maximum allowable daily noise dose under the OSHA standard, and the total calculated TWA for all band directors was 88.2% of the maximum allowable daily noise dose under the OSHA standard. The total accumulated noise percentage dose for all band directors was 152.1% of the maximum allowable daily noise dose under the NIOSH standards, and the total calculated TWA for all band directors was 93 dBA under the NIOSH standard. Multiple regression analysis revealed that the room volume, the level of acoustical treatment and the mean room reverberation time predicted 80% of the variance in sound pressure levels in this study.
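
    For reference, the noise percentage dose and eight-hour time-weighted average (TWA) quoted above follow standard formulas: allowed exposure time halves for every exchange-rate increase above the criterion level (90 dBA and 5 dB for OSHA, 85 dBA and 3 dB for NIOSH). A small sketch with an invented exposure profile:

```python
# Sketch of the standard daily noise dose and TWA calculations used with
# OSHA (90 dBA criterion, 5 dB exchange rate) and NIOSH (85 dBA, 3 dB).
# The exposure profile below is invented for illustration.
import math

exposures = [(92.0, 2.0), (88.0, 1.5), (75.0, 4.5)]   # (level in dBA, hours)

def dose_percent(exposures, criterion, exchange):
    # Allowed time at level L: T = 8 / 2**((L - criterion) / exchange)
    return 100.0 * sum(hours / (8.0 / 2.0 ** ((level - criterion) / exchange))
                       for level, hours in exposures)

def twa(dose, criterion, exchange):
    # TWA = (exchange / log10(2)) * log10(dose / 100) + criterion
    return (exchange / math.log10(2.0)) * math.log10(dose / 100.0) + criterion

for name, criterion, exchange in [("OSHA", 90.0, 5.0), ("NIOSH", 85.0, 3.0)]:
    d = dose_percent(exposures, criterion, exchange)
    print(f"{name}: dose = {d:.1f}%, TWA = {twa(d, criterion, exchange):.1f} dBA")
```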

  12. Sound-proof Sandwich Panel Design via Metamaterial Concept

    Science.gov (United States)

    Sui, Ni

    Sandwich panels consisting of hollow core cells and two face-sheets bonded on both sides have been widely used as lightweight and strong structures in practical engineering applications, but they have poor acoustic performance, especially in the low-frequency regime. Basic sound-proofing methods for sandwich panel design fall into two categories: sound insulation and sound absorption. Motivated by the metamaterial concept, this dissertation presents two sandwich panel designs that avoid weight or size penalties: a lightweight yet sound-proof honeycomb acoustic metamaterial that can be used as the core material of honeycomb sandwich panels to block sound and break the mass law, realizing minimum sound transmission; and a sandwich panel design based on coupled Helmholtz resonators that can achieve perfect sound absorption without sound reflection. For the honeycomb sandwich panel, the mechanical properties of the honeycomb core structure were studied first. By incorporating a thin membrane on top of each honeycomb core, the traditional honeycomb core turns into a honeycomb acoustic metamaterial. The basic theory for this kind of membrane-type acoustic metamaterial is demonstrated by a lumped model with an infinite periodic oscillator system, and the negative dynamic effective mass density for the clamped membrane is analyzed under the membrane resonance condition. The evanescent wave mode caused by negative dynamic effective mass density and impedance methods are utilized to interpret the physical behavior of honeycomb acoustic metamaterials at resonance. The honeycomb metamaterials can extraordinarily improve low-frequency sound transmission loss below the first resonant frequency of the membrane. The properties of the membrane, the tension of the membrane and the number of attached membranes can all affect the sound transmission loss, as observed in numerical simulations and validated by experiments. The sandwich panel which incorporates the honeycomb metamaterial as

  13. Analysis of radiation fields in tomography on diffusion gaseous sound

    International Nuclear Information System (INIS)

    Bekman, I.N.

    1999-01-01

    Perspectives for the application of equilibrium and stationary variants of diffusion tomography with radioactive gaseous sounds to the spatial reconstruction of heterogeneous media in materials technology were considered. The main attention was given to the creation of simple algorithms for detecting sound accumulation against the background of a monotonically varying concentration field. Algorithms for transforming a two-dimensional radiation field into a three-dimensional distribution of radiation sources were suggested. Methods of analytical continuation of the concentration field, permitting separation of regional anomalies from the background of local ones and vice versa, were discussed. It was shown that both the equilibrium and stationary variants of diffusion tomography detect the heterogeneity of the tested material, provide reconstruction of the spatial distribution of elements of its structure and give an estimation of the relative degree of defectiveness.

  14. Harmonic Frequency Lowering: Effects on the Perception of Music Detail and Sound Quality.

    Science.gov (United States)

    Kirchberger, Martin; Russo, Frank A

    2016-02-01

    A novel algorithm for frequency lowering in music was developed and experimentally tested in hearing-impaired listeners. Harmonic frequency lowering (HFL) combines frequency transposition and frequency compression to preserve the harmonic content of music stimuli. Listeners were asked to make judgments regarding detail and sound quality in music stimuli. Stimuli were presented under different signal processing conditions: original, low-pass filtered, HFL, and nonlinear frequency compressed. Results showed that participants reported perceiving the most detail in the HFL condition. In addition, there was no difference in sound quality across conditions. © The Author(s) 2016.
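
    The paper's HFL algorithm is not specified in the abstract; for context, the nonlinear frequency compression it is compared against is commonly described as a piecewise mapping that leaves frequencies below a cutoff untouched and compresses the range above it. A hedged sketch of that generic mapping (the cutoff and ratio values are illustrative, not the study's settings):

```python
# Sketch of a generic nonlinear frequency compression mapping, the kind of
# scheme HFL is compared against; cutoff and compression ratio are
# illustrative placeholders.
def compress_frequency(f_hz, cutoff_hz=2000.0, ratio=2.0):
    """Leave frequencies below the cutoff unchanged; compress the range above it."""
    if f_hz <= cutoff_hz:
        return f_hz
    return cutoff_hz + (f_hz - cutoff_hz) / ratio

for f in (500.0, 2000.0, 4000.0, 8000.0):
    print(f"{f:6.0f} Hz -> {compress_frequency(f):6.0f} Hz")
```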

  15. Designing a Sound Reducing Wall

    Science.gov (United States)

    Erk, Kendra; Lumkes, John; Shambach, Jill; Braile, Larry; Brickler, Anne; Matthys, Anna

    2015-01-01

    Acoustical engineers use their knowledge of sound to design quiet environments (e.g., classrooms and libraries) as well as to design environments that are supposed to be loud (e.g., concert halls and football stadiums). They also design sound barriers, such as the walls along busy roadways that decrease the traffic noise heard by people in…

  16. Refinement, testing, and application of an Integrated Data Assimilation/Sounding System (IDASS) for the DOE/ARM Experimental Program. Final report for period September 20, 1990 - May 8, 1997

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, David B.

    2002-04-09

    This report describes work done by NCAR under the ''Refinement, Testing, and Application of an Integrated Data Assimilation/Sounding System (IDASS) for the DOE/ARM Experimental Program''. It includes a discussion of the goals, findings and a list of 27 journal articles, 92 non-refereed papers and 30 other presentations not associated with a formal publication.

  17. Interactively Evolving Compositional Sound Synthesis Networks

    DEFF Research Database (Denmark)

    Jónsson, Björn Þór; Hoover, Amy K.; Risi, Sebastian

    2015-01-01

    While the success of electronic music often relies on the uniqueness and quality of selected timbres, many musicians struggle with complicated and expensive equipment and techniques to create their desired sounds. Instead, this paper presents a technique for producing novel timbres that are evolved … the space of potential sounds that can be generated through such compositional sound synthesis networks (CSSNs). To study the effect of evolution on subjective appreciation, participants in a listener study ranked evolved timbres by personal preference, resulting in preferences skewed toward the first…

  18. Assessing and optimizing infra-sound networks to monitor volcanic eruptions

    International Nuclear Information System (INIS)

    Tailpied, Dorianne

    2016-01-01

    Understanding infra-sound signals is essential to monitoring compliance with the Comprehensive Nuclear-Test-Ban Treaty, and also to demonstrating the potential of the global infra-sound monitoring network for civil and scientific applications. The main objective of this thesis is to develop a robust tool to estimate and optimize the performance of any infra-sound network in monitoring explosive sources such as volcanic eruptions. Unlike previous studies, the developed method has the advantage of considering realistic atmospheric specifications along the propagation path, the source frequency and the noise levels at the stations. It allows prediction of the attenuation and the minimum detectable source amplitude. By simulating the performance of any infra-sound network, it is then possible to define the optimal configuration of the network to monitor a specific region during a given period. When a station is carefully added to the existing network, performance can be improved by a factor of 2. However, it is not always possible to complete the network, so a good knowledge of detection capabilities at large distances is essential. To provide a more realistic picture of the performance, we integrate the atmospheric longitudinal variability along the infra-sound propagation path into our simulations. This thesis also contributes by providing a confidence index taking into account the uncertainties related to propagation and atmospheric models. At high frequencies, the error can reach 40 dB. Volcanic eruptions are natural, powerful and valuable calibrating sources of infra-sound, detected worldwide. In this study, the well-instrumented volcanoes Yasur, in Vanuatu, and Etna, in Italy, offer a unique opportunity to validate our attenuation model. In particular, accurate comparisons between near-field recordings and far-field detections of these volcanoes have helped to highlight the potential of our simulation tool for remotely monitoring volcanoes. Such work could significantly help to prevent

  19. The sound of high winds. The effect of atmospheric stability on wind turbine sound and microphone noise

    International Nuclear Information System (INIS)

    Van den Berg, G.P.

    2006-01-01

    In this thesis, issues are raised concerning wind turbine noise and its relationship to altitude-dependent wind velocity. The following issues are investigated: what is the influence of atmospheric stability on the speed and sound power of a wind turbine?; what is the influence of atmospheric stability on the character of wind turbine sound?; how widespread is the impact of atmospheric stability on wind turbine performance, and is it relevant for new wind turbine projects?; how can noise prediction take this stability into account?; and what can be done to deal with the resultant higher impact of wind turbine sound? Apart from these directly wind turbine related issues, a final aim was to address a measurement problem: how does wind on a microphone affect the measurement of the ambient sound level?

  20. A Comparative Study of Sound Speed in Air at Room Temperature between a Pressure Sensor and a Sound Sensor

    Science.gov (United States)

    Amrani, D.

    2013-01-01

    This paper deals with the comparison of sound speed measurements in air using two types of sensor that are widely employed in physics and engineering education, namely a pressure sensor and a sound sensor. A computer-based laboratory with pressure and sound sensors was used to carry out measurements of air through a 60 ml syringe. The fast Fourier…

  1. On-Chip electric power generation system from sound of portable music players and smartphones toward portable uTAS

    NARCIS (Netherlands)

    Naito, T.; Kaji, N.; le Gac, Severine; Tokeshi, M.; van den Berg, Albert; Baba, Y.; Fujii, T.; Hibara, A.; Takeuchi, S.; Fukuba, T.

    2012-01-01

    This paper demonstrates electric power generation from sound in order to miniaturize and integrate microfluidic systems for point-of-care testing or in-situ analysis. In this work, 5.4 volts and 50 mW DC were generated from sound through an earphone cable, which is a versatile system and able to actuate small size and

  2. Impact of adding artificially generated alert sound to hybrid electric vehicles on their detectability by pedestrians who are blind.

    Science.gov (United States)

    Kim, Dae Shik; Emerson, Robert Wall; Naghshineh, Koorosh; Pliskow, Jay; Myers, Kyle

    2012-01-01

    A repeated-measures design with block randomization was used for the study, in which 14 adults with visual impairments attempted to detect three different vehicles: a hybrid electric vehicle (HEV) with an artificially generated sound (Vehicle Sound for Pedestrians [VSP]), an HEV without the VSP, and a comparable internal combustion engine (ICE) vehicle. The VSP vehicle (mean +/- standard deviation [SD] = 38.3 +/- 14.8 m) was detected at a significantly farther distance than the HEV (mean +/- SD = 27.5 +/- 11.5 m), t = 4.823, but not at a significantly farther distance than the ICE vehicle (mean +/- SD = 34.5 +/- 14.3 m), t = 1.787, p = 0.10. Despite the overall sound level difference between the two test sites (parking lot = 48.7 dBA, roadway = 55.1 dBA), no significant difference in detection distance between the test sites was observed, F(1, 13) = 0.025, p = 0.88. No significant interaction was found between the vehicle type and test site, F(1.31, 16.98) = 0.272, p = 0.67. The findings of the study may help us understand how adding an artificially generated sound to an HEV could affect some of the orientation and mobility tasks performed by blind pedestrians.

  3. Bubbles That Change the Speed of Sound

    Science.gov (United States)

    Planinsic, Gorazd; Etkina, Eugenia

    2012-01-01

    The influence of bubbles on sound has long attracted the attention of physicists. In his 1920 book Sir William Bragg described sound absorption caused by foam in a glass of beer tapped by a spoon. Frank S. Crawford described and analyzed the change in the pitch of sound in a similar experiment and named the phenomenon the "hot chocolate effect."…

  4. Neuromimetic Sound Representation for Percept Detection and Manipulation

    Directory of Open Access Journals (Sweden)

    Chi Taishih

    2005-01-01

    Full Text Available The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create the sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal). Work on bringing the algorithms into the real-time processing domain is ongoing.

  5. Subband Approach to Bandlimited Crosstalk Cancellation System in Spatial Sound Reproduction

    Science.gov (United States)

    Bai, Mingsian R.; Lee, Chih-Chung

    2006-12-01

    Crosstalk cancellation system (CCS) plays a vital role in spatial sound reproduction using multichannel loudspeakers. However, this technique is still not in widespread practical use due to its heavy computational load. To reduce the computational load, a bandlimited CCS based on a subband filtering approach is presented in this paper. A pseudoquadrature mirror filter (QMF) bank is employed in the implementation of the CCS filters, which are bandlimited to 6 kHz, where human localization is most sensitive. In addition, a frequency-dependent regularization scheme is adopted in designing the CCS inverse filters. To justify the proposed system, subjective listening experiments were undertaken in an anechoic room. The experiments include two parts: a source localization test and a sound quality test. Analysis of variance (ANOVA) is applied to process the data and assess the statistical significance of the subjective experiments. The results indicate that the bandlimited CCS performed comparably well to the fullband CCS, whereas the computational load was reduced by approximately eighty percent.
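
    The frequency-dependent regularization mentioned above is commonly implemented by inverting the loudspeaker-to-ear transfer matrix per frequency bin with a Tikhonov term; a small sketch under that assumption, with a random placeholder plant matrix and an invented regularization profile:

```python
# Sketch of crosstalk-cancellation inverse filters with frequency-dependent
# Tikhonov regularization: H(w) = (C^H C + beta(w) I)^-1 C^H. The plant
# matrix C and the regularization profile are placeholders, not the paper's.
import numpy as np

n_bins = 256
rng = np.random.default_rng(0)
C = rng.standard_normal((n_bins, 2, 2)) + 1j * rng.standard_normal((n_bins, 2, 2))

# Heavier regularization in bins where inversion tends to be ill-conditioned.
beta = np.full(n_bins, 1e-3)
beta[:20] = 1e-1
beta[-60:] = 1e-1

H = np.empty_like(C)
I2 = np.eye(2)
for k in range(n_bins):
    Ck = C[k]
    H[k] = np.linalg.solve(Ck.conj().T @ Ck + beta[k] * I2, Ck.conj().T)

print(H.shape)   # per-bin 2x2 inverse-filter matrices
```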

  6. Assessment and improvement of sound quality in cochlear implant users.

    Science.gov (United States)

    Caldwell, Meredith T; Jiam, Nicole T; Limb, Charles J

    2017-06-01

    Cochlear implants (CIs) have successfully provided speech perception to individuals with sensorineural hearing loss. Recent research has focused on more challenging acoustic stimuli such as music and voice emotion. The purpose of this review is to evaluate and describe sound quality in CI users, with the aim of summarizing novel findings and crucial information about how CI users experience complex sounds. Here we review the existing literature on PubMed and Scopus to present what is known about perceptual sound quality in CI users, discuss existing measures of sound quality, explore how sound quality may be effectively studied, and examine potential strategies of improving sound quality in the CI population. Sound quality, defined here as the perceived richness of an auditory stimulus, is an attribute of implant-mediated listening that remains poorly studied. Sound quality is distinct from appraisal, which is generally defined as the subjective likability or pleasantness of a sound. Existing studies suggest that sound quality perception in the CI population is limited by a range of factors, most notably pitch distortion and dynamic range compression. Although there are currently very few objective measures of sound quality, the CI-MUSHRA has been used as a means of evaluating sound quality. There exist a number of promising strategies to improve sound quality perception in the CI population including apical cochlear stimulation, pitch tuning, and noise reduction processing strategies. In the published literature, sound quality perception is severely limited among CI users. Future research should focus on developing systematic, objective, and quantitative sound quality metrics and designing therapies to mitigate poor sound quality perception in CI users.

  7. Plastic modes of listening: affordance in constructed sound environments

    Science.gov (United States)

    Sjolin, Anders

    This thesis is concerned with how the ecological approach to perception, with the inclusion of listening modes, informs the creation of sound art installation, or more specifically what is referred to in this thesis as constructed sound environments. The basis for the thesis has been practice-based research, where the aim and purpose of the written part of this PhD project has been to critically investigate the area of sound art in order to map various approaches towards participating in and listening to a constructed sound environment. The main areas have been the notion of affordance as coined by James J. Gibson (1986), listening modes as coined by Pierre Schaeffer (1966) and further developed by Michel Chion (1994), aural architects as coined by Blesser and Salter (2007), and the holistic approach towards understanding sound art developed by Brandon LaBelle (2006). The findings within the written part of the thesis, based on a qualitative analysis, have informed the practice, which has resulted in artefacts in the form of seven constructed sound environments that also function as case studies for further analysis. The aim of the practice has been to exemplify the methodology, strategy and progress behind the organisation and construction of sound environments. The research concerns point towards the acknowledgment of affordance as the crucial factor in understanding a constructed sound environment. The affordance approach governs the idea that perceiving a sound environment is a top-down process where the autonomous quality of a constructed sound environment is based upon the perception of structures of the sound material and its relationship with speaker placement and surrounding space. This enables a researcher to sidestep the conflicting poles of musical/abstract and non-musical/realistic classification of sound elements and regard these poles as included, not separated, elements in the analysis of a constructed sound environment.

  8. A recognition method research based on the heart sound texture map

    Directory of Open Access Journals (Sweden)

    Huizhong Cheng

    2016-06-01

    Full Text Available In order to improve the Heart Sound recognition rate and reduce the recognition time, this paper introduces a new method for Heart Sound pattern recognition using the Heart Sound Texture Map. Based on the Heart Sound model, we give the definitions of the Heart Sound time-frequency diagram and the Heart Sound Texture Map, study the principle and realization of the Heart Sound Window Function, and then discuss how to use the Heart Sound Window Function and the Short-time Fourier Transform to obtain a two-dimensional Heart Sound time-frequency diagram. We then propose a corner correlation recognition algorithm based on the Heart Sound Texture Map according to the characteristics of Heart Sound. The simulation results show that the Heart Sound Window Function, compared with traditional window functions, makes the first (S1) and second (S2) Heart Sound textures clearer, and that the corner correlation recognition algorithm based on the Heart Sound Texture Map can significantly improve the recognition rate and reduce the computational expense, making it an effective Heart Sound recognition method.

  9. Imagination, Perceptual Engagement and Sound Mediation. Thinking Technologically-Produced Sound Through Simondon's Concept of the Image

    NARCIS (Netherlands)

    Paiuk, G.

    2018-01-01

    Applying French philosopher Gilbert Simondon’s concept of image to the domain of the sonorous, this article aims to tackle how imagination is constitutional in our grasp of sound, and how this grasp is informed by the protocols and affordances of technological tools of sound reproduction and

  10. Noise and noise disturbances from wind power plants - Tests with interactive control of sound parameters for more comfortable and less perceptible sounds; Buller och bullerstoerningar fraan vindkraftverk - Foersoek med interaktiv styrning av ljudparametrar foer behagligare och mindre maerkbara ljud

    Energy Technology Data Exchange (ETDEWEB)

    Persson-Waye, K.; Oehrstroem, E.; Bjoerkman, M.; Agge, A. [Goeteborg Univ. (Sweden). Dept. of Environmental Medicine

    2001-12-01

    In experimental pilot studies, a methodology has been worked out for interactively varying sound parameters in wind power plants. In the tests, 24 persons varied the center frequency of different bandwidths, the frequency of a sine tone and the amplitude modulation of a sine tone in order to create as comfortable a sound as possible. The variations were based on the noise from the two wind turbines Bonus and Wind World and were performed at a constant dBA level. The results showed that the majority preferred a low-frequency tone (94 Hz and 115 Hz for Wind World and Bonus, respectively). The mean of the most comfortable amplitude modulation varied between 18 and 22 Hz, depending on the fundamental frequency. The mean of the center frequency for the different bandwidths varied from 785 to 1104 Hz. In order to study the influence of wind velocity on the acoustic character of the noise, a long-term measurement program has been performed. A remotely controlled system has been developed, where wind velocity, wind direction, temperature and humidity are registered simultaneously with the noise. Long-term recordings have been performed for four different wind turbines.

  11. Sound exposure measurements using hearing-aid technology

    DEFF Research Database (Denmark)

    Jensen, Simon Boelt; Drastrup, Mads; Morales, Esteban Chávez

    2016-01-01

    … levels of sound exposure are experienced in modern society in many different situations such as attending concerts, sport events and others. This leads to an interest in measurement devices which are discreet and simple to use, in order to assess sound exposures encountered in typical daily life scenarios. The purpose of this work is to document the use of a modified behind-the-ear (BTE) hearing-aid as a portable sound pressure level (SPL) meter. In order to obtain sound level measurements with a BTE device comparable to sound field values that can be used with existing risk assessment strategies, differences due to microphone positions and the presence of a person in the measurement must be taken into account. The present study presents measurements carried out to document the characteristics of the BTE device, using the same framework presented in the ISO 11904 standard series. The responses…

  12. Semi-continuous ultrasonic sounding and changes of ultrasonic signal characteristics as a sensitive tool for the evaluation of ongoing microstructural changes of experimental mortar bars tested for their ASR potential

    Czech Academy of Sciences Publication Activity Database

    Lokajíček, Tomáš; Kuchařová, A.; Petružálek, Matěj; Šachlová, Š.; Svitek, Tomáš; Přikryl, R.

    2016-01-01

    Roč. 71, September (2016), s. 40-50 ISSN 0041-624X R&D Projects: GA ČR(CZ) GAP104/12/0915 Institutional support: RVO:67985831 Keywords : alkali-silica reaction * accelerated test * thermal heating * mortar bar * ultrasonic sounding Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 2.327, year: 2016

  13. 33 CFR 167.1700 - In Prince William Sound: General.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false In Prince William Sound: General... Schemes and Precautionary Areas Pacific West Coast § 167.1700 In Prince William Sound: General. The Prince William Sound Traffic Separation Scheme consists of four parts: Prince William Sound Traffic Separation...

  14. First and second sound in He films

    International Nuclear Information System (INIS)

    Oh, H.G.; Um, C.I.; Kahng, W.H.; Isihara, A.

    1986-01-01

    In consideration of a collision integral in the Boltzmann equation and with use of kinetic and hydrodynamical equations, the velocities of first and second sound in liquid 4He films are evaluated as functions of temperature, and the attenuation coefficients are obtained. The second-sound velocity is 2^(-1/2) times the first-sound velocity in the low-temperature and low-frequency limit

  15. Lexical and perceptual grounding of a sound ontology

    NARCIS (Netherlands)

    Lobanova, Anna; Spenader, Jennifer; Valkenier, Bea; Matousek,; Mautner, P

    2007-01-01

    Sound ontologies need to incorporate source unidentifiable sounds in an adequate and consistent manner. Computational lexical resources like WordNet have either inserted these descriptions into conceptual categories, or make no attempt to organize the terms for these sounds. This work attempts to

  16. 33 CFR 334.412 - Albemarle Sound, Pamlico Sound, Harvey Point and adjacent waters, NC; restricted area.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Albemarle Sound, Pamlico Sound, Harvey Point and adjacent waters, NC; restricted area. 334.412 Section 334.412 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE DANGER ZONE AND RESTRICTED AREA...

  17. Sound response of superheated drop bubble detectors to neutrons

    International Nuclear Information System (INIS)

    Gao Size; Chen Zhe; Liu Chao; Ni Bangfa; Zhang Guiying; Zhao Changfa; Xiao Caijin; Liu Cunxiong; Nie Peng; Guan Yongjing

    2012-01-01

    The sound response of bubble detectors to neutrons, using a 252Cf neutron source, is described. Sound signals were filtered by a sound card and PC. The short-time signal energy, FFT spectrum, power spectrum, and decay time constant were obtained to determine the authenticity of the sound signal for bubbles. (authors)
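
    A short-time energy screen of the kind mentioned above can be sketched in a few lines; the frame length, threshold rule and synthetic signal are illustrative assumptions, not the authors' settings:

```python
# Sketch of a short-time energy check of the kind used to vet candidate
# bubble sounds; frame length, threshold and the input signal are illustrative.
import numpy as np

def short_time_energy(x, frame_len=256, hop=128):
    """Return the energy of successive frames of a 1-D signal."""
    frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]
    return np.array([np.sum(f.astype(np.float64) ** 2) for f in frames])

rng = np.random.default_rng(1)
signal = rng.standard_normal(8000) * 0.01
signal[3000:3200] += rng.standard_normal(200)     # synthetic "bubble" burst

energy = short_time_energy(signal)
is_candidate = energy > 10.0 * np.median(energy)  # simple threshold rule
print(np.flatnonzero(is_candidate))               # frames flagged as bubble-like
```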

  18. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    Science.gov (United States)

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude; both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match the tuning characteristics of neurons in the mammalian auditory cortex well. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
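    A generic single-layer sparse-coding sketch (ISTA on toy data), offered only to illustrate the "sparse, efficient representation" principle the abstract invokes; it is not the authors' hierarchical, complex-valued model of binaural sounds, and the dictionary size, sparsity weight, and iteration count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def ista_sparse_code(x, D, lam=0.05, n_iter=300):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 by iterative soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return a

# Toy problem: a signal built from 3 atoms of a random 64x128 dictionary
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
true_a = np.zeros(128)
true_a[[5, 40, 99]] = [1.0, -0.7, 0.5]
x = D @ true_a
a_hat = ista_sparse_code(x, D)
print(int(np.sum(np.abs(a_hat) > 1e-3)))     # number of active coefficients (sparse, far fewer than 128)
```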

  19. Effects of spectral complexity and sound duration on automatic complex-sound pitch processing in humans - a mismatch negativity study.

    Science.gov (United States)

    Tervaniemi, M; Schröger, E; Saher, M; Näätänen, R

    2000-08-18

    The pitch of a spectrally rich sound is known to be more easily perceived than that of a sinusoidal tone. The present study compared the importance of spectral complexity and sound duration in facilitated pitch discrimination. The mismatch negativity (MMN), which reflects automatic neural discrimination, was recorded to a 2.5% pitch change in pure tones with only one sinusoidal frequency component (500 Hz) and in spectrally rich tones with three (500-1500 Hz) and five (500-2500 Hz) harmonic partials. During the recordings, subjects concentrated on watching a silent movie. In separate blocks, stimuli were 100 and 250 ms in duration. The MMN amplitude was enhanced with both spectrally rich sounds when compared with pure tones. The prolonged sound duration did not significantly enhance the MMN. This suggests that increased spectral rather than temporal information facilitates pitch processing of spectrally rich sounds.
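    A hedged sketch of stimulus construction in the spirit of the paradigm described above: a 500 Hz pure tone, harmonic complexes with three or five partials, and a 2.5% pitch deviant; the sampling rate, amplitudes, and lack of onset/offset ramps are assumptions, not details from the paper.

```python
import numpy as np

FS = 44100  # assumed sampling rate in Hz

def harmonic_tone(f0, n_partials, dur_ms, fs=FS):
    """Equal-amplitude harmonic complex with partials at f0, 2*f0, ..., n_partials*f0."""
    t = np.arange(int(fs * dur_ms / 1000)) / fs
    tone = sum(np.sin(2 * np.pi * f0 * (k + 1) * t) for k in range(n_partials))
    return tone / np.max(np.abs(tone))       # normalize to +/-1

standard_pure  = harmonic_tone(500.0, 1, 100)          # 500 Hz sinusoid, 100 ms
standard_rich3 = harmonic_tone(500.0, 3, 100)          # partials at 500-1500 Hz
standard_rich5 = harmonic_tone(500.0, 5, 250)          # partials at 500-2500 Hz
deviant_rich5  = harmonic_tone(500.0 * 1.025, 5, 250)  # 2.5% higher pitch
print(len(standard_pure), len(standard_rich5))
```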

  20. Sound Insulation Property Study on Nylon 66 Scrim Reinforced PVF Laminated Membranes and their Composite Sound Proof Structure

    Science.gov (United States)

    Chen, Lihe; Chen, Zhaofeng; Zhang, Xinyang; Wang, Weiwei

    2018-01-01

    In this paper, we investigated the sound insulation properties of nylon 66 scrim-reinforced PVF laminated membranes and their corresponding composite structures with glass fiber felt and carbon fiber board. Sound transmission loss (STL) was measured by the standing wave tube method. The results show that the STL of the laminated membranes improved as the nylon 66 gridline spacing decreased. The sound insulation performance of the laminated membrane with a gridline spacing of 3 mm was the best, with an STL of up to 10 dB at 6.3 kHz. In addition, the STL improved effectively when air layers were embedded into the composite soundproof construction consisting of laminated membrane, glass fiber felt and carbon fiber board.
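    A minimal sketch of the transmission-loss definition that an STL figure of this kind refers to; the pressure values below are illustrative, not measurements from the paper.

```python
import math

def stl_from_powers(w_incident, w_transmitted):
    """Sound transmission loss in dB from incident and transmitted sound power."""
    return 10.0 * math.log10(w_incident / w_transmitted)

def stl_from_pressures(p_incident_rms, p_transmitted_rms):
    """Equivalent form in terms of RMS pressures under identical terminations."""
    return 20.0 * math.log10(p_incident_rms / p_transmitted_rms)

print(round(stl_from_pressures(1.0, 0.316), 1))   # ~10.0 dB, the order reported at 6.3 kHz
```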