WorldWideScience

Sample records for sound location induces

  1. Selective attention to sound location or pitch studied with fMRI.

    Science.gov (United States)

    Degerman, Alexander; Rinne, Teemu; Salmi, Juha; Salonen, Oili; Alho, Kimmo

    2006-03-10

    We used 3-T functional magnetic resonance imaging to compare the brain mechanisms underlying selective attention to sound location and pitch. In different tasks, the subjects (N = 10) attended to a designated sound location or pitch or to pictures presented on the screen. In the Attend Location conditions, the sound location varied randomly (left or right), while the pitch was kept constant (high or low). In the Attend Pitch conditions, sounds of randomly varying pitch (high or low) were presented at a constant location (left or right). Both attention to location and attention to pitch produced enhanced activity (in comparison with activation caused by the same sounds when attention was focused on the pictures) in widespread areas of the superior temporal cortex. Attention to either sound feature also activated prefrontal and inferior parietal cortical regions. These activations were stronger during attention to location than during attention to pitch. Attention to location but not to pitch produced a significant increase of activation in the premotor/supplementary motor cortices of both hemispheres and in the right prefrontal cortex, while no area showed activity specifically related to attention to pitch. The present results suggest some differences in the attentional selection of sounds on the basis of their location and pitch consistent with the suggested auditory "what" and "where" processing streams.

  2. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
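
    As an illustration of the opponent-channel readout described above, the sketch below decodes azimuth from the difference between two contralaterally tuned channels. It is a minimal toy model, not the authors' analysis pipeline; the tuning-curve shape, calibration procedure, and variable names are assumptions.

    ```python
    import numpy as np

    def decode_azimuth_opponent(left_pt, right_pt, cal_azimuths, cal_diffs):
        """Decode azimuth from two hemifield channels (toy opponent-channel readout).

        left_pt / right_pt: mean activity of left and right planum temporale regions
        (each assumed to prefer the contralateral hemifield, arbitrary units).
        cal_azimuths / cal_diffs: a calibration curve built from stimuli at known azimuths.
        """
        opponent = left_pt - right_pt          # grows as the source moves to the right
        idx = np.argmin(np.abs(cal_diffs - opponent))
        return cal_azimuths[idx]

    # Toy calibration: assume a smooth, monotonic opponent signal across azimuth (degrees).
    azimuths = np.linspace(-90, 90, 181)
    diffs = np.tanh(azimuths / 45.0)
    # A source driving the left (contralateral) channel harder is decoded as rightward.
    print(decode_azimuth_opponent(0.8, 0.2, azimuths, diffs))   # ~ +31 degrees
    ```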

  3. Developmental Changes in Locating Voice and Sound in Space

    Science.gov (United States)

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7 month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  4. Exposure to arousal-inducing sounds facilitates visual search.

    Science.gov (United States)

    Asutay, Erkin; Västfjäll, Daniel

    2017-09-04

    Exposure to affective stimuli could enhance perception and facilitate attention by increasing alertness and vigilance and by decreasing attentional thresholds. However, evidence on the impact of affective sounds on perception and attention is scant. Here, a novel aspect of affective facilitation of attention is studied: whether arousal induced by task-irrelevant auditory stimuli could modulate attention in a visual search. In two experiments, participants performed a visual search task with and without auditory cues that preceded the search. Participants were faster in locating high-salient targets compared to low-salient targets. Critically, search times and search slopes decreased with increasing auditory-induced arousal while searching for low-salient targets. Taken together, these findings suggest that arousal induced by sounds can facilitate attention in a subsequent visual search. This novel finding provides support for the alerting function of the auditory system by showing an auditory-phasic alerting effect in visual attention. The results also indicate that stimulus arousal modulates the alerting effect. Attention and perception are our everyday tools for navigating the surrounding world, and the current findings, showing that affective sounds can influence visual attention, provide evidence that we make use of affective information during perceptual processing.

  5. Low frequency eardrum directionality in the barn owl induced by sound transmission through the interaural canal

    DEFF Research Database (Denmark)

    Kettler, Lutz; Christensen-Dalsgaard, Jakob; Larsen, Ole Næsbye

    2016-01-01

    Significant sound transmission across the interaural canal occurred at low frequencies. The sound transmission induces considerable eardrum directionality in a narrow band from 1.5 to 3.5 kHz. This is below the frequency range used by the barn owl for locating prey, but may conceivably be used for locating...

  6. Locating and classification of structure-borne sound occurrence using wavelet transformation

    International Nuclear Information System (INIS)

    Winterstein, Martin; Thurnreiter, Martina

    2011-01-01

    For the surveillance of nuclear facilities with respect to detached or loose parts within the pressure boundary, structure-borne sound detector systems are used. The impact of loose parts on the wall causes an energy transfer to the wall that is measured as a so-called singular sound event. The run-time differences of the sound signals allow a rough locating of the loose part. The authors performed a finite-element-based simulation of structure-borne sound measurements using real geometries. New knowledge on sound wave propagation, signal analysis and processing, neural networks, and hidden Markov models was considered. Using the wavelet transformation, it is possible to improve the localization of structure-borne sound events.

  7. Efficiency of vibrational sounding in parasitoid host location depends on substrate density.

    Science.gov (United States)

    Fischer, S; Samietz, J; Dorn, S

    2003-10-01

    Parasitoids of concealed hosts have to drill through a substrate with their ovipositor for successful parasitization. Hymenopteran species in this drill-and-sting guild locate immobile pupal hosts by vibrational sounding, i.e., echolocation on solid substrate. Although this host location strategy is assumed to be common among the Orussidae and Ichneumonidae, there is no information yet on whether it is adapted to characteristics of the host microhabitat. This study examined the effect of substrate density on responsiveness and host location efficiency in two pupal parasitoids, Pimpla turionellae and Xanthopimpla stemmator (Hymenoptera: Ichneumonidae), with different host-niche specialization and corresponding ovipositor morphology. Location and frequency of ovipositor insertions were scored on cylindrical plant stem models of various densities. Substrate density had a significant negative effect on responsiveness, number of ovipositor insertions, and host location precision in both species. The more niche-specific species X. stemmator showed a higher host location precision and insertion activity. These results show that vibrational sounding is adapted to the host microhabitat of the parasitoid species using this host location strategy. We suggest the attenuation of pulses during vibrational sounding as the energetically costly limiting factor for this adaptation.

  8. Selective attention to sound location or pitch studied with event-related brain potentials and magnetic fields.

    Science.gov (United States)

    Degerman, Alexander; Rinne, Teemu; Särkkä, Anna-Kaisa; Salmi, Juha; Alho, Kimmo

    2008-06-01

    Event-related brain potentials (ERPs) and magnetic fields (ERFs) were used to compare brain activity associated with selective attention to sound location or pitch in humans. Sixteen healthy adults participated in the ERP experiment, and 11 adults in the ERF experiment. In different conditions, the participants focused their attention on a designated sound location or pitch, or pictures presented on a screen, in order to detect target sounds or pictures among the attended stimuli. In the Attend Location condition, the location of sounds varied randomly (left or right), while their pitch (high or low) was kept constant. In the Attend Pitch condition, sounds of varying pitch (high or low) were presented at a constant location (left or right). Consistent with previous ERP results, selective attention to either sound feature produced a negative difference (Nd) between ERPs to attended and unattended sounds. In addition, ERPs showed a more posterior scalp distribution for the location-related Nd than for the pitch-related Nd, suggesting partially different generators for these Nds. The ERF source analyses found no source distribution differences between the pitch-related Ndm (the magnetic counterpart of the Nd) and location-related Ndm in the superior temporal cortex (STC), where the main sources of the Ndm effects are thought to be located. Thus, the ERP scalp distribution differences between the location-related and pitch-related Nd effects may have been caused by activity of areas outside the STC, perhaps in the inferior parietal regions.

  9. A Generalized Model for Indoor Location Estimation Using Environmental Sound from Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2018-02-01

    The indoor location of individuals is a key contextual variable for commercial and assisted location-based services and applications. Commercial centers and medical buildings (e.g., hospitals) require location information of their users/patients to offer the services that are needed at the correct moment. Several approaches have been proposed to tackle this problem. In this paper, we present the development of an indoor location system which relies on the human activity recognition approach, using sound as an information source to infer the indoor location based on the contextual information of the activity that is performed at the moment. In this work, we analyze the sound information to estimate the location using the contextual information of the activity. Feature extraction is performed on the sound signal to feed a random forest algorithm in order to generate a model to estimate the location of the user. We evaluate the quality of the resulting model in terms of sensitivity and specificity for each location, and we also perform out-of-bag error estimation. Our experiments were carried out in five representative residential homes. Each home had four individual indoor rooms. Eleven activities (brewing coffee, cooking eggs, taking a shower, etc.) were performed to provide the contextual information. Experimental results show that developing an indoor location system (ILS) that uses contextual information from human activities (identified with data provided from the environmental sound) can achieve an estimation that is 95% correct.
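
    The pipeline described above (frame-level audio features feeding a random forest, evaluated with out-of-bag error) can be sketched as follows. The feature choice, data layout, and placeholder signals are illustrative assumptions, not the authors' exact implementation.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    def extract_features(frame, sr=16000):
        """Very simple spectral features for one audio frame (1-D numpy array)."""
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
        energy = spectrum.sum() + 1e-12
        centroid = (freqs * spectrum).sum() / energy          # spectral centroid
        rolloff = freqs[np.searchsorted(np.cumsum(spectrum), 0.85 * energy)]
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2    # zero-crossing rate
        return [np.log(energy), centroid, rolloff, zcr]

    # X: one feature row per audio frame, y: room label -- placeholders, not real recordings.
    rng = np.random.default_rng(0)
    frames = rng.standard_normal((200, 16000))
    X = np.array([extract_features(f) for f in frames])
    y = rng.choice(["kitchen", "bathroom", "bedroom", "living_room"], size=200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
    clf.fit(X_tr, y_tr)
    print("OOB error:", 1 - clf.oob_score_)          # out-of-bag error, as in the paper
    print(classification_report(y_te, clf.predict(X_te)))
    ```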

  10. Sound Spectrum Influences Auditory Distance Perception of Sound Sources Located in a Room Environment

    Directory of Open Access Journals (Sweden)

    Ignacio Spiousas

    2017-06-01

    Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octaves) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth, but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). The results obtained in this study show that, depending on

  11. Sound source location in cavitating tip vortices

    International Nuclear Information System (INIS)

    Higuchi, H.; Taghavi, R.; Arndt, R.E.A.

    1985-01-01

    Utilizing an array of three hydrophones, individual cavitation bursts in a tip vortex could be located. Theoretically, four hydrophones are necessary. Hence the data from three hydrophones are supplemented with photographic observation of the cavitating tip vortex. The cavitation sound sources are found to be localized to within one base chord length from the hydrofoil tip. This appears to correspond to the region of initial tip vortex roll-up. A more extensive study with a four-sensor array is now in progress.

  12. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    Science.gov (United States)

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

    Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. The sound-induced phosphene illusion.

    Science.gov (United States)

    Bolognini, Nadia; Convento, Silvia; Fusaro, Martina; Vallar, Giuseppe

    2013-12-01

    Crossmodal illusions clearly show how perception, rather than being a modular and self-contained function, can be dramatically altered by interactions between senses. Here, we provide evidence for a novel crossmodal "physiological" illusion, showing that sounds can boost visual cortical responses in such a way to give rise to a striking illusory visual percept. In healthy participants, a single-pulse transcranial magnetic stimulation (sTMS) delivered to the occipital cortex evoked a visual percept, i.e., a phosphene. When sTMS is accompanied by two auditory beeps, the second beep induces in neurologically unimpaired participants the perception of an illusory second phosphene, namely the sound-induced phosphene illusion. This perceptual "fission" of a single phosphene, due to multiple beeps, is not matched by a "fusion" of double phosphenes due to a single beep, and it is characterized by an early auditory modulation of the TMS-induced visual responses (~80 ms). Multiple beeps also induce an illusory feeling of multiple TMS pulses on the participants' scalp, consistent with an audio-tactile fission illusion. In conclusion, an auditory stimulation may bring about a phenomenological change in the conscious visual experience produced by the transcranial stimulation of the occipital cortex, which reveals crossmodal binding mechanisms within early stages of visual processing.

  14. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    Science.gov (United States)

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  15. A location procedure for sound sources in reactor-technical enclosures

    International Nuclear Information System (INIS)

    Hamann, D.

    1982-07-01

    A passive method requiring only one detector has been developed for the location of sound-emitting faults in nuclear power plant components. It is adapted for use in a frequency range whose wavelength is of the same order of magnitude as the characteristic dimensions of the considered enclosure. The location is performed in the following way: (1) For a fixed detector position, the Auto Power Spectral Density (APSD) of the source to be located is measured. (2) For this detector position, the APSD is calculated for the potential source locations. For this, the free-field APSD as well as the acoustic normal modes of the enclosure are necessary. (3) The measured APSD is compared with the theoretically obtained APSDs. (4) The calculated APSD most similar to the measured one is determined, and consequently information about the unknown source position is obtained. (author)
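
    Steps (3) and (4) of the procedure amount to picking the candidate source position whose calculated APSD best matches the measured one. A minimal sketch follows, with the similarity measure (correlation of log spectra) and the toy spectra assumed rather than taken from the report.

    ```python
    import numpy as np

    def best_matching_source(measured_apsd, predicted_apsds):
        """predicted_apsds: dict mapping candidate position -> APSD on the same frequency grid."""
        log_meas = np.log10(measured_apsd + 1e-30)
        scores = {}
        for position, apsd in predicted_apsds.items():
            log_pred = np.log10(apsd + 1e-30)
            # Pearson correlation of the log spectra as a shape-based similarity.
            scores[position] = np.corrcoef(log_meas, log_pred)[0, 1]
        return max(scores, key=scores.get), scores

    # Toy example: three candidate positions with slightly different modal colouring.
    freqs = np.linspace(100, 2000, 256)
    measured = 1.0 / ((freqs - 700) ** 2 + 1e4)
    candidates = {
        "pos_A": 1.0 / ((freqs - 500) ** 2 + 1e4),
        "pos_B": 1.0 / ((freqs - 700) ** 2 + 1e4),
        "pos_C": 1.0 / ((freqs - 900) ** 2 + 1e4),
    }
    best, _ = best_matching_source(measured, candidates)
    print("Most likely source position:", best)   # expected: pos_B
    ```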

  16. Winter sound-level characterization of the Deaf Smith County location in the Palo Duro Basin, Texas

    International Nuclear Information System (INIS)

    1984-03-01

    A description of sound levels and sound sources in the Deaf Smith County location in the Palo Duro Basin during a period representative of the winter season is presented. Data were collected during the period February 26 through March 1, 1983. 4 references, 1 figure, 3 tables

  17. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    We have recently demonstrated that alternating left-right sound sources induce motion perception to static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception; Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in the situation where auditory positional information would have little influence on the perceived position of visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  18. Comparison of snoring sounds between natural and drug-induced sleep recorded using a smartphone.

    Science.gov (United States)

    Koo, Soo Kweon; Kwon, Soon Bok; Moon, Ji Seung; Lee, Sang Hoon; Lee, Ho Byung; Lee, Sang Jun

    2018-08-01

    Snoring is an important clinical feature of obstructive sleep apnea (OSA), and recent studies suggest that the acoustic quality of snoring sounds is markedly different in drug-induced sleep compared with natural sleep. However, considering differences in sound recording methods and analysis parameters, further studies are required. This study explored whether acoustic analysis of drug-induced sleep is useful as a screening test that reflects the characteristics of natural sleep in snoring patients. The snoring sounds of 30 male subjects (mean age = 41.8 years) were recorded using a smartphone during natural and induced sleep, with the site of vibration noted during drug-induced sleep endoscopy (DISE); then, we compared the sound intensity (dB), formant frequencies, and spectrograms of snoring sounds. Regarding the intensity of snoring sounds, there were minor differences within the retrolingual level obstruction group, but there was no significant difference between natural and induced sleep at either obstruction site. There was no significant difference in the F1 and F2 formant frequencies of snoring sounds between natural sleep and induced sleep at either obstruction site. Compared with natural sleep, induced sleep was slightly more irregular, with a stronger intensity on the spectrogram, but the spectrograms showed the same pattern at both obstruction sites. Although further studies are required, the spectrograms and formant frequencies of the snoring sounds of induced sleep did not differ significantly from those of natural sleep, and may be used as a screening test that reflects the characteristics of natural sleep according to the obstruction site. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Summer sound-level characterization of the Deaf Smith County and Swisher County locations in the Palo Duro Basin, Texas

    International Nuclear Information System (INIS)

    1984-03-01

    A description of sound levels and sound sources in the Deaf Smith County and Swisher County locations in the Palo Duro Basin during a period representative of the summer season is presented. Included are data collected during the period August 4 through 8, 1982, for both locations. 3 references, 2 figures, 3 tables

  20. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
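
    The model itself is a spiking network, but the binaural timing cue it exploits can be illustrated with a much simpler, classical computation: estimating the interaural time difference by cross-correlating the two ear signals. The sketch below is only that illustration, with made-up signals; it is not the authors' model.

    ```python
    import numpy as np

    def estimate_itd(left, right, sr):
        """Return the delay (seconds) of the right-ear signal relative to the left-ear signal."""
        n = len(left)
        corr = np.correlate(right, left, mode="full")      # lags -(n-1) .. (n-1)
        lags = np.arange(-(n - 1), n)
        return lags[np.argmax(corr)] / sr

    sr = 44100
    rng = np.random.default_rng(1)
    source = rng.standard_normal(sr // 10)                 # 100 ms broadband noise burst
    itd_samples = 20                                       # ~0.45 ms delay: a leftward source
    left = source
    right = np.concatenate([np.zeros(itd_samples), source[:-itd_samples]])
    print(f"estimated ITD: {estimate_itd(left, right, sr) * 1e3:.2f} ms")
    ```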

  1. Correlation Between Resting Testosterone/Cortisol Ratio and Sound-Induced Vasoconstriction at Fingertip in Men.

    Science.gov (United States)

    Ooishi, Yuuki

    2018-01-01

    A sound-induced sympathetic tone has been used as an index for orienting responses to auditory stimuli. The resting testosterone/cortisol ratio is a biomarker of social aggression that drives an approaching behavior in response to environmental stimuli, and a higher testosterone level and a lower cortisol level can facilitate the sympathetic response to environmental stimuli. Therefore, it is possible that the testosterone/cortisol ratio is correlated with the sound-induced sympathetic tone. The current study investigated the relationship between the resting testosterone/cortisol ratio and vasoconstriction induced by listening to sound stimuli. Twenty healthy males aged 29.0 ± 0.53 years (mean ± S.E.M) participated in the study. They came to the laboratory for 3 days and listened to one of three types of sound stimuli for 1 min on each day. Saliva samples were collected for an analysis of salivary testosterone and cortisol levels on the day of each experiment. After collecting the saliva sample, we measured the blood volume pulse (BVP) amplitude at a fingertip. Since vasoconstriction is mediated by the activation of the sympathetic nerves, the strength of the reduction in BVP amplitude at a fingertip was called the BVP response (finger BVPR). No difference was observed in the sound-induced finger BVPR for the three types of sound stimuli (p = 0.779). The correlation coefficient between the sound-induced finger BVPR and the salivary testosterone/cortisol ratio within participants was significantly different from no correlation (p = 0.011) and there was a trend toward significance in the correlation between the sound-induced finger BVPR and the salivary testosterone/cortisol ratio between participants (r = 0.39, p = 0.088). These results suggest that the testosterone/cortisol ratio affects the difference in the sound-evoked sympathetic response.

  2. An intelligent artificial throat with sound-sensing ability based on laser induced graphene

    Science.gov (United States)

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-02-01

    Traditional sound sources and sound detectors are usually independent and discrete in the human hearing range. To minimize the device size and integrate it with wearable electronics, there is an urgent requirement for realizing the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist the disabled, because simple throat vibrations such as hums, coughs and screams with different intensity or frequency from a mute person can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantages of one-step fabrication, high efficiency, excellent flexibility and low cost, and it will open up practical applications in voice control, wearable electronics and many other areas.

  3. Quench dynamics in SRF cavities: can we locate the quench origin with 2nd sound?

    International Nuclear Information System (INIS)

    Maximenko, Yulia; Segatskov, Dmitri A.

    2011-01-01

    A newly developed method of locating quenches in SRF cavities by detecting second-sound waves has been gaining popularity in SRF laboratories. The technique is based on measurements of time delays between the quench as determined by the RF system and arrival of the second-sound wave to the multiple detectors placed around the cavity in superfluid helium. Unlike multi-channel temperature mapping, this approach requires only a few sensors and simple readout electronics; it can be used with SRF cavities of almost arbitrary shape. One of its drawbacks is that being an indirect method it requires one to solve an inverse problem to find the location of a quench. We tried to solve this inverse problem by using a parametric forward model. By analyzing the data we found that the approximation where the second-sound emitter is a near-singular source does not describe the physical system well enough. A time-dependent analysis of the quench process can help us to put forward a more adequate model. We present here our current algorithm to solve the inverse problem and discuss the experimental results.
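
    The triangulation step itself can be sketched as a nonlinear least-squares fit of the quench position to the measured second-sound arrival times, assuming (as the authors note is too simple) a point-like emitter and a constant wave speed. Sensor coordinates and numbers below are placeholders.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    C2 = 20.0                      # approximate second-sound speed in He II near 1.8 K, m/s
    sensors = np.array([[0.10, 0.00, 0.05],
                        [-0.10, 0.00, 0.05],
                        [0.00, 0.12, -0.05],
                        [0.00, -0.12, -0.05]])   # assumed OST positions, metres

    def residuals(xyz, arrival_times, t_quench):
        # Predicted arrival time at each OST minus the measured one.
        dist = np.linalg.norm(sensors - xyz, axis=1)
        return (t_quench + dist / C2) - arrival_times

    # Toy data: synthesise arrival times for a known quench location, then recover it.
    true_xyz = np.array([0.03, -0.02, 0.01])
    t_quench = 0.0                 # quench time from the RF system
    arrivals = t_quench + np.linalg.norm(sensors - true_xyz, axis=1) / C2
    fit = least_squares(residuals, x0=np.zeros(3), args=(arrivals, t_quench))
    print("Recovered quench location (m):", np.round(fit.x, 4))
    ```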

  4. The effects of intervening interference on working memory for sound location as a function of inter-comparison interval.

    Science.gov (United States)

    Ries, Dennis T; Hamilton, Traci R; Grossmann, Aurora J

    2010-09-01

    This study examined the effects of inter-comparison interval duration and intervening interference on auditory working memory (AWM) for auditory location. Interaural phase differences were used to produce localization cues for tonal stimuli and the difference limen for interaural phase difference (DL-IPD) specified as the equivalent angle of incidence between two sound sources was measured in five different conditions. These conditions consisted of three different inter-comparison intervals [300 ms (short), 5000 ms (medium), and 15,000 ms (long)], the medium and long of which were presented both in the presence and absence of intervening tones. The presence of intervening stimuli within the medium and long inter-comparison intervals produced a significant increase in the DL-IPD compared to the medium and long inter-comparison intervals condition without intervening tones. The result obtained in the condition with a short inter-comparison interval was roughly equivalent to that obtained for the medium inter-comparison interval without intervening tones. These results suggest that the ability to retain information about the location of a sound within AWM decays slowly; however, the presence of intervening sounds readily disrupts the retention process. Overall, the results suggest that the temporal decay of information within AWM regarding the location of a sound from a listener's environment is so gradual that it can be maintained in trace memory for tens of seconds in the absence of intervening acoustic signals. Conversely, the presence of intervening sounds within the retention interval may facilitate the use of context memory, even for shorter retention intervals, resulting in a less detailed, but relevant representation of the location that is resistant to further degradation. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  5. Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory.

    Science.gov (United States)

    Delogu, Franco; Lilla, Christopher C

    2017-11-01

    Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented during encoding. In the first block, participants were not aware of the spatial requirement, while in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding; and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

  6. Infra-sound cancellation and mitigation in wind turbines

    Science.gov (United States)

    Boretti, Albert; Ordys, Andrew; Al Zubaidy, Sarim

    2018-03-01

    The infra-sound spectra recorded inside homes located even several kilometres from wind turbine installations are characterized by large pressure fluctuations in the low frequency range. There is a significant body of literature suggesting that inaudible sounds at low frequency are sensed by humans and affect wellbeing through different mechanisms. These mechanisms include amplitude modulation of heard sounds, stimulating subconscious pathways, causing endolymphatic hydrops, and possibly potentiating noise-induced hearing loss. We suggest the study of active infra-sound cancellation and mitigation to address the low frequency noise issues. Loudspeakers generate pressure wave components of the same amplitude and frequency but opposite phase to the recorded infra-sound. They also produce pressure wave components within the audible range, reducing the perception of the infra-sound, to minimize the sensing of the residual infra-sound.
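
    The cancellation idea, emitting a pressure wave of equal amplitude and opposite phase, can be illustrated with a toy single-tone example. Real systems would need reference sensors, adaptive filtering, and latency handling that are not modelled here; all numbers are arbitrary assumptions.

    ```python
    import numpy as np

    sr = 200                                   # Hz sampling rate, ample for infra-sound (< 20 Hz)
    t = np.arange(0, 10, 1 / sr)
    infra = 0.5 * np.sin(2 * np.pi * 0.8 * t)  # assumed 0.8 Hz pressure fluctuation

    # Anti-phase component with a small gain and phase error, as a real path would have.
    gain_error, phase_error = 0.95, 0.05
    anti = -gain_error * 0.5 * np.sin(2 * np.pi * 0.8 * t + phase_error)

    residual = infra + anti
    atten_db = 20 * np.log10(np.std(residual) / np.std(infra))
    print(f"residual level relative to original: {atten_db:.1f} dB")
    ```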

  7. Active control of turbulent boundary layer-induced sound transmission through the cavity-backed double panels

    Science.gov (United States)

    Caiazzo, A.; Alujević, N.; Pluymers, B.; Desmet, W.

    2018-05-01

    This paper presents a theoretical study of active control of turbulent boundary layer (TBL) induced sound transmission through the cavity-backed double panels. The aerodynamic model used is based on the Corcos wall pressure distribution. The structural-acoustic model encompasses a source panel (skin panel), coupled through an acoustic cavity to the radiating panel (trim panel). The radiating panel is backed by a larger acoustic enclosure (the back cavity). A feedback control unit is located inside the acoustic cavity between the two panels. It consists of a control force actuator and a sensor mounted at the actuator footprint on the radiating panel. The control actuator can react off the source panel. It is driven by an amplified velocity signal measured by the sensor. A fully coupled analytical structural-acoustic model is developed to study the effects of the active control on the sound transmission into the back cavity. The stability and performance of the active control system are firstly studied on a reduced order model. In the reduced order model only two fundamental modes of the fully coupled system are assumed. Secondly, a full order model is considered with a number of modes large enough to yield accurate simulation results up to 1000 Hz. It is shown that convincing reductions of the TBL-induced vibrations of the radiating panel and the sound pressure inside the back cavity can be expected. The reductions are more pronounced for a certain class of systems, which is characterised by the fundamental natural frequency of the skin panel larger than the fundamental natural frequency of the trim panel.
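
    As a rough illustration of why a collocated velocity-feedback unit reduces the panel response, the sketch below applies direct velocity feedback to a single structural mode driven by broadband forcing; the feedback simply adds damping. The modal parameters, gain, and forcing are assumptions, and the coupled panel-cavity dynamics of the paper are not modelled.

    ```python
    import numpy as np
    from scipy.signal import lti, lsim

    m, c, k = 1.0, 2.0, 4.0e4          # assumed modal mass, damping, stiffness
    g = 40.0                           # assumed velocity feedback gain

    def mode_response(gain, t, force):
        # m x'' + (c + gain) x' + k x = f, simulated in state-space form.
        sys = lti([[0, 1], [-k / m, -(c + gain) / m]], [[0], [1 / m]], [[1, 0]], [[0]])
        _, y, _ = lsim(sys, force, t)
        return y

    t = np.linspace(0, 2, 20000)
    rng = np.random.default_rng(3)
    f = rng.standard_normal(len(t))    # broadband (TBL-like) modal force, placeholder
    passive = mode_response(0.0, t, f)
    active = mode_response(g, t, f)
    print("RMS reduction: %.1f dB" % (20 * np.log10(np.std(active) / np.std(passive))))
    ```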

  8. A terrified-sound stress induced proteomic changes in adult male rat hippocampus.

    Science.gov (United States)

    Yang, Juan; Hu, Lili; Wu, Qiuhua; Liu, Liying; Zhao, Lingyu; Zhao, Xiaoge; Song, Tusheng; Huang, Chen

    2014-04-10

    In this study, we investigated the biochemical mechanisms in the adult rat hippocampus underlying the relationship between terrified-sound-induced psychological stress and spatial learning. Adult male rats were exposed to a terrified-sound stress, and the Morris water maze (MWM) was used to evaluate changes in spatial learning and memory. The protein expression profile of the hippocampus was examined using two-dimensional gel electrophoresis (2DE), matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, and bioinformatics analysis. The data from the MWM tests suggested that a terrified-sound stress improved spatial learning. The proteomic analysis revealed that the expression of 52 proteins was down-regulated, while that of 35 proteins was up-regulated, in the hippocampus of the stressed rats. We identified and validated six of the most significant differentially expressed proteins that demonstrated the greatest stress-induced changes. Our study provides the first evidence that a terrified-sound stress improves spatial learning in rats, and that the enhanced spatial learning coincides with changes in protein expression in the rat hippocampus. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Constraint-induced sound therapy for sudden sensorineural hearing loss – behavioral and neurophysiological outcomes

    OpenAIRE

    Hidehiko Okamoto; Munehisa Fukushima; Henning Teismann; Lothar Lagemann; Tadashi Kitahara; Hidenori Inohara; Ryusuke Kakigi; Christo Pantev

    2014-01-01

    Sudden sensorineural hearing loss is characterized by acute, idiopathic hearing deterioration. We report here the development and evaluation of “constraint-induced sound therapy”, which is based on a well-established neuro-rehabilitation approach, and which is characterized by the plugging of the intact ear (“constraint”) and the simultaneous, extensive stimulation of the affected ear with music. The sudden sensorineural hearing loss patients who received the constraint-induced sound therapy ...

  10. Snoring classified: The Munich-Passau Snore Sound Corpus.

    Science.gov (United States)

    Janott, Christoph; Schmitt, Maximilian; Zhang, Yue; Qian, Kun; Pandit, Vedhas; Zhang, Zixing; Heiser, Clemens; Hohenhorst, Winfried; Herzog, Michael; Hemmert, Werner; Schuller, Björn

    2018-03-01

    Snoring can be excited in different locations within the upper airways during sleep. It was hypothesised that the excitation locations are correlated with distinct acoustic characteristics of the snoring noise. To verify this hypothesis, a database of snore sounds was developed, labelled with the location of sound excitation. Video and audio recordings taken during drug-induced sleep endoscopy (DISE) examinations from three medical centres have been semi-automatically screened for snore events, which subsequently have been classified by ENT experts into four classes based on the VOTE classification. The resulting dataset containing 828 snore events from 219 subjects has been split into Train, Development, and Test sets. An SVM classifier has been trained using low-level descriptors (LLDs) related to energy, spectral features, mel frequency cepstral coefficients (MFCC), formants, voicing, harmonic-to-noise ratio (HNR), spectral harmonicity, pitch, and microprosodic features. An unweighted average recall (UAR) of 55.8% could be achieved using the full set of LLDs including formants. The best-performing subset was the MFCC-related set of LLDs. A strong difference in performance could be observed between the permutations of train, development, and test partition, which may be caused by the relatively low number of subjects included in the smaller classes of the strongly unbalanced data set. A database of snoring sounds is presented, classified according to their sound excitation location based on objective criteria and verifiable video material. With the database, it could be demonstrated that machine classifiers can distinguish different excitation locations of snoring sounds in the upper airway based on acoustic parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
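
    A hedged sketch of the best-performing configuration reported above (MFCC low-level descriptors summarised per snore event and classified with an SVM, scored by unweighted average recall). The per-event summarisation, classifier settings, and synthetic placeholder events are assumptions, not the paper's exact recipe.

    ```python
    import numpy as np
    import librosa
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import recall_score

    def mfcc_features(y, sr=16000):
        """Summarise frame-level MFCCs of one snore event by their mean and standard deviation."""
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # Placeholder "snore events": in practice, audio clips cut from DISE recordings,
    # labelled with the VOTE classes (V = velum, O = oropharynx, T = tongue base, E = epiglottis).
    rng = np.random.default_rng(0)
    events = [rng.standard_normal(16000) for _ in range(40)]      # 1 s of noise each
    labels = np.array(["V", "O", "T", "E"] * 10)

    X = np.array([mfcc_features(e) for e in events])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                              random_state=0, stratify=labels)
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    clf.fit(X_tr, y_tr)
    # Unweighted average recall (UAR), the metric quoted in the abstract.
    print("UAR:", recall_score(y_te, clf.predict(X_te), average="macro"))
    ```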

  11. Parietal disruption alters audiovisual binding in the sound-induced flash illusion.

    Science.gov (United States)

    Kamke, Marc R; Vieth, Harrison E; Cottrell, David; Mattingley, Jason B

    2012-09-01

    Selective attention and multisensory integration are fundamental to perception, but little is known about whether, or under what circumstances, these processes interact to shape conscious awareness. Here, we used transcranial magnetic stimulation (TMS) to investigate the causal role of attention-related brain networks in multisensory integration between visual and auditory stimuli in the sound-induced flash illusion. The flash illusion is a widely studied multisensory phenomenon in which a single flash of light is falsely perceived as multiple flashes in the presence of irrelevant sounds. We investigated the hypothesis that extrastriate regions involved in selective attention, specifically within the right parietal cortex, exert an influence on the multisensory integrative processes that cause the flash illusion. We found that disruption of the right angular gyrus, but not of the adjacent supramarginal gyrus or of a sensory control site, enhanced participants' veridical perception of the multisensory events, thereby reducing their susceptibility to the illusion. Our findings suggest that the same parietal networks that normally act to enhance perception of attended events also play a role in the binding of auditory and visual stimuli in the sound-induced flash illusion. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. 10 Hz Amplitude Modulated Sounds Induce Short-Term Tinnitus Suppression

    Directory of Open Access Journals (Sweden)

    Patrick Neff

    2017-05-01

    noise: t(27) = −4.22, p < 0.0001]. Finally, variants of the AM sound matched to the tinnitus frequency but reduced in sound level resulted in less suppression, while no significant difference was observed for a longer stimulation duration. Moreover, the feasibility of the overall procedure could be confirmed, as scores of both tinnitus loudness and questionnaires were lower after the experiment [tinnitus loudness: t(27) = 2.77, p < 0.01; Tinnitus Questionnaire: t(27) = 2.06, p < 0.05; Tinnitus Handicap Inventory: t(27) = 1.92, p = 0.065]. Conclusion: Taken together, these results imply that AM sounds, especially in or around the tinnitus frequency, may induce larger suppression than unmodulated sounds. Future studies should thus evaluate this approach in longitudinal studies and real-life settings. Furthermore, the putative neural relation of these sound stimuli, with a modulation rate in the EEG α band, to the observed tinnitus suppression should be probed with respective neurophysiological methods.

  13. Sound-induced facial synkinesis following facial nerve paralysis.

    Science.gov (United States)

    Ma, Ming-San; van der Hoeven, Johannes H; Nicolai, Jean-Philippe A; Meek, Marcel F

    2009-08-01

    Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two cases of sound-induced facial synkinesis (SFS) after facial nerve injury. As far as we know, this phenomenon has not been described in the English literature before. Patient A presented with right hemifacial palsy after lesion of the facial nerve due to skull base fracture. He reported involuntary muscle activity at the right corner of the mouth, specifically on hearing ringing keys. Patient B suffered from left hemifacial palsy following otitis media and developed involuntary muscle contraction in the facial musculature specifically on hearing clapping hands or a trumpet sound. Both patients were evaluated by means of video, audio and EMG analysis. Possible mechanisms in the pathophysiology of SFS are postulated and therapeutic options are discussed.

  14. Virtual nature environment with nature sound exposure induce stress recovery by enhanced parasympathetic activity

    DEFF Research Database (Denmark)

    Annerstedt, Matilda; Jönsson, Peter; Wallergård, Mattias

    2013-01-01

    Experimental research on stress recovery in natural environments is limited, as is study of the effect of sounds of nature. After inducing stress by means of a virtual stress test, we explored physiological recovery in two different virtual natural environments (with and without exposure to sounds of nature) and in one control condition. Cardiovascular data and saliva cortisol were collected. Repeated ANOVA measurements indicated parasympathetic activation in the group subjected to sounds of nature in a virtual natural environment, suggesting enhanced stress recovery may occur in such surroundings. The group that recovered in virtual nature without sound and the control group displayed no particular autonomic activation or deactivation. The results demonstrate a potential mechanistic link between nature, the sounds of nature, and stress recovery, and suggest the potential importance of virtual reality...

  15. Measurement of acoustic characteristics of Japanese Buddhist temples in relation to sound source location and direction.

    Science.gov (United States)

    Soeta, Yoshiharu; Shimokura, Ryota; Kim, Yong Hee; Ohsawa, Tomohiro; Ito, Ken

    2013-05-01

    Although temples are important buildings in the Buddhist community, the acoustic quality has not been examined in detail. Buddhist monks change the location and direction according to the ceremony, and associated acoustical changes have not yet been examined scientifically. To discuss the desired acoustics of temples, it is necessary to know the acoustic characteristics appropriate for each phase of a ceremony. In this study, acoustic measurements were taken at various source locations and directions in Japanese temples. A directional loudspeaker was used as the source to provide vocal acoustic fields, and impulse responses were measured and analyzed. The speech transmission index was higher and the interaural cross-correlation coefficient was lower for the sound source directed toward the side wall than that directed toward the altar. This suggests that the change in direction improves speech intelligibility, and the asymmetric property of direct sound and complex reflections from the altar and side wall increases the apparent source width. The large and coupled-like structure of the altar of a Buddhist temple may have reinforced the reverberation components and the table in the altar, which is called the "syumidan," may have decreased binaural coherence.
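
    One of the metrics mentioned above, the interaural cross-correlation coefficient (IACC), can be computed from a binaural impulse response as the maximum of the normalised interaural cross-correlation over lags of ±1 ms. The sketch below uses toy impulse responses and omits the octave-band filtering and integration limits typically used in practice.

    ```python
    import numpy as np

    def iacc(h_left, h_right, sr, max_lag_ms=1.0):
        """Maximum normalised cross-correlation between two ear impulse responses, |lag| <= 1 ms."""
        max_lag = int(sr * max_lag_ms / 1000)
        norm = np.sqrt(np.sum(h_left ** 2) * np.sum(h_right ** 2))
        values = []
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                num = np.sum(h_left[: len(h_left) - lag] * h_right[lag:])
            else:
                num = np.sum(h_left[-lag:] * h_right[: len(h_right) + lag])
            values.append(num / norm)
        return max(values)

    # Toy decaying-noise impulse responses standing in for measured binaural ones.
    sr = 48000
    rng = np.random.default_rng(2)
    decay = np.exp(-np.arange(sr // 2) / 4000)
    h_l = rng.standard_normal(sr // 2) * decay
    h_r = 0.8 * h_l + 0.2 * rng.standard_normal(sr // 2) * decay
    print(f"IACC = {iacc(h_l, h_r, sr):.2f}")
    ```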

  16. What's that sound? Matches with auditory long-term memory induce gamma activity in human EEG.

    Science.gov (United States)

    Lenz, Daniel; Schadow, Jeanette; Thaerig, Stefanie; Busch, Niko A; Herrmann, Christoph S

    2007-04-01

    In recent years, the cognitive functions of human gamma-band activity (30-100 Hz) have advanced continuously into scientific focus. Not only have bottom-up driven influences on 40 Hz activity been observed, but top-down processes also seem to modulate responses in this frequency band. Among the various functions that have been related to gamma activity, a pivotal role has been assigned to memory processes. Visual experiments suggested that gamma activity is involved in matching visual input to memory representations. Based on these findings, we hypothesized that such memory-related modulations of gamma activity exist in the auditory modality as well. Thus, we chose environmental sounds for which subjects already had a long-term memory (LTM) representation and compared them to unknown, but physically similar sounds. 21 subjects had to classify sounds as 'recognized' or 'unrecognized', while EEG was recorded. Our data show significantly stronger activity in the induced gamma-band for recognized sounds in the time window between 300 and 500 ms after stimulus onset, with a central topography. The results suggest that induced gamma-band activity reflects the matches between sounds and their representations in auditory LTM.

  17. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Elena Geangu

    2015-04-01

    Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds’ processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human-produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds.

  18. Evaluative conditioning induces changes in sound valence

    Directory of Open Access Journals (Sweden)

    Anna C. Bolders

    2012-04-01

    Evaluative Conditioning (EC) has hardly been tested in the auditory domain, but it is a potentially valuable research tool. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruence effects on an affective priming task (APT) for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US or whether extinction occurs. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results warrant the use of EC to study the processing of short environmental sounds with acquired valence, even if this requires repeated stimulus presentations. This paves the way for studying the processing of affective environmental sounds while effectively controlling low-level stimulus properties.

  19. Normal temporal binding window but no sound-induced flash illusion in people with one eye.

    Science.gov (United States)

    Moro, Stefania S; Steeves, Jennifer K E

    2018-04-19

    Integrating vision and hearing is an important way in which we process our rich sensory environment. Partial deprivation of the visual system from the loss of one eye early in life results in adaptive changes in the remaining senses (e.g., Hoover et al. in Exp Brain Res 216:565-74, 2012). The current study investigates whether losing one eye early in life impacts the temporal window in which audiovisual events are integrated and whether there is vulnerability to the sound-induced flash illusion. In Experiment 1, we measured the temporal binding window with a simultaneity judgement task where low-level auditory and visual stimuli were presented at different stimulus onset asynchronies. People with one eye did not differ in the width of their temporal binding window, but they took longer to make judgements compared to binocular viewing controls. In Experiment 2, we measured how many light flashes were perceived when a single flash was paired with multiple auditory beeps in close succession (sound induced flash illusion). Unlike controls, who perceived multiple light flashes with two, three or four beeps, people with one eye were not susceptible to the sound-induced flash illusion. In addition, they took no longer to respond compared to both binocular and monocular (eye-patched) viewing controls. Taken together, these results suggest that the lack of susceptibility to the sound-induced flash illusion in people with one eye cannot be accounted for by the width of the temporal binding window. These results provide evidence for adaptations in audiovisual integration due to the reduction of visual input from the loss of one eye early in life.

  20. Second Sound Measurement using SMD resistors to simulate Quench locations on the 704 MHz Single-Cell Cavity at CERN

    CERN Document Server

    Liao, K; Ciapala, E; Junginger, T; Weingarten, W

    2012-01-01

    Oscillating Superleak Transducers (OSTs) containing flexible porous membranes are widely used to detect the so-called second sound temperature wave when a quench event occurs in a superconducting RF cavity. In principle, from the measured speed of this wave and the travel time between the quench event and several OSTs, the location of the quench sites can be derived by triangulation. Second sound behaviour has been simulated through different surface mount (SMD) resistor setups on a Superconducting Proton Linac (SPL) test cavity, to help understand the underlying physics and improve quench localisation. Experiments are described that were conducted to search for an explanation of the heat transfer during a cavity quench that causes contradictory triangulation results.
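    The triangulation described above can be sketched in a few lines. In the sketch below, the OST coordinates, the arrival times, and the second-sound speed (roughly 20 m/s in He II, and strongly temperature dependent) are placeholder values, and a generic least-squares solver stands in for whatever fitting procedure the authors actually used.

        import numpy as np
        from scipy.optimize import least_squares

        # Hypothetical OST positions (m) around the cavity and measured
        # second-sound arrival times (s); all numbers are illustrative only.
        osts = np.array([[0.10, 0.00, 0.05],
                         [0.00, 0.12, 0.20],
                         [-0.08, -0.05, 0.35],
                         [0.05, -0.10, 0.50]])
        t_arrival = np.array([0.0061, 0.0108, 0.0172, 0.0251])
        c2 = 20.0  # assumed second-sound speed in superfluid helium (m/s)

        def residuals(p):
            # p = (x, y, z, t0): candidate quench location and quench onset time
            xyz, t0 = p[:3], p[3]
            dist = np.linalg.norm(osts - xyz, axis=1)
            return t_arrival - (t0 + dist / c2)

        fit = least_squares(residuals, x0=[0.0, 0.0, 0.25, 0.0])
        print("estimated quench location (m):", np.round(fit.x[:3], 3))

    With more OSTs than unknowns the problem is overdetermined, which is what makes contradictory arrival times (as discussed in the record) detectable in the first place.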

  1. A taste for words and sounds: a case of lexical-gustatory and sound-gustatory synesthesia

    NARCIS (Netherlands)

    Colizoli, O.; Murre, J.M.J.; Rouw, R.

    2013-01-01

    Gustatory forms of synesthesia involve the automatic and consistent experience of tastes that are triggered by non-taste related inducers. We present a case of lexical-gustatory and sound-gustatory synesthesia within one individual, SC. Most words and a subset of non-linguistic sounds induce the experience of taste, smell and physical sensations for SC.

  2. Understanding Animal Detection of Precursor Earthquake Sounds.

    Science.gov (United States)

    Garstang, Michael; Kelley, Michael C

    2017-08-31

    We use recent research to provide an explanation of how animals might detect earthquakes before they occur. While the intrinsic value of such warnings is immense, we show that the complexity of the process may result in inconsistent responses of animals to the possible precursor signal. Using the results of our research, we describe a logical but complex sequence of geophysical events triggered by precursor earthquake crustal movements that ultimately result in a sound signal detectable by animals. The sound heard by animals occurs only when metal or other surfaces (glass) respond to vibrations produced by electric currents induced by distortions of the earth's electric fields caused by the crustal movements. A combination of existing measurement systems combined with more careful monitoring of animal response could nevertheless be of value, particularly in remote locations.

  3. Characterization of Underwater Sounds Produced by Trailing Suction Hopper Dredges During Sand Mining and Pump-out Operations

    Science.gov (United States)

    2014-03-01

    machinery itself, such as winches, generators, thrusters and particularly propeller-induced cavitation; and 5) sounds associated with the off-loading of...dredges were working concurrently. This is not surprising, given that cavitation (propeller noise) contributed the most to the overall sound field. If...in Cook Inlet, Alaska (an area known for high hydrodynamic flow conditions). Their RLs ranged from 95-120 dB at eight locations. Highest RLs were

  4. A Geometrical Method for Sound-Hole Size and Location Enhancement in Lute Family Musical Instruments: The Golden Method

    Directory of Open Access Journals (Sweden)

    Soheil Jafari

    2017-11-01

    Full Text Available This paper presents a new analytical approach, the Golden Method, to enhance sound-hole size and location in musical instruments of the lute family in order to obtain better sound damping characteristics based on the concept of the golden ratio and the instrument geometry. The main objective of the paper is to increase the capability of lute family musical instruments in keeping a note for a certain time at a certain level to enhance the instruments’ orchestral characteristics. For this purpose, the geometry-based analytical Golden Method is first described in detail in an itemized manner. A new musical instrument is then developed and tested to confirm the ability of the Golden Method to optimize the acoustical characteristics of musical instruments from a damping point of view by designing the modified sound-hole. Finally, the newly developed instrument is tested, and the obtained results are compared with those of two well-known instruments to confirm the effectiveness of the proposed method. The experimental results show that the suggested method is able to increase the sound damping time by at least 2.4% without affecting the frequency response function and other acoustic characteristics of the instrument. This methodology could be used as the first step in future studies on the design, optimization and evaluation of musical instruments of the lute family (e.g., lute, oud, barbat, mandolin, setar, etc.).

  5. The Use of an Open Field Model to Assess Sound-Induced Fear and Anxiety Associated Behaviors in Labrador Retrievers.

    Science.gov (United States)

    Gruen, Margaret E; Case, Beth C; Foster, Melanie L; Lazarowski, Lucia; Fish, Richard E; Landsberg, Gary; DePuy, Venita; Dorman, David C; Sherman, Barbara L

    2015-01-01

    Previous studies have shown that the playing of thunderstorm recordings during an open-field task elicits fearful or anxious responses in adult beagles. The goal of our study was to apply this open field test to assess sound-induced behaviors in Labrador retrievers drawn from a pool of candidate improvised explosive device (IED)-detection dogs. Being robust to fear-inducing sounds and recovering quickly is a critical requirement of these military working dogs. This study presented male and female dogs with 3 minutes of either ambient noise (Days 1, 3 and 5), recorded thunderstorm (Day 2), or gunfire (Day 4) sounds in an open field arena. Behavioral and physiological responses were assessed and compared to control (ambient noise) periods. An observer blinded to sound treatment analyzed video records of the 9-minute daily test sessions. Additional assessments included measurement of distance traveled (activity), heart rate, body temperature, and salivary cortisol concentrations. Overall, there was a decline in distance traveled and heart rate within each day and over the five-day test period, suggesting that dogs habituated to the open field arena. Behavioral postures and expressions were assessed using a standardized rubric to score behaviors linked to canine fear and anxiety. These fear/anxiety scores were used to evaluate changes in behaviors following exposure to a sound stressor. Compared to control periods, there was an overall increase in fear/anxiety scores during thunderstorm and gunfire sound stimuli treatment periods. Fear/anxiety scores were correlated with distance traveled and heart rate. Fear/anxiety scores in response to thunderstorm and gunfire were correlated. Dogs showed higher fear/anxiety scores during periods after the sound stimuli compared to control periods. In general, candidate IED-detection Labrador retrievers responded to sound stimuli and recovered quickly, although dogs stratified in their response to sound stimuli. Some dogs were

  6. Development of linear projecting in studies of non-linear flow. Acoustic heating induced by non-periodic sound

    Energy Technology Data Exchange (ETDEWEB)

    Perelomova, Anna [Gdansk University of Technology, Faculty of Applied Physics and Mathematics, ul. Narutowicza 11/12, 80-952 Gdansk (Poland)]. E-mail: anpe@mif.pg.gda.pl

    2006-08-28

    The equation of energy balance is subdivided into two dynamics equations, one describing evolution of the dominative sound, and the second one responsible for acoustic heating. The first one is the famous KZK equation, and the second one is a novel equation governing acoustic heating. The novel dynamic equation considers both periodic and non-periodic sound. Quasi-plane geometry of flow is supposed. Subdividing is provided on the base of specific links of every mode. Media with arbitrary thermic T(p,ρ) and caloric e(p,ρ) equations of state are considered. Individual roles of thermal conductivity and viscosity in the heating induced by aperiodic sound in the ideal gases and media different from ideal gases are discussed.

  7. Development of linear projecting in studies of non-linear flow. Acoustic heating induced by non-periodic sound

    Science.gov (United States)

    Perelomova, Anna

    2006-08-01

    The equation of energy balance is subdivided into two dynamics equations, one describing evolution of the dominative sound, and the second one responsible for acoustic heating. The first one is the famous KZK equation, and the second one is a novel equation governing acoustic heating. The novel dynamic equation considers both periodic and non-periodic sound. Quasi-plane geometry of flow is supposed. Subdividing is provided on the base of specific links of every mode. Media with arbitrary thermic T(p,ρ) and caloric e(p,ρ) equations of state are considered. Individual roles of thermal conductivity and viscosity in the heating induced by aperiodic sound in the ideal gases and media different from ideal gases are discussed.

  8. Development of linear projecting in studies of non-linear flow. Acoustic heating induced by non-periodic sound

    International Nuclear Information System (INIS)

    Perelomova, Anna

    2006-01-01

    The equation of energy balance is subdivided into two dynamics equations, one describing evolution of the dominative sound, and the second one responsible for acoustic heating. The first one is the famous KZK equation, and the second one is a novel equation governing acoustic heating. The novel dynamic equation considers both periodic and non-periodic sound. Quasi-plane geometry of flow is supposed. Subdividing is provided on the base of specific links of every mode. Media with arbitrary thermic T(p,ρ) and caloric e(p,ρ) equations of state are considered. Individual roles of thermal conductivity and viscosity in the heating induced by aperiodic sound in the ideal gases and media different from ideal gases are discussed

  9. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  10. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  11. Reading drift in flow rate sensors caused by steady sound waves

    International Nuclear Information System (INIS)

    Maximiano, Celso; Nieble, Marcio D.; Migliavacca, Sylvana C.P.; Silva, Eduardo R.F.

    1995-01-01

    The use of thermal sensors is very common for the measurement of small gas flows. In this kind of sensor, a small bypass tube is heated symmetrically, and the temperature distribution in the tube changes with the mass flow along it. When a standing wave appears in the main tube, it causes the pressure to oscillate around its average value. The sensor, connected between two points of the main tube, then indicates not only the main mass flow but also the flow caused by the pressure difference induced by the sound wave. When the gas flows at low pressure, the equipment indicates a value that does not correspond to the real one. Tests were performed by generating a sound wave in the main tube without mass flow, and the sensor still indicated flow. To solve this problem, a wave damper was constructed, installed and tested in the system; it worked satisfactorily, efficiently eliminating the sound wave. (author). 2 refs., 3 figs

  12. Hearing visuo-tactile synchrony - Sound-induced proprioceptive drift in the invisible hand illusion.

    Science.gov (United States)

    Darnai, Gergely; Szolcsányi, Tibor; Hegedüs, Gábor; Kincses, Péter; Kállai, János; Kovács, Márton; Simon, Eszter; Nagy, Zsófia; Janszky, József

    2017-02-01

    The rubber hand illusion (RHI) and its variant the invisible hand illusion (IHI) are useful for investigating multisensory aspects of bodily self-consciousness. Here, we explored whether auditory conditioning during an RHI could enhance the trisensory visuo-tactile-proprioceptive interaction underlying the IHI. Our paradigm comprised an IHI session that was followed by an RHI session and another IHI session. The IHI sessions had two parts presented in counterbalanced order. One part was conducted in silence, whereas the other part was conducted against the backdrop of metronome beats that occurred in synchrony with the brush movements used for the induction of the illusion. In a first experiment, the RHI session also involved metronome beats and was aimed at creating an associative memory between the brush stroking of a rubber hand and the sounds. An analysis of IHI sessions showed that the participants' perceived hand position drifted more towards the body-midline in the metronome relative to the silent condition without any sound-related session differences. Thus, the sounds, but not the auditory RHI conditioning, influenced the IHI. In a second experiment, the RHI session was conducted without metronome beats. This confirmed the conditioning-independent presence of sound-induced proprioceptive drift in the IHI. Together, these findings show that the influence of visuo-tactile integration on proprioceptive updating is modifiable by irrelevant auditory cues merely through the temporal correspondence between the visuo-tactile and auditory events. © 2016 The British Psychological Society.

  13. Urban sound energy reduction by means of sound barriers

    Science.gov (United States)

    Iordache, Vlad; Ionita, Mihai Vlad

    2018-02-01

    In urban environments, various heating, ventilation and air conditioning appliances designed to maintain indoor comfort become urban acoustic pollution vectors due to the sound energy produced by this equipment. Acoustic barriers are the recommended method for sound energy reduction in the urban environment. The current sizing method for these acoustic barriers is laborious and not practical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a new simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses for acoustic barrier sizing are built that can be used for different 3D locations of the source and the reception points, for several frequencies and several acoustic barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.

  14. Urban sound energy reduction by means of sound barriers

    Directory of Open Access Journals (Sweden)

    Iordache Vlad

    2018-01-01

    Full Text Available In urban environments, various heating, ventilation and air conditioning appliances designed to maintain indoor comfort become urban acoustic pollution vectors due to the sound energy produced by this equipment. Acoustic barriers are the recommended method for sound energy reduction in the urban environment. The current sizing method for these acoustic barriers is laborious and not practical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a new simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses for acoustic barrier sizing are built that can be used for different 3D locations of the source and the reception points, for several frequencies and several acoustic barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.
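    The abacuses described in the two records above are not reproduced here, but the kind of quantity they tabulate can be approximated with the classical Maekawa estimate of barrier insertion loss, which depends only on the source, receiver and barrier-edge geometry and on frequency. The sketch below is a generic textbook approximation offered for orientation, not the authors' sizing method, and the coordinates in the example are invented.

        import math

        def maekawa_insertion_loss(src, rcv, edge, freq_hz, c=343.0):
            # src, rcv, edge: (x, y, z) points in metres; freq_hz: frequency of interest.
            # Path-length difference of the diffracted path over the barrier edge:
            delta = math.dist(src, edge) + math.dist(edge, rcv) - math.dist(src, rcv)
            n = 2.0 * delta * freq_hz / c          # Fresnel number N = 2*delta/lambda
            # Maekawa's empirical approximation, valid roughly for N > -0.2:
            return 10.0 * math.log10(3.0 + 20.0 * n) if n > -0.2 else 0.0

        # Hypothetical geometry: source at 1.5 m height, receiver 10 m away at 1.5 m,
        # barrier edge 3 m from the source at 2.5 m height, evaluated at 1 kHz.
        print(round(maekawa_insertion_loss((0, 0, 1.5), (10, 0, 1.5), (3, 0, 2.5), 1000.0), 1), "dB")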

  15. A taste for words and sounds: a case of lexical-gustatory and sound-gustatory synesthesia

    Directory of Open Access Journals (Sweden)

    Olympia eColizoli

    2013-10-01

    Full Text Available Gustatory forms of synesthesia involve the automatic and consistent experience of tastes that are triggered by non-taste related inducers. We present a case of lexical-gustatory and sound-gustatory synesthesia within one individual, SC. Most words and a subset of nonlinguistic sounds induce the experience of taste, smell and physical sensations for SC. SC’s lexical-gustatory associations were significantly more consistent than those of a group of controls. We tested for effects of presentation modality (visual vs. auditory), taste-related congruency, and synesthetic inducer-concurrent direction using a priming task. SC’s performance did not differ significantly from a trained control group. We used functional magnetic resonance imaging to investigate the neural correlates of SC’s synesthetic experiences by comparing her brain activation to the literature on brain networks related to language, music and sound processing, in addition to synesthesia. Words that induced a strong taste were contrasted to words that induced weak-to-no tastes (tasty vs. tasteless words). Brain activation was also measured during passive listening to music and environmental sounds. Brain activation patterns showed evidence that two regions are implicated in SC’s synesthetic experience of taste and smell: the left anterior insula and left superior parietal lobe. Anterior insula activation may reflect the synesthetic taste experience. The superior parietal lobe is proposed to be involved in binding sensory information across sub-types of synesthetes. We conclude that SC’s synesthesia is genuine and reflected in her brain activation. The type of inducer (visual-lexical, auditory-lexical, and non-lexical auditory stimuli) could be differentiated based on patterns of brain activity.

  16. Influence of sound source location on the behavior and physiology of the precedence effect in cats.

    Science.gov (United States)

    Dent, Micheal L; Tollin, Daniel J; Yin, Tom C T

    2009-08-01

    Psychophysical experiments on the precedence effect (PE) in cats have shown that they localize pairs of auditory stimuli presented from different locations in space based on the spatial position of the stimuli and the interstimulus delay (ISD) between the stimuli in a manner similar to humans. Cats exhibit localization dominance for pairs of transient stimuli with |ISDs| from approximately 0.4 to 10 ms, summing localization for |ISDs| < 0.4 ms, and breakdown of fusion for |ISDs| > 10 ms, which is the approximate echo threshold. The neural correlates to the PE have been described in both anesthetized and unanesthetized animals at many levels from auditory nerve to cortex. Single-unit recordings from the inferior colliculus (IC) and auditory cortex of cats demonstrate that neurons respond to both lead and lag sounds at ISDs above behavioral echo thresholds, but the response to the lag is reduced at shorter ISDs, consistent with localization dominance. Here the influence of the relative locations of the leading and lagging sources on the PE was measured behaviorally in a psychophysical task and physiologically in the IC of awake behaving cats. At all configurations of lead-lag stimulus locations, the cats behaviorally exhibited summing localization, localization dominance, and breakdown of fusion. Recordings from the IC of awake behaving cats show neural responses paralleling behavioral measurements. Both behavioral and physiological results suggest systematically shorter echo thresholds when stimuli are further apart in space.
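    As a quick illustration of the three perceptual regimes named above, the toy function below maps an interstimulus delay to a regime label using the approximate boundaries reported for cats (about 0.4 ms and the roughly 10 ms echo threshold); the exact cut-offs vary between animals, stimuli and source configurations, so the numbers are placeholders.

        def precedence_regime(isd_ms, fusion_boundary=0.4, echo_threshold=10.0):
            # Classify an interstimulus delay (ms) into the three precedence-effect
            # regimes; boundary values are approximate, taken from the record above.
            isd = abs(isd_ms)
            if isd < fusion_boundary:
                return "summing localization"       # lead and lag fuse to one image
            if isd <= echo_threshold:
                return "localization dominance"     # perceived location set by the lead
            return "breakdown of fusion"            # lag heard as a separate echo

        print([precedence_regime(x) for x in (0.2, 3.0, 25.0)])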

  17. Suppression of grasshopper sound production by nitric oxide-releasing neurons of the central complex

    Science.gov (United States)

    Weinrich, Anja; Kunst, Michael; Wirmer, Andrea; Holstein, Gay R.

    2008-01-01

    The central complex of acridid grasshoppers integrates sensory information pertinent to reproduction-related acoustic communication. Activation of nitric oxide (NO)/cyclic GMP-signaling by injection of NO donors into the central complex of restrained Chorthippus biguttulus females suppresses muscarine-stimulated sound production. In contrast, sound production is released by aminoguanidine (AG)-mediated inhibition of nitric oxide synthase (NOS) in the central body, suggesting a basal release of NO that suppresses singing in this situation. Using anti-citrulline immunocytochemistry to detect recent NO production, subtypes of columnar neurons with somata located in the pars intercerebralis and tangential neurons with somata in the ventro-median protocerebrum were distinctly labeled. Their arborizations in the central body upper division overlap with expression patterns for NOS and with the site of injection where NO donors suppress sound production. Systemic application of AG increases the responsiveness of unrestrained females to male calling songs. Identical treatment with the NOS inhibitor that increased male song-stimulated sound production in females induced a marked reduction of citrulline accumulation in central complex columnar and tangential neurons. We conclude that behavioral situations that are unfavorable for sound production (like being restrained) activate NOS-expressing central body neurons to release NO and elevate the behavioral threshold for sound production in female grasshoppers. PMID:18574586

  18. Sound Levels in East Texas Schools.

    Science.gov (United States)

    Turner, Aaron Lynn

    A survey of sound levels was taken in several Texas schools to determine the amount of noise and sound present by size of class, type of activity, location of building, and the presence of air conditioning and large amounts of glass. The data indicate that class size and relative amounts of glass have no significant bearing on the production of…

  19. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation was shorter-lived, and the effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  20. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals) and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter...... is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many...... of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...

  1. Sound pressure distribution within natural and artificial human ear canals: forward stimulation.

    Science.gov (United States)

    Ravicz, Michael E; Tao Cheng, Jeffrey; Rosowski, John J

    2014-12-01

    This work is part of a study of the interaction of sound pressure in the ear canal (EC) with tympanic membrane (TM) surface displacement. Sound pressures were measured with 0.5-2 mm spacing at three locations within the shortened natural EC or an artificial EC in human temporal bones: near the TM surface, within the tympanic ring plane, and in a plane transverse to the long axis of the EC. Sound pressure was also measured at 2-mm intervals along the long EC axis. The sound field is described well by the size and direction of planar sound pressure gradients, the location and orientation of standing-wave nodal lines, and the location of longitudinal standing waves along the EC axis. Standing-wave nodal lines perpendicular to the long EC axis are present on the TM surface at frequencies >11-16 kHz in the natural or artificial EC. The range of sound pressures was larger in the tympanic ring plane than at the TM surface or in the transverse EC plane. Longitudinal standing-wave patterns were stretched. The tympanic-ring sound field is a useful approximation of the TM sound field, and the artificial EC approximates the natural EC.

  2. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

    Full Text Available OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and frequencies among normal hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between the speakers' axes in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p<0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  3. Hearing with an atympanic ear: good vibration and poor sound-pressure detection in the royal python, Python regius

    DEFF Research Database (Denmark)

    Christensen, Christian Bech; Christensen-Dalsgaard, Jakob; Brandt, Christian

    2012-01-01

    are sensitive to sound pressure and (2) snakes are sensitive to vibrations, but cannot hear the sound pressure per se. Vibration and sound-pressure sensitivities were quantified by measuring brainstem evoked potentials in 11 royal pythons, Python regius. Vibrograms and audiograms showed greatest sensitivity...... at low frequencies of 80-160 Hz, with sensitivities of -54 dB re. 1 m s(-2) and 78 dB re. 20 μPa, respectively. To investigate whether pythons detect sound pressure or sound-induced head vibrations, we measured the sound-induced head vibrations in three dimensions when snakes were exposed to sound...... pressure at threshold levels. In general, head vibrations induced by threshold-level sound pressure were equal to or greater than those induced by threshold-level vibrations, and therefore sound-pressure sensitivity can be explained by sound-induced head vibration. From this we conclude that pythons...

  4. Effect of pitch–space correspondence on sound-induced visual motion perception

    NARCIS (Netherlands)

    Hidaka, Souta; Teramoto, Wataru; Keetels, Mirjam; Vroomen, J.H.M.

    2013-01-01

    The brain tends to associate specific features of stimuli across sensory modalities. The pitch of a sound is for example associated with spatial elevation such that higher-pitched sounds are felt as being “up” in space and lower-pitched sounds as being “down.” Here we investigated whether changes in

  5. Numerical value biases sound localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R

    2017-12-08

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1-9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.

  6. Electromagnetic Sampo soundings at Olkiluoto in 2007

    International Nuclear Information System (INIS)

    Korhonen, K.; Lehtimaeki, J.

    2007-11-01

    The Geological Survey of Finland (GTK) carried out a Sampo Gefinex 400S frequency domain electromagnetic (EM) survey in the central part of the eastern Olkiluoto island. The survey comprised a total of 408 soundings; 134 of these were measurements of EM noise. The goal of the survey was to supplement previously performed soundings. The measurements of EM noise were used to analyse the influence of power lines on the soundings. A statistically significant correlation was found between EM noise and the distance between the receiver and the high-voltage power line located northeast of the research area. The high-voltage power line exerted a considerable influence on the soundings. Numerical modelling was used to evaluate the effect of a dipping layer on the interpretation of Sampo soundings, which is based on the 1-D layered earth model. The results indicate that Sampo interpretation is robust even in the case of a dipping layer, assuming that the dip of the layer is not steep, and both the transmitter and receiver are located above the layer. The interpretations of the soundings indicate three conducting layers. There appear to be two layers of significant conductivity above the depth of 600 m. These layers may be indications of sulphide and/or graphite rich layers. Furthermore, a deeper conducting layer below the depth of 600 m was also indicated by the interpretations. This layer may indicate deep saline groundwater. (orig.)

  7. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    Full Text Available This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  8. Generation of the vorticity mode by sound in a Bingham plastic

    Science.gov (United States)

    Perelomova, Anna; Wojda, Pawel

    2011-10-01

    This study investigates the interaction between acoustic and non-acoustic modes, such as the vorticity mode, in a class of non-Newtonian fluids called Bingham plastics. The instantaneous equations describing interaction between different modes are derived. Attention is paid to the nonlinear effects in the field of intense sound. The resulting equations, which describe the dynamics of both sound and the vorticity mode, apply to both periodic and aperiodic sound of any waveform. They use only instantaneous quantities and do not imply averaging over the sound period. The theory is illustrated by an example of the acoustic force of vorticity induced in the field of a Gaussian sound beam. Some unusual peculiarities in both the sound and the vorticity induced in its field, as compared to a Newtonian fluid, are discovered.

  9. Irrelevant sound disrupts speech production: exploring the relationship between short-term memory and experimentally induced slips of the tongue.

    Science.gov (United States)

    Saito, Satoru; Baddeley, Alan

    2004-10-01

    To explore the relationship between short-term memory and speech production, we developed a speech error induction technique. The technique, which was adapted from a Japanese word game, exposed participants to an auditory distractor word immediately before the utterance of a target word. In Experiment 1, the distractor words that were phonologically similar to the target word led to a greater number of errors in speaking the target than did the dissimilar distractor words. Furthermore, the speech error scores were significantly correlated with memory span scores. In Experiment 2, memory span scores were again correlated with the rate of the speech errors that were induced from the task-irrelevant speech sounds. Experiment 3 showed a strong irrelevant-sound effect in the serial recall of nonwords. The magnitude of the irrelevant-sound effects was not affected by phonological similarity between the to-be-remembered nonwords and the irrelevant-sound materials. Analysis of recall errors in Experiment 3 also suggested that there were no essential differences in recall error patterns between the dissimilar and similar irrelevant-sound conditions. We proposed two different underlying mechanisms in immediate memory, one operating via the phonological short-term memory store and the other via the processes underpinning speech production.

  10. Differential presence of anthropogenic compounds dissolved in the marine waters of Puget Sound, WA and Barkley Sound, BC.

    Science.gov (United States)

    Keil, Richard; Salemme, Keri; Forrest, Brittany; Neibauer, Jaqui; Logsdon, Miles

    2011-11-01

    Organic compounds were evaluated in March 2010 at 22 stations in Barkley Sound, Vancouver Island, Canada and at 66 locations in Puget Sound. Of 37 compounds, 15 were xenobiotics, 8 were determined to have an anthropogenic imprint over natural sources, and 13 were presumed to be of natural or mixed origin. The three most frequently detected compounds were salicylic acid, vanillin and thymol. The three most abundant compounds were diethylhexyl phthalate (DEHP), ethyl vanillin and benzaldehyde (∼600 ng L(-1) on average). Concentrations of xenobiotics were 10-100 times higher in Puget Sound relative to Barkley Sound. Three compound couplets are used to illustrate the influence of human activity on marine waters; vanillin and ethyl vanillin, salicylic acid and acetylsalicylic acid, and cinnamaldehyde and cinnamic acid. Ratios indicate that anthropogenic activities are the predominant source of these chemicals in Puget Sound. Published by Elsevier Ltd.

  11. Propagation of Sound in a Bose-Einstein Condensate

    International Nuclear Information System (INIS)

    Andrews, M.R.; Kurn, D.M.; Miesner, H.; Durfee, D.S.; Townsend, C.G.; Inouye, S.; Ketterle, W.

    1997-01-01

    Sound propagation has been studied in a magnetically trapped dilute Bose-Einstein condensate. Localized excitations were induced by suddenly modifying the trapping potential using the optical dipole force of a focused laser beam. The resulting propagation of sound was observed using a novel technique, rapid sequencing of nondestructive phase-contrast images. The speed of sound was determined as a function of density and found to be consistent with Bogoliubov theory. This method may generally be used to observe high-lying modes and perhaps second sound. copyright 1997 The American Physical Society
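    For reference, the Bogoliubov prediction that the measured speed of sound is compared with is the standard weak-coupling result, written here in terms of the condensate density n, the s-wave scattering length a and the atomic mass m (a textbook relation, not a formula quoted from the record):

        c_s = \sqrt{\frac{g\,n}{m}}, \qquad g = \frac{4\pi\hbar^{2}a}{m},
        \qquad\text{so that}\qquad c_s = \frac{\hbar}{m}\sqrt{4\pi a n}.

    The square-root dependence on density is the scaling tested in the measurement described above.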

  12. THE MODULATED SOUNDS MADE BY THE TSETSE FLY ...

    African Journals Online (AJOL)

    The females arrive later. The question is, how do they locate the host if they are more sedentary? Sound-motivation and -location is apparently fairly common in the insect world (Frings & Frings 1958: 87-106). The purpose of the present ...

  13. Suppressive competition: how sounds may cheat sight.

    Science.gov (United States)

    Kayser, Christoph; Remedios, Ryan

    2012-02-23

    In this issue of Neuron, Iurilli et al. (2012) demonstrate that auditory cortex activation directly engages local GABAergic circuits in V1 to induce sound-driven hyperpolarizations in layer 2/3 and layer 6 pyramidal neurons. Thereby, sounds can directly suppress V1 activity and visual driven behavior. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Initial Results from Lunar Electromagnetic Sounding with ARTEMIS

    Science.gov (United States)

    Fuqua, H.; Fatemi, S.; Poppe, A. R.; Delory, G. T.; Grimm, R. E.; De Pater, I.

    2016-12-01

    Electromagnetic sounding constrains conducting layers of the lunar interior by observing variations in the interplanetary magnetic field. Here, we focus our analysis on the time-domain transfer function method, locating transient events observed by two magnetometers near the Moon. We analyze ARTEMIS and Apollo magnetometer data. This analysis assumes the induced field responds undisturbed in a vacuum. In actuality, the dynamic plasma environment interacts with the induced field. Our models indicate distortion but not confinement occurs in the nightside wake cavity. Moreover, within the deep wake, near-vacuum region, distortion of the induced dipole fields due to the interaction with the wake is minimal, depending on the magnitude of the induced field, the geometry of the upstream fields, and the upstream plasma parameters such as particle densities, solar wind velocity, and temperatures. Our results indicate the assumption of a vacuum dipolar response is reasonable within this minimally disturbed zone. We then interpret the ARTEMIS magnetic field signal through a geophysical forward model capturing the induced response based on prescribed electrical conductivity models. We demonstrate that our forward model passes benchmarking analyses and solves the magnetic induction response for any input signal as well as any 2- or 3-dimensional conductivity profile. We locate data windows according to the following criteria: (1) probe locations such that the wake probe is within 500 km altitude within the wake cavity and minimally disturbed zone, and the second probe is in the free-streaming solar wind; (2) a transient event consisting of an abrupt change in the magnetic field occurs, enabling the observation of induction; (3) cross-correlation analysis reveals the magnetic field signals are well correlated between the two probes and distances observed. Here we present initial ARTEMIS results providing further insight into the lunar interior structure. This method and modeling results
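    Criterion (3) above, that the transient must appear well correlated at the two probes, can be illustrated with a short sketch. The normalization, the lag range and the acceptance threshold are assumptions for illustration; the actual selection pipeline is not specified in the record.

        import numpy as np

        def peak_normalized_xcorr(b_wake, b_wind, max_lag):
            # Peak normalized cross-correlation between two magnetometer traces
            # over lags of +/- max_lag samples (simple window-acceptance check).
            a = (b_wake - b_wake.mean()) / b_wake.std()
            b = (b_wind - b_wind.mean()) / b_wind.std()
            best = 0.0
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    x, y = a[lag:], b[:len(b) - lag]
                else:
                    x, y = a[:len(a) + lag], b[-lag:]
                if len(x) > 1:
                    best = max(best, float(np.dot(x, y)) / len(x))
            return best

        # Synthetic example: the same transient seen 5 samples later at one probe.
        rng = np.random.default_rng(0)
        trace = np.cumsum(rng.standard_normal(2000))
        print(peak_normalized_xcorr(trace, np.roll(trace, 5), max_lag=20))  # close to 1
        # A window might be kept only if this value exceeds, say, 0.8 (assumed threshold).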

  15. Noise-induced hearing loss induces loudness intolerance in a rat Active Sound Avoidance Paradigm (ASAP).

    Science.gov (United States)

    Manohar, Senthilvelan; Spoth, Jaclyn; Radziwon, Kelly; Auerbach, Benjamin D; Salvi, Richard

    2017-09-01

    Hyperacusis is a loudness hypersensitivity disorder in which moderate-intensity sounds are perceived as extremely loud, aversive and/or painful. To assess the aversive nature of sounds, we developed an Active Sound Avoidance Paradigm (ASAP) in which rats altered their place preference in a Light/Dark shuttle box in response to sound. When no sound (NS) was present, rats spent more than 95% of the time in the Dark Box versus the transparent Light Box. However, when a 60 or 90 dB SPL noise (2-20 kHz, 2-8 kHz, or 16-20 kHz bandwidth) was presented in the Dark Box, the rats' preference for the Dark Box significantly decreased. Percent time in the dark decreased as sound intensity in the Dark Box increased from 60 dB to 90 dB SPL. Interestingly, the magnitude of the decrease was not a monotonic function of intensity for the 16-20 kHz noise and not related to the bandwidth of the 2-20 kHz and 2-8 kHz noise bands, suggesting that sound avoidance is not solely dependent on loudness but the aversive quality of the noise as well. Afterwards, we exposed the rats for 28 days to a 16-20 kHz noise at 102 dB SPL; this exposure produced a 30-40 dB permanent threshold shift at 16 and 32 kHz. Following the noise exposure, the rats were then retested on the ASAP paradigm. High-frequency hearing loss did not alter Dark Box preference in the no-sound condition. However, when the 2-20 kHz or 2-8 kHz noise was presented at 60 or 90 dB SPL, the rats avoided the Dark Box significantly more than they did before the exposure, indicating these two noise bands with energy below the region of hearing loss were perceived as more aversive. In contrast, when the 16-20 kHz noise was presented at 60 or 90 dB SPL, the rats remained in the Dark Box presumably because the high-frequency hearing loss made 16-20 kHz noise less audible and less aversive. These results indicate that when rats develop a high-frequency hearing loss, they become less tolerant of low frequency noise, i

  16. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    Science.gov (United States)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to

  17. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.

  18. Evaluating Environmental Sounds from a Presence Perspective for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf

    2010-01-01

    We propose a methodology to design and evaluate environmental sounds for virtual environments. We propose to combine physically modeled sound events with recorded soundscapes. Physical models are used to provide feedback to users’ actions, while soundscapes reproduce the characteristic soundmarks...... as well as self-induced interactive sounds simulated using physical models. Results show that subjects’ motion in the environment is significantly enhanced when dynamic sound sources and sound of egomotion are rendered in the environment....

  19. Time-domain electromagnetic soundings collected in Dawson County, Nebraska, 2007-09

    Science.gov (United States)

    Payne, Jason; Teeple, Andrew

    2011-01-01

    Between April 2007 and November 2009, the U.S. Geological Survey, in cooperation with the Central Platte Natural Resources District, collected time-domain electromagnetic (TDEM) soundings at 14 locations in Dawson County, Nebraska. The TDEM soundings provide information pertaining to the hydrogeology at each of 23 sites at the 14 locations; 30 TDEM surface geophysical soundings were collected at the 14 locations to develop smooth and layered-earth resistivity models of the subsurface at each site. The soundings yield estimates of subsurface electrical resistivity; variations in subsurface electrical resistivity can be correlated with hydrogeologic and stratigraphic units. Results from each sounding were used to calculate resistivity to depths of approximately 90-130 meters (depending on loop size) below the land surface. Geonics Protem 47 and 57 systems, as well as the Alpha Geoscience TerraTEM, were used to collect the TDEM soundings (voltage data from which resistivity is calculated). For each sounding, voltage data were averaged and evaluated statistically before inversion (inverse modeling). Inverse modeling is the process of creating an estimate of the true distribution of subsurface resistivity from the measured apparent resistivity obtained from TDEM soundings. Smooth and layered-earth models were generated for each sounding. A smooth model is a vertical delineation of calculated apparent resistivity that represents a non-unique estimate of the true resistivity. Ridge regression (Interpex Limited, 1996) was used by the inversion software in a series of iterations to create a smooth model consisting of 24-30 layers for each sounding site. Layered-earth models were then generated based on results of smooth modeling. The layered-earth models are simplified (generally 1 to 6 layers) to represent geologic units with depth. Throughout the area, the layered-earth models range from 2 to 4 layers, depending on observed inflections in the raw data and smooth model
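    The ridge-regression inversion mentioned above follows the usual damped least-squares pattern. The sketch below shows one generic Gauss-Newton update with ridge damping; the TDEM forward response and its Jacobian are left as stand-in callables, since the Interpex implementation is not described in the record.

        import numpy as np

        def ridge_update(model, data, forward, jacobian, damping=1e-2):
            # One damped (ridge-regularized) Gauss-Newton step for a layered
            # resistivity model; `forward` maps model -> predicted responses and
            # `jacobian` returns its sensitivity matrix (both are placeholders here).
            residual = data - forward(model)
            J = jacobian(model)
            lhs = J.T @ J + damping * np.eye(J.shape[1])
            return model + np.linalg.solve(lhs, J.T @ residual)

    Iterating such an update while adjusting the damping is, in outline, how a smooth multi-layer model is fitted to the measured apparent resistivities before being simplified into a layered-earth model.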

  20. Location coding by opponent neural populations in the auditory cortex.

    Directory of Open Access Journals (Sweden)

    G Christopher Stecker

    2005-03-01

    Full Text Available Although the auditory cortex plays a necessary role in sound localization, physiological investigations in the cortex reveal inhomogeneous sampling of auditory space that is difficult to reconcile with localization behavior under the assumption of local spatial coding. Most neurons respond maximally to sounds located far to the left or right side, with few neurons tuned to the frontal midline. Paradoxically, psychophysical studies show optimal spatial acuity across the frontal midline. In this paper, we revisit the problem of inhomogeneous spatial sampling in three fields of cat auditory cortex. In each field, we confirm that neural responses tend to be greatest for lateral positions, but show the greatest modulation for near-midline source locations. Moreover, identification of source locations based on cortical responses shows sharp discrimination of left from right but relatively inaccurate discrimination of locations within each half of space. Motivated by these findings, we explore an opponent-process theory in which sound-source locations are represented by differences in the activity of two broadly tuned channels formed by contra- and ipsilaterally preferring neurons. Finally, we demonstrate a simple model, based on spike-count differences across cortical populations, that provides bias-free, level-invariant localization, and thus also a solution to the "binding problem" of associating spatial information with other nonspatial attributes of sounds.
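    The opponent-process idea in the record above (two broadly tuned, contra- and ipsilaterally preferring channels whose difference codes azimuth) can be sketched as follows. The sigmoidal tuning curves and the sound-level gain are invented for illustration, and normalizing the channel difference by the channel sum is just one simple way to obtain a level-invariant read-out, not necessarily the authors' fitted model.

        import numpy as np

        def channel_rates(azimuth_deg, level_gain):
            # Spike counts of a right-preferring and a left-preferring population,
            # each broadly (sigmoidally) tuned to its contralateral hemifield.
            az = np.asarray(azimuth_deg, dtype=float)
            right = level_gain / (1.0 + np.exp(-az / 20.0))   # prefers sources on the right
            left = level_gain / (1.0 + np.exp(az / 20.0))     # prefers sources on the left
            return right, left

        def opponent_readout(azimuth_deg, level_gain):
            r, l = channel_rates(azimuth_deg, level_gain)
            return (r - l) / (r + l)   # level-invariant: the gain cancels in the ratio

        for gain in (10.0, 100.0):     # two sound levels give the same read-out
            print(gain, np.round(opponent_readout([-60, 0, 60], gain), 3))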

  1. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Through several audio and audio-video tests we have compared Foley and real sounds originating from an identical action. The main purpose was to evaluate if sound effects

  2. Seismic and Biological Sources of Ambient Ocean Sound

    Science.gov (United States)

    Freeman, Simon Eric

    Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional 'image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. A two-dimensional 'map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array in the third case. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed.
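    The azimuthal estimates mentioned above come from beamforming; a minimal time-domain delay-and-sum sketch is given below. The hydrophone geometry, sample rate and sound speed are placeholder values, and real array processing (shading, focusing for the two-dimensional 'image') is considerably more involved.

        import numpy as np

        def delay_and_sum_power(signals, fs, positions, angles_deg, c=1500.0):
            # Steered-response power of a line array of hydrophones.
            # signals: (n_hydrophones, n_samples); positions: element x-coordinates (m);
            # fs: sample rate (Hz); c: assumed sound speed in seawater (m/s).
            power = []
            for ang in np.deg2rad(angles_deg):
                delays = positions * np.sin(ang) / c              # plane-wave delays (s)
                shifts = np.rint(delays * fs).astype(int)
                summed = sum(np.roll(sig, -s) for sig, s in zip(signals, shifts))
                power.append(float(np.mean(summed ** 2)))
            return np.array(power)

        # Synthetic test: a 1.5 kHz plane wave arriving from 30 degrees on a
        # 4-element array with 0.5 m spacing (all values are illustrative).
        fs, positions = 48000.0, np.arange(4) * 0.5
        t = np.arange(4096) / fs
        true_delays = positions * np.sin(np.deg2rad(30.0)) / 1500.0
        signals = np.vstack([np.sin(2 * np.pi * 1500.0 * (t - d)) for d in true_delays])
        angles = np.arange(-90, 91, 5)
        print("peak near", angles[np.argmax(delay_and_sum_power(signals, fs, positions, angles))], "deg")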

  3. Sound Scattering and Its Reduction by a Janus Sphere Type

    Directory of Open Access Journals (Sweden)

    Deliya Kim

    2014-01-01

    Full Text Available Sound scattering by a Janus sphere type is considered. The sphere has two surface zones: a soft surface of zero acoustic impedance and a hard surface of infinite acoustic impedance. The zones are arranged such that axisymmetry of the sound field is preserved. The equivalent source method is used to compute the sound field. It is shown that, by varying the sizes of the soft and hard zones on the sphere, a significant reduction can be achieved in the scattered acoustic power and upstream directivity when the sphere is near a free surface and its soft zone faces the incoming wave and vice versa for a hard ground. In both cases the size of the sphere’s hard zone is much larger than that of its soft zone. The boundary location between the two zones coincides with the location of a zero pressure line of the incoming standing sound wave, thus masking the sphere within the sound field reflected by the free surface or the hard ground. The reduction in the scattered acoustic power diminishes when the sphere is placed in free space. Variations of the scattered acoustic power and directivity with the sound frequency are also given and discussed.

  4. How to take absorptive surfaces into account when designing outdoor sound reinforcement systems

    DEFF Research Database (Denmark)

    Rasmussen, Karsten bo

    1996-01-01

    When sound reinforcement systems are used outdoors, absorptive surfaces are usually present along the propagation path of the sound. This may lead to a very significant colouration of the spectrum received by the audience. The colouration depends on the location and directivity of the loudspeaker, the nature of the absorptive surface (e.g. grass) and the location of the audience. It is discussed how this effect may be calculated, and numerical examples are shown. The results show a significant colouration and attenuation of the sound due to grass-covered surfaces.

  5. Second Sound for Heat Source Localization

    CERN Document Server

    Vennekate, Hannes; Uhrmacher, Michael; Quadt, Arnulf; Grosse-Knetter, Joern

    2011-01-01

    Defects on the surface of superconducting cavities can limit their accelerating gradient by localized heating. This results in a phase transition to the normal conducting state, a quench. A new application involving Oscillating Superleak Transducers (OSTs) to locate such quench-inducing heat spots on the surface of the cavities was developed by D. Hartill et al. at Cornell University in 2008. The OSTs enable the detection of heat transfer via second sound in superfluid helium. This thesis presents new results on the analysis of their signal. Its behavior has been studied under different conditions in setups at the University of Göttingen and at CERN. New approaches for automated signal processing have been developed. Furthermore, a first test setup for a single-cell Superconducting Proton Linac (SPL) cavity has been prepared. Recommendations for better signal retrieval during its operation are presented.

  6. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  7. Hearing abilities and sound reception of broadband sounds in an adult Risso's dolphin (Grampus griseus).

    Science.gov (United States)

    Mooney, T Aran; Yang, Wei-Cheng; Yu, Hsin-Yi; Ketten, Darlene R; Jen, I-Fan

    2015-08-01

    While odontocetes do not have an external pinna that guides sound to the middle ear, they are considered to receive sound through specialized regions of the head and lower jaw. Yet odontocetes differ in the shape of the lower jaw suggesting that hearing pathways may vary between species, potentially influencing hearing directionality and noise impacts. This work measured the audiogram and received sensitivity of a Risso's dolphin (Grampus griseus) in an effort to comparatively examine how this species receives sound. Jaw hearing thresholds were lowest (most sensitive) at two locations along the anterior, midline region of the lower jaw (the lower jaw tip and anterior part of the throat). Responses were similarly low along a more posterior region of the lower mandible, considered the area of best hearing in bottlenose dolphins. Left- and right-side differences were also noted suggesting possible left-right asymmetries in sound reception or differences in ear sensitivities. The results indicate best hearing pathways may vary between the Risso's dolphin and other odontocetes measured. This animal received sound well, supporting a proposed throat pathway. For Risso's dolphins in particular, good ventral hearing would support their acoustic ecology by facilitating echo-detection from their proposed downward oriented echolocation beam.

  8. [Synchronous playing and acquiring of heart sounds and electrocardiogram based on labVIEW].

    Science.gov (United States)

    Dan, Chunmei; He, Wei; Zhou, Jing; Que, Xiaosheng

    2008-12-01

    This paper describes a comprehensive system that acquires heart sounds and electrocardiogram (ECG) in parallel, synchronizes the display and playback of the heart sounds, and ties auscultation to inspection of the phonocardiogram. The hardware system, built around a C8051F340, acquires the heart sound and ECG synchronously and then sends them to their respective indicators. Heart sounds are displayed and played simultaneously by controlling the moments of writing to the indicator and to the sound output device. In clinical testing, heart sounds could be successfully located against the ECG and played in real time.

  9. Local Mechanisms for Loud Sound-Enhanced Aminoglycoside Entry into Outer Hair Cells

    Directory of Open Access Journals (Sweden)

    Hongzhe eLi

    2015-04-01

    Full Text Available Loud sound exposure exacerbates aminoglycoside ototoxicity, increasing the risk of permanent hearing loss and degrading the quality of life in affected individuals. We previously reported that loud sound exposure induces temporary threshold shifts (TTS) and enhances uptake of aminoglycosides, like gentamicin, by cochlear outer hair cells (OHCs). Here, we explore mechanisms by which loud sound exposure and TTS could increase aminoglycoside uptake by OHCs that may underlie this form of ototoxic synergy. Mice were exposed to loud sound levels to induce TTS, and received fluorescently-tagged gentamicin (GTTR) for 30 minutes prior to fixation. The degree of TTS was assessed by comparing auditory brainstem responses before and after loud sound exposure. The number of tip links, which gate the GTTR-permeant mechanoelectrical transducer (MET) channels, was determined in OHC bundles, with or without exposure to loud sound, using scanning electron microscopy. We found that wide-band noise (WBN) levels that induce TTS also enhance OHC uptake of GTTR compared to OHCs in control cochleae. In cochlear regions with TTS, the increase in OHC uptake of GTTR was significantly greater than in adjacent pillar cells. In control mice, we identified stereociliary tip links at ~50% of potential positions in OHC bundles. However, the number of OHC tip links was significantly reduced in mice that received WBN at levels capable of inducing TTS. These data suggest that GTTR uptake by OHCs during TTS occurs by increased permeation of surviving, mechanically-gated MET channels, and/or non-MET aminoglycoside-permeant channels activated following loud sound exposure. Loss of tip links would hyperpolarize hair cells and potentially increase drug uptake via aminoglycoside-permeant channels expressed by hair cells. The effect of TTS on aminoglycoside-permeant channel kinetics will shed new light on the mechanisms of loud sound-enhanced aminoglycoside uptake, and consequently on ototoxic

  10. Low-frequency sound affects active micromechanics in the human inner ear

    Science.gov (United States)

    Kugler, Kathrin; Wiegrebe, Lutz; Grothe, Benedikt; Kössl, Manfred; Gürkov, Robert; Krause, Eike; Drexl, Markus

    2014-01-01

    Noise-induced hearing loss is one of the most common auditory pathologies, resulting from overstimulation of the human cochlea, an exquisitely sensitive micromechanical device. At very low frequencies (less than 250 Hz), however, the sensitivity of human hearing, and therefore the perceived loudness, is poor. The perceived loudness is mediated by the inner hair cells of the cochlea, which are driven very inadequately at low frequencies. To assess the impact of low-frequency (LF) sound, we exploited a by-product of the active sound amplification performed by outer hair cells (OHCs), so-called spontaneous otoacoustic emissions. These are faint sounds produced by the inner ear that can be used to detect changes of cochlear physiology. We show that a short exposure to perceptually unobtrusive LF sounds significantly affects OHCs: a 90 s, 80 dB(A) LF sound induced slow, concordant and positively correlated frequency and level oscillations of spontaneous otoacoustic emissions that lasted for about 2 min after LF sound offset. LF sounds, contrary to their unobtrusive perception, strongly stimulate the human cochlea and affect amplification processes in the most sensitive and important frequency range of human hearing. PMID:26064536

  11. Physics of thermo-acoustic sound generation

    Science.gov (United States)

    Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.

    2013-09-01

    We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.
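
    The scaling stated in the abstract, efficiency growing linearly with supplied thermal power and quadratically with excitation frequency, can be written down directly. The snippet below illustrates only that relative scaling, with a hypothetical proportionality constant; it is not a substitute for the paper's analytical model.

```python
# Relative scaling only: efficiency ~ k * P_thermal * f^2, with k hypothetical.
def relative_efficiency(p_thermal_w, f_hz, k=1.0):
    return k * p_thermal_w * f_hz ** 2

base = relative_efficiency(1.0, 100e3)                  # 1 W at 100 kHz
print(relative_efficiency(2.0, 100e3) / base)           # doubling power     -> 2.0
print(relative_efficiency(1.0, 200e3) / base)           # doubling frequency -> 4.0
```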

  12. The influence of signal parameters on the sound source localization ability of a harbor porpoise (Phocoena phocoena)

    NARCIS (Netherlands)

    Kastelein, R.A.; Haan, D.de; Verboom, W.C.

    2007-01-01

    It is unclear how well harbor porpoises can locate sound sources, and thus how well they can locate acoustic alarms on gillnets. Therefore, the ability of a porpoise to determine the location of a sound source was quantified. The animal was trained to indicate the active one of 16 transducers in a 16-m-diam

  13. 78 FR 17224 - Environmental Impact Statement; Proposed South Puget Sound Prairie Habitat Conservation Plan...

    Science.gov (United States)

    2013-03-20

    ... sizable portion of South Puget Sound Prairie habitat is located in the urban-rural interface and in the...-FF01E00000] Environmental Impact Statement; Proposed South Puget Sound Prairie Habitat Conservation Plan... permit application would be associated the South Puget Sound Prairie Habitat Conservation Plan (Prairie...

  14. Hearing with an atympanic ear: good vibration and poor sound-pressure detection in the royal python, Python regius.

    Science.gov (United States)

    Christensen, Christian Bech; Christensen-Dalsgaard, Jakob; Brandt, Christian; Madsen, Peter Teglberg

    2012-01-15

    Snakes lack both an outer ear and a tympanic middle ear, which in most tetrapods provide impedance matching between the air and inner ear fluids and hence improve pressure hearing in air. Snakes would therefore be expected to have very poor pressure hearing and generally be insensitive to airborne sound, whereas the connection of the middle ear bone to the jaw bones in snakes should confer acute sensitivity to substrate vibrations. Some studies have nevertheless claimed that snakes are quite sensitive to both vibration and sound pressure. Here we test the two hypotheses that: (1) snakes are sensitive to sound pressure and (2) snakes are sensitive to vibrations, but cannot hear the sound pressure per se. Vibration and sound-pressure sensitivities were quantified by measuring brainstem evoked potentials in 11 royal pythons, Python regius. Vibrograms and audiograms showed greatest sensitivity at low frequencies of 80-160 Hz, with sensitivities of -54 dB re. 1 m s⁻² and 78 dB re. 20 μPa, respectively. To investigate whether pythons detect sound pressure or sound-induced head vibrations, we measured the sound-induced head vibrations in three dimensions when snakes were exposed to sound pressure at threshold levels. In general, head vibrations induced by threshold-level sound pressure were equal to or greater than those induced by threshold-level vibrations, and therefore sound-pressure sensitivity can be explained by sound-induced head vibration. From this we conclude that pythons, and possibly all snakes, lost effective pressure hearing with the complete reduction of a functional outer and middle ear, but have an acute vibration sensitivity that may be used for communication and detection of predators and prey.

  15. Identification of impact force acting on composite laminated plates using the radiated sound measured with microphones

    Science.gov (United States)

    Atobe, Satoshi; Nonami, Shunsuke; Hu, Ning; Fukunaga, Hisao

    2017-09-01

    Foreign object impact events are serious threats to composite laminates because impact damage leads to significant degradation of the mechanical properties of the structure. Identification of the location and force history of the impact that was applied to the structure can provide useful information for assessing the structural integrity. This study proposes a method for identifying impact forces acting on CFRP (carbon fiber reinforced plastic) laminated plates on the basis of the sound radiated from the impacted structure. Identification of the impact location and force history is performed using the sound pressure measured with microphones. To devise a method for identifying the impact location from the difference in the arrival times of the sound wave detected with the microphones, the propagation path of the sound wave from the impacted point to the sensor is examined. For the identification of the force history, an experimentally constructed transfer matrix is employed to relate the force history to the corresponding sound pressure. To verify the validity of the proposed method, impact tests are conducted by using a CFRP cross-ply laminate as the specimen, and an impulse hammer as the impactor. The experimental results confirm the validity of the present method for identifying the impact location from the arrival time of the sound wave detected with the microphones. Moreover, the results of force history identification show the feasibility of identifying the force history accurately from the measured sound pressure using the experimental transfer matrix.
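
    As a rough sketch of the two steps described above (not the authors' implementation), the snippet below (a) picks an impact location on a plate by grid search over candidate points using arrival-time differences at several microphones, assuming a single effective propagation speed, and (b) recovers a force history by least-squares inversion of a transfer matrix. The geometry, speed, and matrix are placeholders.

```python
import numpy as np

C_EFF = 343.0   # assumed effective propagation speed of the radiated sound, m/s
MICS = np.array([[0.0, 0.0], [0.6, 0.0], [0.0, 0.4], [0.6, 0.4]])  # mic x,y (m)

def locate_impact(arrival_times, grid_step=0.005):
    """Grid search for the point whose predicted arrival-time differences
    best match the measured ones (relative to the first microphone)."""
    dt_meas = arrival_times - arrival_times[0]
    best, best_err = None, np.inf
    for x in np.arange(0.0, 0.6 + grid_step, grid_step):
        for y in np.arange(0.0, 0.4 + grid_step, grid_step):
            t = np.linalg.norm(MICS - [x, y], axis=1) / C_EFF
            err = np.sum((t - t[0] - dt_meas) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

def identify_force(T, p):
    """Least-squares force history from measured pressure, given a transfer
    matrix T (constructed experimentally in the paper)."""
    return np.linalg.lstsq(T, p, rcond=None)[0]

# Quick check of the localization step with synthetic arrival times
true_pt = np.array([0.42, 0.13])
t_true = np.linalg.norm(MICS - true_pt, axis=1) / C_EFF
print(locate_impact(t_true))   # approximately (0.42, 0.13)
```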

  16. Electrical resistivity sounding to study water content distribution in heterogeneous soils

    Science.gov (United States)

    Electrical resistivity (ER) sounding is increasingly being used as a non-invasive technique to reveal and map soil heterogeneity. The objective of this work was to assess the applicability of ER sounding to the study of soil water distribution in spatially heterogeneous soils. The 30x30-m study plot was located at ...

  17. A unified approach for the spatial enhancement of sound

    Science.gov (United States)

    Choi, Joung-Woo; Jang, Ji-Ho; Kim, Yang-Hann

    2005-09-01

    This paper aims to control the sound field spatially, so that the desired or target acoustic variable is enhanced within a zone where a listener is located. This is somewhat analogous to having manipulators that can draw sounds to any place. It also means that one can see the controlled shape of the sound field in frequency or in real time. The former assures practical applicability, for example listening-zone control for music; the latter provides a means of analyzing the sound field. With all this in mind, a unified approach is proposed that can enhance selected acoustic variables using multiple sources. Three kinds of acoustic variables, related to the magnitude and direction of the sound field, are formulated and enhanced. The first, which has to do with the spatial control of acoustic potential energy, enables one to make a zone of loud sound over an area. Alternatively, one can control the directional characteristics of the sound field by controlling directional energy density, or one can enhance the magnitude and direction of sound at the same time by controlling acoustic intensity. Through various examples, it is shown that these acoustic variables can be controlled successfully by the proposed approach.
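
    One common way to cast the first of these problems, maximizing acoustic potential energy in a target zone for a given source effort, is as a Rayleigh-quotient problem whose solution is the dominant eigenvector of a spatial correlation matrix. The sketch below illustrates that idea under assumed free-field conditions; the array geometry, frequency, and Green's function are placeholders, and this is not necessarily the exact formulation used in the paper.

```python
import numpy as np

# Choose complex source weights q that maximize mean |p|^2 over a listening
# zone per unit source effort (a "brightness"-style control problem).
C, F = 343.0, 1000.0
K = 2 * np.pi * F / C

sources = np.array([[x, 0.0] for x in np.linspace(-0.5, 0.5, 8)])   # line array
zone = np.array([[x, 2.0] for x in np.linspace(0.9, 1.1, 21)])      # target zone

def green(r_field, r_src):
    """Free-field Green's function (assumed propagation model)."""
    d = np.linalg.norm(r_field - r_src)
    return np.exp(-1j * K * d) / (4 * np.pi * d)

Z = np.array([[green(f, s) for s in sources] for f in zone])  # zone transfer matrix
R = Z.conj().T @ Z / len(zone)                                # spatial correlation

w, V = np.linalg.eigh(R)       # Hermitian eigendecomposition, ascending order
q = V[:, -1]                   # dominant eigenvector = optimal source weights
print("mean zone energy per unit source effort:", w[-1].real)
```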

  18. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the scientific background necessary to teach middle-school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! Series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  19. Noise detection during heart sound recording using periodicity signatures

    International Nuclear Information System (INIS)

    Kumar, D; Carvalho, P; Paiva, R P; Henriques, J; Antunes, M

    2011-01-01

    Heart sound is a valuable biosignal for the diagnosis of a large set of cardiac diseases. Ambient and physiological noise interference is one of the most usual and highly probable incidents during heart sound acquisition. It tends to change the morphological characteristics of the heart sound that may carry important information for heart disease diagnosis. In this paper, we propose a new method, applicable in real time, to detect ambient and internal body noises manifested in heart sound during acquisition. The algorithm is developed on the basis of the periodic nature of heart sounds and physiologically inspired criteria. A small segment of uncontaminated heart sound exhibiting periodicity in time as well as in the time-frequency domain is first detected and applied as a reference signal in discriminating noise from the sound. The proposed technique has been tested with a database of heart sounds collected from 71 subjects with several types of heart disease, with various noises induced during recording. The achieved average sensitivity and specificity are 95.88% and 97.56%, respectively.
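
    A crude, generic way to score periodicity, not the authors' algorithm, is to look at the normalized autocorrelation within a physiologically plausible range of heart periods and flag segments whose best peak is weak. The threshold, heart-rate range, and synthetic signals below are assumptions for illustration only.

```python
import numpy as np

def periodicity_score(segment, fs, min_hr=40, max_hr=180):
    """Peak of the normalized autocorrelation within the plausible heart-period range."""
    x = segment - np.mean(segment)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0] + 1e-12
    lo = int(fs * 60.0 / max_hr)      # shortest plausible heart period, in samples
    hi = int(fs * 60.0 / min_hr)      # longest plausible heart period, in samples
    return float(np.max(ac[lo:hi]))

def looks_noisy(segment, fs, threshold=0.3):
    """Flag segments whose periodicity falls below a (hypothetical) threshold."""
    return periodicity_score(segment, fs) < threshold

fs = 1000
t = np.arange(0, 5, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t) ** 8          # crude periodic "lub-dub"-like envelope
noise = np.random.default_rng(1).standard_normal(len(t))
print(looks_noisy(clean, fs), looks_noisy(noise, fs))   # typically: False True
```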

  20. Sound taxation? On the use of self-declared value

    NARCIS (Netherlands)

    Haan, Marco A.; Heijnen, Pim; Schoonbeek, Lambert; Toolsema-Veldman, Linda

    In the 16th century, foreign ships passing through the Sound had to pay ad valorem taxes, known as the Sound Dues. To give skippers an incentive to declare the true value of their cargo, the Danish Crown reserved the right to purchase it at the declared value. We show that this rule does not induce

  1. On the role of sound in the strong Langmuir turbulence

    International Nuclear Information System (INIS)

    Malkin, V.M.

    1989-01-01

    The main directions in which the theory of strong Langmuir turbulence is being refined, driven by the need to account for sound waves in plasma, are presented. In particular, the effect of sound-wave-induced conversion of short-wave modulations of Langmuir waves is briefly described. 8 refs.

  2. A standing posture is associated with increased susceptibility to the sound-induced flash illusion in fall-prone older adults.

    Science.gov (United States)

    Stapleton, John; Setti, Annalisa; Doheny, Emer P; Kenny, Rose Anne; Newell, Fiona N

    2014-02-01

    Recent research has provided evidence suggesting a link between inefficient processing of multisensory information and incidence of falling in older adults. Specifically, Setti et al. (Exp Brain Res 209:375-384, 2011) reported that older adults with a history of falling were more susceptible than their healthy, age-matched counterparts to the sound-induced flash illusion. Here, we investigated whether balance control in fall-prone older adults was directly associated with multisensory integration by testing susceptibility to the illusion under two postural conditions: sitting and standing. Whilst standing, fall-prone older adults had a greater body sway than the age-matched healthy older adults and their body sway increased when presented with the audio-visual illusory but not the audio-visual congruent conditions. We also found an increase in susceptibility to the sound-induced flash illusion during standing relative to sitting for fall-prone older adults only. Importantly, no performance differences were found across groups in either the unisensory or non-illusory multisensory conditions across the two postures. These results suggest an important link between multisensory integration and balance control in older adults and have important implications for understanding why some older adults are prone to falling.

  3. Comprehensive measures of sound exposures in cinemas using smart phones.

    Science.gov (United States)

    Huth, Markus E; Popelka, Gerald R; Blevins, Nikolas H

    2014-01-01

    Sensorineural hearing loss from sound overexposure has a considerable prevalence. Identification of sound hazards is crucial, as prevention, due to a lack of definitive therapies, is the sole alternative to hearing aids. One subjectively loud, yet little studied, potential sound hazard is movie theaters. This study uses smart phones to evaluate their applicability as a widely available, validated sound pressure level (SPL) meter. Therefore, this study measures sound levels in movie theaters to determine whether sound levels exceed safe occupational noise exposure limits and whether sound levels in movie theaters differ as a function of movie, movie theater, presentation time, and seat location within the theater. Six smart phones with an SPL meter software application were calibrated with a precision SPL meter and validated as an SPL meter. Additionally, three different smart phone generations were measured in comparison to an integrating SPL meter. Two different movies, an action movie and a children's movie, were measured six times each in 10 different venues (n = 117). To maximize representativeness, movies were selected focusing on large release productions with probable high attendance. Movie theaters were selected in the San Francisco, CA, area based on whether they screened both chosen movies and to represent the largest variety of theater proprietors. Measurements were analyzed in regard to differences between theaters, location within the theater, movie, as well as presentation time and day as indirect indicator of film attendance. The smart phone measurements demonstrated high accuracy and reliability. Overall, sound levels in movie theaters do not exceed safe exposure limits by occupational standards. Sound levels vary significantly across theaters and demonstrated statistically significant higher sound levels and exposures in the action movie compared to the children's movie. Sound levels decrease with distance from the screen. However, no influence on
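
    For reference, the basic arithmetic behind such measurements, turning calibrated pressure samples into a sound pressure level and energy-averaging block levels into an equivalent continuous level, looks like the sketch below. The example block levels are made up; this is not the measurement app used in the study.

```python
import numpy as np

P_REF = 20e-6   # reference pressure, Pa

def spl_db(pressure_pa):
    """Sound pressure level of a block of calibrated pressure samples."""
    p_rms = np.sqrt(np.mean(np.square(pressure_pa)))
    return 20 * np.log10(p_rms / P_REF)

def leq_db(block_levels_db):
    """Equivalent continuous level over equal-length blocks (energy average)."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(block_levels_db) / 10)))

# e.g. three equal-length blocks measured at 78, 85 and 92 dB:
print(round(leq_db([78, 85, 92]), 1))
```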

  4. Externalization versus Internalization of Sound in Normal-hearing and Hearing-impaired Listeners

    DEFF Research Database (Denmark)

    Ohl, Björn; Laugesen, Søren; Buchholz, Jörg

    2010-01-01

    The externalization of sound, i.e. the perception of auditory events as being located outside of the head, is a natural phenomenon for normal-hearing listeners when perceiving sound coming from a distant physical sound source. It is potentially useful for hearing in background noise, but the relevant cues might be distorted by a hearing impairment and also by the processing of the incoming sound through hearing aids. In this project, two intuitive tests in natural real-life surroundings were developed, which capture the limits of the perception of externalization. For this purpose...

  5. Intercepting a sound without vision

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities may be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early-blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a small bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  6. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants.

    Directory of Open Access Journals (Sweden)

    Yi Zheng

    Full Text Available Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

  7. Spatial resolution limits for the localization of noise sources using direct sound mapping

    DEFF Research Database (Denmark)

    Comesana, D. Fernandez; Holland, K. R.; Fernandez Grande, Efren

    2016-01-01

    One of the main challenges arising from noise and vibration problems is how to identify the areas of a device, machine or structure that produce significant acoustic excitation, i.e. the localization of the main noise sources. The direct visualization of sound, in particular sound intensity, has been used extensively for many years to locate sound sources. However, it is not yet well defined when two sources should be regarded as resolved by means of direct sound mapping. This paper derives the limits of the direct representation of sound pressure, particle velocity and sound intensity by exploring the relationship between spatial resolution, noise level and geometry. The proposed expressions are validated via simulations and experiments. It is shown that particle velocity mapping yields better results for identifying closely spaced sound sources than sound pressure or sound intensity, especially...

  8. Sound localization and word discrimination in reverberant environment in children with developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Wendy Castro-Camacho

    2015-04-01

    Full Text Available Objective: To compare whether sound localization and word discrimination in a reverberant environment differ between children with dyslexia and controls. Method: We studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles across the left and right auditory fields (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions; correct answers were compared. Results: Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test showed poor performance in children with dyslexia at left angles. In the reverberant test, children with dyslexia exhibited deficiencies at -45°, -90°, and +45°. Conclusion: Children with dyslexia may have problems locating sounds and discriminating words at extreme locations of the horizontal plane in classrooms with reverberation.

  9. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

    Environmental sound archives - casual recordings of people's daily life - are easily collected with MP3 players or camcorders at low cost and high reliability, and shared on web sites. There are two kinds of user-generated recordings we would like to be able to handle in this thesis: continuous long-duration personal audio and the soundtracks of short consumer video clips. These environmental recordings contain a lot of useful information (semantic concepts) related to activity, location, occasion and content. As a consequence, such archives present many new opportunities for the automatic extraction of information that can be used in intelligent browsing systems. This thesis proposes systems for detecting these interesting concepts in a collection of these real-world recordings. The first system segments and labels personal audio archives - continuous recordings of an individual's everyday experiences - into 'episodes' (relatively consistent acoustic situations lasting a few minutes or more) using the Bayesian Information Criterion and spectral clustering. The second system identifies regions of speech or music in the kinds of energetic and highly variable noise present in this real-world sound. Motivated by psychoacoustic evidence that pitch is crucial in the perception and organization of sound, we develop a noise-robust pitch detection algorithm to locate speech- or music-like regions. To avoid false alarms resulting from background noise with strong periodic components (such as air-conditioning), a new scheme is added to suppress these noises in the autocorrelogram domain. In addition, the third system automatically detects a large set of interesting semantic concepts, which we chose for being both informative and useful to users, as well as being technically feasible. These 25 concepts are associated with people's activities, locations, occasions, objects, scenes and sounds, and are based on a large collection of
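
    A generic autocorrelation pitch detector, in the spirit of (but much simpler than) the noise-robust algorithm described above, can be sketched as follows; the frame length, search range, and voicing threshold are arbitrary illustration values.

```python
import numpy as np

def autocorr_pitch(frame, fs, fmin=60.0, fmax=400.0, voiced_thresh=0.4):
    """Generic autocorrelation pitch estimate for one frame.
    Returns (f0_hz, periodicity); f0 is None when the frame looks unpitched."""
    x = frame - np.mean(frame)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0] + 1e-12
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    strength = float(ac[lag])
    return (fs / lag if strength >= voiced_thresh else None), strength

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
tone = np.sign(np.sin(2 * np.pi * 220 * t))        # pitched, speech/music-like test signal
noise = np.random.default_rng(2).standard_normal(len(t))
print(autocorr_pitch(tone, fs))    # roughly (220 Hz, strength near 1)
print(autocorr_pitch(noise, fs))   # (None, small strength)
```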

  10. Second sound scattering in superfluid helium

    International Nuclear Information System (INIS)

    Rosgen, T.

    1985-01-01

    Focusing cavities are used to study the scattering of second sound in liquid helium II. The special geometries reduce wall-interference effects and allow measurements in very small test volumes. In a first experiment, a double elliptical cavity is used to focus a second sound wave onto a small wire target. A thin-film bolometer measures the side-scattered wave component. The agreement with a theoretical estimate is reasonable, although some problems arise from the small measurement volume and the associated alignment requirements. A second cavity is based on confocal parabolas, thus enabling the use of large planar sensors. A cylindrical heater again produces a focused second sound wave. Three sensors monitor the transmitted wave component as well as the side scatter in two different directions. The side-looking sensors have very high sensitivities due to their large size and resistance. Specially developed cryogenic amplifiers are used to match them to the signal cables. In one case, a second auxiliary heater is used to set up a strong counterflow in the focal region. The second sound wave then scatters from the induced fluid disturbances.

  11. Melting along the Hugoniot and solid phase transition for Sn via sound velocity measurements

    Science.gov (United States)

    Song, Ping; Cai, Ling-cang; Tao, Tian-jiong; Yuan, Shuai; Chen, Hong; Huang, Jin; Zhao, Xin-wen; Wang, Xue-jun

    2016-11-01

    It is very important to determine the phase boundaries for materials with complex crystalline phase structures to construct their corresponding multi-phase equation of state. By measuring the sound velocity of Sn with different porosities, different shock-induced melting pressures along the solid-liquid phase boundary could be obtained. The incipient shock-induced melting of porous Sn samples with two different porosities occurred at a pressure of about 49.1 GPa for a porosity of 1.01 and 45.6 GPa for a porosity of 1.02, based on measurements of the sound velocity. The incipient shock-induced melting pressure of solid Sn was revised to 58.1 GPa using supplemental measurements of the sound velocity. Trivially, pores in Sn decreased the shock-induced melting pressure. Based on the measured longitudinal sound velocity data, a refined solid phase transition and the Hugoniot temperature-pressure curve's trend are discussed. No bcc phase transition occurs along the Hugoniot for porous Sn; further investigation is required to understand the implications of this finding.

  12. Sound analysis of a cup drum

    International Nuclear Information System (INIS)

    Kim, Kun ho

    2012-01-01

    The International Young Physicists’ Tournament (IYPT) is a worldwide tournament that evaluates a high-school student's ability to solve various physics conundrums that have not been fully resolved in the past. The research presented here is my solution to the cup drum problem. The physics behind a cup drum has never been explored or modelled. A cup drum is a musical instrument that can generate different frequencies and amplitudes depending on the location of a cup held upside-down over, on or under a water surface. The tapping sound of a cup drum can be divided into two components: standing waves and plate vibration. By individually researching the nature of these two sounds, I arrived at conclusions that could accurately predict the frequencies in most cases. When the drum is very close to the surface, qualitative explanations are given. In addition, I examined the trend of the tapping sound amplitude at various distances and qualitatively explained the experimental results. (paper)

  13. IST BENOGO (IST – 2001-39184) Deliverable 4.2.2: "Interactive" sound augmentation as room simulation

    DEFF Research Database (Denmark)

    Nordahl, Rolf

    This document describes a special approach to room simulation. Sound created by the user's own activity and interaction with the room, and reflecting characteristics of the room, may support the feeling of presence. We pursue this hypothesis by 1) generating the sound of footsteps induced by the user... to ensure the desired effect.

  14. Sound Localization Strategies in Three Predators

    DEFF Research Database (Denmark)

    Carr, Catherine E; Christensen-Dalsgaard, Jakob

    2015-01-01

    In this paper, we compare some of the neural strategies for sound localization and encoding interaural time differences (ITDs) in three predatory species of Reptilia: alligators, barn owls and geckos. Birds and crocodilians are sister groups among the extant archosaurs, while geckos are lepidosaurs. Despite the similar organization of their auditory systems, archosaurs and lizards use different strategies for encoding the ITDs that underlie localization of sound in azimuth. Barn owls encode ITD information using a place map, which is composed of neurons serving as labeled lines tuned for preferred spatial locations, while geckos may use a meter strategy or population code composed of broadly sensitive neurons that represent ITD via changes in the firing rate.

  15. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  16. Variability in metagenomic samples from the Puget Sound: Relationship to temporal and anthropogenic impacts.

    Directory of Open Access Journals (Sweden)

    James C Wallace

    Full Text Available Whole-metagenome sequencing (WMS) has emerged as a powerful tool to assess potential public health risks in marine environments by measuring changes in microbial community structure and function in uncultured bacteria. In addition to monitoring public health risks such as antibiotic resistance determinants, it is essential to measure predictors of microbial variation in order to identify natural versus anthropogenic factors as well as to evaluate reproducibility of metagenomic measurements. This study expands our previous metagenomic characterization of Puget Sound by sampling new nearshore environments, including the Duwamish River, an EPA superfund site, and the Hood Canal, an area characterized by highly variable oxygen levels. We also resampled a wastewater treatment plant, nearshore and open ocean sites, introducing a longitudinal component measuring seasonal and locational variations and establishing metagenomic sampling reproducibility. Microbial compositions from samples collected in the open sound were highly similar within the same season and location across different years, while nearshore samples revealed multi-fold seasonal variation in microbial composition and diversity. Comparisons with recently sequenced predominant marine bacterial genomes helped provide much greater species-level taxonomic detail compared to our previous study. Antibiotic resistance determinants and pollution and detoxification indicators largely grouped by location, showing minor seasonal differences. Metal resistance, oxidative stress and detoxification systems showed no increase in samples proximal to an EPA superfund site, indicating a lack of ecosystem adaptation to anthropogenic impacts. Taxonomic analysis of common sewage influent families showed a surprising similarity between wastewater treatment plant and open sound samples, suggesting a low-level but pervasive sewage influent signature in Puget Sound surface waters. Our study shows reproducibility of

  17. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  18. Fatigue sensation induced by the sounds associated with mental fatigue and its related neural activities: revealed by magnetoencephalography.

    Science.gov (United States)

    Ishii, Akira; Tanaka, Masaaki; Iwamae, Masayoshi; Kim, Chongsoo; Yamano, Emi; Watanabe, Yasuyoshi

    2013-06-13

    It has been proposed that an inappropriately conditioned fatigue sensation could be one cause of chronic fatigue. Although classical conditioning of the fatigue sensation has been reported in rats, there have been no reports in humans. Our aim was to examine whether classical conditioning of the mental fatigue sensation can take place in humans and to clarify the neural mechanisms of fatigue sensation using magnetoencephalography (MEG). Ten and 9 healthy volunteers participated in a conditioning and a control experiment, respectively. In the conditioning experiment, we used metronome sounds as conditioned stimuli and two-back task trials as unconditioned stimuli to cause fatigue sensation. Participants underwent MEG measurement while listening to the metronome sounds for 6 min. Thereafter, fatigue-inducing mental task trials (two-back task trials), which are demanding working-memory task trials, were performed for 60 min; metronome sounds were started 30 min after the start of the task trials (conditioning session). The next day, neural activities while listening to the metronome for 6 min were measured. Levels of fatigue sensation were also assessed using a visual analogue scale. In the control experiment, participants listened to the metronome on the first and second days, but they did not perform conditioning session. MEG was not recorded in the control experiment. The level of fatigue sensation caused by listening to the metronome on the second day was significantly higher relative to that on the first day only when participants performed the conditioning session on the first day. Equivalent current dipoles (ECDs) in the insular cortex, with mean latencies of approximately 190 ms, were observed in six of eight participants after the conditioning session, although ECDs were not identified in any participant before the conditioning session. We demonstrated that the metronome sounds can cause mental fatigue sensation as a result of repeated pairings of the sounds

  19. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust, but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...

  20. Long-Lasting Sound-Evoked Afterdischarge in the Auditory Midbrain.

    Science.gov (United States)

    Ono, Munenori; Bishop, Deborah C; Oliver, Douglas L

    2016-02-12

    Different forms of plasticity are known to play a critical role in the processing of information about sound. Here, we report a novel neural plastic response in the inferior colliculus, an auditory center in the midbrain of the auditory pathway. A vigorous, long-lasting sound-evoked afterdischarge (LSA) is seen in a subpopulation of both glutamatergic and GABAergic neurons in the central nucleus of the inferior colliculus of normal hearing mice. These neurons were identified with single unit recordings and optogenetics in vivo. The LSA can continue for up to several minutes after the offset of the sound. LSA is induced by long-lasting, or repetitive short-duration, innocuous sounds. Neurons with LSA showed less adaptation than the neurons without LSA. The mechanisms that cause this neural behavior are unknown but may be a function of intrinsic mechanisms or the microcircuitry of the inferior colliculus. Since LSA produces long-lasting firing in the absence of sound, it may be relevant to temporary or chronic tinnitus or to some other aftereffect of long-duration sound.

  1. Character, distribution, and ecological significance of storm wave-induced scour in Rhode Island Sound, USA

    Science.gov (United States)

    McMullen, Katherine Y.; Poppe, Lawrence J.; Parker, Castle E.

    2015-01-01

    Multibeam bathymetry, collected during NOAA hydrographic surveys in 2008 and 2009, is coupled with USGS data from sampling and photographic stations to map the seabed morphology and composition of Rhode Island Sound along the US Atlantic coast, and to provide information on sediment transport and benthic habitats. Patchworks of scour depressions cover large areas on seaward-facing slopes and bathymetric highs in the sound. These depressions average 0.5-0.8 m deep and occur in water depths reaching as much as 42 m. They have relatively steep well-defined sides and coarser-grained floors, and vary strongly in shape, size, and configuration. Some individual scour depressions have apparently expanded to combine with adjacent depressions, forming larger eroded areas that commonly contain outliers of the original seafloor sediments. Where cobbles and scattered boulders are present on the depression floors, the muddy Holocene sands have been completely removed and the winnowed relict Pleistocene deposits exposed. Low tidal-current velocities and the lack of obstacle marks suggest that bidirectional tidal currents alone are not capable of forming these features. These depressions are formed and maintained under high-energy shelf conditions owing to repetitive cyclic loading imposed by high-amplitude, long-period, storm-driven waves that reduce the effective shear strength of the sediment, cause resuspension, and expose the suspended sediments to erosion by wind-driven and tidal currents. Because epifauna dominate on gravel floors of the depressions and infauna are prevalent in the finer-grained Holocene deposits, it is concluded that the resultant close juxtaposition of silty sand-, sand-, and gravel-dependent communities promotes regional faunal complexity. These findings expand on earlier interpretations, documenting how storm wave-induced scour produces sorted bedforms that control much of the benthic geologic and biologic diversity in Rhode Island Sound.

  2. Sound field separation with cross measurement surfaces.

    Directory of Open Access Journals (Sweden)

    Jin Mao

    Full Text Available With conventional near-field acoustical holography, it is impossible to identify sound pressure when the coherent sound sources are located on the same side of the array. This paper proposes a solution, using cross measurement surfaces to separate the sources based on the equivalent source method. Each equivalent source surface is built in the center of the corresponding original source with a spherical surface. According to the different transfer matrices between equivalent sources and points on holographic surfaces, the weighting of each equivalent source from coherent sources can be obtained. Numerical and experimental studies have been performed to test the method. For the sound pressure including noise after separation in the experiment, the calculation accuracy can be improved by reconstructing the pressure with Tikhonov regularization and the L-curve method. On the whole, a single source can be effectively separated from coherent sources using cross measurement.
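
    The regularized reconstruction step mentioned at the end of the abstract can be illustrated generically: solve a penalized least-squares problem for a range of regularization parameters and inspect the residual-norm/solution-norm trade-off (the L-curve) to pick one. The toy transfer matrix and noise level below are invented; this is not the paper's measurement setup.

```python
import numpy as np

def tikhonov_solve(A, p, lam):
    """Regularized least squares: minimize ||A q - p||^2 + lam^2 ||q||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam**2 * np.eye(n), A.conj().T @ p)

def l_curve(A, p, lambdas):
    """Residual norm vs. solution norm over a range of lambdas; the 'corner'
    of this curve is the usual heuristic for choosing the regularization."""
    pts = []
    for lam in lambdas:
        q = tikhonov_solve(A, p, lam)
        pts.append((lam, np.linalg.norm(A @ q - p), np.linalg.norm(q)))
    return pts

# Toy ill-conditioned transfer matrix between equivalent sources and field points
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((40, 40)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
A = U[:, :20] @ np.diag(np.logspace(0, -6, 20)) @ V.T
p = A @ rng.standard_normal(20) + 1e-4 * rng.standard_normal(40)   # noisy data

for lam, res, sol in l_curve(A, p, np.logspace(-6, 0, 7)):
    print(f"lambda={lam:.1e}  residual={res:.3e}  ||q||={sol:.3e}")
```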

  3. Sound transmission reduction with intelligent panel systems

    Science.gov (United States)

    Fuller, Chris R.; Clark, Robert L.

    1992-01-01

    Experimental and theoretical investigations are performed of the use of intelligent panel systems to control the sound transmission and radiation. An intelligent structure is defined as a structural system with integrated actuators and sensors under the guidance of an adaptive, learning type controller. The system configuration is based on the Active Structural Acoustic Control (ASAC) concept where control inputs are applied directly to the structure to minimize an error quantity related to the radiated sound field. In this case multiple piezoelectric elements are employed as sensors. The importance of optimal shape and location is demonstrated to be of the same order of influence as increasing the number of channels of control.

  4. Prediction of the niche effect for single flat panels with or without attached sound absorbing materials.

    Science.gov (United States)

    Sgard, Franck; Atalla, Noureddine; Nélisse, Hugues

    2015-01-01

    The sound transmission loss (STL) of a test sample measured in sound transmission facilities is affected by the opening in which it is located. This is called the niche effect. This paper uses a modal approach to study the STL of a rectangular plate with or without an attached porous material located inside a box-shaped niche. The porous material is modeled as a limp equivalent fluid. The proposed model is validated by comparison with finite element/boundary element computations. Using a condensation of the pressure fields in the niche, the niche effect is interpreted in terms of a modification of the modal blocked pressure fields acting on the panel induced by the front cavity and by a modification of the radiation efficiency of the panel modes due to the presence of the back cavity. The modal approach is then used to investigate the impact of (1) the presence of a porous material attached to the panel on the niche effect and (2) the niche effect on the assessment of the porous material insertion loss. A simplified model for the porous material based on a transfer matrix approach is also proposed to predict the STL of the system and its validity is discussed.

  5. Sound Radiation of Aerodynamically Excited Flat Plates into Cavities

    Directory of Open Access Journals (Sweden)

    Johannes Osterziel

    2017-10-01

    Full Text Available Flow-induced vibrations and the sound radiation of flexible plate structures of different thickness mounted in a rigid plate are experimentally investigated. Therefore, flow properties and turbulent boundary layer parameters are determined through measurements with a hot-wire anemometer in an aeroacoustic wind tunnel. Furthermore, the excitation of the vibrating plate is examined by laser scanning vibrometry. To describe the sound radiation and the sound transmission of the flexible aluminium plates into cavities, a cuboid-shaped room with adjustable volume and 34 flush-mounted microphones is installed at the non flow-excited side of the aluminium plates. Results showed that the sound field inside the cavity is on the one hand dependent on the flow parameters and the plate thickness and on the other hand on the cavity volume which indirectly influences the level and the distribution of the sound pressure behind the flexible plate through different excited modes.

  6. Auditory spatial attention to speech and complex non-speech sounds in children with autism spectrum disorder.

    Science.gov (United States)

    Soskey, Laura N; Allen, Paul D; Bennetto, Loisa

    2017-08-01

    One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  7. Multi-Century Record of Anthropogenic Impacts on an Urbanized Mesotidal Estuary: Salem Sound, MA

    Science.gov (United States)

    Salem, MA, located north of Boston, has a rich, well-documented history dating back to settlement in 1626 CE, but the associated anthropogenic impacts on Salem Sound are poorly constrained. This project utilized dated sediment cores from the sound to assess the proxy record of an...

  8. Learning about the Dynamic Sun through Sounds

    Science.gov (United States)

    Quinn, M.; Peticolas, L. M.; Luhmann, J.; MacCallum, J.

    2008-06-01

    Can we hear the Sun or its solar wind? Not in the sense that they make sound. But we can take the particle, magnetic field, electric field, and image data and turn it into sound to demonstrate what the data tells us. We present work on turning data from the two-satellite NASA mission called STEREO (Solar TErrestrial RElations Observatory) into sounds and music (sonification). STEREO has two satellites orbiting the Sun near Earth's orbit to study the coronal mass ejections (CMEs) from the Corona. One sonification project aims to inspire musicians, museum patrons, and the public to learn more about CMEs by downloading STEREO data and using it to make music. We demonstrate the software and discuss the way in which it was developed. A second project aims to produce a museum exhibit using STEREO imagery and sounds from STEREO data. We demonstrate a "walk across the Sun" created for this exhibit so people can hear the features on solar images. We show how pixel intensity translates into pitches from selectable scales with selectable musical scale size and octave locations. We also share our successes and lessons learned.
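
    As a toy illustration of the pixel-to-pitch idea described above (not the exhibit's actual software), the sketch below maps 8-bit pixel intensities onto notes of a selectable scale spanning a selectable number of octaves and returns the corresponding equal-tempered frequencies. The scale, base note, and sample row are arbitrary choices.

```python
import numpy as np

MAJOR_PENTATONIC = [0, 2, 4, 7, 9]   # semitone offsets within one octave

def pixels_to_pitches(intensities, scale=MAJOR_PENTATONIC,
                      base_midi=48, n_octaves=3):
    """Map 0-255 pixel intensities onto notes of a chosen scale and octave span,
    returning frequencies in Hz (12-TET, A4 = 440 Hz)."""
    degrees = [scale[i % len(scale)] + 12 * (i // len(scale))
               for i in range(len(scale) * n_octaves)]
    idx = (np.asarray(intensities) / 255.0 * (len(degrees) - 1)).astype(int)
    midi = base_midi + np.array(degrees)[idx]
    return 440.0 * 2.0 ** ((midi - 69) / 12.0)

# A "walk" across one image row (intensity values are made up):
row = [12, 80, 130, 200, 255]
print(np.round(pixels_to_pitches(row), 1))
```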

  9. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    Science.gov (United States)

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  10. Reading drift in flow rate sensors caused by steady sound waves; Desvios de leitura em sensores de vazao provocados por ondas sonoras estacionarias

    Energy Technology Data Exchange (ETDEWEB)

    Maximiano, Celso; Nieble, Marcio D. [Coordenadoria para Projetos Especiais (COPESP), Sao Paulo, SP (Brazil); Migliavacca, Sylvana C.P.; Silva, Eduardo R.F. [Instituto de Pesquisas Energeticas e Nucleares (IPEN), Sao Paulo, SP (Brazil)

    1995-12-31

    The use of thermal sensors is very common for the measurement of small gas flows. In this kind of sensor, a little tube forming a bypass is heated symmetrically; the temperature distribution in the tube then changes with the mass flow along it. When a standing wave appears in the main tube, it causes the pressure to oscillate around its average value. The sensor, located between two points of the main tube, indicates not only the main mass flow but also the flow caused by the pressure difference induced by the sound wave. When the gas flows at low pressure, the equipment indicates a value that does not correspond to the real flow. Tests were performed by generating a sound wave in the main tube without mass flow, and the sensor still indicated a flow. To solve this problem, a wave damper was constructed, installed, and tested in the system; it worked satisfactorily, eliminating the sound wave efficiently. (author). 2 refs., 3 figs.

  11. Acoustically induced transparency using Fano resonant periodic arrays

    KAUST Repository

    El-Amin, Mohamed

    2015-10-22

    A three-dimensional acoustic device, which supports Fano resonance and induced transparency in its response to an incident sound wave, is designed and fabricated. These effects are generated by the destructive interference of two closely coupled acoustic modes, one broadband and one narrowband. The proposed design ensures excitation and interference of two spectrally close modes by locating a small pipe inside a wider and longer one. Indeed, numerical simulations and experiments demonstrate that this simple-to-fabricate structure can be used to generate Fano resonance as well as acoustically induced transparency, with promising applications in sensing, cloaking, and imaging.

  12. Acoustically induced transparency using Fano resonant periodic arrays

    KAUST Repository

    El-Amin, Mohamed; Elayouch, A.; Farhat, Mohamed; Addouche, M.; Khelif, A.; Bagci, Hakan

    2015-01-01

    A three-dimensional acoustic device, which supports Fano resonance and induced transparency in its response to an incident sound wave, is designed and fabricated. These effects are generated by the destructive interference of two closely coupled acoustic modes, one broadband and one narrowband. The proposed design ensures excitation and interference of two spectrally close modes by locating a small pipe inside a wider and longer one. Indeed, numerical simulations and experiments demonstrate that this simple-to-fabricate structure can be used to generate Fano resonance as well as acoustically induced transparency, with promising applications in sensing, cloaking, and imaging.

  13. Sound induced activity in voice sensitive cortex predicts voice memory ability

    Directory of Open Access Journals (Sweden)

    Rebecca eWatson

    2012-04-01

    Full Text Available The 'temporal voice areas' (TVAs; Belin et al., 2000) of the human brain show greater neuronal activity in response to human voices than to other categories of nonvocal sounds. However, a direct link between TVA activity and voice perception behaviour has not yet been established. Here we show that a functional magnetic resonance imaging (fMRI) measure of activity in the TVAs predicts individual performance at a separately administered voice memory test. This relation holds when general sound memory ability is taken into account. These findings provide the first evidence that the TVAs are specifically involved in voice cognition.

  14. Is 1/f sound more effective than simple resting in reducing stress response?

    Science.gov (United States)

    Oh, Eun-Joo; Cho, Il-Young; Park, Soon-Kwon

    2014-01-01

    It has been previously demonstrated that listening to 1/f sound effectively reduces stress. However, these findings have been inconsistent and further study on the relationship between 1/f sound and the stress response is consequently necessary. The present study examined whether sound with 1/f properties (1/f sound) affects stress-induced electroencephalogram (EEG) changes. Twenty-six subjects who voluntarily participated in the study were randomly assigned to the experimental or control group. Data from four participants were excluded because of EEG artifacts. A mental arithmetic task was used as a stressor. Participants in the experimental group listened to 1/f sound for 5 minutes and 33 seconds, while participants in the control group sat quietly for the same duration. EEG recordings were obtained at various points throughout the experiment. After the experiment, participants completed a questionnaire on the affective impact of the 1/f sound. The results indicated that the mental arithmetic task effectively induced a stress response measurable by EEG. Relative theta power at all electrode sites was significantly lower than baseline in both the control and experimental groups. Relative alpha power was significantly lower, and relative beta power was significantly higher, in the T3 and T4 areas. Secondly, 1/f sound and simple resting affected task-associated EEG changes in a similar manner. Finally, participants reported in the questionnaire that they experienced a positive feeling in response to the 1/f sound. Our results suggest that a commercialized 1/f sound product is not more effective than simple resting in alleviating the physiological stress response.
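
    The relative theta, alpha, and beta powers reported above are band powers normalized by total power; the sketch below shows one common way to compute them from a single EEG channel. The band limits, sampling rate, and use of Welch's method are conventional assumptions for illustration, not details taken from the study.

        import numpy as np
        from scipy.signal import welch

        # Illustrative sketch (not the study's analysis code): relative band power
        # of one EEG channel, the quantity behind "relative theta/alpha/beta power".
        BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

        def relative_band_power(eeg, fs=256.0):
            freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
            broadband = (freqs >= 1) & (freqs <= 45)
            total = np.trapz(psd[broadband], freqs[broadband])
            rel = {}
            for name, (lo, hi) in BANDS.items():
                mask = (freqs >= lo) & (freqs < hi)
                rel[name] = np.trapz(psd[mask], freqs[mask]) / total
            return rel

        # Example with synthetic noise standing in for a T3/T4 recording.
        rng = np.random.default_rng(0)
        print(relative_band_power(rng.standard_normal(256 * 60)))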

  15. Reproduction of nearby sources by imposing true interaural differences on a sound field control approach

    DEFF Research Database (Denmark)

    Badajoz, Javier; Chang, Ji-ho; Agerkvist, Finn T.

    2015-01-01

    In anechoic conditions, the Interaural Level Difference (ILD) is the most significant auditory cue to judge the distance to a sound source located within 1 m of the listener's head. This is due to the unique characteristics of a point source in its near field, which result in exceptionally high, distance-dependent ILDs. When reproducing the sound field of sources located near the head with line or circular arrays of loudspeakers, the reproduced ILDs are generally lower than expected, due to physical limitations. This study presents an approach that combines a sound field reproduction method, known as Pressure Matching (PM), and a binaural control technique. While PM aims at reproducing the incident sound field, the objective of the binaural control technique is to ensure a correct reproduction of interaural differences. The combination of these two approaches gives rise to the following features: (i) ...

  16. Nonlinear effects in the propagation of shortwave transverse sound in pure superconductors

    International Nuclear Information System (INIS)

    Gal'perin, Y.

    1982-01-01

    Various mechanisms are analyzed which lead to nonlinear phenomena (e.g., the dependence of the absorption coefficient and of the velocity of sound on its intensity) in the propagation of transverse shortwave sound in pure superconductors (the wavelength of the sound being much less than the mean free path of the quasiparticles). It is shown that the basic mechanism, over a wide range of superconductor parameters and of the sound intensity, is the so-called momentum nonlinearity. The latter is due to the distortion (induced by the sound wave) of the quasimomentum distribution of resonant electrons interacting with the wave. The dependences of the absorption coefficient and of the sound velocity on its intensity and on the temperature are analyzed in the vicinity of the superconducting transition point. The feasibility of an experimental study of nonlinear acoustic phenomena in the case of transverse sound is considered

  17. Cross-modal and intra-modal binding between identity and location in spatial working memory: The identity of objects does not help recalling their locations.

    Science.gov (United States)

    Del Gatto, Claudia; Brunetti, Riccardo; Delogu, Franco

    2016-01-01

    In this study we tested incidental feature-to-location binding in a spatial task, both in unimodal and cross-modal conditions. In Experiment 1 we administered a computerised version of the Corsi Block-Tapping Task (CBTT) in three different conditions: the first one analogous to the original CBTT test; the second one in which locations were associated with unfamiliar images; the third one in which locations were associated with non-verbal sounds. Results showed no effect on performance by the addition of identity information. In Experiment 2, locations on the screen were associated with pitched sounds in two different conditions: one in which different pitches were randomly associated with locations and the other in which pitches were assigned to match the vertical position of the CBTT squares congruently with their frequencies. In Experiment 2 we found marginal evidence of a pitch facilitation effect in the spatial memory task. We ran a third experiment to test the same conditions of Experiment 2 with a within-subject design. Results of Experiment 3 did not confirm the pitch-location facilitation effect. We concluded that the identity of objects does not affect recalling their locations. We discuss our results within the framework of the debate about the mechanisms of "what" and "where" feature binding in working memory.

  18. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time domain solution of the KZK equation.

  19. Application of semi-supervised deep learning to lung sound analysis.

    Science.gov (United States)

    Chamberlain, Daniel; Kodgule, Rahul; Ganelin, Daniela; Miglani, Vivek; Fletcher, Richard Ribon

    2016-08-01

    The analysis of lung sounds, collected through auscultation, is a fundamental component of pulmonary disease diagnostics for primary care and general patient monitoring for telemedicine. Despite advances in computation and algorithms, the goal of automated lung sound identification and classification has remained elusive. Over the past 40 years, published work in this field has demonstrated only limited success in identifying lung sounds, with most published studies using only a small number of patients. Here we present a semi-supervised deep learning algorithm for automatically classifying lung sounds from a relatively large number of patients (N=284). Focusing on the two most common lung sounds, wheeze and crackle, we present results from 11,627 sound files recorded from 11 different auscultation locations on these 284 patients with pulmonary disease. 890 of these sound files were labeled to evaluate the model, a significantly larger labeled set than in previously published studies. Data were collected with a custom mobile phone application and a low-cost (US$30) electronic stethoscope. On this data set, our algorithm achieves ROC curves with AUCs of 0.86 for wheeze and 0.74 for crackle. Most importantly, this study demonstrates how semi-supervised deep learning can be used with larger data sets without requiring extensive labeling of data.
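
    The reported figures of 0.86 for wheeze and 0.74 for crackle are areas under the ROC curve computed on the labeled evaluation files. A hedged sketch of that evaluation step is shown below; the labels and classifier scores are random stand-ins, not the authors' data or model.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        # Hedged illustration (not the authors' pipeline): per-class ROC AUC for
        # wheeze and crackle detections on a labeled hold-out set of 890 files.
        rng = np.random.default_rng(1)
        labels = {"wheeze": rng.integers(0, 2, 890), "crackle": rng.integers(0, 2, 890)}
        scores = {k: np.clip(v + 0.3 * rng.standard_normal(890), 0, 1)
                  for k, v in labels.items()}  # stand-in classifier outputs

        for sound_class in ("wheeze", "crackle"):
            auc = roc_auc_score(labels[sound_class], scores[sound_class])
            print(f"{sound_class}: AUC = {auc:.2f}")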

  20. PREFACE: Aerodynamic sound Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated in writing the papers by the jet-noise emitted by the newly commercialized jet-engined airplanes at that time. The technology of aerodynamic sound is destined for environmental problems. Therefore the theory should always be applied to newly emerged public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium since 29 years ago when the late Professor S Kotake and Professor S Kaji of Teikyo University organized the symposium. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present the publication in this issue of ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays, and is expected to reform the theoretical model of generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research. They apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the

  1. Sound generating flames of a gas turbine burner observed by laser-induced fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Hubschmid, W; Inauen, A.; Bombach, R.; Kreutner, W.; Schenker, S.; Zajadatz, M. [Alstom (Switzerland); Motz, C. [Alstom (Switzerland); Haffner, K. [Alstom (Switzerland); Paschereit, C.O. [Alstom (Switzerland)

    2002-03-01

    We performed 2-D OH LIF measurements to investigate the sound emission of a gas turbine combustor. The measured LIF signal was averaged over pulses at constant phase of the dominant acoustic oscillation. A periodic variation in intensity and position of the signal is observed and it is related to the measured sound intensity. (author)

  2. Lung sound intensity in patients with emphysema and in normal subjects at standardised airflows.

    Science.gov (United States)

    Schreur, H J; Sterk, P J; Vanderschoot, J; van Klink, H C; van Vollenhoven, E; Dijkman, J H

    1992-01-01

    BACKGROUND: A common auscultatory finding in pulmonary emphysema is a reduction of lung sounds. This might be due to a reduction in the generation of sounds due to the accompanying airflow limitation or to poor transmission of sounds due to destruction of parenchyma. Lung sound intensity was investigated in normal and emphysematous subjects in relation to airflow. METHODS: Eight normal men (45-63 years, FEV1 79-126% predicted) and nine men with severe emphysema (50-70 years, FEV1 14-63% predicted) participated in the study. Emphysema was diagnosed according to pulmonary history, results of lung function tests, and radiographic criteria. All subjects underwent phonopneumography during standardised breathing manoeuvres between 0.5 and 2 l below total lung capacity with inspiratory and expiratory target airflows of 2 and 1 l/s respectively during 50 seconds. The synchronous measurements included airflow at the mouth and lung volume changes, and lung sounds at four locations on the right chest wall. For each microphone airflow dependent power spectra were computed by using fast Fourier transformation. Lung sound intensity was expressed as log power (in dB) at 200 Hz at inspiratory flow rates of 1 and 2 l/s and at an expiratory flow rate of 1 l/s. RESULTS: Lung sound intensity was well repeatable on two separate days, the intraclass correlation coefficient ranging from 0.77 to 0.94 between the four microphones. The intensity was strongly influenced by microphone location and airflow. There was, however, no significant difference in lung sound intensity at any flow rate between the normal and the emphysema group. CONCLUSION: Airflow standardised lung sound intensity does not differ between normal and emphysematous subjects. This suggests that the auscultatory finding of diminished breath sounds during the regular physical examination in patients with emphysema is due predominantly to airflow limitation. PMID:1440459

  3. The effect of brain lesions on sound localization in complex acoustic environments.

    Science.gov (United States)

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  4. Maritime Protection of Critical Infrastructure Assets in the Campeche Sound

    National Research Council Canada - National Science Library

    Tiburcio, Felix M

    2005-01-01

    Following the 9/11 terrorist events in the United States the Mexican Navy developed strategies designed to prevent similar attacks on the strategic facilities located in the Campeche Sound in the Gulf of Mexico...

  5. SOUND FIELD SHIELDING BY FLAT ELASTIC LAYER AND THIN UNCLOSED SPHERICAL SHELL

    Directory of Open Access Journals (Sweden)

    G. Ch. Shushkevich

    2014-01-01

    Full Text Available An analytical solution of a boundary problem describing the process of penetration of a sound field of a spherical radiator located inside a thin unclosed spherical shell through a flat elastic layer is constructed. The influence of some parameters of the problem on the value of the attenuation coefficient (screening) of the sound field was studied by using a numerical simulation.

  6. Locating noise sources with a microphone array

    International Nuclear Information System (INIS)

    Bale, A.; Johnson, D.

    2010-01-01

    Noise pollution is one of the contributors to the public opposition of wind farms. Most of the noise produced by turbines is caused by the aerodynamic interactions between the turbine blades and the surrounding air. This poster presentation discussed a series of aeroacoustic tests conducted to account for the differences in vortical structures caused by the rotation of the blades. Microphone arrays were used to measure and locate the source of noise. A beam forming technique was used to measure the noise using an algorithm that identified a scanning grid on a plane where the source was thought to be located. It delayed each microphone's signal by the length of time required for the sound to travel from the scan position to each microphone, and accounted for the amplitudes according to the distance from the scan position to each microphone. Demonstration test cases were conducted using piezo buzzers attached to aluminum bars and mounted to the shaft of a DC motor that produced a rotational diameter of 0.95 meter. The buzzers were placed 1 meter from the array. Multiple sound sources at the same frequency were identified, and the moving sources were accurately measured and located. tabs., figs.
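
    The beamforming procedure described here (delaying each channel by the travel time from a candidate scan point and weighting amplitudes by distance before summing) is essentially delay-and-sum. The sketch below is a simplified illustration; the array geometry, scan grid, and sampling rate are assumptions, and fractional-sample delays and edge effects are ignored.

        import numpy as np

        C = 343.0  # speed of sound in air, m/s

        def delay_and_sum(signals, mic_pos, scan_points, fs):
            """Return beamformed output power at each scan point.

            signals: (n_mics, n_samples); mic_pos, scan_points: (n, 3) in metres.
            Each channel is delayed by the travel time from the scan point to the
            microphone and scaled by the propagation distance before summing.
            """
            n_mics, n_samples = signals.shape
            power = np.zeros(len(scan_points))
            for k, p in enumerate(scan_points):
                dists = np.linalg.norm(mic_pos - p, axis=1)
                delays = np.round((dists - dists.min()) / C * fs).astype(int)
                summed = np.zeros(n_samples)
                for m in range(n_mics):
                    # Delay alignment per channel plus amplitude compensation ~ distance
                    # (np.roll wraps around; acceptable for a short illustrative sketch).
                    summed += np.roll(signals[m], -delays[m]) * dists[m]
                power[k] = np.mean(summed ** 2)
            return power

        # Tiny synthetic usage: 4 microphones on a line, 3 candidate scan points.
        rng = np.random.default_rng(0)
        sigs = rng.standard_normal((4, 4096))
        mics = np.array([[i * 0.1, 0.0, 0.0] for i in range(4)])
        scan = np.array([[0.15, 1.0, 0.0], [0.5, 1.0, 0.0], [1.0, 1.0, 0.0]])
        print(delay_and_sum(sigs, mics, scan, fs=48000))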

  7. Root phonotropism: Early signalling events following sound perception in Arabidopsis roots.

    Science.gov (United States)

    Rodrigo-Moreno, Ana; Bazihizina, Nadia; Azzarello, Elisa; Masi, Elisa; Tran, Daniel; Bouteau, François; Baluska, Frantisek; Mancuso, Stefano

    2017-11-01

    Sound is a fundamental form of energy and it has been suggested that plants can make use of acoustic cues to obtain information regarding their environments and alter and fine-tune their growth and development. Despite an increasing body of evidence indicating that it can influence plant growth and physiology, many questions concerning the effect of sound waves on plant growth and the underlying signalling mechanisms remain unknown. Here we show that in Arabidopsis thaliana, exposure to sound waves (200 Hz) for 2 weeks induced positive phonotropism in roots, which grew towards the sound source. We found that sound waves very quickly (within minutes) triggered an increase in cytosolic Ca2+, possibly mediated by an influx through the plasma membrane and a release from internal stores. Sound waves likewise elicited rapid reactive oxygen species (ROS) production and K+ efflux. Taken together these results suggest that changes in ion fluxes (Ca2+ and K+) and an increase in superoxide production are involved in sound perception in plants, as previously established in animals. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Evaluating Environmental Sounds from a Presence Perspective for Virtual Reality Applications

    Directory of Open Access Journals (Sweden)

    Nordahl Rolf

    2010-01-01

    Full Text Available We propose a methodology to design and evaluate environmental sounds for virtual environments. We propose to combine physically modeled sound events with recorded soundscapes. Physical models are used to provide feedback to users' actions, while soundscapes reproduce the characteristic soundmarks of an environment. In this particular case, physical models are used to simulate the act of walking in the botanical garden of the city of Prague, while soundscapes are used to reproduce the particular sound of the garden. The auditory feedback designed was combined with a photorealistic reproduction of the same garden. A between-subject experiment was conducted, where 126 subjects participated, involving six different experimental conditions, including both uni- and bimodal stimuli (auditory and visual). The auditory stimuli consisted of several combinations of auditory feedback, including static sound sources as well as self-induced interactive sounds simulated using physical models. Results show that subjects' motion in the environment is significantly enhanced when dynamic sound sources and sound of egomotion are rendered in the environment.

  9. Strings on a Violin: Location Dependence of Frequency Tuning in Active Dendrites.

    Science.gov (United States)

    Das, Anindita; Rathour, Rahul K; Narayanan, Rishikesh

    2017-01-01

    Strings on a violin are tuned to generate distinct sound frequencies in a manner that is firmly dependent on finger location along the fingerboard. Sound frequencies emerging from different violins could be very different based on their architecture, the nature of strings and their tuning. Analogously, active neuronal dendrites, dendrites endowed with active channel conductances, are tuned to distinct input frequencies in a manner that is dependent on the dendritic location of the synaptic inputs. Further, disparate channel expression profiles and differences in morphological characteristics could result in dendrites on different neurons of the same subtype tuned to distinct frequency ranges. Alternately, similar location-dependence along dendritic structures could be achieved through disparate combinations of channel profiles and morphological characteristics, leading to degeneracy in active dendritic spectral tuning. Akin to strings on a violin being tuned to different frequencies than those on a viola or a cello, different neuronal subtypes exhibit distinct channel profiles and disparate morphological characteristics endowing each neuronal subtype with unique location-dependent frequency selectivity. Finally, similar to the tunability of musical instruments to elicit distinct location-dependent sounds, neuronal frequency selectivity and its location-dependence are tunable through activity-dependent plasticity of ion channels and morphology. In this morceau, we explore the origins of neuronal frequency selectivity, and survey the literature on the mechanisms behind the emergence of location-dependence in distinct forms of frequency tuning. As a coda to this composition, we present some future directions for this exciting convergence of biophysical mechanisms that endow a neuron with frequency multiplexing capabilities.

  10. Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea

    Science.gov (United States)

    Oshinsky, Michael Lee

    A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system and specifically sound localization were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ. The two ears are contained in one air sac, and a cuticular bridge that has a flexible spring-like structure at its center connects them. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and I review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents. I present this physiology in the context of sound localization. In chapter 3, I describe the directionally dependent physiology of the thoracic local and ascending acoustic interneurons. In chapter 4, I quantify the threshold and I detail the kinematics of the phonotactic

  11. Fast detection of unexpected sound intensity decrements as revealed by human evoked potentials.

    Directory of Open Access Journals (Sweden)

    Heike Althen

    Full Text Available The detection of deviant sounds is a crucial function of the auditory system and is reflected by the automatically elicited mismatch negativity (MMN), an auditory evoked potential at 100 to 250 ms from stimulus onset. It has recently been shown that rarely occurring frequency and location deviants in an oddball paradigm trigger a more negative response than standard sounds at very early latencies in the middle latency response of the human auditory evoked potential. This fast and early ability of the auditory system is corroborated by the finding of neurons in the animal auditory cortex and subcortical structures, which restore their adapted responsiveness to standard sounds when a rare change in a sound feature occurs. In this study, we investigated whether the detection of intensity deviants is also reflected at shorter latencies than those of the MMN. Auditory evoked potentials in response to click sounds were analyzed regarding the auditory brain stem response, the middle latency response (MLR) and the MMN. Rare stimuli with a lower intensity level than standard stimuli elicited (in addition to an MMN) a more negative potential in the MLR at the transition from the Na to the Pa component at circa 24 ms from stimulus onset. This finding, together with the studies about frequency and location changes, suggests that the early automatic detection of deviant sounds in an oddball paradigm is a general property of the auditory system.

  12. Coupled simulation of meteorological parameters and sound intensity in a narrow valley

    Energy Technology Data Exchange (ETDEWEB)

    Heimann, D. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Wessling (Germany). Inst. fuer Physik der Atmosphaere; Gross, G. [Hannover Univ. (Germany). Inst. fuer Meteorologie und Klimatologie

    1997-07-01

    A meteorological mesoscale model is used to simulate the inhomogeneous distribution of temperature and the appertaining development of thermal wind systems in a narrow two-dimensional valley during the course of a cloud-free day. A simple sound particle model takes up the simulated meteorological fields and calculates the propagation of noise which originates from a line source at one of the slopes of this valley. The coupled modeling system ensures consistency of topography, meteorological parameters and the sound field. The temporal behaviour of the sound intensity level across the valley is examined. It is only governed by the time-dependent meteorology. The results show remarkable variations of the sound intensity during the course of a day depending on the location in the valley. (orig.) 23 refs.

  13. Active Noise Control Experiments using Sound Energy Flu

    Science.gov (United States)

    Krause, Uli

    2015-03-01

    This paper reports on the latest results concerning the active noise control approach using net flow of acoustic energy. The test set-up consists of two loudspeakers simulating the engine noise and two smaller loudspeakers which belong to the active noise system. The system is completed by two acceleration sensors and one microphone per loudspeaker. The microphones are located in the near sound field of the loudspeakers. The control algorithm including the update equation of the feed-forward controller is introduced. Numerical simulations are performed with a comparison to a state of the art method minimising the radiated sound power. The proposed approach is experimentally validated.
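
    For readers unfamiliar with adaptive feed-forward control, the sketch below shows a textbook filtered-x LMS coefficient update. It is only a generic illustration: the controller in the paper minimises the net flow of acoustic energy rather than the microphone error power assumed here, and all signals are stand-ins.

        import numpy as np

        # Generic filtered-x LMS update (textbook form, not the paper's cost function).
        # With the error defined as primary noise plus anti-noise at the sensor,
        # the coefficients are nudged opposite the gradient: w <- w - mu * e * x'.
        def fxlms_step(w, x_filtered, error, mu=1e-3):
            return w - mu * error * x_filtered

        # Example usage with random stand-in data.
        rng = np.random.default_rng(0)
        w = np.zeros(64)                       # adaptive FIR coefficients
        x_filtered = rng.standard_normal(64)   # reference filtered by secondary-path model
        error_sample = 0.1                     # instantaneous error-microphone sample
        w = fxlms_step(w, x_filtered, error_sample)
        print(w[:4])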

  14. Location audio simplified capturing your audio and your audience

    CERN Document Server

    Miles, Dean

    2014-01-01

    From the basics of using camera, handheld, lavalier, and shotgun microphones to camera calibration and mixer set-ups, Location Audio Simplified unlocks the secrets to clean and clear broadcast quality audio no matter what challenges you face. Author Dean Miles applies his twenty-plus years of experience as a professional location operator to teach the skills, techniques, tips, and secrets needed to produce high-quality production sound on location. Humorous and thoroughly practical, the book covers a wide array of topics, such as location selection, field mixing, boo

  15. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology.

  16. Silent oceans: ocean acidification impoverishes natural soundscapes by altering sound production of the world's noisiest marine invertebrate.

    Science.gov (United States)

    Rossi, Tullio; Connell, Sean D; Nagelkerken, Ivan

    2016-03-16

    Soundscapes are multidimensional spaces that carry meaningful information for many species about the location and quality of nearby and distant resources. Because soundscapes are the sum of the acoustic signals produced by individual organisms and their interactions, they can be used as a proxy for the condition of whole ecosystems and their occupants. Ocean acidification resulting from anthropogenic CO2 emissions is known to have profound effects on marine life. However, despite the increasingly recognized ecological importance of soundscapes, there is no empirical test of whether ocean acidification can affect biological sound production. Using field recordings obtained from three geographically separated natural CO2 vents, we show that forecasted end-of-century ocean acidification conditions can profoundly reduce the biological sound level and frequency of snapping shrimp snaps. Snapping shrimp were among the noisiest marine organisms and the suppression of their sound production at vents was responsible for the vast majority of the soundscape alteration observed. To assess mechanisms that could account for these observations, we tested whether long-term exposure (two to three months) to elevated CO2 induced a similar reduction in the snapping behaviour (loudness and frequency) of snapping shrimp. The results indicated that the soniferous behaviour of these animals was substantially reduced in both frequency (snaps per minute) and sound level of snaps produced. As coastal marine soundscapes are dominated by biological sounds produced by snapping shrimp, the observed suppression of this component of soundscapes could have important and possibly pervasive ecological consequences for organisms that use soundscapes as a source of information. This trend towards silence could be of particular importance for those species whose larval stages use sound for orientation towards settlement habitats. © 2016 The Author(s).

  17. Underwater sound from vessel traffic reduces the effective communication range in Atlantic cod and haddock.

    Science.gov (United States)

    Stanley, Jenni A; Van Parijs, Sofie M; Hatch, Leila T

    2017-11-07

    Stellwagen Bank National Marine Sanctuary is located in Massachusetts Bay off the densely populated northeast coast of the United States; subsequently, the marine inhabitants of the area are exposed to elevated levels of anthropogenic underwater sound, particularly due to commercial shipping. The current study investigated the alteration of estimated effective communication spaces at three spawning locations for populations of the commercially and ecologically important fishes, Atlantic cod (Gadus morhua) and haddock (Melanogrammus aeglefinus). Both the ambient sound pressure levels and the estimated effective vocalization radii, estimated through spherical spreading models, fluctuated dramatically during the three-month recording periods. Increases in sound pressure level appeared to be largely driven by large vessel activity, and accordingly exhibited a significant positive correlation with the number of Automatic Identification System tracked vessels at two of the three sites. The near constant high levels of low frequency sound and the consequential reduction in the communication space observed at these recording sites during times of high vocalization activity raise significant concerns that communication between conspecifics may be compromised during critical biological periods. This study takes the first steps in evaluating these animals' communication spaces and alteration of these spaces due to anthropogenic underwater sound.
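
    The effective vocalization radii mentioned above follow from spherical spreading, in which the received level falls off as 20*log10(r); the sketch below shows the back-of-the-envelope version of that estimate. The source and ambient levels are invented for illustration and are not values from the study.

        # Spherical-spreading sketch: range at which a call drops to the ambient level.
        def effective_radius(source_level_db, ambient_level_db, detection_margin_db=0.0):
            """Distance (m) at which source_level - 20*log10(r) equals the ambient
            level plus an optional detection margin."""
            excess = source_level_db - ambient_level_db - detection_margin_db
            return 10 ** (excess / 20.0)

        # Example: a 130 dB call against 90 dB vs. 105 dB ambient noise (re 1 uPa).
        print(effective_radius(130, 90))   # quieter conditions -> larger radius (100 m)
        print(effective_radius(130, 105))  # added vessel noise -> radius shrinks (~18 m)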

  18. Sound and vibration sensitivity of VIIIth nerve fibers in the grassfrog, Rana temporaria

    DEFF Research Database (Denmark)

    Christensen-Dalsgaard, J; Jørgensen, M B

    1996-01-01

    We have studied the sound and vibration sensitivity of 164 amphibian papilla fibers in the VIIIth nerve of the grassfrog, Rana temporaria. The VIIIth nerve was exposed using a dorsal approach. The frogs were placed in a natural sitting posture and stimulated by free-field sound. Furthermore, the animals were stimulated with dorso-ventral vibrations, and the sound-induced vertical vibrations in the setup could be canceled by emitting vibrations in antiphase from the vibration exciter. All low-frequency fibers responded to both sound and vibration, with sound thresholds from 23 dB SPL and vibration thresholds from 0.02 cm/s2. The sound and vibration sensitivity was compared for each fiber using the offset between the rate-level curves for sound and vibration stimulation as a measure of relative vibration sensitivity. When measured in this way relative vibration sensitivity decreases with frequency from...

  19. Oyster larvae settle in response to habitat-associated underwater sounds.

    Science.gov (United States)

    Lillis, Ashlee; Eggleston, David B; Bohnenstiehl, DelWayne R

    2013-01-01

    Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica). Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a role in driving

  20. Oyster larvae settle in response to habitat-associated underwater sounds.

    Directory of Open Access Journals (Sweden)

    Ashlee Lillis

    Full Text Available Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica. Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a

  1. Contralateral routing of signals disrupts monaural level and spectral cues to sound localisation on the horizontal plane.

    Science.gov (United States)

    Pedley, Adam J; Kitterick, Pádraig T

    2017-09-01

    Contra-lateral routing of signals (CROS) devices re-route sound between the deaf and hearing ears of unilaterally-deaf individuals. This rerouting would be expected to disrupt access to monaural level cues that can support monaural localisation in the horizontal plane. However, such a detrimental effect has not been confirmed by clinical studies of CROS use. The present study aimed to exercise strict experimental control over the availability of monaural cues to localisation in the horizontal plane and the fitting of the CROS device to assess whether signal routing can impair the ability to locate sources of sound and, if so, whether CROS selectively disrupts monaural level or spectral cues to horizontal location, or both. Unilateral deafness and CROS device use were simulated in twelve normal hearing participants. Monaural recordings of broadband white noise presented from three spatial locations (-60°, 0°, and +60°) were made in the ear canal of a model listener using a probe microphone with and without a CROS device. The recordings were presented to participants via an insert earphone placed in their right ear. The recordings were processed to disrupt either monaural level or spectral cues to horizontal sound location by roving presentation level or the energy across adjacent frequency bands, respectively. Localisation ability was assessed using a three-alternative forced-choice spatial discrimination task. Participants localised above chance levels in all conditions. Spatial discrimination accuracy was poorer when participants only had access to monaural spectral cues compared to when monaural level cues were available. CROS use impaired localisation significantly regardless of whether level or spectral cues were available. For both cues, signal re-routing had a detrimental effect on the ability to localise sounds originating from the side of the deaf ear (-60°). CROS use also impaired the ability to use level cues to localise sounds originating from

  2. Evidence for direct geographic influences on linguistic sounds: the case of ejectives.

    Directory of Open Access Journals (Sweden)

    Caleb Everett

    Full Text Available We present evidence that the geographic context in which a language is spoken may directly impact its phonological form. We examined the geographic coordinates and elevations of 567 language locations represented in a worldwide phonetic database. Languages with phonemic ejective consonants were found to occur closer to inhabitable regions of high elevation, when contrasted to languages without this class of sounds. In addition, the mean and median elevations of the locations of languages with ejectives were found to be comparatively high. The patterns uncovered surface on all major world landmasses, and are not the result of the influence of particular language families. They reflect a significant and positive worldwide correlation between elevation and the likelihood that a language employs ejective phonemes. In addition to documenting this correlation in detail, we offer two plausible motivations for its existence. We suggest that ejective sounds might be facilitated at higher elevations due to the associated decrease in ambient air pressure, which reduces the physiological effort required for the compression of air in the pharyngeal cavity--a unique articulatory component of ejective sounds. In addition, we hypothesize that ejective sounds may help to mitigate rates of water vapor loss through exhaled air. These explications demonstrate how a reduction of ambient air density could promote the usage of ejective phonemes in a given language. Our results reveal the direct influence of a geographic factor on the basic sound inventories of human languages.

  3. Analysis, Design and Implementation of an Embedded Realtime Sound Source Localization System Based on Beamforming Theory

    Directory of Open Access Journals (Sweden)

    Arko Djajadi

    2009-12-01

    Full Text Available This project is intended to analyze, design and implement a realtime sound source localization system by using a mobile robot as the media. The implemented system uses 2 microphones as the sensors, an Arduino Duemilanove microcontroller system with an ATMega328p as the microprocessor, two permanent magnet DC motors as the actuators for the mobile robot and a servo motor as the actuator to rotate the webcam directing to the location of the sound source, and a laptop/PC as the simulation and display media. In order to achieve the objective of finding the position of a specific sound source, beamforming theory is applied to the system. Once the location of the sound source is detected and determined, either the mobile robot adjusts its position according to the direction of the sound source, or only the webcam rotates in the direction of the incoming sound, simulating the use of this system in a video conference. The integrated system has been tested and the results show the system could localize in realtime a sound source placed randomly on a half circle area (0°-180°) with a radius of 0.3 m - 3 m, assuming the system is the center point of the circle. Due to low ADC and processor speed, the best achievable angular resolution is still limited to 25°.
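
    Although the embedded system above applies beamforming on the microcontroller, the underlying two-microphone direction-finding idea can be sketched as a cross-correlation delay estimate converted to a bearing. The microphone spacing, sample rate, and synthetic signals below are assumptions for illustration, not the project's parameters.

        import numpy as np

        C = 343.0   # speed of sound, m/s
        D = 0.20    # assumed microphone spacing, m

        def bearing_from_two_mics(left, right, fs):
            """Estimate source bearing in degrees (0 = broadside) from the
            inter-channel delay found by cross-correlation; the sign convention
            depends on channel ordering."""
            corr = np.correlate(left, right, mode="full")
            lag = np.argmax(corr) - (len(right) - 1)
            tau = lag / fs
            # Clamp to the physically possible range before taking arcsin.
            sin_theta = np.clip(C * tau / D, -1.0, 1.0)
            return float(np.degrees(np.arcsin(sin_theta)))

        # Example: the same noise burst arriving 5 samples later at one microphone.
        rng = np.random.default_rng(1)
        burst = rng.standard_normal(2048)
        delayed = np.concatenate([np.zeros(5), burst[:-5]])
        print(bearing_from_two_mics(burst, delayed, fs=48000))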

  4. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

    The method presented in this thesis combines ultrasound techniques with magnetic resonance tomography (MRT). An ultrasonic wave generates a static force in the sound-propagation direction in absorbing media. At sound intensities of a few W/cm2 and sound frequencies in the lower MHz range, this force leads to a tissue shift in the micrometer range. This tissue shift depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence of Siemens Healthcare AG was modified so that it measures the tissue shift (indirectly), encodes it as grey values, and presents it as a 2D image. By means of the grey values, the course of the sound beam in the tissue can be visualized, and thus sound obstacles (changes of the sound impedance) can additionally be detected. From the MRT images taken, spatial changes of the tissue parameters sound absorption and elasticity can be detected. In this thesis, measurements are presented which show the feasibility and future prospects of this method, especially for breast cancer diagnostics. [de]

  5. First and second sound of a unitary Fermi gas in highly oblate harmonic traps

    International Nuclear Information System (INIS)

    Hu, Hui; Dyke, Paul; Vale, Chris J; Liu, Xia-Ji

    2014-01-01

    We theoretically investigate first and second sound modes of a unitary Fermi gas trapped in a highly oblate harmonic trap at finite temperatures. Following the idea by Stringari and co-workers (2010 Phys. Rev. Lett. 105 150402), we argue that these modes can be described by the simplified two-dimensional two-fluid hydrodynamic equations. Two possible schemes—sound wave propagation and breathing mode excitation—are considered. We calculate the sound wave velocities and discretized sound mode frequencies, as a function of temperature. We find that in both schemes, the coupling between first and second sound modes is large enough to induce significant density fluctuations, suggesting that second sound can be directly observed by measuring in situ density profiles. The frequency of the second sound breathing mode is found to be highly sensitive to the superfluid density. (paper)

  6. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For a more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of laminated structures and handy

  7. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    ...associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  8. Subjective evaluation of restaurant acoustics in a virtual sound environment

    DEFF Research Database (Denmark)

    Nielsen, Nicolaj Østergaard; Marschall, Marton; Santurette, Sébastien

    2016-01-01

    Many restaurants have smooth rigid surfaces made of wood, steel, glass, and concrete. This often results in a lack of sound absorption. Such restaurants are notorious for high noise levels during service, which most owners actually desire as representing vibrant eating environments, although surveys report that noise complaints are on par with poor service. This study investigated the relation between objective acoustic parameters and subjective evaluation of acoustic comfort at five restaurants in terms of three parameters: noise annoyance, speech intelligibility, and privacy. At each location, customers filled out questionnaire surveys, acoustic parameters were measured, and recordings of restaurant acoustic scenes were obtained with a 64-channel spherical array. The acoustic scenes were reproduced in a virtual sound environment (VSE) with 64 loudspeakers placed in an anechoic room...

  9. Sound Exposure of Healthcare Professionals Working with a University Marching Band.

    Science.gov (United States)

    Russell, Jeffrey A; Yamaguchi, Moegi

    2018-01-01

    Music-induced hearing disorders are known to result from exposure to excessive levels of music of different genres. Marching band music, with its heavy emphasis on brass and percussion, is one type that is a likely contributor to music-induced hearing disorders, although specific data on sound pressure levels of marching bands have not been widely studied. Furthermore, if marching band music does lead to music-induced hearing disorders, the musicians may not be the only individuals at risk. Support personnel such as directors, equipment managers, and performing arts healthcare providers may also be exposed to potentially damaging sound pressures. Thus, we sought to explore to what degree healthcare providers receive sound dosages above recommended limits during their work with a marching band. The purpose of this study was to determine the sound exposure of healthcare professionals (specifically, athletic trainers [ATs]) who provide on-site care to a large, well-known university marching band. We hypothesized that sound pressure levels to which these individuals were exposed would exceed the National Institute for Occupational Safety and Health (NIOSH) daily percentage allowance. Descriptive observational study. Eight ATs working with a well-known American university marching band volunteered to wear noise dosimeters. During the marching band season, ATs wore an Etymotic ER-200D dosimeter whenever working with the band at outdoor rehearsals, indoor field house rehearsals, and outdoor performances. The dosimeters recorded dose percent exposure, equivalent continuous sound levels in A-weighted decibels, and duration of exposure. For comparison, a dosimeter also was worn by an AT working in the university's performing arts medicine clinic. Participants did not alter their typical duties during any data collection sessions. Sound data were collected with the dosimeters set at the NIOSH standards of 85 dBA threshold and 3 dBA exchange rate; the NIOSH 100% daily dose is
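
    The dose percentages referred to above follow from the NIOSH parameters stated in the abstract: an 85 dBA criterion level, a 3 dBA exchange rate, and an 8-hour reference duration. A rough sketch of that computation, with invented exposure levels and durations, is given below.

        # NIOSH daily noise dose sketch (85 dBA criterion, 3 dB exchange rate).
        CRITERION_DB = 85.0
        EXCHANGE_DB = 3.0
        REFERENCE_HOURS = 8.0

        def niosh_dose_percent(exposures):
            """exposures: iterable of (level_dBA, hours); returns daily dose in %."""
            dose = 0.0
            for level, hours in exposures:
                # Allowed duration halves for every 3 dB above the criterion level.
                allowed = REFERENCE_HOURS / 2 ** ((level - CRITERION_DB) / EXCHANGE_DB)
                dose += hours / allowed
            return 100.0 * dose

        # Hypothetical day: a 3-hour outdoor rehearsal at 92 dBA plus 2 quieter clinic hours.
        print(niosh_dose_percent([(92.0, 3.0), (75.0, 2.0)]))  # well above 100%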

  10. Sound design for diesel passenger cars

    Energy Technology Data Exchange (ETDEWEB)

    Belluscio, Michele; Ruotolo, Romualdo [GM Powertrain Europe, Torino (Italy); Schoenherr, Christian; Schuster, Guenter [GM Europe, Ruesselsheim (Germany); Eisele, Georg; Genender, Peter; Wolff, Klaus; Van Keymeulen, Johan [FEV Motorentechnik GmbH, Aachen (Germany)

    2008-07-01

    With the growing market share of diesel passenger cars in Europe, it becomes more important to create a brand- and market-segment-specific vehicle sound. Besides the usually considered pleasantness-related topics like diesel knocking and high noise excitation, it is also important to fulfil the requirements regarding a dynamic vehicle impression. This impression is mainly influenced by the load dependency of the engine-induced noise, which is reduced for diesel engines due to the missing throttle valve and the damping effect of the turbocharger and the diesel particulate filter. By means of a detailed noise transfer path analysis, the contributions with dynamic potential can be identified. Furthermore, the load dependency itself of a certain noise contribution can be strengthened, which allows for a dynamic sound character comparable to sporty gasoline vehicles. (orig.)

  11. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  12. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2009-09-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  13. Acoustic transparency and slow sound using detuned acoustic resonators

    DEFF Research Database (Denmark)

    Santillan, Arturo Orozco; Bozhevolnyi, Sergey I.

    2011-01-01

    We demonstrate that the phenomena of acoustic transparency and slow-sound propagation can be realized with detuned acoustic resonators (DAR), thereby mimicking the effect of electromagnetically induced transparency (EIT) in atomic physics. Sound propagation in a pipe with a series of side...

  14. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption denotes the transformation of sound energy into heat. It is employed, for instance, in the acoustic design of rooms: the noise emitted by machinery and plants must be reduced before it reaches a workplace, and auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components on the walls with well-defined absorption characteristics, adjusted to the corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens, which prevent sound immission from noise-intensive environments into the neighbourhood.

  15. Location performance objectives for the NNWSI area-to-location screening activity

    Energy Technology Data Exchange (ETDEWEB)

    Sinnock, S.; Fernandez, J.A.

    1984-01-01

    Fifty-four objectives were identified to guide the screening of the Nevada Research and Development Area of the Nevada Test Site for relatively favorable locations for the disposal of nuclear waste in a mined geologic repository. The objectives were organized as a hierarchy composed of 4 upper-level, 12 middle-level, and 38 lower-level objectives. The four upper-level objectives account for broad national goals to contain and isolate nuclear waste in an environmentally sound and economically acceptable manner. The middle-level objectives correspond to topical categories that logically relate the upper-level objectives to site-specific concerns such as seismicity, sensitive species, and flooding hazards (represented by the lower-level objectives). The relative merits of alternative locations were compared by an application of decision analysis based on standard utility theory. The relative favorabilities of pertinent physical conditions at each alternative location were weighted in relation to the importance of objectives, and summed to produce maps indicating the most and the least favorable locations. Descriptions of the objectives were organized by the hierarchical format; they detail the applicability of each objective to geologic repository siting, the previously published siting criteria corresponding to each objective, the rationale for the weight assigned to each objective, and the pertinent attributes for evaluating locations with respect to each objective. 51 references, 47 figures, 4 tables.
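
    The weighted-sum scoring described above can be illustrated with a minimal sketch; the objective names, weights, and favorability ratings below are hypothetical stand-ins, not values from the screening study.

        # Hedged sketch of weighted-sum favorability scoring in the spirit of the
        # decision analysis described above; all names and numbers are hypothetical.

        objective_weights = {"seismicity": 0.5, "sensitive_species": 0.2, "flooding": 0.3}

        # Favorability of each alternative location with respect to each objective, scaled 0..1.
        location_ratings = {
            "location_A": {"seismicity": 0.8, "sensitive_species": 0.6, "flooding": 0.9},
            "location_B": {"seismicity": 0.4, "sensitive_species": 0.9, "flooding": 0.7},
        }

        def overall_favorability(ratings, weights):
            """Weighted sum of favorabilities over all objectives."""
            return sum(weights[obj] * ratings[obj] for obj in weights)

        for name, ratings in location_ratings.items():
            print(name, round(overall_favorability(ratings, objective_weights), 3))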

  16. Location performance objectives for the NNWSI area-to-location screening activity

    International Nuclear Information System (INIS)

    Sinnock, S.; Fernandez, J.A.

    1984-01-01

    Fifty-four objectives were identified to guide the screening of the Nevada Research and Development Area of the Nevada Test Site for relatively favorable locations for the disposal of nuclear waste in a mined geologic repository. The objectives were organized as a hierarchy composed of 4 upper-level, 12 middle-level, and 38 lower-level objectives. The four upper-level objectives account for broad national goals to contain and isolate nuclear waste in an environmentally sound and economically acceptable manner. The middle-level objectives correspond to topical categories that logically relate the upper-level objectives to site-specific concerns such as seismicity, sensitive species, and flooding hazards (represented by the lower-level objectives). The relative merits of alternative locations were compared by an application of decision analysis based on standard utility theory. The relative favorabilities of pertinent physical conditions at each alternative location were weighted in relation to the importance of objectives, and summed to produce maps indicating the most and the least favorable locations. Descriptions of the objectives were organized by the hierarchical format; they detail the applicability of each objective to geologic repository siting, previously published siting criteria corresponding to each objective, and the rationale for the weight assigned to each objective, and the pertinent attributes for evaluating locations with respect to each objective. 51 references, 47 figures, 4 tables

  17. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or another medium as a longitudinal wave consisting of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  18. Songbirds use pulse tone register in two voices to generate low-frequency sound

    DEFF Research Database (Denmark)

    Jensen, Kenneth Kragh; Cooper, Brenton G.; Larsen, Ole Næsbye

    2007-01-01

    , the syrinx, is unknown. We present the first high-speed video records of the intact syrinx during induced phonation. The syrinx of anaesthetized crows shows a vibration pattern of the labia similar to that of the human vocal fry register. Acoustic pulses result from short opening of the labia, and pulse...... generation alternates between the left and right sound sources. Spontaneously calling crows can also generate similar pulse characteristics with only one sound generator. Airflow recordings in zebra finches and starlings show that pulse tone sounds can be generated unilaterally, synchronously...

  19. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic signals, and the presence or absence of an abnormality is judged from the magnitude of these synchronized components. A synchronized-component sampling means can remove resonance sounds and other acoustic sounds that are not generated synchronously with the rotation, based on the knowledge that the acoustic components generated in a normal state are essentially resonance sounds and are not precisely synchronized with the rotation speed. Abnormal sounds of a rotating body, on the other hand, are often generated by forced excitation accompanying the rotation, so they can be detected by extracting only the rotation-synchronized components. Since the normal acoustic components present at any given time are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing is avoided and, as a result, the abnormal-sound detection sensitivity is improved. Further, because the occurrence of an abnormal sound is discriminated from the sounds actually detected, frequency components that are predicted but not actually generated are not removed, which further improves the detection sensitivity. (N.H.)

  20. Effect of sound on gap-junction-based intercellular signaling: Calcium waves under acoustic irradiation.

    Science.gov (United States)

    Deymier, P A; Swinteck, N; Runge, K; Deymier-Black, A; Hoying, J B

    2015-01-01

    We present a previously unrecognized effect of sound waves on gap-junction-based intercellular signaling such as in biological tissues composed of endothelial cells. We suggest that sound irradiation may, through temporal and spatial modulation of cell-to-cell conductance, create intercellular calcium waves with unidirectional signal propagation associated with nonconventional topologies. Nonreciprocity in calcium wave propagation induced by sound wave irradiation is demonstrated in the case of a linear and a nonlinear reaction-diffusion model. This demonstration should be applicable to other types of gap-junction-based intercellular signals, and it is thought that it should be of help in interpreting a broad range of biological phenomena associated with the beneficial therapeutic effects of sound irradiation and possibly the harmful effects of sound waves on health.
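
    A minimal numerical sketch of the general idea, a one-dimensional diffusion chain whose cell-to-cell coupling is modulated in space and time by a travelling acoustic wave, is given below; it is not the authors' model, and all coefficients are arbitrary illustrative values.

        # Hedged 1D sketch of gap-junction-like diffusion with sound-modulated coupling;
        # not the authors' reaction-diffusion model, and all parameters are arbitrary.
        import numpy as np

        n, dt, steps = 200, 0.01, 2000
        c = np.zeros(n)
        c[0] = 1.0                      # "calcium" released at the left boundary
        D0, depth, k, omega = 1.0, 0.8, 0.3, 2.0
        x = np.arange(n - 1)            # positions of the inter-cell junctions

        for step in range(steps):
            t = step * dt
            # Cell-to-cell conductance modulated in space and time by the sound wave.
            D = D0 * (1.0 + depth * np.sin(k * x - omega * t))
            flux = D * (c[1:] - c[:-1])
            dc = np.zeros(n)
            dc[:-1] += flux
            dc[1:] -= flux
            c += dt * (dc - 0.05 * c)   # linear decay as a stand-in "reaction" term

        print(c.argmax(), c.max())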

  1. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

    , one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor......Previous works in the literature about one tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases......, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings together with microphone positions and source directivity cues allows to get information about source position and bearing. Moreover, sound sources have been included into sensor systems together...

  2. Three-dimensional sound localisation with a lizard peripheral auditory model

    DEFF Research Database (Denmark)

    Kjær Schmidt, Michael; Shaikh, Danish

    the networks learned a transfer function that translated the three-dimensional non-linear mapping into estimated azimuth and elevation values for the acoustic target. The neural network with two hidden layers as expected performed better than that with only one hidden layer. Our approach assumes that for any...... location of an acoustic target in three dimensions. Our approach utilises a model of the peripheral auditory system of lizards [Christensen-Dalsgaard and Manley 2005] coupled with a multi-layer perceptron neural network. The peripheral auditory model’s response to sound input encodes sound direction...... information in a single plane which by itself is insufficient to localise the acoustic target in three dimensions. A multi-layer perceptron neural network is used to combine two independent responses of the model, corresponding to two rotational movements, into an estimate of the sound direction in terms...
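
    For the regression step alone, a sketch along these lines could use a two-hidden-layer perceptron to map the two model responses to azimuth and elevation; the training data below come from a hypothetical fake_response function, a synthetic stand-in rather than the lizard-model responses used in the thesis.

        # Hedged sketch: a two-hidden-layer MLP mapping two "model responses" to
        # (azimuth, elevation). The response function is a synthetic stand-in.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        angles = rng.uniform([-90.0, -45.0], [90.0, 45.0], size=(2000, 2))  # (azimuth, elevation) in degrees

        def fake_response(az, el):
            """Synthetic stand-in for the two peripheral-model responses (4 features)."""
            az, el = np.radians(az), np.radians(el)
            return np.column_stack([np.sin(az), np.cos(az) * np.sin(el),
                                    np.cos(el), np.sin(az + el)])

        X = fake_response(angles[:, 0], angles[:, 1])
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        net.fit(X, angles)                     # learn the mapping to (azimuth, elevation)
        print(net.predict(fake_response(np.array([30.0]), np.array([10.0]))))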

  3. Nonlocal nonlinear coupling of kinetic sound waves

    Directory of Open Access Journals (Sweden)

    O. Lyubchyk

    2014-11-01

    Full Text Available We study three-wave resonant interactions among kinetic-scale oblique sound waves in the low-frequency range below the ion cyclotron frequency. The nonlinear eigenmode equation is derived in the framework of a two-fluid plasma model. Because of dispersive modifications at small wavelengths perpendicular to the background magnetic field, these waves become a decay-type mode. We found two decay channels, one into co-propagating product waves (forward decay), and another into counter-propagating product waves (reverse decay). All wavenumbers in the forward decay are similar and hence this decay is local in wavenumber space. On the contrary, the reverse decay generates waves with wavenumbers that are much larger than in the original pump waves and is therefore intrinsically nonlocal. In general, the reverse decay is significantly faster than the forward one, suggesting a nonlocal spectral transport induced by oblique sound waves. Even with low-amplitude sound waves the nonlinear interaction rate is larger than the collisionless dissipation rate. Possible applications regarding acoustic waves observed in the solar corona, solar wind, and topside ionosphere are briefly discussed.

  4. Numerical simulation of groundwater flow at Puget Sound Naval Shipyard, Naval Base Kitsap, Bremerton, Washington

    Science.gov (United States)

    Jones, Joseph L.; Johnson, Kenneth H.; Frans, Lonna M.

    2016-08-18

    Information about groundwater-flow paths and locations where groundwater discharges at and near Puget Sound Naval Shipyard is necessary for understanding the potential migration of subsurface contaminants by groundwater at the shipyard. The design of some remediation alternatives would be aided by knowledge of whether groundwater flowing at specific locations beneath the shipyard will eventually discharge directly to Sinclair Inlet of Puget Sound, or if it will discharge to the drainage system of one of the six dry docks located in the shipyard. A 1997 numerical (finite difference) groundwater-flow model of the shipyard and surrounding area was constructed to help evaluate the potential for groundwater discharge to Puget Sound. That steady-state, multilayer numerical model with homogeneous hydraulic characteristics indicated that groundwater flowing beneath nearly all of the shipyard discharges to the dry-dock drainage systems, and only shallow groundwater flowing beneath the western end of the shipyard discharges directly to Sinclair Inlet. Updated information from a 2016 regional groundwater-flow model constructed for the greater Kitsap Peninsula was used to update the 1997 groundwater model of the Puget Sound Naval Shipyard. That information included a new interpretation of the hydrogeologic units underlying the area, as well as improved recharge estimates. Other updates to the 1997 model included finer discretization of the finite-difference model grid into more layers, rows, and columns, all with reduced dimensions. This updated Puget Sound Naval Shipyard model was calibrated to 2001–2005 measured water levels, and hydraulic characteristics of the model layers representing different hydrogeologic units were estimated with the aid of state-of-the-art parameter optimization techniques. The flow directions and discharge locations predicted by this updated model generally match the 1997 model despite refinements and other changes. In the updated model, most

  5. Turbine sound may influence the metamorphosis behaviour of estuarine crab megalopae.

    Science.gov (United States)

    Pine, Matthew K; Jeffs, Andrew G; Radford, Craig A

    2012-01-01

    It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21-31% compared to silent control treatments, 38-47% compared to tidal turbine sound treatments, and 46-60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment.

  6. Turbine sound may influence the metamorphosis behaviour of estuarine crab megalopae.

    Directory of Open Access Journals (Sweden)

    Matthew K Pine

    Full Text Available It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21-31% compared to silent control treatments, 38-47% compared to tidal turbine sound treatments, and 46-60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment.

  7. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...

  8. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  9. Automatic moment segmentation and peak detection analysis of heart sound pattern via short-time modified Hilbert transform.

    Science.gov (United States)

    Sun, Shuping; Jiang, Zhongwei; Wang, Haibin; Fang, Yu

    2014-05-01

    This paper proposes a novel automatic method for the moment segmentation and peak detection analysis of the heart sound (HS) pattern, with special attention to the characteristics of the envelopes of HS and considering the properties of the Hilbert transform (HT). The moment segmentation and peak location are accomplished in two steps. First, by applying the Viola integral waveform method in the time domain, the envelope (E(T)) of the HS signal is obtained with an emphasis on the first heart sound (S1) and the second heart sound (S2). Then, based on the characteristics of the E(T) and the properties of the HT of the convex and concave functions, a novel method, the short-time modified Hilbert transform (STMHT), is proposed to automatically locate the moment segmentation and peak points for the HS by the zero crossing points of the STMHT. A fast algorithm for calculating the STMHT of E(T) can be expressed by multiplying the E(T) by an equivalent window (W(E)). According to the range of heart beats and based on the numerical experiments and the important parameters of the STMHT, a moving window width of N=1s is validated for locating the moment segmentation and peak points for HS. The proposed moment segmentation and peak location procedure is validated with sounds from the Michigan HS database and sounds from clinical heart diseases, such as a ventricular septal defect (VSD), an atrial septal defect (ASD), Tetralogy of Fallot (TOF), rheumatic heart disease (RHD), and so on. As a result, for the sounds where S2 can be separated from S1, the average accuracies achieved for the peak of S1 (AP₁), the peak of S2 (AP₂), the moment segmentation points from S1 to S2 (AT₁₂) and the cardiac cycle (ACC) are 98.53%, 98.31%, 98.36%, and 97.37%, respectively. For the sounds where S1 cannot be separated from S2, the average accuracies achieved for the peak of S1 and S2 (AP₁₂) and the cardiac cycle (ACC) are 100% and 96.69%. Copyright © 2014 Elsevier Ireland Ltd. All
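
    A greatly simplified sketch of the underlying idea, smoothing an envelope of the phonocardiogram and taking zero crossings of a Hilbert transform of that envelope as candidate segmentation points, is shown below; it is not the authors' exact STMHT formulation, and the test signal is a synthetic toy phonocardiogram.

        # Hedged sketch: envelope smoothing plus Hilbert-transform zero crossings as
        # candidate heart-sound segmentation points; not the exact STMHT algorithm.
        import numpy as np
        from scipy.signal import hilbert

        def candidate_points(hs, fs):
            """Smoothed envelope and zero crossings of its Hilbert transform."""
            width = int(0.02 * fs)                               # 20 ms moving average
            env = np.convolve(np.abs(hs), np.ones(width) / width, mode="same")
            ht = np.imag(hilbert(env - env.mean()))              # Hilbert transform of the envelope
            zero_crossings = np.where(np.diff(np.sign(ht)) != 0)[0]
            return env, zero_crossings

        fs = 2000
        t = np.arange(0, 3, 1 / fs)
        # Toy phonocardiogram: S1-like and S2-like bursts every 0.8 s.
        bursts = np.exp(-((t % 0.8) / 0.04) ** 2) + 0.6 * np.exp(-(((t - 0.3) % 0.8) / 0.04) ** 2)
        hs = np.sin(2 * np.pi * 50 * t) * bursts
        env, zc = candidate_points(hs, fs)
        print(len(zc), zc[:6] / fs)                              # candidate times in seconds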

  10. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  11. Automatic adventitious respiratory sound analysis: A systematic review.

    Directory of Open Access Journals (Sweden)

    Renard Xaviero Adhi Pramono

    Full Text Available Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison has not been well established. To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were further obtained by references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Only articles were included that focused on adventitious sound detection or classification, based on respiratory sounds, with performance reported and sufficient information provided to be approximately repeated. Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. A total of 77 reports from the literature were included in this review. 55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9

  12. Automatic adventitious respiratory sound analysis: A systematic review.

    Science.gov (United States)

    Pramono, Renard Xaviero Adhi; Bowyer, Stuart; Rodriguez-Villegas, Esther

    2017-01-01

    Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison has not been well established. To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were further obtained by references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Only articles were included that focused on adventitious sound detection or classification, based on respiratory sounds, with performance reported and sufficient information provided to be approximately repeated. Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. A total of 77 reports from the literature were included in this review. 55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11

  13. An open access database for the evaluation of heart sound algorithms.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.

  14. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Full Text Available Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  15. Sleep disturbance caused by meaningful sounds and the effect of background noise

    Science.gov (United States)

    Namba, Seiichiro; Kuwano, Sonoko; Okamoto, Takehisa

    2004-10-01

    To study noise-induced sleep disturbance, a new procedure called "noise interrupted method" has been developed. The experiment is conducted in the bedroom of the house of each subject. The sounds are reproduced with a mini-disk player which has an automatic reverse function. If the sound is disturbing and subjects cannot sleep, they are allowed to switch off the sound 1 h after they start to try to sleep. This switch off (noise interrupted behavior) is an important index of sleep disturbance. Next morning they fill in a questionnaire in which quality of sleep, disturbance of sounds, the time when they switched off the sound, etc. are asked. The results showed a good relationship between L and the percentages of the subjects who could not sleep in an hour and between L and the disturbance reported in the questionnaire. This suggests that this method is a useful tool to measure the sleep disturbance caused by noise under well-controlled conditions.

  16. Acoustical measurements of sound fields between the stage and the orchestra pit inside an historical opera house

    Science.gov (United States)

    Sato, Shin-Ichi; Prodi, Nicola; Sakai, Hiroyuki

    2004-05-01

    To clarify the relationship between the sound fields on the stage and in the orchestra pit, we conducted acoustical measurements in a typical historical opera house, the Teatro Comunale of Ferrara, Italy. Orthogonal factors based on the theory of subjective preference and other related factors were analyzed. First, the sound fields for a singer on the stage in relation to the musicians in the pit were analyzed. Then, the sound fields for performers in the pit in relation to the singers on the stage were considered. Because the physical factors vary depending on the location of the sound source, performers can move on the stage or in the pit to find the preferred sound field.

  17. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

    This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situated-ness of sound. Existing discourses on "spatial sound" privile...

  18. A preliminary census of engineering activities located in Sicily (Southern Italy) which may "potentially" induce seismicity

    Science.gov (United States)

    Aloisi, Marco; Briffa, Emanuela; Cannata, Andrea; Cannavò, Flavio; Gambino, Salvatore; Maiolino, Vincenza; Maugeri, Roberto; Palano, Mimmo; Privitera, Eugenio; Scaltrito, Antonio; Spampinato, Salvatore; Ursino, Andrea; Velardita, Rosanna

    2015-04-01

    The seismic events caused by human engineering activities are commonly termed "triggered" or "induced". This class of earthquakes, though characterized by low-to-moderate magnitude, has significant social and economic implications since they occur close to the engineering activity responsible for triggering/inducing them, can be felt by the inhabitants living nearby, and may even produce damage. One of the first well-documented examples of induced seismicity was observed in 1932 in Algeria, when a shallow magnitude 3.0 earthquake occurred close to the Oued Fodda Dam. Thanks to the continuous global improvement of seismic monitoring networks, numerous other examples of human-induced earthquakes have been identified. Induced earthquakes occur at shallow depths and are related to a number of human activities, such as fluid injection under high pressure (e.g. waste-water disposal in deep wells, hydrofracturing activities in enhanced geothermal systems and oil recovery, shale-gas fracking, natural and CO2 gas storage), hydrocarbon exploitation, groundwater extraction, deep underground mining, large water impoundments and underground nuclear tests. In Italy, induced/triggered seismicity is suspected to have contributed to the disaster of the Vajont dam in 1963. Despite this suspected case and the presence in the Italian territory of a large number of engineering activities "capable" of inducing seismicity, no extensive research on this topic has been conducted to date. Hence, in order to improve knowledge and correctly assess the potential hazard at a specific location in the future, here we started a preliminary study on the entire range of engineering activities currently located in Sicily (Southern Italy) which may "potentially" induce seismicity. To this end, we performed: • a preliminary census of all engineering activities located in the study area by collecting all the useful information coming from available on-line catalogues; • a detailed compilation

  19. An Inexpensive and Versatile Version of Kundt's Tube for Measuring the Speed of Sound in Air

    Science.gov (United States)

    Papacosta, Pangratios; Linscheid, Nathan

    2016-01-01

    Experiments that measure the speed of sound in air are common in high schools and colleges. In the Kundt's tube experiment, a horizontal air column is adjusted until a resonance mode is achieved for a specific frequency of sound. When this happens, the cork dust in the tube is disturbed at the displacement antinode regions. The location of the…
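
    The underlying relation is the standard one for a Kundt's tube, not specific to the apparatus in this article: adjacent disturbed dust regions (displacement antinodes) lie half a wavelength apart, so v = f·λ = 2·f·d. The numbers in the sketch below are hypothetical.

        # Hedged sketch of the standard Kundt's-tube calculation; the frequency and
        # measured antinode spacing are hypothetical example values.
        frequency_hz = 1000.0
        antinode_spacing_m = 0.172        # distance between adjacent disturbed dust regions
        speed_of_sound = 2.0 * frequency_hz * antinode_spacing_m   # v = f * lambda, lambda = 2 * d
        print(f"speed of sound ≈ {speed_of_sound:.0f} m/s")        # 344 m/s for these values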

  20. Associative cueing of attention through implicit feature-location binding.

    Science.gov (United States)

    Girardi, Giovanna; Nico, Daniele

    2017-09-01

    In order to assess associative learning between two task-irrelevant features in cueing spatial attention, we devised a task in which participants have to make an identity comparison between two sequential visual stimuli. Unbeknownst to them, the location of the second stimulus could be predicted by the colour of the first stimulus or by a concurrent sound. Although unnecessary for performing the identity-matching judgment, the predictive features thus provided an arbitrary association favouring the spatial anticipation of the second stimulus. A significant advantage was found, with faster responses at predicted compared to non-predicted locations. The results clearly demonstrated an associative cueing of attention via a second-order arbitrary feature/location association, but with a substantial discrepancy depending on the sensory modality of the predictive feature. With colour as the predictive feature, significant advantages emerged only after the completion of three blocks of trials. On the contrary, sound affected responses from the first block of trials, and significant advantages were manifest from the beginning of the second. The possible mechanisms underlying the associative cueing of attention in both conditions are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Alpha reactivity to complex sounds differs during REM sleep and wakefulness.

    Directory of Open Access Journals (Sweden)

    Perrine Ruby

    Full Text Available We aimed at better understanding the brain mechanisms involved in the processing of alerting meaningful sounds during sleep, investigating alpha activity. During EEG acquisition, subjects were presented with a passive auditory oddball paradigm including rare complex sounds called Novels (the subject's own first name, OWN, and an unfamiliar first name, OTHER) while they were watching a silent movie in the evening or sleeping at night. During the experimental night, the subjects' quality of sleep was generally preserved. During wakefulness, the decrease in alpha power (8-12 Hz) induced by Novels was significantly larger for OWN than for OTHER at parietal electrodes, between 600 and 900 ms after stimulus onset. Conversely, during REM sleep, Novels induced an increase in alpha power (from 0 to 1200 ms) at all electrodes, significantly larger for OWN than for OTHER at several parietal electrodes between 700 and 1200 ms after stimulus onset. These results show that complex sounds have a different effect on alpha power during wakefulness (a decrease) and during REM sleep (an increase), and that OWN induces a specific effect in these two states. The increased alpha power induced by Novels during REM sleep may (1) correspond to a short and transient increase in arousal, in which case our study provides an objective measure of the greater arousing power of OWN over OTHER, or (2) indicate a cortical inhibition associated with sleep protection. These results suggest that alpha modulation could participate in the selection of stimuli to be further processed during sleep.
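
    For a concrete handle on the quantity being compared, the sketch below computes alpha-band (8-12 Hz) power for a single epoch with Welch's method; it is only an illustration on a synthetic signal and is not the authors' time-frequency pipeline.

        # Hedged sketch: integrated 8-12 Hz power for one post-stimulus EEG epoch.
        import numpy as np
        from scipy.signal import welch

        def alpha_power(epoch, fs, band=(8.0, 12.0)):
            """Integrated PSD in the alpha band for one EEG epoch."""
            freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 2 * fs))
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return np.trapz(psd[mask], freqs[mask])

        fs = 500
        t = np.arange(0, 1.2, 1 / fs)                 # a 0-1200 ms post-stimulus epoch
        epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
        print(alpha_power(epoch, fs))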

  2. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  3. Effects of wind turbine wake on atmospheric sound propagation

    DEFF Research Database (Denmark)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong

    2017-01-01

    In this paper, we investigate the sound propagation from a wind turbine considering the effects of wake-induced velocity deficit and turbulence. In order to address this issue, an advanced approach was developed in which both scalar and vector parabolic equations in two dimensions are solved. Flow...

  4. Anti-bat tiger moth sounds: Form and function

    Directory of Open Access Journals (Sweden)

    Aaron J. CORCORAN, William E. CONNER, Jesse R. BARBER

    2010-06-01

    Full Text Available The night sky is the venue of an ancient acoustic battle between echolocating bats and their insect prey. Many tiger moths (Lepidoptera: Arctiidae) answer the attack calls of bats with a barrage of high frequency clicks. Some moth species use these clicks for acoustic aposematism and mimicry, and others for sonar jamming, however, most of the work on these defensive functions has been done on individual moth species. We here analyze the diversity of structure in tiger moth sounds from 26 species collected at three locations in North and South America. A principal components analysis of the anti-bat tiger moth sounds reveals that they vary markedly along three axes: (1) frequency, (2) duty cycle (sound production per unit time) and frequency modulation, and (3) modulation cycle (clicks produced during flexion and relaxation of the sound-producing tymbal structure). Tiger moth species appear to cluster into two distinct groups: one with low duty cycle and few clicks per modulation cycle that supports an acoustic aposematism function, and a second with high duty cycle and many clicks per modulation cycle that is consistent with a sonar jamming function. This is the first evidence from a community-level analysis to support multiple functions for tiger moth sounds. We also provide evidence supporting an evolutionary history for the development of these strategies. Furthermore, cross-correlation and spectrogram correlation measurements failed to support a “phantom echo” mechanism underlying sonar jamming, and instead point towards echo interference [Current Zoology 56 (3): 358–369, 2010].
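
    The kind of principal components analysis described above can be sketched as follows; the per-species click parameters (peak frequency, duty cycle, clicks per modulation cycle) are randomly generated stand-ins, not the study's measurements.

        # Hedged sketch of a PCA on per-species click parameters; all values are synthetic.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        # Two loose clusters of species: low duty cycle / few clicks vs. high duty cycle / many clicks.
        low = np.column_stack([rng.normal(60, 10, 13), rng.normal(5, 2, 13), rng.normal(2, 1, 13)])
        high = np.column_stack([rng.normal(55, 10, 13), rng.normal(40, 10, 13), rng.normal(15, 4, 13)])
        features = np.vstack([low, high])                        # 26 "species" x 3 click parameters

        scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(features))
        print(scores[:3])                                        # coordinates along the three axes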

  5. Masking release by combined spatial and masker-fluctuation effects in the open sound field.

    Science.gov (United States)

    Middlebrooks, John C

    2017-12-01

    In a complex auditory scene, signals of interest can be distinguished from masking sounds by differences in source location [spatial release from masking (SRM)] and by differences between masker-alone and masker-plus-signal envelopes. This study investigated interactions between those factors in release of masking of 700-Hz tones in an open sound field. Signal and masker sources were colocated in front of the listener, or the signal source was shifted 90° to the side. In Experiment 1, the masker contained a 25-Hz-wide on-signal band plus flanking bands having envelopes that were either mutually uncorrelated or were comodulated. Comodulation masking release (CMR) was largely independent of signal location at a higher masker sound level, but at a lower level CMR was reduced for the lateral signal location. In Experiment 2, a brief signal was positioned at the envelope maximum (peak) or minimum (dip) of a 50-Hz-wide on-signal masker. Masking was released in dip more than in peak conditions only for the 90° signal. Overall, open-field SRM was greater in magnitude than binaural masking release reported in comparable closed-field studies, and envelope-related release was somewhat weaker. Mutual enhancement of masking release by spatial and envelope-related effects tended to increase with increasing masker level.

  6. Turbine Sound May Influence the Metamorphosis Behaviour of Estuarine Crab Megalopae

    Science.gov (United States)

    Pine, Matthew K.; Jeffs, Andrew G.; Radford, Craig A.

    2012-01-01

    It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21–31% compared to silent control treatments, 38–47% compared to tidal turbine sound treatments, and 46–60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment. PMID:23240063

  7. Study of the Acoustic Effects of Hydrokinetic Tidal Turbines in Admiralty Inlet, Puget Sound

    Energy Technology Data Exchange (ETDEWEB)

    Brian Polagye; Jim Thomson; Chris Bassett; Jason Wood; Dom Tollit; Robert Cavagnaro; Andrea Copping

    2012-03-30

    Hydrokinetic turbines will be a source of noise in the marine environment - both during operation and during installation/removal. High intensity sound can cause injury or behavioral changes in marine mammals and may also affect fish and invertebrates. These noise effects are, however, highly dependent on the individual marine animals; the intensity, frequency, and duration of the sound; and context in which the sound is received. In other words, production of sound is a necessary, but not sufficient, condition for an environmental impact. At a workshop on the environmental effects of tidal energy development, experts identified sound produced by turbines as an area of potentially significant impact, but also high uncertainty. The overall objectives of this project are to improve our understanding of the potential acoustic effects of tidal turbines by: (1) Characterizing sources of existing underwater noise; (2) Assessing the effectiveness of monitoring technologies to characterize underwater noise and marine mammal responsiveness to noise; (3) Evaluating the sound profile of an operating tidal turbine; and (4) Studying the effect of turbine sound on surrogate species in a laboratory environment. This study focuses on a specific case study for tidal energy development in Admiralty Inlet, Puget Sound, Washington (USA), but the methodologies and results are applicable to other turbine technologies and geographic locations. The project succeeded in achieving the above objectives and, in doing so, substantially contributed to the body of knowledge around the acoustic effects of tidal energy development in several ways: (1) Through collection of data from Admiralty Inlet, established the sources of sound generated by strong currents (mobilizations of sediment and gravel) and determined that low-frequency sound recorded during periods of strong currents is non-propagating pseudo-sound. This helped to advance the debate within the marine and hydrokinetics acoustic

  8. Effects of wing locations on wing rock induced by forebody vortices

    Directory of Open Access Journals (Sweden)

    Ma Baofeng

    2016-10-01

    Full Text Available Previous studies have shown that asymmetric vortex wakes over slender bodies exhibit a multi-vortex structure with an alternate arrangement along the body axis at high angle of attack. In this investigation, the effects of wing location along the body axis on wing rock induced by forebody vortices were studied experimentally at a subcritical Reynolds number based on the body diameter. An artificial perturbation was added onto the nose tip to fix the orientations of the forebody vortices. Particle image velocimetry was used to identify the flow patterns of the forebody vortices in static situations, and time histories of wing rock were obtained using a free-to-roll rig. The results show that the wing location can significantly affect the motion patterns of wing rock owing to the variation of the multi-vortex patterns of the forebody vortices. When the wing location makes the forebody vortices a two-vortex pattern, the wing body exhibits regular divergence and fixed-point motion with azimuthal variations of the tip perturbation. If a three-vortex pattern exists over the wing, however, the wing-rock patterns depend on the impact of the highest vortex and the newborn vortex. When the three vortices together influence the wing flow, the wing-rock patterns exhibit regular fixed points and limit-cycle oscillations. With the wing moving backwards, the newborn vortex becomes stronger, and the wing-rock patterns become fixed points, chaotic oscillations, and limit-cycle oscillations. With further backward movement of the wing, the vortices are far away from the upper surface of the wing, and the motions exhibit divergence, limit-cycle oscillations and fixed points. For the rearmost location of the wing, the wing body exhibits stochastic oscillations and fixed points.

  9. An algorithm for leak locating using coupled vibration of pipe-water

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin

    2004-01-01

    Leak noise is a good source for identifying the exact location of a leak in underground water pipelines. A water leak generates broadband sound at the leak location, and its propagation along water pipelines is no longer non-dispersive because of the surrounding pipes and soil. However, the need for long-range detection of the leak location makes it necessary to identify low-frequency acoustic waves rather than high-frequency ones. Acoustic wave propagation coupled with the surrounding boundaries, including cast iron pipes, was analyzed theoretically, and the wave velocity was confirmed by experiment. The leak locations were identified both by the Acoustic Emission (AE) method and by the cross-correlation method. At short range, both the AE method and the cross-correlation method are effective for detecting the leak position. Long-range detection, however, required accelerometers with a lower frequency range, because higher-frequency waves are attenuated very quickly as the propagation path lengthens. Two algorithms for the cross-correlation function were suggested, and long-range detection was achieved on real underground water pipelines longer than 300 m.
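
    The cross-correlation principle itself can be illustrated with a generic two-sensor sketch (this is not one of the two algorithms proposed in the paper): with the leak lying between two sensors a distance D apart and a propagation speed v, the delay tau at the correlation peak gives the leak position d1 = (D - v*tau)/2 from the first sensor. The signals below are synthetic.

        # Hedged sketch of two-sensor cross-correlation leak localization on synthetic signals.
        import numpy as np

        fs, v, D = 2000.0, 1200.0, 300.0     # sample rate (Hz), wave speed (m/s), sensor spacing (m)
        d1_true = 100.0                      # leak distance from sensor 1 used to build the test signals
        tau_true = (D - 2 * d1_true) / v     # expected arrival-time difference (s)

        rng = np.random.default_rng(0)
        leak = rng.normal(size=int(4 * fs))                            # broadband leak noise
        shift = int(round(tau_true * fs))
        s1 = leak + 0.1 * rng.normal(size=leak.size)                   # signal at sensor 1
        s2 = np.roll(leak, shift) + 0.1 * rng.normal(size=leak.size)   # delayed copy at sensor 2

        xcorr = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
        tau_est = (np.argmax(xcorr) - (leak.size - 1)) / fs
        print("estimated leak distance from sensor 1:", (D - v * tau_est) / 2.0, "m")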

  10. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the everyday understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  11. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  12. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    properties can be modified by sound absorption, refraction, and interference from multi paths caused by reflections.The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms to influence...... communication sounds for airborne acoustics and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals....

  13. The effect of methacholine-induced acute airway narrowing on lung sounds in normal and asthmatic subjects

    NARCIS (Netherlands)

    Schreur, H. J.; Vanderschoot, J.; Zwinderman, A. H.; Dijkman, J. H.; Sterk, P. J.

    1995-01-01

    The association between lung sound alterations and airways obstruction has long been recognized in clinical practice, but the precise pathophysiological mechanisms of this relationship have not been determined. Therefore, we examined the changes in lung sounds at well-defined levels of

  14. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related...... to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy...... of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences....

  15. The effect of vocal fold vertical stiffness gradient on sound production

    Science.gov (United States)

    Geng, Biao; Xue, Qian; Zheng, Xudong

    2015-11-01

    It is observed in some experimental studies on canine vocal folds (VFs) that the inferior aspect of the vocal fold (VF) is much stiffer than the superior aspect under relatively large strain. Such a vertical difference is supposed to promote the convergent-divergent shape during VF vibration and consequently facilitate the production of sound. In this study, we investigate the effect of vertical variation of VF stiffness on sound production using a numerical model. The vertical variation of stiffness is produced by linearly increasing the Young's modulus and shear modulus from the superior to the inferior aspect in the cover layer, and its effect on phonation is examined in terms of aerodynamic and acoustic quantities such as flow rate, open quotient, skewness of the flow waveform, sound intensity and vocal efficiency. The flow-induced vibration of the VF is solved with a finite element solver coupled with a 1D Bernoulli equation, which is further coupled with a digital waveguide model. This study is designed to find out whether it is beneficial to artificially induce a vertical stiffness gradient with implant materials in VF restoration surgery and, if it is beneficial, what gradient is the most favorable.

  16. Sound velocity variation as function of polarization state in Lead Zirconate Titanate (PZT) Ceramics

    International Nuclear Information System (INIS)

    Essolaani, W; Farhat, N

    2012-01-01

    There are several ultrasonic techniques to measure the sound velocity, for example, the pulse-echo method. In such a method, the size of the transducer used to measure the sound velocity must be of the same order as the sample size. Otherwise, the size mismatch becomes a source of error in the sound velocity measurement. In this work, the Laser Induced Pressure Pulse (LIPP) method is used as the ultrasonic method. This method has been very useful for studying the spatial distribution of charges and polarization in dielectrics. We take advantage of the fact that the method allows measurement of the sound velocity to study its variation as a function of the polarization state in PZT ceramics. In a sample with a known thickness e, the sound velocity ν is deduced from the measurement of the transit time T. The sound velocity depends on the elastic constants, which in turn depend on the poling conditions. Thus, the variation of the sound velocity is related to the direction and the amplitude of the polarization.
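
    The velocity extraction step mentioned above is simply the ratio of the traversed thickness to the measured transit time. A single pass of the pressure pulse through the sample is assumed in the toy numbers below, which are invented rather than taken from the paper.

        # Hedged illustration of the transit-time relation v = e / T used above
        # (single pass through the sample assumed; the numbers are invented).
        thickness_m = 1.0e-3        # sample thickness e = 1 mm (assumed)
        transit_time_s = 0.25e-6    # measured transit time T = 0.25 microseconds (assumed)
        velocity = thickness_m / transit_time_s
        print(f"sound velocity = {velocity:.0f} m/s")   # 4000 m/s, a plausible order for PZT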

  17. Mapping localised freshwater anomalies in the brackish paleo-lake sediments of the Machile–Zambezi Basin with transient electromagnetic sounding, geoelectrical imaging and induced polarisation

    DEFF Research Database (Denmark)

    Chongo, Mkhuzo; Christiansen, Anders Vest; Fiandaca, Gianluca

    2015-01-01

    A recent airborne TEM survey in the Machile–Zambezi Basin of south western Zambia revealed high electrical resistivity anomalies (around 100 Ωm) in a low electrical resistivity (below 13 Ωm) background. The near surface (0–40 m depth range) electrical resistivity distribution of these anomalies...... appeared to be coincident with superficial features related to surface water such as alluvial fans and flood plains. This paper describes the application of transient electromagnetic soundings (TEM) and continuous vertical electrical sounding (CVES) using geo-electrics and time domain induced polarisation...... thins out and deteriorates in water quality further inland. It is postulated that the freshwater lens originated as a result of interaction between the Zambezi River and the salty aquifer in a setting in which evapotranspiration is the net climatic stress. Similar high electrical resistivity bodies were...

  18. The Modulated Sounds Made by the Tsetse Fly Glossina Brevipalpis ...

    African Journals Online (AJOL)

    The modulated sounds made by Glossina brevipalpis are physiologically and reflexly induced phenomena, produced by muscular vibrations in the pterothorax. The patterns and physical nature of the calls and songs were investigated acoustically, spectrographically and oscilloscopically to explore the possibility of a ...

  19. Sound production and pectoral spine locking in a Neotropical catfish (Iheringichthys labrosus, Pimelodidae)

    Directory of Open Access Journals (Sweden)

    Javier S. Tellechea

    Full Text Available Catfishes may have two sonic organs: pectoral spines for stridulation and swimbladder drumming muscles. The aim of this study was to characterize the sound production of the catfish Iheringichthys labrosus. Both male and female I. labrosus emit two different types of sounds: stridulatory sounds (655.8 ± 230 Hz), consisting of a train of pulses, and drumming sounds (220 ± 46 Hz), which are composed of single-pulse harmonic signals. Stridulatory sounds are emitted during abduction of the pectoral spine. At the base of the spine there is a dorsal process that bears a series of ridges on its latero-ventral surface, and by pressing the ridges against the groove (with an unspecialized rough surface) during a fin sweep, the animal produces a series of short pulses. Drumming sound is produced by an extrinsic sonic muscle, originating on a flat tendon of the transverse process of the fourth vertebra and inserting on the rostral and ventral surface of the swimbladder. Sounds from both mechanisms are emitted in distress situations. Distress was induced by manipulating fish in a laboratory tank while sounds were recorded. Our results indicate that the catfish initially emits a stridulatory sound, which is followed by a drumming sound. Simultaneous production of stridulatory and drumming sounds was also observed. The catfish drumming sounds were lower in dominant frequency than the stridulatory sounds and also exhibited a small degree of dominant-frequency modulation. Another behaviour observed in this catfish was pectoral spine locking, a reaction that always preceded the distress sound production. As other authors have outlined, our results suggest that in I. labrosus stridulatory and drumming sounds may function primarily as distress calls.

  20. Deflection of resilient materials for reduction of floor impact sound.

    Science.gov (United States)

    Lee, Jung-Yoon; Kim, Jong-Mun

    2014-01-01

    Recently, many residents living in apartment buildings in Korea have been bothered by noise coming from the units above. In order to reduce noise pollution, communities are increasingly imposing bylaws, including the limitation of floor impact sound, minimum thickness of floors, and floor soundproofing solutions. This research effort focused specifically on the deflection of resilient materials in the floor sound insulation systems of apartment houses. The experimental program comprised twenty-seven material tests and ten sound-insulating floating concrete floor specimens. Two main parameters were considered in the experimental investigation: seven types of resilient materials and the location of the loading point. The structural behavior of the sound-insulating floating floor was predicted using the Winkler method. The experimental and analytical results indicated that the cracking strength of the floating concrete floor increased significantly with increasing tangent modulus of the resilient material. The deflection of the floating concrete floor loaded at the side of the specimen was much greater than that of the floating concrete floor loaded at the center of the specimen. The Winkler model, which accounts for the modulus of the resilient material, was able to accurately predict the cracking strength of the floating concrete floor.
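
    The Winkler method referred to above idealizes the resilient layer as a bed of independent springs supporting the floating slab. As a rough one-dimensional analogue, the classical closed-form deflection of an infinite beam on a Winkler foundation under a point load is sketched below; the moduli, geometry and load are assumed values, and the actual study analysed plate-like floor specimens rather than a beam strip.

        import numpy as np

        # Infinite beam on a Winkler (elastic) foundation under a central point load P.
        # Classical solution: w(x) = (P*beta/(2*k)) * exp(-beta*|x|) * (cos(beta*|x|) + sin(beta*|x|)),
        # with beta = (k / (4*E*I))**0.25 and k the foundation modulus per unit beam length.

        E = 30e9            # concrete Young's modulus, Pa (assumed)
        b, h = 1.0, 0.05    # strip width and slab thickness, m (assumed)
        I = b * h**3 / 12   # second moment of area, m^4
        k_pad = 1.0e7       # resilient-layer modulus per unit area, N/m^3 (assumed, ~10 MN/m^3)
        k = k_pad * b       # foundation modulus per unit beam length, N/m^2
        P = 1.0e3           # point load, N (assumed)

        beta = (k / (4 * E * I)) ** 0.25
        x = np.linspace(0.0, 2.0, 201)
        w = (P * beta / (2 * k)) * np.exp(-beta * x) * (np.cos(beta * x) + np.sin(beta * x))
        print(f"deflection under the load: {w[0]*1e3:.3f} mm")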

  1. The Contribution of Sound Intensity in Vocal Emotion Perception: Behavioral and Electrophysiological Evidence

    Science.gov (United States)

    Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang

    2012-01-01

    Although its role in the acoustic profile of vocal emotion is frequently stressed, sound intensity is often regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the anger level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgments in Experiment 2. It was found that sound intensity modification had a significant effect on the anger ratings for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and smaller theta band synchronization. These findings suggest that although it cannot categorically affect the emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and that its unique role needs to be specified in vocal emotion studies. PMID:22291928

  2. The contribution of sound intensity in vocal emotion perception: behavioral and electrophysiological evidence.

    Directory of Open Access Journals (Sweden)

    Xuhai Chen

    Full Text Available Although its role in the acoustic profile of vocal emotion is frequently stressed, sound intensity is often regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the anger level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgments in Experiment 2. It was found that sound intensity modification had a significant effect on the anger ratings for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and smaller theta band synchronization. These findings suggest that although it cannot categorically affect the emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and that its unique role needs to be specified in vocal emotion studies.

  3. Color improves “visual” acuity via sound

    OpenAIRE

    Levy-Tzedek, Shelly; Riemer, Dar; Amedi, Amir

    2014-01-01

    Visual-to-auditory sensory substitution devices (SSDs) convey visual information via sound, with the primary goal of making visual information accessible to blind and visually impaired individuals. We developed the EyeMusic SSD, which transforms shape, location, and color information into musical notes. We tested the “visual” acuity of 23 individuals (13 blind and 10 blindfolded sighted) on the Snellen tumbling-E test, with the EyeMusic. Participants were asked to determine the orientation of...

  4. Atypical pattern of discriminating sound features in adults with Asperger syndrome as reflected by the mismatch negativity.

    Science.gov (United States)

    Kujala, T; Aho, E; Lepistö, T; Jansson-Verkasalo, E; Nieminen-von Wendt, T; von Wendt, L; Näätänen, R

    2007-04-01

    Asperger syndrome, which belongs to the autistic spectrum of disorders, is characterized by deficits of social interaction and abnormal perception, like hypo- or hypersensitivity in reacting to sounds and discriminating certain sound features. We determined auditory feature discrimination in adults with Asperger syndrome with the mismatch negativity (MMN), a neural response which is an index of cortical change detection. We recorded MMN for five different sound features (duration, frequency, intensity, location, and gap). Our results suggest hypersensitive auditory change detection in Asperger syndrome, as reflected in the enhanced MMN for deviant sounds with a gap or shorter duration, and speeded MMN elicitation for frequency changes.

  5. Memory for product sounds: the effect of sound and label type.

    Science.gov (United States)

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between the auditory, visual, and semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative of memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; that the density and the complexity of the visual information (i.e., pictograms) hinder memory performance ('visual' overshadowing effect); and that image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that memory performance for product sounds is task-dependent.

  6. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    Title 33, Navigation and Navigable Waters (revised as of 2010-07-01), § 167.1702: In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  7. Lymphocytes on sounding rocket flights.

    Science.gov (United States)

    Cogoli-Greuter, M; Pippia, P; Sciola, L; Cogoli, A

    1994-05-01

    Cell-cell interactions and the formation of cell aggregates are important events in mitogen-induced lymphocyte activation. The fact that the formation of cell aggregates is only slightly reduced in microgravity suggests that cells also move and interact in space, but direct evidence was still lacking. Here we report on two experiments carried out on a flight of the sounding rocket MAXUS 1B, launched in November 1992 from the Esrange base in Sweden. The rocket reached an altitude of 716 km and provided 12.5 min of microgravity conditions.

  8. Contingent sounds change the mental representation of one's finger length.

    Science.gov (United States)

    Tajadura-Jiménez, Ana; Vakali, Maria; Fairhurst, Merle T; Mandrigin, Alisa; Bianchi-Berthouze, Nadia; Deroy, Ophelia

    2017-07-18

    Mental body-representations are highly plastic and can be modified after brief exposure to unexpected sensory feedback. While the role of vision, touch and proprioception in shaping body-representations has been highlighted by many studies, the auditory influences on mental body-representations remain poorly understood. Changes in body-representations by the manipulation of natural sounds produced when one's body impacts on surfaces have recently been evidenced. But will these changes also occur with non-naturalistic sounds, which provide no information about the impact produced by or on the body? Drawing on the well-documented capacity of dynamic changes in pitch to elicit impressions of motion along the vertical plane and of changes in object size, we asked participants to pull on their right index fingertip with their left hand while they were presented with brief sounds of rising, falling or constant pitches, and in the absence of visual information of their hands. Results show an "auditory Pinocchio" effect, with participants feeling and estimating their finger to be longer after the rising pitch condition. These results provide the first evidence that sounds that are not indicative of veridical movement, such as non-naturalistic sounds, can induce a Pinocchio-like change in body-representation when arbitrarily paired with a bodily action.

  9. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  10. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: First, the filtered responses should generate...... an acoustic separation between the control regions. Secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...
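
    One standard way to compute such filters, in the spirit of the weighted L2-norm formulation mentioned above, is regularized pressure matching: loudspeaker weights reproduce a target pressure in the bright zone while energy radiated into the dark zone and the filter effort are penalized. The sketch below is a minimal single-frequency illustration under stated assumptions (hypothetical transfer matrices and weighting); it is not the specific filter design of the cited paper.

        import numpy as np

        def sound_zone_weights(G_bright, G_dark, p_target, kappa=1.0, lam=1e-3):
            """Single-frequency loudspeaker weights for a bright/dark zone pair.

            G_bright : (Mb x L) transfer matrix from L loudspeakers to bright-zone mics
            G_dark   : (Md x L) transfer matrix to dark-zone mics
            p_target : (Mb,) desired pressure in the bright zone
            kappa    : weight on dark-zone energy; lam : Tikhonov regularization
            """
            A = (G_bright.conj().T @ G_bright
                 + kappa * G_dark.conj().T @ G_dark
                 + lam * np.eye(G_bright.shape[1]))
            b = G_bright.conj().T @ p_target
            return np.linalg.solve(A, b)

        # Hypothetical example: 8 loudspeakers, 16 microphones per zone, random transfer functions.
        rng = np.random.default_rng(1)
        L, Mb, Md = 8, 16, 16
        Gb = rng.standard_normal((Mb, L)) + 1j * rng.standard_normal((Mb, L))
        Gd = rng.standard_normal((Md, L)) + 1j * rng.standard_normal((Md, L))
        q = sound_zone_weights(Gb, Gd, np.ones(Mb, dtype=complex))
        contrast = (np.linalg.norm(Gb @ q)**2 / Mb) / (np.linalg.norm(Gd @ q)**2 / Md)
        print(f"acoustic contrast between zones: {10*np.log10(contrast):.1f} dB")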

  11. EXTRACTION OF SPATIAL PARAMETERS FROM CLASSIFIED LIDAR DATA AND AERIAL PHOTOGRAPH FOR SOUND MODELING

    Directory of Open Access Journals (Sweden)

    S. Biswas

    2012-07-01

    Full Text Available Prediction of outdoor sound levels in 3D space is important for noise management, soundscaping etc. Sound levels outdoors can be predicted using sound propagation models, which need terrain parameters. The existing practices of incorporating terrain parameters into models are often limited by inadequate data or by the inability to determine accurate sound transmission paths through a terrain, which leads to poor modelling accuracy. LIDAR data and Aerial Photographs (or Satellite Images) provide an opportunity to incorporate high-resolution data into sound models. To realize this, identification of buildings and other objects and their use for extraction of terrain parameters are fundamental. However, developing a suitable technique to incorporate terrain parameters from classified LIDAR data and Aerial Photographs into sound modelling is a challenge. Determination of terrain parameters along the various transmission paths of sound from the source to a receiver becomes very complex in an urban environment due to the presence of varied and complex urban features. This paper presents a technique to identify the principal paths through which sound transmits from source to receiver. The identified principal paths are then incorporated into the sound model for sound prediction. Techniques based on plane cutting and line tracing are developed for determining principal paths and terrain parameters, using information such as building corners and edges, triangulated ground, tree points, and the locations of source and receiver. The techniques developed are validated through a field experiment. Finally, the efficacy of the proposed technique is demonstrated by developing a noise map for a test site.

  12. Can road traffic mask sound from wind turbines? Response to wind turbine sound at different levels of road traffic sound

    International Nuclear Information System (INIS)

    Pedersen, Eja; Berg, Frits van den; Bakker, Roel; Bouma, Jelte

    2010-01-01

    Wind turbines are favoured in the switch-over to renewable energy. Suitable sites for further developments could be difficult to find, as the sound emitted from the rotor blades calls for a sufficient distance to residents to avoid negative effects. The aim of this study was to explore whether road traffic sound could mask wind turbine sound or, in contrast, increase annoyance due to wind turbine noise. Annoyance with road traffic and wind turbine noise was measured in the WINDFARMperception survey in the Netherlands in 2007 (n=725) and related to calculated sound levels. The presence of road traffic sound did not in general decrease annoyance with wind turbine noise, except when levels of wind turbine sound were moderate (35-40 dB(A) Lden) and the road traffic sound level exceeded that level by at least 20 dB(A). Annoyance with the two noise sources was intercorrelated, but this correlation was probably due to the influence of individual factors. Furthermore, visibility of and attitude towards wind turbines were significantly related to noise annoyance from modern wind turbines. The results can be used for the selection of suitable sites, possibly favouring already noise-exposed areas if wind turbine sound levels are sufficiently low.
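
    Masking margins such as the 20 dB(A) difference reported above can be put in context by combining levels energetically. The short sketch below, with invented levels, shows how a total level and the contribution of the quieter source are computed; it is illustrative only.

        import numpy as np

        def combine_levels(levels_db):
            """Energetic (incoherent) sum of sound pressure levels given in dB."""
            return 10 * np.log10(np.sum(10 ** (np.asarray(levels_db) / 10)))

        # Hypothetical example: wind turbine at 38 dB(A), road traffic at 58 dB(A).
        turbine, road = 38.0, 58.0
        total = combine_levels([turbine, road])
        print(f"combined level: {total:.1f} dB(A)")                 # ~58.0 dB(A)
        print(f"increase over road alone: {total - road:.2f} dB")   # ~0.04 dB: the turbine barely adds to the total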

  13. Chronic early postnatal scream sound stress induces learning deficits and NMDA receptor changes in the hippocampus of adult mice.

    Science.gov (United States)

    Hu, Lili; Han, Bo; Zhao, Xiaoge; Mi, Lihua; Song, Qiang; Wang, Jue; Song, Tusheng; Huang, Chen

    2016-04-13

    Chronic scream sounds during adulthood affect spatial learning and memory, both of which are sexually dimorphic. The long-term effects of chronic early postnatal scream sound stress (SSS) during postnatal days 1-21 (P1-P21) on spatial learning and memory in adult mice as well as whether or not these effects are sexually dimorphic are unknown. Therefore, the present study examines the performance of adult male and female mice in the Morris water maze following exposure to chronic early postnatal SSS. Hippocampal NR2A and NR2B levels as well as NR2A/NR2B subunit ratios were tested using immunohistochemistry. In the Morris water maze, stress males showed greater impairment in spatial learning and memory than background males; by contrast, stress and background females performed equally well. NR2B levels in CA1 and CA3 were upregulated, whereas NR2A/NR2B ratios were downregulated in stressed males, but not in females. These data suggest that chronic early postnatal SSS influences spatial learning and memory ability, levels of hippocampal NR2B, and NR2A/NR2B ratios in adult males. Moreover, chronic early stress-induced alterations exert long-lasting effects and appear to affect performance in a sex-specific manner.

  14. Magnetospheric radio sounding

    International Nuclear Information System (INIS)

    Ondoh, Tadanori; Nakamura, Yoshikatsu; Koseki, Teruo; Watanabe, Sigeaki; Murakami, Toshimitsu

    1977-01-01

    Radio sounding of the plasmapause from a geostationary satellite has been investigated to observe time variations of the plasmapause structure and effects of the plasma convection. In the equatorial plane, the plasmapause is located, on average, at 4 R sub(E) (R sub(E): Earth radius), and the plasma density drops outwards from 10²-10³/cm³ to 1-10/cm³ over a plasmapause width of about 600 km. Plasmagrams showing the relation between the virtual range and sounding frequencies are computed by ray tracing of LF-VLF waves transmitted from a geostationary satellite, using model distributions of the electron density in the vicinity of the plasmapause. The general features of the plasmagrams are similar to topside ionograms. The plasmagram has no penetration frequency such as f0F2, but the virtual range of the plasmagram increases rapidly with frequency above 100 kHz, since the distance between a satellite and the wave reflection point increases rapidly with increasing electron density inside the plasmapause. The plasmapause sounder on a geostationary satellite has been designed by taking account of an average propagation distance of 2 x 2.6 R sub(E) between a satellite (6.6 R sub(E)) and the plasmapause (4.0 R sub(E)), background noise, range resolution, power consumption, and a receiver S/N of 10 dB. The 13-bit Barker-coded pulses with a baud length of 0.5 msec should be transmitted in a direction parallel to the orbital plane at frequencies from 10 kHz to 2 MHz at a pulse interval of 0.5 sec. Transmitter peak powers of 70 watts and 700 watts are required, respectively, in geomagnetically quiet and disturbed (strong nonthermal continuum emissions) conditions for a 400 meter cylindrical dipole of 1.2 cm diameter on the geostationary satellite. This technique will open a new area of radio sounding in the magnetosphere. (auth.)
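
    The usable sounding band follows from the local electron plasma frequency, which for the densities quoted above falls in the LF range. A back-of-the-envelope check is sketched below using the standard relation f_p ≈ 8.98 kHz · sqrt(n_e [cm⁻³]); the loop values simply span the densities mentioned in the abstract.

        import numpy as np

        def plasma_frequency_hz(n_e_per_cm3):
            """Electron plasma frequency, f_p ~ 8.98 kHz * sqrt(n_e in cm^-3)."""
            return 8.98e3 * np.sqrt(n_e_per_cm3)

        for n_e in (1, 10, 1e2, 1e3):
            print(f"n_e = {n_e:6g} cm^-3  ->  f_p = {plasma_frequency_hz(n_e)/1e3:7.1f} kHz")
        # Densities inside the plasmapause (10^2-10^3 cm^-3) give f_p of roughly 90-284 kHz,
        # consistent with the virtual range growing rapidly above ~100 kHz in the plasmagrams.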

  15. Structure-borne sound structural vibrations and sound radiation at audio frequencies

    CERN Document Server

    Cremer, L; Petersson, Björn AT

    2005-01-01

    "Structure-Borne Sound" is a thorough introduction to structural vibrations with emphasis on audio frequencies and the associated radiation of sound. The book presents in-depth discussions of fundamental principles and basic problems, in order to enable the reader to understand and solve his own problems. It includes chapters dealing with measurement and generation of vibrations and sound, various types of structural wave motion, structural damping and its effects, impedances and vibration responses of the important types of structures, as well as with attenuation of vibrations, and sound radiation.

  16. Depth study of insular shelf electric sounding Adelaida anomaly (Rivera)

    International Nuclear Information System (INIS)

    Cicalese, H.

    1983-01-01

    In the framework of the uranium prospecting programme, the DINAMIGE geophysical team carried out a depth study by electrical sounding of the insular shelf over the Adelaida anomaly zone. The study covers the following subjects: geographical location, geologic framework, geophysical intervention, work performed, methods and materials, and results.

  17. Water quality monitoring and data collection in the Mississippi sound

    Science.gov (United States)

    Runner, Michael S.; Creswell, R.

    2002-01-01

    The United States Geological Survey and the Mississippi Department of Marine Resources are collecting data on the quality of the water in the Mississippi Sound of the Gulf of Mexico, and streamflow data for its tributaries. The U.S. Geological Survey is collecting continuous water-level data, continuous and discrete water-temperature data, continuous and discrete specific-conductance data, as well as chloride and salinity samples at two locations in the Mississippi Sound and three Corps of Engineers tidal gages. Continuous-discharge data are also being collected at two additional stations on tributaries. The Mississippi Department of Marine Resources collects water samples at 169 locations in the Gulf of Mexico. Between 1800 and 2000 samples are collected annually which are analyzed for turbidity and fecal coliform bacteria. The continuous data are made available real-time through the internet and are being used in conjunction with streamflow data, weather data, and sampling data for the monitoring and management of the oyster reefs, the shrimp fishery and other marine species and their habitats.

  18. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  19. What characterizes changing-state speech in affecting short-term memory? An EEG study on the irrelevant sound effect.

    Science.gov (United States)

    Schlittmeier, Sabine J; Weisz, Nathan; Bertrand, Olivier

    2011-12-01

    The irrelevant sound effect (ISE) describes reduced verbal short-term memory during irrelevant changing-state sounds which consist of different and distinct auditory tokens. Steady-state sounds lack such changing-state features and do not impair performance. An EEG experiment (N=16) explored the distinguishing neurophysiological aspects of detrimental changing-state speech (3-token sequence) compared to ineffective steady-state speech (1-token sequence) on serial recall performance. We analyzed evoked and induced activity related to the memory items as well as spectral activity during the retention phase. The main finding is that the behavioral sound effect was exclusively reflected by attenuated token-induced gamma activation most pronounced between 50-60 Hz and 50-100 ms post-stimulus onset. Changing-state speech seems to disrupt a behaviorally relevant ongoing process during target presentation (e.g., the serial binding of the items). Copyright © 2011 Society for Psychophysiological Research.

  20. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  1. A new IEA document for the measurement of noise immission from wind turbines at receptor locations

    International Nuclear Information System (INIS)

    Ljunggren, Sten

    1999-01-01

    A new IEA guide on acoustic noise was recently completed by an international expert group. In this guide, several practical and reliable methods for determining wind turbine noise immission at receptor locations are presented: three methods for equivalent continuous A-weighted sound pressure levels and one method for A-weighted percentiles. In the most ambitious method for equivalent sound levels, the noise is measured together with the wind speed, the latter at two locations: one at the microphone and the other at the turbine site. With this approach, the turbine levels can be corrected for background sound and the immission level can be determined at a certain target wind speed. Special importance is attached to the problem of correcting for background noise and to techniques for improving the signal-to-noise ratio. Thus, six methods are described which can be used in difficult situations.
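
    Correcting a measured immission level for background sound, the problem stressed above, is commonly done by energetic subtraction of the background level measured with the turbine parked. A generic, hedged sketch of that correction is given below; the exact procedure and validity limits prescribed in the IEA document may differ, and the 3 dB margin rule is only an assumption.

        import numpy as np

        def background_corrected_level(L_total_db, L_background_db, min_margin_db=3.0):
            """Energetically subtract the background level from the total measured level.

            Returns None when the margin is too small for a reliable correction
            (a common practical rule; the threshold here is an assumption).
            """
            if L_total_db - L_background_db < min_margin_db:
                return None
            return 10 * np.log10(10 ** (L_total_db / 10) - 10 ** (L_background_db / 10))

        # Hypothetical example: 45.0 dB(A) measured with the turbine running, 40.0 dB(A) background.
        print(background_corrected_level(45.0, 40.0))   # ~43.3 dB(A) attributable to the turbine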

  2. Control of Toxic Chemicals in Puget Sound, Phase 3: Study of Atmospheric Deposition of Air Toxics to the Surface of Puget Sound

    Energy Technology Data Exchange (ETDEWEB)

    Brandenberger, Jill M.; Louchouarn, Patrick; Kuo, Li-Jung; Crecelius, Eric A.; Cullinan, Valerie I.; Gill, Gary A.; Garland, Charity R.; Williamson, J. B.; Dhammapala, R.

    2010-07-05

    The results of the Phase 1 Toxics Loading study suggested that runoff from the land surface and atmospheric deposition directly to marine waters have resulted in considerable loads of contaminants to Puget Sound (Hart Crowser et al. 2007). The limited data available for atmospheric deposition fluxes throughout Puget Sound was recognized as a significant data gap. Therefore, this study provided more recent or first reported atmospheric deposition fluxes of PAHs, PBDEs, and select trace elements for Puget Sound. Samples representing bulk atmospheric deposition were collected during 2008 and 2009 at seven stations around Puget Sound spanning from Padilla Bay south to Nisqually River including Hood Canal and the Straits of Juan de Fuca. Revised annual loading estimates for atmospheric deposition to the waters of Puget Sound were calculated for each of the toxics and demonstrated an overall decrease in the atmospheric loading estimates except for polybrominated diphenyl ethers (PBDEs) and total mercury (THg). The median atmospheric deposition flux of total PBDE (7.0 ng/m2/d) was higher than that of the Hart Crowser (2007) Phase 1 estimate (2.0 ng/m2/d). The THg was not significantly different from the original estimates. The median atmospheric deposition flux for pyrogenic PAHs (34.2 ng/m2/d; without TCB) shows a relatively narrow range across all stations (interquartile range: 21.2- 61.1 ng/m2/d) and shows no influence of season. The highest median fluxes for all parameters were measured at the industrial location in Tacoma and the lowest were recorded at the rural sites in Hood Canal and Sequim Bay. Finally, a semi-quantitative apportionment study permitted a first-order characterization of source inputs to the atmosphere of the Puget Sound. Both biomarker ratios and a principal component analysis confirmed regional data from the Puget Sound and Straits of Georgia region and pointed to the predominance of biomass and fossil fuel (mostly liquid petroleum products such

  3. Sound localization with head movement: implications for 3-d audio displays.

    Directory of Open Access Journals (Sweden)

    Ken Ian McAnally

    2014-08-01

    Full Text Available Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows 2°, 4°, 8°, 16°, 32°, or 64° of azimuth in width. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: The utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy.

  4. Sound-Making Actions Lead to Immediate Plastic Changes of Neuromagnetic Evoked Responses and Induced β-Band Oscillations during Perception.

    Science.gov (United States)

    Ross, Bernhard; Barat, Masihullah; Fujioka, Takako

    2017-06-14

    Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press. We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning. SIGNIFICANCE STATEMENT While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association

  5. The effect of scattering on sound field control with a circular double-layer array of loudspeakers

    DEFF Research Database (Denmark)

    Chang, Jiho; Jacobsen, Finn

    2012-01-01

    A recent study has shown that a circular double-layer array of loudspeakers makes it possible to achieve a sound field control that can generate a controlled field inside the array and reduce sound waves propagating outside the array. This is useful if it is desirable not to disturb people outside...... the array or to prevent the effect of reflections from the room. The study assumed free field condition, however in practice a listener will be located inside the array. The listener scatters sound waves, which propagate outward. Consequently, the scattering effect can be expected to degrade the performance...

  6. Analysis of adventitious lung sounds originating from pulmonary tuberculosis.

    Science.gov (United States)

    Becker, K W; Scheffer, C; Blanckenberg, M M; Diacon, A H

    2013-01-01

    Tuberculosis is a common and potentially deadly infectious disease, usually affecting the respiratory system and causing the sound properties of symptomatic infected lungs to differ from non-infected lungs. Auscultation is often ruled out as a reliable diagnostic technique for TB due to the random distribution of the infection and the varying severity of damage to the lungs. However, advancements in signal processing techniques for respiratory sounds can improve the potential of auscultation far beyond the capabilities of the conventional mechanical stethoscope. Though computer-based signal analysis of respiratory sounds has produced a significant body of research, there have not been any recent investigations into the computer-aided analysis of lung sounds associated with pulmonary Tuberculosis (TB), despite the severity of the disease in many countries. In this paper, respiratory sounds were recorded from 14 locations around the posterior and anterior chest walls of healthy volunteers and patients infected with pulmonary TB. The most significant signal features in both the time and frequency domains associated with the presence of TB, were identified by using the statistical overlap factor (SOF). These features were then employed to train a neural network to automatically classify the auscultation recordings into their respective healthy or TB-origin categories. The neural network yielded a diagnostic accuracy of 73%, but it is believed that automated filtering of the noise in the clinics, more training samples and perhaps other signal processing methods can improve the results of future studies. This work demonstrates the potential of computer-aided auscultation as an aid for the diagnosis and treatment of TB.
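
    As a rough, hedged illustration of the classification step described above, the sketch below trains a small feed-forward network on band-power features extracted from short recording segments. The band limits, network size and data are invented placeholders, and the statistical overlap factor used in the study for feature selection is not reproduced here.

        import numpy as np
        from scipy.signal import welch
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        def band_powers(segment, fs, bands=((50, 200), (200, 400), (400, 800), (800, 1600))):
            """Average spectral power in a few frequency bands (assumed feature set)."""
            f, pxx = welch(segment, fs=fs, nperseg=1024)
            return [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands]

        # Synthetic stand-in data: 200 one-second segments at 8 kHz with placeholder labels.
        fs = 8000
        rng = np.random.default_rng(0)
        segments = rng.standard_normal((200, fs))
        labels = rng.integers(0, 2, 200)            # 0 = healthy, 1 = TB (placeholder labels)

        X = np.array([band_powers(s, fs) for s in segments])
        X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_train, y_train)
        print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")   # ~0.5 here, since the labels are random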

  7. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear, but our uses for light and sound go far beyond simply seeing a photo or hearing a song. Lasers, concentrated beams of light, are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  8. Pressure sound level measurements at an educational environment in Goiania, Goias, Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Jhonatha J.L.; Nascimento, Eriberto O. do; Oliveira, Lucas N. de [Instituto Federal de Educação, Ciência e Tecnologia de Goiás (IFG), Goiânia, GO (Brazil); Caldas, Linda V. E., E-mail: lcaldas@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    In this work, twenty-five points located on the ground floor of the Federal Institute of Education, Science and Technology of Goias - IFG - Campus Goiania, were analyzed during the morning periods of two Saturdays. The sound pressure levels were measured in internal and external environments during routine activities, seeking to perform environmental monitoring at this institution. The initial hypothesis was that an amusement park (Mutirama Park) was responsible for noise pollution at the institution, but the results showed, within the campus environment, sound pressure levels in accordance with the Municipal legislation of Goiania for all points. (author)

  9. Pressure sound level measurements at an educational environment in Goiania, Goias, Brazil

    International Nuclear Information System (INIS)

    Costa, Jhonatha J.L.; Nascimento, Eriberto O. do; Oliveira, Lucas N. de; Caldas, Linda V. E.

    2017-01-01

    In this work, twenty-five points located on the ground floor of the Federal Institute of Education, Science and Technology of Goias - IFG - Campus Goiania, were analyzed during the morning periods of two Saturdays. The sound pressure levels were measured in internal and external environments during routine activities, seeking to perform environmental monitoring at this institution. The initial hypothesis was that an amusement park (Mutirama Park) was responsible for noise pollution at the institution, but the results showed, within the campus environment, sound pressure levels in accordance with the Municipal legislation of Goiania for all points. (author)
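
    Monitoring results of this kind are usually reported as an equivalent continuous level, Leq, over the measurement period. The sketch below shows the standard energetic average over a series of short-term A-weighted levels; the sample values are invented and the comparison threshold is only a placeholder, not the actual limit in the Goiania municipal legislation.

        import numpy as np

        def leq(levels_db):
            """Equivalent continuous sound level over equal-duration intervals (energetic mean)."""
            levels_db = np.asarray(levels_db, dtype=float)
            return 10 * np.log10(np.mean(10 ** (levels_db / 10)))

        # Hypothetical one-minute A-weighted levels at a single monitoring point.
        samples = [52.1, 55.3, 49.8, 61.2, 57.4, 53.0]
        limit_db = 55.0      # placeholder daytime limit, not the actual legal value
        value = leq(samples)
        print(f"Leq = {value:.1f} dB(A); {'within' if value <= limit_db else 'above'} the assumed limit")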

  10. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with the sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material reveal the form...... geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form....

  11. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    During the recording of lung sound (LS) signals from the chest wall of a subject, there is always a heart sound (HS) signal interfering with it. This obscures the features of the lung sound signals and creates confusion about any pathological state of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference from the desired lung sound signals. In this approach, the mixed signal is split into several components. Some of these components contain larger proportions of interfering signals, such as heart sound and environmental noise, and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart sound and lung sound. The proposed method is found to be superior in terms of time domain, frequency domain, and time-frequency domain representations, and also in a listening test performed by a pulmonologist.
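
    A minimal sketch of the general EMD-based idea (not the exact selection rule of the paper) is given below: the mixed recording is decomposed into intrinsic mode functions, modes dominated by low-frequency heart-sound energy are discarded, and the remaining modes are summed into a cleaned lung-sound estimate. It assumes the third-party PyEMD package (PyPI name EMD-signal) is available and uses a simplistic frequency-based criterion for deciding which modes to drop.

        import numpy as np
        from PyEMD import EMD          # third-party package "EMD-signal" (assumed available)

        def reduce_heart_sound(mixed, fs, hs_cutoff_hz=150.0):
            """Suppress heart-sound interference by dropping low-frequency IMFs.

            mixed        : mixed lung + heart sound recording (1-D array)
            fs           : sampling rate in Hz
            hs_cutoff_hz : IMFs whose mean frequency falls below this value are treated
                           as heart-sound dominated (a simplistic, assumed criterion)
            """
            imfs = EMD().emd(mixed)
            keep = []
            for imf in imfs:
                # Crude mean-frequency estimate from the zero-crossing rate.
                crossings = np.sum(np.diff(np.signbit(imf).astype(int)) != 0)
                mean_freq = crossings * fs / (2.0 * len(imf))
                if mean_freq >= hs_cutoff_hz:
                    keep.append(imf)
            return np.sum(keep, axis=0) if keep else np.zeros_like(mixed)

        # Hypothetical usage with a synthetic mixture (stand-ins, not real recordings).
        fs = 4000
        t = np.arange(0, 5, 1 / fs)
        lung = np.random.default_rng(0).standard_normal(len(t)) * 0.2                 # broadband stand-in
        heart = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)     # low-frequency bursts
        cleaned = reduce_heart_sound(lung + heart, fs)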

  12. Use Of Vertical Electrical Sounding Survey For Study Groundwater In NISSAH Region, SAUDI ARABIA

    Science.gov (United States)

    Alhenaki, Bander; Alsoma, Ali

    2015-04-01

    The aim of this research is to investigate groundwater depth in an area with desert and dry environmental conditions. The study site is located in Wadi Nisah, in the eastern part of the Najd province (east-central Saudi Arabia). Generally, the study site is underlain by Phanerozoic sedimentary rocks of the western edge of the Arabian platform, which rests on Proterozoic basement at depths ranging between 5 and 8 km. Another key objective of this research is to assess the water table and identify the structure of the water-bearing layers in the study area using the Vertical Electrical Sounding (VES) 1D imaging technique. We acquired vertical electrical soundings along sections of 315 meters using the Schlumberger field arrangement; the dataset was collected along 9 profiles. The resistivity Schlumberger sounding was carried out with half-spacings in the range of 500 m. The VES survey was intended to cover several locations where existing well information could be used for correlation, as well as locations along the valley, using the Syscal R2 instrument. The results of this study indicate at least three sedimentary layers down to a depth of 130 meters. The first layer, extending from the surface to a depth of about 3 meters, is a dry sandy layer with high resistivity values. The second layer underlies the first down to a depth of 70 meters and is less resistive than the first layer. The last layer has low resistivity values of about 20 ohm·m down to a depth of 130 meters below the ground surface. We observed a complex pattern of groundwater depth (ranging from 80 to 120 meters), which may reflect the lateral heterogeneity of the study site. The outcomes of this research have been used to locate suitable drilling locations.
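
    For reference, the quantity interpreted at each electrode spacing in a Schlumberger sounding is the apparent resistivity obtained from the measured voltage and the injected current. A hedged sketch of that standard conversion is given below with invented readings; the 1D inversion that yields layer thicknesses and depths is not reproduced.

        import numpy as np

        def schlumberger_apparent_resistivity(ab_half, mn, delta_v, current):
            """Apparent resistivity (ohm*m) for a Schlumberger array.

            ab_half : half current-electrode spacing AB/2, in m
            mn      : potential-electrode spacing MN, in m
            delta_v : measured potential difference, in V
            current : injected current, in A
            """
            k = np.pi * (ab_half**2 - (mn / 2.0)**2) / mn   # geometric factor
            return k * delta_v / current

        # Hypothetical field readings (AB/2 in m, MN in m, dV in V, I in A).
        readings = [(10.0, 2.0, 0.120, 0.5), (50.0, 2.0, 0.0055, 0.5), (200.0, 10.0, 0.0009, 0.5)]
        for ab2, mn, dv, i in readings:
            rho_a = schlumberger_apparent_resistivity(ab2, mn, dv, i)
            print(f"AB/2 = {ab2:6.1f} m  ->  rho_a = {rho_a:8.1f} ohm*m")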

  13. Combined multibeam and bathymetry data from Rhode Island Sound and Block Island Sound: a regional perspective

    Science.gov (United States)

    Poppe, Lawrence J.; McMullen, Katherine Y.; Danforth, William W.; Blankenship, Mark R.; Clos, Andrew R.; Glomb, Kimberly A.; Lewit, Peter G.; Nadeau, Megan A.; Wood, Douglas A.; Parker, Castleton E.

    2014-01-01

    Detailed bathymetric maps of the sea floor in Rhode Island and Block Island Sounds are of great interest to the New York, Rhode Island, and Massachusetts research and management communities because of this area's ecological, recreational, and commercial importance. Geologically interpreted digital terrain models from individual surveys provide important benthic environmental information, yet many applications of this information require a geographically broader perspective. For example, individual surveys are of limited use for the planning and construction of cross-sound infrastructure, such as cables and pipelines, or for the testing of regional circulation models. To address this need, we integrated 14 contiguous multibeam bathymetric datasets that were produced by the National Oceanic and Atmospheric Administration during charting operations into one digital terrain model that covers much of Block Island Sound and extends eastward across Rhode Island Sound. The new dataset, which covers over 1244 square kilometers, is adjusted to mean lower low water, gridded to 4-meter resolution, and provided in Universal Transverse Mercator Zone 19, North American Datum of 1983 and geographic World Geodetic System of 1984 projections. This resolution is adequate for sea-floor feature and process interpretation, yet the dataset is small enough to be queried and manipulated with standard Geographic Information System programs and to allow for future growth. Natural features visible in the data include boulder lag deposits of winnowed Pleistocene strata, sand-wave fields, and scour depressions that reflect the strength of oscillating tidal currents and scour by storm-induced waves. Bedform asymmetry allows interpretations of net sediment transport. Anthropogenic features visible in the data include shipwrecks and dredged channels. Together the merged data reveal a larger, more continuous perspective of bathymetric topography than previously available, providing a fundamental framework for

  14. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  15. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  16. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  17. Superior Analgesic Effect of an Active Distraction versus Pleasant Unfamiliar Sounds and Music: The Influence of Emotion and Cognitive Style

    Science.gov (United States)

    Garza Villarreal, Eduardo A.; Brattico, Elvira; Vase, Lene; Østergaard, Leif; Vuust, Peter

    2012-01-01

    Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception. PMID:22242169

  18. Superior analgesic effect of an active distraction versus pleasant unfamiliar sounds and music: the influence of emotion and cognitive style.

    Directory of Open Access Journals (Sweden)

    Eduardo A Garza Villarreal

    Full Text Available Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception.

  19. Superior analgesic effect of an active distraction versus pleasant unfamiliar sounds and music: the influence of emotion and cognitive style.

    Science.gov (United States)

    Villarreal, Eduardo A Garza; Brattico, Elvira; Vase, Lene; Østergaard, Leif; Vuust, Peter

    2012-01-01

    Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception.

  20. Broadband low-frequency sound isolation by lightweight adaptive metamaterials

    Science.gov (United States)

    Liao, Yunhong; Chen, Yangyang; Huang, Guoliang; Zhou, Xiaoming

    2018-03-01

    Blocking broadband low-frequency airborne noise is highly desirable in many engineering applications, yet it is extremely difficult to realize with lightweight materials and/or structures. Recently, a new class of lightweight adaptive metamaterials with hybrid shunting circuits has been proposed, demonstrating super-broadband structure-borne bandgaps. In this study, we examine their potential for broadband sound isolation by establishing an analytical model that rigorously combines the piezoelectric dynamic couplings between the adaptive metamaterial and the acoustic field. The sound transmission loss of the adaptive metamaterial is investigated over both the frequency and angular spectrum to demonstrate its sound-insulation effect. We find that efficient sound isolation can indeed be achieved over this broad bi-spectrum, not only for small resonator periodicity, where only the mass-spring resonance mode exists, but also for the large-periodicity scenario, in which multiple plate-resonator coupling modes appear and the total weight can be even lower. In the latter case, the negative spring stiffness provided by the piezoelectric stack is utilized to suppress the resonance-induced high acoustic transmission. Such adaptive metamaterials could open a new approach to broadband noise isolation with extremely lightweight structures.
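
    The difficulty noted above is rooted in the mass law, which ties the transmission loss of an ordinary limp panel to its surface mass and to frequency, so that light panels perform poorly at low frequencies. The sketch below evaluates the normal-incidence mass law for two assumed surface masses; metamaterial designs aim to beat this baseline in targeted bands.

        import numpy as np

        RHO0_C0 = 415.0   # characteristic impedance of air, Pa*s/m (approximate, room conditions)

        def mass_law_tl_db(freq_hz, surface_mass_kg_m2):
            """Normal-incidence mass-law transmission loss of a limp panel, in dB."""
            return 20 * np.log10(np.pi * freq_hz * surface_mass_kg_m2 / RHO0_C0)

        # Assumed surface masses: 2 kg/m^2 (lightweight panel) vs 20 kg/m^2 (heavy wall).
        for m in (2.0, 20.0):
            for f in (100.0, 1000.0):
                print(f"m = {m:4.0f} kg/m^2, f = {f:6.0f} Hz  ->  TL = {mass_law_tl_db(f, m):5.1f} dB")
        # At 100 Hz a 2 kg/m^2 panel gives only about 3.6 dB, illustrating why broadband
        # low-frequency isolation with lightweight structures needs other mechanisms.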

  1. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  2. Regarding "A new method for predicting nonlinear structural vibrations induced by ground impact loading" [Journal of Sound and Vibration, 331/9 (2012) 2129-2140]

    Science.gov (United States)

    Cartmell, Matthew P.

    2016-09-01

    The Editor wishes to make the reader aware that the paper "A new method for predicting nonlinear structural vibrations induced by ground impact loading" by Jun Liu, Yu Zhang, Bin Yun, Journal of Sound and Vibration, 331 (2012) 2129-2140, did not contain a direct citation of the fundamental and original work in this field by Dr. Mark Svinkin. The Editor regrets that this omission was not noted at the time that the above paper was accepted and published.

  3. Spatial Statistics of Deep-Water Ambient Noise; Dispersion Relations for Sound Waves and Shear Waves

    Science.gov (United States)

    2015-09-30

    Report excerpts include the publications "…bi-linear hydrophone array to locate biological sound sources on a coral reef", J. Acoust. Soc. Am. 137, 30-41 (2015) [published, refereed], and …Friedlander, A. K. Gregg, S. A. Sandin and M. J. Buckingham, "The origins of ambient biological sound from coral reef ecosystems in the Line Islands…", together with a description of an instrument that descends under gravity and, after releasing a drop weight at a pre-assigned depth, returns to the surface under buoyancy.

  4. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity...... and initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. A European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs...... 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24...

  5. A data-assimilative ocean forecasting system for the Prince William sound and an evaluation of its performance during sound Predictions 2009

    Science.gov (United States)

    Farrara, John D.; Chao, Yi; Li, Zhijin; Wang, Xiaochun; Jin, Xin; Zhang, Hongchun; Li, Peggy; Vu, Quoc; Olsson, Peter Q.; Schoch, G. Carl; Halverson, Mark; Moline, Mark A.; Ohlmann, Carter; Johnson, Mark; McWilliams, James C.; Colas, Francois A.

    2013-07-01

    Island Passage and Montague Strait entrance. During the latter part of the second week when surface winds were light and southwesterly, the mean surface flow at the Hinchinbrook Entrance reversed to weak outflow and a cyclonic eddy formed in the central Sound. Overall, RMS differences between ROMS surface currents and observed HF radar surface currents in the central Sound were generally between 5 and 10 cm/s, about 20-40% of the time mean current speeds. The ROMS reanalysis is then validated against independent observations. A comparison of the ROMS currents with observed vertical current profiles from moored ADCPs in the Hinchinbrook Entrance and Montague Strait shows good qualitative agreement and confirms the evolution of the near surface inflow/outflow at these locations described above. A comparison of the ROMS surface currents with drifter trajectories provided additional confirmation that the evolution of the surface flow described above was realistic. Forecasts of drifter locations had RMS errors of less than 10 km for up to 36 h. One- and two-day forecasts of surface temperature, salinity and current fields were more skillful than persistence forecasts. In addition, ensemble mean forecasts were found to be slightly more skillful than single forecasts. Two case studies demonstrated the system's qualitative skill in predicting subsurface changes within the mixed layer measured by ships and autonomous underwater vehicles. In summary, the system is capable of producing a realistic evolution of the near-surface circulation within PWS including forecasts of up to two days of this evolution. Use of the products provided by the system during the experiment as part of the asset deployment decision making process demonstrated the value of accurate regional ocean forecasts in support of field experiments.

  6. Fin whale sound reception mechanisms: skull vibration enables low-frequency hearing.

    Directory of Open Access Journals (Sweden)

    Ted W Cranford

    Full Text Available Hearing mechanisms in baleen whales (Mysticeti are essentially unknown but their vocalization frequencies overlap with anthropogenic sound sources. Synthetic audiograms were generated for a fin whale by applying finite element modeling tools to X-ray computed tomography (CT scans. We CT scanned the head of a small fin whale (Balaenoptera physalus in a scanner designed for solid-fuel rocket motors. Our computer (finite element modeling toolkit allowed us to visualize what occurs when sounds interact with the anatomic geometry of the whale's head. Simulations reveal two mechanisms that excite both bony ear complexes, (1 the skull-vibration enabled bone conduction mechanism and (2 a pressure mechanism transmitted through soft tissues. Bone conduction is the predominant mechanism. The mass density of the bony ear complexes and their firmly embedded attachments to the skull are universal across the Mysticeti, suggesting that sound reception mechanisms are similar in all baleen whales. Interactions between incident sound waves and the skull cause deformations that induce motion in each bony ear complex, resulting in best hearing sensitivity for low-frequency sounds. This predominant low-frequency sensitivity has significant implications for assessing mysticete exposure levels to anthropogenic sounds. The din of man-made ocean noise has increased steadily over the past half century. Our results provide valuable data for U.S. regulatory agencies and concerned large-scale industrial users of the ocean environment. This study transforms our understanding of baleen whale hearing and provides a means to predict auditory sensitivity across a broad spectrum of sound frequencies.

  7. Remembering that big things sound big: Sound symbolism and associative memory.

    Science.gov (United States)

    Preziosi, Melissa A; Coane, Jennifer H

    2017-01-01

    According to sound symbolism theory, individual sounds or clusters of sounds can convey meaning. To examine the role of sound symbolic effects on processing and memory for nonwords, we developed a novel set of 100 nonwords to convey largeness (nonwords containing plosive consonants and back vowels) and smallness (nonwords containing fricative consonants and front vowels). In Experiments 1A and 1B, participants rated the size of the 100 nonwords and provided definitions to them as if they were products. Nonwords composed of fricative/front vowels were rated as smaller than those composed of plosive/back vowels. In Experiment 2, participants studied sound symbolic congruent and incongruent nonword and participant-generated definition pairings. Definitions paired with nonwords that matched the size and participant-generated meanings were recalled better than those that did not match. When the participant-generated definitions were re-paired with other nonwords, this mnemonic advantage was reduced, although still reliable. In a final free association study, the possibility that plosive/back vowel and fricative/front vowel nonwords elicit sound symbolic size effects due to mediation from word neighbors was ruled out. Together, these results suggest that definitions that are sound symbolically congruent with a nonword are more memorable than incongruent definition-nonword pairings. This work has implications for the creation of brand names and how to create brand names that not only convey desired product characteristics, but also are memorable for consumers.

  8. Computerised respiratory sounds can differentiate smokers and non-smokers.

    Science.gov (United States)

    Oliveira, Ana; Sen, Ipek; Kahya, Yasemin P; Afreixo, Vera; Marques, Alda

    2017-06-01

    Cigarette smoking is often associated with the development of several respiratory diseases; however, if diagnosed early, the changes in the lung tissue caused by smoking may be reversible. Computerised respiratory sounds have been shown to be sensitive to changes within the lung tissue before any other measure, but it is unknown whether they can detect changes in the lungs of healthy smokers. This study investigated the differences between computerised respiratory sounds of healthy smokers and non-smokers. Healthy smokers and non-smokers were recruited from a university campus. Respiratory sounds were recorded simultaneously at 6 chest locations (right and left anterior, lateral and posterior) using air-coupled electret microphones. Airflow (1.0-1.5 l/s) was recorded with a pneumotachograph. Breathing phases were detected using airflow signals and respiratory sounds with validated algorithms. Forty-four participants were enrolled: 18 smokers (mean age 26.2, SD = 7 years; mean FEV1% predicted 104.7, SD = 9) and 26 non-smokers (mean age 25.9, SD = 3.7 years; mean FEV1% predicted 96.8, SD = 20.2). Smokers presented significantly higher frequency at maximum sound intensity during inspiration (M = 117, SD = 16.2 Hz vs. M = 106.4, SD = 21.6 Hz; t(43) = -2.62, p = 0.0081, dz = 0.55), lower expiratory sound intensities (maximum intensity: M = 48.2, SD = 3.8 dB vs. M = 50.9, SD = 3.2 dB; t(43) = 2.68, p = 0.001, dz = -0.78; mean intensity: M = 31.2, SD = 3.6 dB vs. M = 33.7, SD = 3 dB; t(43) = 2.42, p = 0.001, dz = 0.75) and a higher number of inspiratory crackles (median [interquartile range] 2.2 [1.7-3.7] vs. 1.5 [1.2-2.2], p = 0.081, U = 110, r = -0.41) than non-smokers. Significant differences between computerised respiratory sounds of smokers and non-smokers have been found. Changes in respiratory sounds are often the earliest sign of disease. Thus, computerised respiratory sounds
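
    As a minimal illustration of one of the features reported above, the sketch below estimates the "frequency at maximum sound intensity" of a breathing phase from a periodogram. The windowing, band limits and synthetic signal are assumptions for illustration, not the study's actual processing chain.

```python
import numpy as np

def freq_at_max_intensity(signal, fs, fmin=60.0, fmax=2000.0):
    """Frequency (Hz) of the spectral peak of one breathing phase.

    A simple periodogram-based stand-in for the reported feature; the original
    work may use different windowing, band limits, and averaging.
    """
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal)))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic inspiratory segment: broadband noise plus a 120 Hz component
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
x = 0.3 * np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.randn(t.size)
print(freq_at_max_intensity(x, fs))  # expected near 120 Hz
```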

  9. An Anthropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, to a talk about anthropology of sound, sound studies, musical canons and ideology.

  10. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    Science.gov (United States)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution
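
    As a rough illustration of the random, dynamic down-sampling idea summarised above, the following Python sketch draws a fresh random subset of soundings at each inversion iteration, with the subset size tied to the current regularization level. The function name, cooling schedule and numbers are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def select_soundings(n_total, beta, beta_max, min_frac=0.05, seed=None):
    """Randomly pick the subset of sounding indices used in one inversion iteration.

    Illustrative schedule only: the fraction of soundings grows as the
    regularization parameter beta is cooled from beta_max, so strongly
    regularized early iterations use few soundings and later ones use more.
    """
    rng = np.random.default_rng(seed)
    frac = float(np.clip(min_frac * beta_max / beta, min_frac, 1.0))
    n_pick = max(1, int(round(frac * n_total)))
    return rng.choice(n_total, size=n_pick, replace=False)

# Example: 10,000 soundings, regularization cooled over three iterations
for it, beta in enumerate([100.0, 10.0, 1.0]):
    subset = select_soundings(10_000, beta, beta_max=100.0, seed=it)
    print(f"iteration {it}: {subset.size} of 10000 soundings")
```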

  11. Measurement of the thermal diffusivity and speed of sound of hydrothermal solutions via the laser-induced grating technique

    International Nuclear Information System (INIS)

    Butenhoff, T.J.

    1994-01-01

    Hydrothermal processing is being developed as a method for organic destruction for the Hanford Site in Washington. Hydrothermal processing refers to the redox reactions of chemical compounds in supercritical or near-supercritical aqueous solutions. In order to design reactors for the hydrothermal treatment of complicated mixtures found in the Hanford wastes, engineers need to know the thermophysical properties of the solutions under hydrothermal conditions. The author used the laser-induced grating technique to measure the thermal diffusivity and speed of sound of hydrothermal solutions. In this non-invasive optical technique, a transient grating is produced in the hydrothermal solution by optical absorption from two crossed time-coincident nanosecond laser pulses. The grating is probed by measuring the diffraction efficiency of a third laser beam. The grating relaxes via thermal diffusion, and the thermal diffusivity can be determined by measuring the decay of the grating diffraction efficiency as a function of the pump-probe delay time. In addition, intense pump pulses produce counterpropagating acoustic waves that appear as large undulations in the transient grating decay spectrum. The speed of sound in the sample is simply the grating fringe spacing divided by the undulation period. The cell is made from a commercial high pressure fitting and is equipped with two diamond windows for optical access. Results are presented for dilute dye/water solutions with T = 400 C and pressures between 20 and 70 MPa
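
    The two data-reduction steps described above reduce to very small formulas. The sketch below assumes the standard two-beam fringe spacing Λ = λ/(2 sin(θ/2)), takes the sound speed as fringe spacing divided by undulation period (as stated in the abstract), and assumes the grating amplitude decays as exp(−Dq²t) for the diffusivity estimate; all numerical values are hypothetical.

```python
import numpy as np

def grating_fringe_spacing(wavelength_pump, crossing_angle_rad):
    """Fringe spacing Lambda of two crossed pump beams: lambda / (2 sin(theta/2))."""
    return wavelength_pump / (2.0 * np.sin(crossing_angle_rad / 2.0))

def speed_of_sound(fringe_spacing, undulation_period):
    """As stated above: sound speed = fringe spacing / acoustic undulation period."""
    return fringe_spacing / undulation_period

def thermal_diffusivity(fringe_spacing, decay_time):
    """Assumes the grating amplitude decays as exp(-D q^2 t) with q = 2*pi/Lambda,
    so D = 1 / (q^2 * tau). Whether tau refers to amplitude or diffracted
    intensity must match the experiment; this is an illustrative convention."""
    q = 2.0 * np.pi / fringe_spacing
    return 1.0 / (q**2 * decay_time)

# Hypothetical numbers only
Lam = grating_fringe_spacing(532e-9, np.deg2rad(1.0))   # ~30 micrometres
print(speed_of_sound(Lam, 60e-9))                        # undulation period ~60 ns
print(thermal_diffusivity(Lam, 100e-6))                  # decay time ~100 microseconds
```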

  12. Forced sound transmission through a finite-sized single leaf panel subject to a point source excitation.

    Science.gov (United States)

    Wang, Chong

    2018-03-01

    In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite sized panel. The forced sound transmission performance that predominates in the frequency range below the coincidence frequency is the focus. Given the point source located along the centerline of the panel, forced sound transmission coefficient is derived through introducing the sound radiation impedance for spherical incident waves. It is found that in addition to the panel mass, forced sound transmission loss also depends on the distance from the source to the panel as determined by the radiation impedance. Unlike the case of plane incident waves, sound transmission performance of a finite sized panel does not necessarily converge to that of an infinite panel, especially when the source is away from the panel. For practical applications, the normal incidence sound transmission loss expression of plane incident waves can be used if the distance between the source and panel d and the panel surface area S satisfy d/S>0.5. When d/S ≈0.1, the diffuse field sound transmission loss expression may be a good approximation. An empirical expression for d/S=0  is also given.
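
    A hedged sketch of how the d/S guidance quoted above might be applied in practice, using textbook mass-law expressions rather than the paper's own formulation; the d/S ratio is used exactly as quoted in the abstract, and the blending between regimes and all parameter values are assumptions.

```python
import numpy as np

RHO0, C0 = 1.21, 343.0  # assumed air density (kg/m^3) and speed of sound (m/s)

def tl_normal_incidence(f, m):
    """Textbook normal-incidence mass law (not the paper's own expression)."""
    return 10.0 * np.log10(1.0 + (np.pi * f * m / (RHO0 * C0)) ** 2)

def tl_diffuse(f, m):
    """Common field-incidence approximation: normal-incidence value minus 5 dB."""
    return tl_normal_incidence(f, m) - 5.0

def forced_tl_point_source(f, m, d, S):
    """Pick an expression following the abstract's d/S guidance (quoted as given):
    d/S > 0.5 -> normal incidence; d/S around 0.1 -> diffuse-field approximation."""
    ratio = d / S
    if ratio > 0.5:
        return tl_normal_incidence(f, m)
    if ratio <= 0.15:            # "approximately 0.1" band, an assumption here
        return tl_diffuse(f, m)
    w = (ratio - 0.15) / (0.5 - 0.15)   # crude interpolation between regimes
    return w * tl_normal_incidence(f, m) + (1 - w) * tl_diffuse(f, m)

print(forced_tl_point_source(f=500.0, m=10.0, d=2.0, S=3.0))  # 10 kg/m^2 panel
```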

  13. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound......-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound...... from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  14. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound

  15. On cloud ice induced absorption and polarisation effects in microwave limb sounding

    Directory of Open Access Journals (Sweden)

    P. Eriksson

    2011-06-01

    Full Text Available Microwave limb sounding in the presence of ice clouds was studied by detailed simulations, where clouds and other atmospheric variables varied in three dimensions and the full polarisation state was considered. Scattering particles were assumed to be horizontally aligned oblate spheroids with a size distribution parameterized in terms of temperature and ice water content. A general finding was that particle absorption is significant for limb sounding, which is in contrast to the down-looking case, where it is usually insignificant. Another general finding was that single scattering can be assumed for cloud optical paths below about 0.1, which is thus an important threshold with respect to the complexity and accuracy of retrieval algorithms. The representation of particle sizes during the retrieval is also discussed. Concerning polarisation, specific findings were as follows: Firstly, no significant degree of circular polarisation was found for the considered particle type. Secondly, for the ±45° polarisation components, differences of up to 4 K in brightness temperature were found, but differences were much smaller when single scattering conditions applied. Thirdly, the vertically polarised component has the smallest cloud extinction. An important goal of the study was to derive recommendations for future limb sounding instruments, particularly concerning their polarisation setup. If ice water content is among the retrieval targets (and not just trace gas mixing ratios), then the simulations show that it should be best to observe any of the ±45° and circularly polarised components. These pairs of orthogonal components also make it easier to combine information measured from different positions and with different polarisations.

  16. Science Education Using a Computer Model-Virtual Puget Sound

    Science.gov (United States)

    Fruland, R.; Winn, W.; Oppenheimer, P.; Stahr, F.; Sarason, C.

    2002-12-01

    We created an interactive learning environment based on an oceanographic computer model of Puget Sound-Virtual Puget Sound (VPS)-as an alternative to traditional teaching methods. Students immersed in this navigable 3-D virtual environment observed tidal movements and salinity changes, and performed tracer and buoyancy experiments. Scientific concepts were embedded in a goal-based scenario to locate a new sewage outfall in Puget Sound. Traditional science teaching methods focus on distilled representations of agreed-upon knowledge removed from real-world context and scientific debate. Our strategy leverages students' natural interest in their environment, provides meaningful context and engages students in scientific debate and knowledge creation. Results show that VPS provides a powerful learning environment, but highlights the need for research on how to most effectively represent concepts and organize interactions to support scientific inquiry and understanding. Research is also needed to ensure that new technologies and visualizations do not foster misconceptions, including the impression that the model represents reality rather than being a useful tool. In this presentation we review results from prior work with VPS and outline new work for a modeling partnership recently formed with funding from the National Ocean Partnership Program (NOPP).

  17. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    Full Text Available The relationship between meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages we find commonalities among sound shapes for words referring to same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  18. Application of porous material to reduce aerodynamic sound from bluff bodies

    International Nuclear Information System (INIS)

    Sueki, Takeshi; Takaishi, Takehisa; Ikeda, Mitsuru; Arai, Norio

    2010-01-01

    Aerodynamic sound derived from bluff bodies can be considerably reduced by flow control. In this paper, the authors propose a new method in which porous material covers a body surface as one of the flow control methods. From wind tunnel tests on flows around a bare cylinder and a cylinder with porous material, it has been clarified that the application of porous materials is effective in reducing aerodynamic sound. The correlation between aerodynamic sound and aerodynamic force fluctuation, and the surface pressure distribution of the cylinders, are measured to investigate the mechanism of aerodynamic sound reduction. As a result, the correlation between aerodynamic sound and aerodynamic force fluctuation exists in the flow around the bare cylinder and disappears in the flow around the cylinder with porous material. Moreover, the aerodynamic force fluctuation of the cylinder with porous material is less than that of the bare cylinder. The surface pressure distribution of the cylinder with porous material is quite different from that of the bare cylinder. These facts indicate that aerodynamic sound is reduced by suppressing the motion of vortices, because aerodynamic sound is induced by the unstable motion of vortices. In addition, the instantaneous flow field in the wake of the cylinder is measured by application of the PIV technique. Vortices that are shed alternately from the bare cylinder disappear with the application of porous material, and the region of zero velocity spreads widely behind the cylinder with porous material. Shear layers between the stationary region and the uniform flow become thin and stable. These results suggest that porous material mainly affects the flow field adjacent to bluff bodies and reduces aerodynamic sound by depriving the wake of momentum and suppressing the unsteady motion of vortices. (invited paper)

  19. Comparison of three flaw-location methods for automated ultrasonic testing

    International Nuclear Information System (INIS)

    Seiger, H.

    1982-01-01

    Two well-known methods for locating flaws by measurement of the transit time of ultrasonic pulses are examined theoretically. It is shown that neither is sufficiently reliable for use in automated ultrasonic testing. A third method, which takes into account the shape of the sound field from the probe and the uncertainty in measurement of probe-flaw distance and probe position, is introduced. An experimental comparison of the three methods indicates that use of the advanced method results in more accurate location of flaws. (author)

  20. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    Full Text Available In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  1. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

    Few studies have dealt with students’ preconceptions of sounds. The current research employs Reiner et al.’s (2000) substance schema to reveal new insights about students’ difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students’ thinking, but also to examine how students use the schema’s properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more globally coherent consistencies among the properties that the students use to explain the sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students’ conceptions of sound do not seem to fit Reiner et al.’s schema in all respects. Our results also indicate that students’ conceptualization of sound lacks internal consistency. Analyzing our results with respect to local and global coherence, we found students’ conception of sound is close to diSessa’s “loosely connected, fragmented collection of ideas.” The notion that sound is perceived only as a “sort of a material,” we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion concerning the implications of the results for instruction.

  2. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

    The question whether there is a natural connection between sound and meaning or if they are related only by convention has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming more and more clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441 For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.

  3. Depth study of insular shelf electric sounding in the Las Mercedes anomaly (Tacuarembo)

    International Nuclear Information System (INIS)

    Cicalese, H.

    1983-01-01

    In the framework of the Uranium Prospecting Programme, a geophysics team composed of BRGM and DINAMIGE staff carried out a study of insular shelf electric sounding in the Las Mercedes area. The following topics were studied: geographical location, geologic framework, methods, materials and some results.

  4. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1 , we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  5. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

    Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking, with several examples shown. Various aspects of musical sound spectra such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis, "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment", are covered. Methods for timbre morphing are given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.
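
    As an illustration of one of the spectral features treated in the book, a minimal spectral-centroid computation is sketched below; this is a common definition (amplitude-weighted mean frequency), and the book's exact normalisation may differ.

```python
import numpy as np

def spectral_centroid(frame, fs):
    """Amplitude-weighted mean frequency of one analysis frame."""
    mags = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return np.sum(freqs * mags) / np.sum(mags)

# A 440 Hz tone with a weaker second harmonic: centroid lies between 440 and 880 Hz
fs = 44100
t = np.arange(0, 0.05, 1.0 / fs)
tone = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
print(round(spectral_centroid(tone, fs), 1))
```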

  6. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to discuss analytically spatial aspects of recorded sound William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  7. Active equalisation of the sound field in an extended region of a room

    DEFF Research Database (Denmark)

    Orozco-Santillán, Arturo

    1997-01-01

    studied by means of an idealised frequency domain model. The analysis is based on the calculation of the complex source strengths that minimise the difference between the actual sound pressure and the desired sound pressure in the listening area. Results in relation to the position of the sources......, the frequency range, and the size and location of the listening area are presented. However, the frequency-domain approach results in non-causal impulse responses that can be realised only at the expense of a delay. Therefore, this analysis is supplemented with a study of the equalisation carried out...

  8. Topological acoustic polaritons: robust sound manipulation at the subwavelength scale

    International Nuclear Information System (INIS)

    Yves, Simon; Fleury, Romain; Lemoult, Fabrice; Fink, Mathias; Lerosey, Geoffroy

    2017-01-01

    Topological insulators, a hallmark of condensed matter physics, have recently reached the classical realm of acoustic waves. A remarkable property of time-reversal invariant topological insulators is the presence of unidirectional spin-polarized propagation along their edges, a property that could lead to a wealth of new opportunities in the ability to guide and manipulate sound. Here, we demonstrate and study the possibility to induce topologically non-trivial acoustic states at the deep subwavelength scale, in a structured two-dimensional metamaterial composed of Helmholtz resonators. Radically different from previous designs based on non-resonant sonic crystals, our proposal enables robust sound manipulation on a surface along predefined, subwavelength pathways of arbitrary shapes. (paper)

  9. Topological acoustic polaritons: robust sound manipulation at the subwavelength scale

    Science.gov (United States)

    Yves, Simon; Fleury, Romain; Lemoult, Fabrice; Fink, Mathias; Lerosey, Geoffroy

    2017-07-01

    Topological insulators, a hallmark of condensed matter physics, have recently reached the classical realm of acoustic waves. A remarkable property of time-reversal invariant topological insulators is the presence of unidirectional spin-polarized propagation along their edges, a property that could lead to a wealth of new opportunities in the ability to guide and manipulate sound. Here, we demonstrate and study the possibility to induce topologically non-trivial acoustic states at the deep subwavelength scale, in a structured two-dimensional metamaterial composed of Helmholtz resonators. Radically different from previous designs based on non-resonant sonic crystals, our proposal enables robust sound manipulation on a surface along predefined, subwavelength pathways of arbitrary shapes.

  10. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. The article attempts to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds, and offers a range of research methods that consider video game scoring as a contemporary creative practice.

  11. Competing sound sources reveal spatial effects in cortical processing.

    Directory of Open Access Journals (Sweden)

    Ross K Maddox

    Full Text Available Why is spatial tuning in auditory cortex weak, even though location is important to object recognition in natural settings? This question continues to vex neuroscientists focused on linking physiological results to auditory perception. Here we show that the spatial locations of simultaneous, competing sound sources dramatically influence how well neural spike trains recorded from the zebra finch field L (an analog of mammalian primary auditory cortex) encode source identity. We find that the location of a birdsong played in quiet has little effect on the fidelity of the neural encoding of the song. However, when the song is presented along with a masker, spatial effects are pronounced. For each spatial configuration, a subset of neurons encodes song identity more robustly than others. As a result, competing sources from different locations dominate responses of different neural subpopulations, helping to separate neural responses into independent representations. These results help elucidate how cortical processing exploits spatial information to provide a substrate for selective spatial auditory attention.

  12. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    Science.gov (United States)

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face various hardships with shopping, reading, finding objects, and so on. We therefore developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the blind user through an earphone. The user is able to recognize the type, motion state and location of objects of interest with the help of SoundView. Compared with other visual assistant techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.

  13. Reflector construction by sound path curves - A method of manual reflector evaluation in the field

    International Nuclear Information System (INIS)

    Siciliano, F.; Heumuller, R.

    1985-01-01

    In order to describe the time-of-flight behavior of various reflectors we have set up models and derived from them analytical and graphic approaches to reflector reconstruction. In the course of this work, maximum achievable accuracy and possible simplifications were investigated. The aim of the time-of-flight reconstruction method is to determine the points of a reflector on the basis of a sound path function (sound path as the function of the probe index position). This method can only be used on materials which are isotropic in terms of sound velocity since the method relies on time of flight being converted into sound path. This paper deals only with two-dimensional reconstruction, in other words all statements relate to the plane of incidence. The method is based on the fact that the geometrical location of the points equidistant from a certain probe index position is a circle. If circles with radiuses equal to the associated sound path are drawn for various search unit positions the points of intersection of the circles are the desired reflector points
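
    A minimal numerical counterpart of the circle construction described above: two probe index positions and their measured sound paths define two circles whose intersection below the scanning surface is taken as the reflector point. The coordinates and values below are hypothetical.

```python
import numpy as np

def reflector_point(x1, r1, x2, r2):
    """Intersection (x, z) with z >= 0 of two circles centred at probe index
    positions (x1, 0) and (x2, 0) on the scanning surface, with radii equal to
    the measured sound paths r1 and r2. Returns None if the circles do not meet.
    """
    d = abs(x2 - x1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None
    # standard two-circle intersection, reduced to centres on the x axis
    a = (r1**2 - r2**2 + d**2) / (2.0 * d)
    h2 = r1**2 - a**2
    if h2 < 0:
        return None
    x = x1 + a * np.sign(x2 - x1)
    z = np.sqrt(h2)          # take the solution inside the material (z > 0)
    return x, z

# Hypothetical example: a reflector at x = 10 mm, depth 20 mm
print(reflector_point(0.0, np.hypot(10, 20), 30.0, np.hypot(20, 20)))
```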

  14. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade an increased attention...... has been paid to, for instance, a category such as ‘sound art’ together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term as musical sound – a recurring example being ‘noise’....

  15. Beneath sci-fi sound: primer, science fiction sound design, and American independent cinema

    OpenAIRE

    Johnston, Nessa

    2012-01-01

    Primer is a very low budget science-fiction film that deals with the subject of time travel; however, it looks and sounds quite distinctively different from other films associated with the genre. While Hollywood blockbuster sci-fi relies on “sound spectacle” as a key attraction, in contrast Primer sounds “lo-fi” and screen-centred, mixed to two channel stereo rather than the now industry-standard 5.1 surround sound. Although this is partly a consequence of the economics of its production, the...

  16. Long Range Sound Propagation over Sea: Application to Wind Turbine Noise

    Energy Technology Data Exchange (ETDEWEB)

    Boue, Matieu

    2007-12-13

    The classical theory of spherical wave propagation is not valid at large distances from a sound source due to the influence of wind and temperature gradients that refract, i.e., bend, the sound waves. In the downwind direction this leads to a cylindrical type of wave spreading at large distances (> 1 km). Cylindrical spreading gives a smaller damping with distance than spherical spreading (3 dB per distance doubling instead of 6 dB). But over areas with soft ground, i.e., grass land, the effect of ground reflections will increase the damping so that, if the effect of atmospheric damping is removed, a behavior close to free-field spherical spreading is often observed. This is the standard assumption used in most national recommendations for predicting outdoor sound propagation, e.g., noise from wind turbines. Over areas with hard surfaces, e.g., deserts or the sea, the effect of ground damping is small and therefore cylindrical propagation can be expected in the downwind direction. This observation, backed by a limited number of measurements, is the background for the Swedish recommendation, which suggests that cylindrical wave spreading should be assumed for distances larger than 200 m for sea based wind turbines. The purpose of this work was to develop measurement procedures for long range sound transmission and to apply this to investigate the occurrence of cylindrical wave spreading in the Baltic Sea. This work has been successfully finished and is described in this report. Another ambition was to develop models for long range sound transmission based on the parabolic equation. Here the work is not finished but must be continued in another project. Long term measurements were performed in the Kalmar strait, Sweden, located between the mainland and Oeland, during 2005 and 2006. Two different directive sound sources placed on a lighthouse in the middle of the strait produced low frequency tones at 80, 200 and 400 Hz. At the reception point on
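
    The spreading rates quoted above (6 dB per distance doubling for spherical spreading, 3 dB for cylindrical spreading) can be combined into a simple level-versus-distance estimate. The sketch below assumes a hard transition at 200 m, mirroring the Swedish recommendation as described, and ignores atmospheric and ground attenuation.

```python
import numpy as np

def spreading_loss_db(r, r_ref=1.0, r_transition=200.0):
    """Geometrical spreading loss re the level at r_ref. Spherical spreading
    (6 dB per doubling) out to r_transition, cylindrical (3 dB per doubling)
    beyond it; atmospheric and ground effects are ignored."""
    r = np.asarray(r, dtype=float)
    spherical = 20.0 * np.log10(r / r_ref)
    mixed = (20.0 * np.log10(r_transition / r_ref)
             + 10.0 * np.log10(r / r_transition))
    return np.where(r <= r_transition, spherical, mixed)

for dist in (100, 200, 400, 1600):
    print(dist, "m:", round(float(spreading_loss_db(dist)), 1), "dB")
```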

  17. Observations of volcanic plumes using small balloon soundings

    Science.gov (United States)

    Voemel, H.

    2015-12-01

    Eruptions of volcanoes are very difficult to predict and for practical purposes may occur at any time. Any observing system intending to observe volcanic eruptions has to be ready at any time. Due to transport time scales, emissions of large volcanic eruptions, in particular injections into the stratosphere, may be detected at locations far from the volcano within days to weeks after the eruption. These emissions may be observed using small balloon soundings at dedicated sites. Here we present observations of particles of the Icelandic Grimsvotn eruption at the Meteorological Observatory Lindenberg, Germany in the months following the eruption and observations of opportunity of other volcanic particle events. We also present observations of the emissions of SO2 from the Turrialba volcano at San Jose, Costa Rica. We argue that dedicated sites for routine observations of the clean and perturbed atmosphere using small sounding balloons are an important element in the detection and quantification of emissions from future volcanic eruptions.

  18. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between...... dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity...... is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  19. Estimation of probability of coastal flooding: A case study in the Norton Sound, Alaska

    Science.gov (United States)

    Kim, S.; Chapman, R. S.; Jensen, R. E.; Azleton, M. T.; Eisses, K. J.

    2010-12-01

    Along the Norton Sound, Alaska, coastal communities have been exposed to flooding induced by extra-tropical storms. The lack of observational data, especially on long-term variability, makes it difficult to assess the probability of coastal flooding, which is critical in planning for development and evacuation of the coastal communities. We estimated the probability of coastal flooding with the help of an existing storm surge model using ADCIRC and a wave model using WAM for Western Alaska, which includes the Norton Sound as well as the adjacent Bering Sea and Chukchi Sea. The surface pressure and winds as well as ice coverage were analyzed and put in a gridded format with a 3 hour interval over the entire Alaskan Shelf by Ocean Weather Inc. (OWI) for the period between 1985 and 2009. OWI also analyzed the surface conditions for the storm events over the 31 year time period between 1954 and 1984. The correlation between water levels recorded by the NOAA tide gage and local meteorological conditions at Nome between 1992 and 2005 suggested that strong local winds with prevailing southerly components are good proxies for high water events. Heuristically selected local winds with prevailing westerly components at Shaktoolik, which is located at the eastern end of the Norton Sound, provided additional flood events during the continuous meteorological data record between 1985 and 2009. The frequency analyses were performed using the simulated water levels and wave heights for the 56 year time period between 1954 and 2009. Different methods of estimating return periods were compared, including the method according to the FEMA guideline, extreme value statistics, and fitting to statistical distributions such as Weibull and Gumbel. The estimates are similar, as expected, but with some variation.
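
    One of the return-period methods mentioned above, a Gumbel fit to annual maxima, can be sketched as follows; the synthetic data and the choice of scipy's gumbel_r are illustrative assumptions, not the study's actual data or implementation.

```python
import numpy as np
from scipy import stats

def gumbel_return_levels(annual_maxima, return_periods=(10, 50, 100)):
    """Fit a Gumbel distribution to annual maximum water levels and return the
    level exceeded on average once per T years (one of several possible
    approaches; the study also compared Weibull fits and the FEMA method)."""
    loc, scale = stats.gumbel_r.fit(annual_maxima)
    return {T: stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale) for T in return_periods}

# Synthetic 56-year record of annual maximum surge heights (metres)
rng = np.random.default_rng(1954)
maxima = stats.gumbel_r.rvs(loc=1.8, scale=0.5, size=56, random_state=rng)
print(gumbel_return_levels(maxima))
```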

  20. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  1. Noise-Induced Hearing Loss

    Science.gov (United States)

    What is noise-induced hearing loss? Every day, we experience sound ...

  2. Effect of laser induced plasma ignition timing and location on Diesel spray combustion

    International Nuclear Information System (INIS)

    Pastor, José V.; García-Oliver, José M.; García, Antonio; Pinotti, Mattia

    2017-01-01

    Highlights: • Laser plasma ignition is applied to a direct injection Diesel spray and compared with auto-ignition. • A critical local fuel/air ratio for LIP-provoked ignition is obtained. • The LIP system is able to stabilize Diesel combustion compared with auto-ignition cases. • Varying the LIP position along the spray axis directly affects ignition delay. • Premixed combustion is reduced both by varying the position and the delay of the LIP ignition system. - Abstract: An experimental study of the influence of the local conditions at the ignition location on the combustion development of a direct injection spray is carried out in an optical engine. A laser induced plasma ignition system has been used to force the spray ignition, allowing comparison of the combustion's evolution and stability with the case of conventional auto-ignition of the Diesel fuel, in terms of ignition delay, rate of heat release, spray penetration and soot location evolution. The local equivalence ratio variation along the spray axis during the injection process was determined with a 1D spray model, previously calibrated and validated. Upper equivalence ratio limits for the ignition of a direct injected Diesel spray, both in terms of ignition success and stability, could be determined thanks to the application of the laser plasma ignition system. In all laser plasma induced ignition cases, heat release was found to be higher than for the auto-ignition reference cases, which was linked to a decrease of ignition delay, with the premixed peak in the rate of heat release curve progressively disappearing as the ignition delay time gets shorter. Ignition delay has also been analyzed as a function of the laser position. It was found that ignition delay increases for plasma positions closer to the nozzle, indicating that the amount of energy introduced by the laser induced plasma is not the only parameter affecting combustion initiation, but local equivalence ratio

  3. Evaluation of substitution monopole models for tire noise sound synthesis

    Science.gov (United States)

    Berckmans, D.; Kindt, P.; Sas, P.; Desmet, W.

    2010-01-01

    Due to the considerable efforts in engine noise reduction, tire noise has become one of the major sources of passenger car noise nowadays and the demand for accurate prediction models is high. A rolling tire is therefore experimentally characterized by means of the substitution monopole technique, suiting a general sound synthesis approach with a focus on perceived sound quality. The running tire is substituted by a monopole distribution covering the static tire. All monopoles have mutual phase relationships and a well-defined volume velocity distribution which is derived by means of the airborne source quantification technique; i.e. by combining static transfer function measurements with operating indicator pressure measurements close to the rolling tire. Models with varying numbers/locations of monopoles are discussed and the application of different regularization techniques is evaluated.
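
    The inverse step of such a substitution-monopole / airborne source quantification approach is, in its simplest form, a regularized least-squares solve of H q = p for the monopole volume velocities q. The sketch below is generic; matrix sizes, the Tikhonov parameter and the noise level are assumptions, not values from the study.

```python
import numpy as np

def monopole_strengths(H, p, alpha=1e-3):
    """Estimate complex monopole volume velocities q from indicator pressures p,
    given measured transfer functions H (n_mics x n_monopoles), by Tikhonov-
    regularized least squares: minimise ||H q - p||^2 + alpha ||q||^2."""
    HH = H.conj().T @ H
    return np.linalg.solve(HH + alpha * np.eye(HH.shape[0]), H.conj().T @ p)

# Hypothetical single-frequency example: 12 microphones, 5 substitution monopoles
rng = np.random.default_rng(0)
H = rng.standard_normal((12, 5)) + 1j * rng.standard_normal((12, 5))
q_true = rng.standard_normal(5) + 1j * rng.standard_normal(5)
p = H @ q_true + 0.01 * (rng.standard_normal(12) + 1j * rng.standard_normal(12))
print(np.round(q_true, 3))
print(np.round(monopole_strengths(H, p), 3))   # should be close to q_true
```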

  4. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

    and combine theories from several fields. Aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces: Visual, auditory, performative, social, spatial and durational dimensions become......This article is an analysis of two sound art performances that took place June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback...... Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors of this article suggest exploring the events through approaching them as ‘situations’ (Doherty 2009). With this approach it becomes possible to engage...

  5. Vehicle engine sound design based on an active noise control system

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, M. [Siemens VDO Automotive, Auburn Hills, MI (United States)

    2002-07-01

    A study has been carried out to identify the types of vehicle engine sounds that drivers prefer while driving at different locations and under different driving conditions. An active noise control system controlled the first sixteen orders and half orders of the sound at the air intake orifice of the vehicle engine. The active noise control system was used to change the engine sound to quiet, harmonic, high harmonic, spectral shaped and growl. Videos were made of the roads traversed, binaural recordings of vehicle interior sounds, and vibrations of the vehicle floor pan. Jury tapes were made up for day driving, night-time driving and driving in the rain during the day for each of the sites. Jurors used paired comparisons to evaluate the vehicle interior sounds while sitting in a vehicle simulator developed by Siemens VDO that replicated videos of the road traversed, binaural recordings of the vehicle interior sounds and vibrations of the floor pan and seat. (orig.) [Translated from German] Within the framework of a study, types of engine sounds were identified that drivers perceive as pleasant under various driving conditions. A system for active noise control at the air intake, in the area of the air filter, modified the sound of the engine up to the 16.5th engine order by attenuating, amplifying and filtering the signal frequencies. During the drives, video recordings of the roads travelled, stereo recordings of the vehicle interior sounds and recordings of the vibration amplitudes of the vehicle floor were made, for day and night drives and for daytime drives in the rain. For the evaluation of the recorded sounds by test subjects, a laboratory vehicle simulator with driver's seat, screen, loudspeakers and mechanical excitation of the floor plate was set up in order to reproduce the recorded signals as faithfully as possible. (orig.)

  6. Query Language for Location-Based Services: A Model Checking Approach

    Science.gov (United States)

    Hoareau, Christian; Satoh, Ichiro

    We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach is distinct from existing research because it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and will be discussed.
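
    As a rough illustration of the symbolic, tree-structured location model mentioned above, the sketch below answers a simple containment query ("is entity e somewhere within location L?") by walking the tree. It is only a toy stand-in for the hybrid-logic model checking described in the paper; all names and the data structure are hypothetical.

        from dataclasses import dataclass, field

        @dataclass
        class Location:
            name: str
            children: list = field(default_factory=list)   # sub-locations in the tree
            entities: set = field(default_factory=set)     # users/objects/services at this node

        def located_in(loc: Location, entity: str) -> bool:
            """True if `entity` is attached to `loc` or to any location in its sub-tree."""
            return entity in loc.entities or any(located_in(c, entity) for c in loc.children)

        # Toy symbolic space: building > floor1 > {room101, room102}
        room101 = Location("room101", entities={"printer-A"})
        room102 = Location("room102", entities={"alice"})
        floor1 = Location("floor1", children=[room101, room102])
        building = Location("building", children=[floor1])

        print(located_in(floor1, "alice"))    # True
        print(located_in(room101, "alice"))   # False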

  7. Determining the speed of sound in the air by sound wave interference

    Science.gov (United States)

    Silva, Abel A.

    2017-07-01

    Mechanical waves propagate through material media. Sound is an example of a mechanical wave. In fluids like air, sound waves propagate through successive longitudinal perturbations of compression and decompression. Audible sound frequencies for human ears range from 20 to 20 000 Hz. In this study, the speed of sound v in the air is determined by identifying maxima of interference from two synchronous waves at frequency f. The values of v were corrected to 0 °C. The experimental average value v_exp = 336 ± 4 m s-1 was found. It is 1.5% larger than the reference value. The standard deviation of 4 m s-1 (1.2% of v_exp) is an improved value obtained by use of the central limit theorem. The proposed procedure to determine the speed of sound in the air is intended as an academic activity for physics classes of scientific and technological courses in college.
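
    For reference, the relation behind this kind of two-source interference measurement is the standard one (a sketch of the usual analysis, not necessarily the paper's exact notation): adjacent maxima are separated by one wavelength of path difference, so, in LaTeX form,

        \Delta r_{m+1} - \Delta r_{m} = \lambda, \qquad v = f\,\lambda .

    For example, maxima spaced such that \lambda = 0.40 m at f = 850 Hz would give v = 850 Hz × 0.40 m = 340 m s-1, close to the reported value.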

  8. Fluid Sounds

    DEFF Research Database (Denmark)

    Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970's, developed into various established research and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation...

  9. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.
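
    The four-channel vocoder used to spectrally degrade the stimuli is, in essence, standard noise vocoding: split the signal into a few frequency bands, extract each band's temporal envelope, and let it modulate band-limited noise. Below is a minimal Python/SciPy sketch of that processing; the band edges, filter order and envelope method are illustrative assumptions, not the study's exact parameters.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def noise_vocode(x, fs, band_edges=(100, 520, 1420, 3400, 7500)):
            """Return a noise-vocoded version of signal x (four channels by default)."""
            rng = np.random.default_rng(0)
            out = np.zeros_like(x, dtype=float)
            for lo, hi in zip(band_edges[:-1], band_edges[1:]):
                sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
                band = sosfiltfilt(sos, x)
                env = np.abs(hilbert(band))                            # temporal envelope of the band
                noise = sosfiltfilt(sos, rng.standard_normal(len(x)))  # band-limited noise carrier
                out += env * noise                                     # envelope-modulated noise
            return out / (np.max(np.abs(out)) + 1e-12)                 # normalize to avoid clipping

        # Example: vocode one second of a synthetic two-tone signal at 16 kHz.
        fs = 16000
        t = np.arange(fs) / fs
        x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2200 * t)
        y = noise_vocode(x, fs)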

  10. Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration

    OpenAIRE

    Park, Saebyul; Ban, Seonghoon; Hong, Dae Ryong; Yeo, Woon Seung

    2013-01-01

    SSN (Sound Surfing Network) is a performance system that provides a new musical experience by incorporating mobile phone-based spatial sound control into collaborative music performance. SSN enables both the performer and the audience to manipulate the spatial distribution of sound using the smartphones of the audience as a distributed speaker system. Proposing a new perspective to the social aspect of music appreciation, SSN will provide a new possibility to mobile music performances in the context of in...

  11. Vibrometry Assessment of the External Thermal Composite Insulation Systems Influence on the Façade Airborne Sound Insulation

    Directory of Open Access Journals (Sweden)

    Daniel Urbán

    2018-05-01

    This paper verifies the impact of the use of an external thermal composite system (ETICS) on air-borne sound insulation. For optimum accuracy over a wide frequency range, classical microphone based transmission measurements are combined with accelerometer based vibrometry measurements. Consistency is found between structural resonance frequencies and bending wave velocity dispersion curves determined by vibrometry on the one hand and spectral features of the sound reduction index, the ETICS mass-spring-mass resonance induced dip in the acoustic insulation spectrum, and the coincidence induced dip on the other hand. Scanning vibrometry proves to be an effective tool for structural assessment in the design phase of ETICS systems. The measured spectra are obtained with high resolution in a wide frequency range, and yield sound insulation values that are not affected by the room acoustic features of the laboratory transmission rooms. The complementarity between the microphone and accelerometer based results allows assessing the effect of ETICS on the sound insulation spectrum in an extended frequency range from 20 Hz to 10 kHz. A modified engineering ΔR prediction model for the frequency range up to the coincidence frequency of the external plaster layer is recommended. Values for the sound reduction index obtained by the modified prediction method are consistent with the measured data.
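
    The mass-spring-mass dip mentioned above occurs near the classic double-leaf resonance. As a hedged reminder of the standard building-acoustics estimate (a textbook relation, not the paper's own model), with s' the dynamic stiffness per unit area of the insulation layer and m'_1, m'_2 the surface masses of the base wall and the render,

        f_0 \approx \frac{1}{2\pi}\sqrt{s'\left(\frac{1}{m'_1}+\frac{1}{m'_2}\right)} .

    Around f_0 the sound reduction index typically shows the resonance dip discussed above, while well above f_0 the added layers improve the insulation.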

  12. Sound Exposure of Symphony Orchestra Musicians

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Pedersen, Ellen Raben; Juhl, Peter Møller

    2011-01-01

    Background: Assessment of sound exposure by noise dosimetry can be challenging, especially when measuring the exposure of classical orchestra musicians, where sound originates from many different instruments. A new measurement method of bilateral sound exposure of classical musicians was developed and used to characterize sound exposure of the left and right ear simultaneously in two different symphony orchestras. Objectives: To measure binaural sound exposure of professional classical musicians and to identify possible exposure risk factors of specific musicians. Methods: Sound exposure was measured... Results: Musicians were exposed up to LAeq8h of 92 dB, and a majority of musicians were exposed to sound levels exceeding ... dBA; their left ear was exposed 4.6 dB more than the right ear. Percussionists were exposed to high sound peaks >115 dBC, but less continuous sound exposure was observed in this group.

  13. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    Science.gov (United States)

    Wolf, Gail Marie

    2016-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…

  14. Depth study of insular shelf electric sounding in the Puntas de Abrojal anomaly

    International Nuclear Information System (INIS)

    Cicalese, H.

    1983-01-01

    In the framework of the Uranium Prospecting Programme, a geophysics team composed of BRGM and DINAMIGE workers carried out a study of insular shelf electric sounding in the Puntas de Abrojal area. The geographical location, geologic framework, geophysical survey and methods, materials and results are given.

  15. Results of time-domain electromagnetic soundings in Everglades National Park, Florida

    Science.gov (United States)

    Fitterman, D.V.; Deszcz-Pan, Maria; Stoddard, C.E.

    1999-01-01

    This report describes the collection, processing, and interpretation of time-domain electromagnetic soundings from Everglades National Park. The results are used to locate the extent of seawater intrusion in the Biscayne aquifer and to map the base of the Biscayne aquifer in regions where well coverage is sparse. The data show no evidence of fresh, ground-water flows at depth into Florida Bay.

  16. Impact of GPS Radio Occultation Refractivity Soundings on a Simulation of Typhoon Bilis (2006) upon Landfall

    Directory of Open Access Journals (Sweden)

    Mien-Tze Kueh

    2009-01-01

    Typhoon Bilis, which struck Taiwan in July 2006, was chosen to assess the potential impact of GPS radio occultation (RO) refractivity soundings on numerical simulation using the WRF model. We found that this case elucidates the impact of the limited GPS RO soundings on typhoon prediction due to their favorable locations. In addition, on top of available precipitable water (PW) and near-surface wind speed from SSM/I data, we have also explored their combined impacts on model prediction.

  17. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

    The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using both physical and digital models at three distinct scales. The results suggest hyperboloid geometry, while difficult to fabricate, facilitates sound scattering.

  18. The influence of ski helmets on sound perception and sound localisation on the ski slope

    Directory of Open Access Journals (Sweden)

    Lana Ružić

    2015-04-01

    Objectives: The aim of the study was to investigate whether a ski helmet interferes with sound localization and the time of sound perception in the frontal plane. Material and Methods: Twenty-three participants (age 30.7±10.2) were tested on the slope in 2 conditions, with and without wearing a ski helmet, with 6 different spatially distributed sound stimuli per condition. Each of the subjects had to react as soon as possible when hearing the sound and to indicate the correct side of the sound arrival. Results: The results showed a significant difference in the ability to localize the specific ski sounds: 72.5±15.6% of correct answers without a helmet vs. 61.3±16.2% with a helmet (p < 0.01). However, the performance on this test did not depend on whether the participants were used to wearing a helmet (p = 0.89). In identifying the timing at which the sound was first perceived, the results were also in favor of the subjects not wearing a helmet. The subjects reported hearing the ski sound cues at 73.4±5.56 m without a helmet vs. 60.29±6.34 m with a helmet (p < 0.001). In that case the results did depend on previously used helmets (p < 0.05), meaning that regular usage of helmets might help to diminish the attenuation of sound identification that occurs because of the helmets. Conclusions: Ski helmets might limit the ability of a skier to localize the direction of sounds of danger and might interfere with the moment at which a sound is first heard.

  19. Habitat-induced degradation of sound signals: Quantifying the effects of communication sounds and bird location on blur ratio, excess attenuation, and signal-to-noise ratio in blackbird song

    DEFF Research Database (Denmark)

    Dabelsteen, T.; Larsen, O N; Pedersen, Simon Boel

    1993-01-01

    The habitat-induced degradation of the full song of the blackbird (Turdus merula) was quantified by measuring excess attenuation, reduction of the signal-to-noise ratio, and blur ratio, the latter measure representing the degree of blurring of amplitude and frequency patterns over time. All three measures were calculated from changes of the amplitude functions (i.e., envelopes) of the degraded songs using a new technique which allowed a compensation for the contribution of the background noise to the amplitude values. Representative songs were broadcast in a deciduous forest without leaves...

  20. 77 FR 37318 - Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort...

    Science.gov (United States)

    2012-06-21

    ...-AA00 Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort... Coast Guard will enforce a Safety Zone for the Sound of Independence event in the Santa Rosa Sound, Fort... during the Sound of Independence. During the enforcement period, entry into, transiting or anchoring in...

  1. Multiple target sound quality balance for hybrid electric powertrain noise

    Science.gov (United States)

    Mosquera-Sánchez, J. A.; Sarrazin, M.; Janssens, K.; de Oliveira, L. P. R.; Desmet, W.

    2018-01-01

    The integration of the electric motor to the powertrain in hybrid electric vehicles (HEVs) presents acoustic stimuli that elicit new perceptions. The large number of spectral components, as well as the wider bandwidth of this sort of noise, poses new challenges to current noise, vibration and harshness (NVH) approaches. This paper presents a framework for enhancing the sound quality (SQ) of the hybrid electric powertrain noise perceived inside the passenger compartment. Compared with current active sound quality control (ASQC) schemes, where the SQ improvement is just an effect of the control actions, the proposed technique features an optimization stage, which enables the NVH specialist to actively implement the amplitude balance of the tones that best fits the auditory expectations. Since Loudness, Roughness, Sharpness and Tonality are the most relevant SQ metrics for interior HEV noise, they are used as performance metrics in the concurrent optimization analysis, which, eventually, drives the control design method. Thus, multichannel active sound profiling systems that feature cross-channel compensation schemes are guided by the multi-objective optimization stage, by means of optimal sets of amplitude gain factors that can be implemented at each single sensor location, while minimizing cross-channel effects that can either degrade the original SQ condition, or even hinder the implementation of independent SQ targets. The proposed framework is verified experimentally, with realistic stationary hybrid electric powertrain noise, showing SQ enhancement for multiple locations within a scaled vehicle mock-up. The results show total success rates in excess of 90%, which indicate that the proposed method is promising, not only for the improvement of the SQ of HEV noise, but also for a variety of periodic disturbances with similar features.

  2. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
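
    The binaural rendering step described above (convolving each recorded sound with non-individual head-related transfer functions) reduces, per ear, to convolution with the corresponding head-related impulse response. A minimal sketch follows, assuming the HRIRs are already available as arrays; the placeholder impulse responses below merely stand in for measured data.

        import numpy as np
        from scipy.signal import fftconvolve

        def render_binaural(mono, hrir_left, hrir_right):
            """Convolve a mono signal with a left/right HRIR pair -> (N, 2) stereo array."""
            left = fftconvolve(mono, hrir_left, mode="full")
            right = fftconvolve(mono, hrir_right, mode="full")
            stereo = np.stack([left, right], axis=1)
            return stereo / (np.max(np.abs(stereo)) + 1e-12)

        # Example with a synthetic click ("Delta sound") and dummy HRIRs at 44.1 kHz.
        fs = 44100
        click = np.zeros(fs // 10); click[0] = 1.0          # 100 ms buffer with a unit impulse
        hrir_l = np.exp(-np.arange(256) / 40.0)             # placeholder decaying responses,
        hrir_r = 0.6 * np.exp(-np.arange(256) / 40.0)       # standing in for measured HRIRs
        out = render_binaural(click, hrir_l, hrir_r)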

  3. Host location by ichneumonid parasitoids is associated with nest dimensions of the host bee species.

    Science.gov (United States)

    Flores-Prado, L; Niemeyer, H M

    2012-08-01

    Parasitoid fitness depends on the ability of females to locate a host. In some species of Ichneumonoidea, female parasitoids detect potential hosts through vibratory cues emanating from them or through vibrational sounding produced by antennal tapping on the substrate. In this study, we (1) describe host location behaviors in Grotea gayi Spinola (Hymenoptera: Ichneumonidae) and Labena sp. on nests of Manuelia postica Spinola (Hymenoptera: Apidae), (2) compare nest dimensions between parasitized and unparasitized nests, (3) correlate the length of M. postica nests with the number of immature individuals developing, and (4) establish the relative proportion of parasitized nests along the breeding period of M. postica. Based on our results, we propose that these parasitoids use vibrational sounding as a host location mechanism and that they are able to assess host nest dimensions and choose those which may provide them with a higher fitness. Finally, we discuss an ancestral host-parasitoid relationship between Manuelia and ichneumonid species.

  4. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  5. Dimensional feature weighting utilizing multiple kernel learning for single-channel talker location discrimination using the acoustic transfer function.

    Science.gov (United States)

    Takashima, Ryoichi; Takiguchi, Tetsuya; Ariki, Yasuo

    2013-02-01

    This paper presents a method for discriminating the location of the sound source (talker) using only a single microphone. In a previous work, the single-channel approach for discriminating the location of the sound source was discussed, where the acoustic transfer function from a user's position is estimated by using a hidden Markov model of clean speech in the cepstral domain. In this paper, each cepstral dimension of the acoustic transfer function is newly weighted, in order to obtain the cepstral dimensions having information that is useful for classifying the user's position. Then, this paper proposes a feature-weighting method for the cepstral parameter using multiple kernel learning, defining the base kernels for each cepstral dimension of the acoustic transfer function. The user's position is trained and classified by support vector machine. The effectiveness of this method has been confirmed by sound source (talker) localization experiments performed in different room environments.
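
    The core idea, weighting each cepstral dimension of the acoustic transfer function through its own base kernel before SVM classification, can be approximated with fixed, hand-set kernel weights and a precomputed-kernel SVM, as sketched below. This is a simplification: the paper learns the weights by multiple kernel learning, whereas here they are assumed constants for illustration, and the toy data merely mimic a two-position problem.

        import numpy as np
        from sklearn.svm import SVC

        def rbf_kernel_1d(a, b, gamma=1.0):
            """RBF kernel on a single cepstral dimension: a (n,), b (m,) -> (n, m)."""
            d = a[:, None] - b[None, :]
            return np.exp(-gamma * d ** 2)

        def combined_kernel(X, Y, weights, gamma=1.0):
            """Weighted sum of per-dimension base kernels (stand-in for learned MKL weights)."""
            K = np.zeros((X.shape[0], Y.shape[0]))
            for d, w in enumerate(weights):
                K += w * rbf_kernel_1d(X[:, d], Y[:, d], gamma)
            return K

        # Toy data: two "talker positions" described by 12-dimensional cepstra.
        rng = np.random.default_rng(1)
        X_train = rng.standard_normal((40, 12)); y_train = np.repeat([0, 1], 20)
        X_train[y_train == 1, :3] += 1.5                    # positions differ mostly in dims 0-2
        X_test = rng.standard_normal((10, 12)) + np.array([1.5] * 3 + [0.0] * 9)

        weights = np.array([1.0] * 3 + [0.1] * 9)           # assumed (not learned) weights
        clf = SVC(kernel="precomputed").fit(combined_kernel(X_train, X_train, weights), y_train)
        pred = clf.predict(combined_kernel(X_test, X_train, weights))   # mostly class 1 expected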

  6. Effects of anthropogenic sound on digging behavior, metabolism, Ca2+/Mg2+ ATPase activity, and metabolism-related gene expression of the bivalve Sinonovacula constricta

    Science.gov (United States)

    Peng, Chao; Zhao, Xinguo; Liu, Saixi; Shi, Wei; Han, Yu; Guo, Cheng; Jiang, Jingang; Wan, Haibo; Shen, Tiedong; Liu, Guangxu

    2016-01-01

    Anthropogenic sound has increased significantly in the past decade. However, only a few studies to date have investigated its effects on marine bivalves, with little known about the underlying physiological and molecular mechanisms. In the present study, the effects of different types, frequencies, and intensities of anthropogenic sounds on the digging behavior of razor clams (Sinonovacula constricta) were investigated. The results showed that variations in sound intensity induced deeper digging. Furthermore, anthropogenic sound exposure led to an alteration in the O:N ratios and the expression of ten metabolism-related genes from the glycolysis, fatty acid biosynthesis, tryptophan metabolism, and Tricarboxylic Acid Cycle (TCA cycle) pathways. Expression of all genes under investigation was induced upon exposure to anthropogenic sound at ~80 dB re 1 μPa and repressed at ~100 dB re 1 μPa sound. In addition, the activity of Ca2+/Mg2+-ATPase in the feet tissues, which is directly related to muscular contraction and subsequently to digging behavior, was also found to be affected by anthropogenic sound intensity. The findings suggest that sound may be perceived by bivalves as changes in the water particle motion and lead to the subsequent reactions detected in razor clams. PMID:27063002

  7. Effects of anthropogenic sound on digging behavior, metabolism, Ca(2+)/Mg(2+) ATPase activity, and metabolism-related gene expression of the bivalve Sinonovacula constricta.

    Science.gov (United States)

    Peng, Chao; Zhao, Xinguo; Liu, Saixi; Shi, Wei; Han, Yu; Guo, Cheng; Jiang, Jingang; Wan, Haibo; Shen, Tiedong; Liu, Guangxu

    2016-04-11

    Anthropogenic sound has increased significantly in the past decade. However, only a few studies to date have investigated its effects on marine bivalves, with little known about the underlying physiological and molecular mechanisms. In the present study, the effects of different types, frequencies, and intensities of anthropogenic sounds on the digging behavior of razor clams (Sinonovacula constricta) were investigated. The results showed that variations in sound intensity induced deeper digging. Furthermore, anthropogenic sound exposure led to an alteration in the O:N ratios and the expression of ten metabolism-related genes from the glycolysis, fatty acid biosynthesis, tryptophan metabolism, and Tricarboxylic Acid Cycle (TCA cycle) pathways. Expression of all genes under investigation was induced upon exposure to anthropogenic sound at ~80 dB re 1 μPa and repressed at ~100 dB re 1 μPa sound. In addition, the activity of Ca(2+)/Mg(2+)-ATPase in the feet tissues, which is directly related to muscular contraction and subsequently to digging behavior, was also found to be affected by anthropogenic sound intensity. The findings suggest that sound may be perceived by bivalves as changes in the water particle motion and lead to the subsequent reactions detected in razor clams.

  8. Effects of anthropogenic sound on digging behavior, metabolism, Ca2+/Mg2+ ATPase activity, and metabolism-related gene expression of the bivalve Sinonovacula constricta

    Science.gov (United States)

    Peng, Chao; Zhao, Xinguo; Liu, Saixi; Shi, Wei; Han, Yu; Guo, Cheng; Jiang, Jingang; Wan, Haibo; Shen, Tiedong; Liu, Guangxu

    2016-04-01

    Anthropogenic sound has increased significantly in the past decade. However, only a few studies to date have investigated its effects on marine bivalves, with little known about the underlying physiological and molecular mechanisms. In the present study, the effects of different types, frequencies, and intensities of anthropogenic sounds on the digging behavior of razor clams (Sinonovacula constricta) were investigated. The results showed that variations in sound intensity induced deeper digging. Furthermore, anthropogenic sound exposure led to an alteration in the O:N ratios and the expression of ten metabolism-related genes from the glycolysis, fatty acid biosynthesis, tryptophan metabolism, and Tricarboxylic Acid Cycle (TCA cycle) pathways. Expression of all genes under investigation was induced upon exposure to anthropogenic sound at ~80 dB re 1 μPa and repressed at ~100 dB re 1 μPa sound. In addition, the activity of Ca2+/Mg2+-ATPase in the feet tissues, which is directly related to muscular contraction and subsequently to digging behavior, was also found to be affected by anthropogenic sound intensity. The findings suggest that sound may be perceived by bivalves as changes in the water particle motion and lead to the subsequent reactions detected in razor clams.

  9. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  10. Sounds of silence: How to animate virtual worlds with sound

    Science.gov (United States)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  11. Exposure to excessive sounds and hearing status in academic classical music students

    Directory of Open Access Journals (Sweden)

    Małgorzata Pawlaczyk-Łuszczyńska

    2017-02-01

    Objectives: The aim of this study was to assess hearing of music students in relation to their exposure to excessive sounds. Material and Methods: Standard pure-tone audiometry (PTA) was performed in 168 music students, aged 22.5±2.5 years. The control group included 67 subjects, non-music students and non-musicians, aged 22.8±3.3 years. Data on the study subjects’ musical experience, instruments in use, time of weekly practice and additional risk factors for noise-induced hearing loss (NIHL) were identified by means of a questionnaire survey. Sound pressure levels produced by various groups of instruments during solo and group playing were also measured and analyzed. The music students’ audiometric hearing threshold levels (HTLs) were compared with the theoretical predictions calculated according to the International Organization for Standardization standard ISO 1999:2013. Results: It was estimated that the music students were exposed for 27.1±14.3 h/week to sounds at the A-weighted equivalent-continuous sound pressure level of 89.9±6.0 dB. There were no significant differences in HTLs between the music students and the control group in the frequency range of 4000–8000 Hz. Furthermore, in each group HTLs in the frequency range 1000–8000 Hz did not exceed 20 dB HL in 83% of the examined ears. Nevertheless, high-frequency notched audiograms typical of noise-induced hearing loss were found in 13.4% and 9% of the musicians and non-musicians, respectively. The odds ratio (OR) of notching in the music students increased significantly along with higher sound pressure levels (OR = 1.07, 95% confidence interval (CI): 1.014–1.13, p < 0.05). The students’ HTLs were worse (higher) than those of a highly screened non-noise-exposed population. Moreover, their hearing loss was less severe than that expected from sound exposure for frequencies of 3000 Hz and 4000 Hz, and it was more severe in the case of frequency of 6000 Hz. Conclusions: The

  12. Exposure to excessive sounds and hearing status in academic classical music students.

    Science.gov (United States)

    Pawlaczyk-Łuszczyńska, Małgorzata; Zamojska-Daniszewska, Małgorzata; Dudarewicz, Adam; Zaborowski, Kamil

    2017-02-21

    The aim of this study was to assess hearing of music students in relation to their exposure to excessive sounds. Standard pure-tone audiometry (PTA) was performed in 168 music students, aged 22.5±2.5 years. The control group included 67 subjects, non-music students and non-musicians, aged 22.8±3.3 years. Data on the study subjects' musical experience, instruments in use, time of weekly practice and additional risk factors for noise-induced hearing loss (NIHL) were identified by means of a questionnaire survey. Sound pressure levels produced by various groups of instruments during solo and group playing were also measured and analyzed. The music students' audiometric hearing threshold levels (HTLs) were compared with the theoretical predictions calculated according to the International Organization for Standardization standard ISO 1999:2013. It was estimated that the music students were exposed for 27.1±14.3 h/week to sounds at the A-weighted equivalent-continuous sound pressure level of 89.9±6.0 dB. There were no significant differences in HTLs between the music students and the control group in the frequency range of 4000-8000 Hz. Furthermore, in each group HTLs in the frequency range 1000-8000 Hz did not exceed 20 dB HL in 83% of the examined ears. Nevertheless, high frequency notched audiograms typical of the noise-induced hearing loss were found in 13.4% and 9% of the musicians and non-musicians, respectively. The odds ratio (OR) of notching in the music students increased significantly along with higher sound pressure levels (OR = 1.07, 95% confidence interval (CI): 1.014-1.13, p < 0.05). The students' HTLs were worse (higher) than those of a highly screened non-noise-exposed population. Moreover, their hearing loss was less severe than that expected from sound exposure for frequencies of 3000 Hz and 4000 Hz, and it was more severe in the case of frequency of 6000 Hz. The results confirm the need for further studies and development of a hearing conservation program for

  13. How Pleasant Sounds Promote and Annoying Sounds Impede Health: A Cognitive Approach

    Directory of Open Access Journals (Sweden)

    Tjeerd C. Andringa

    2013-04-01

    This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of the perceiver can be understood in terms of core affect and motivation. This conceptual basis allows the formulation of a detailed cognitive model describing how sonic content, related to indicators of safety and danger, either allows full freedom over mind-states or forces the activation of a vigilance function with associated arousal. The model leads to a number of detailed predictions that can be used to provide existing soundscape approaches with a solid cognitive science foundation that may lead to novel approaches to soundscape design. These will take into account that louder sounds typically contribute to distal situational awareness while subtle environmental sounds provide proximal situational awareness. The role of safety indicators, mediated by proximal situational awareness and subtle sounds, should become more important in future soundscape research.

  14. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    Science.gov (United States)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.
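
    A much simplified, offline version of the pipeline described above (estimate a ground-reaction-force-like envelope from a recorded footstep and use it to drive a synthetic surface texture) might look as follows. The envelope extraction and the filtered-noise "gravel" model are assumptions standing in for the physically based synthesis engine of the actual system.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def grf_envelope(mic_signal, fs, cutoff_hz=30.0):
            """Crude GRF-like envelope: rectify the footstep recording and low-pass it."""
            sos = butter(2, cutoff_hz, btype="low", fs=fs, output="sos")
            return np.clip(sosfiltfilt(sos, np.abs(mic_signal)), 0.0, None)

        def synth_gravel(envelope, fs, seed=0):
            """Drive band-passed noise with the envelope to mimic a gravel footstep."""
            rng = np.random.default_rng(seed)
            noise = rng.standard_normal(len(envelope))
            sos = butter(2, [400, 4000], btype="bandpass", fs=fs, output="sos")
            return sosfiltfilt(sos, noise) * envelope

        # Example: a synthetic 0.3 s "heel strike" stands in for the microphone input.
        fs = 16000
        t = np.arange(int(0.3 * fs)) / fs
        fake_step = np.exp(-t / 0.05) * np.sin(2 * np.pi * 90 * t)
        out = synth_gravel(grf_envelope(fake_step, fs), fs)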

  15. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...

  16. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.

  17. Directional loudness in an anechoic sound field, head-related transfer functions, and binaural summation

    DEFF Research Database (Denmark)

    Sivonen, Ville Pekka; Ellermeier, Wolfgang

    2006-01-01

    The effect of sound incidence angle on loudness was investigated using real sound sources positioned in an anechoic chamber. Eight normal-hearing listeners produced loudness matches between a frontal reference location and seven sources placed at other directions, both in the horizontal and median planes. Matches were obtained via a two-interval, adaptive forced-choice (2AFC) procedure for three center frequencies (0.4, 1 and 5 kHz) and two overall levels (45 and 65 dB SPL). The results showed that loudness is not constant over sound incidence angles, with directional sensitivity varying over a range of up to 10 dB, exhibiting considerable frequency dependence, but only minor effects of overall level. The pattern of results varied substantially between subjects, but was largely accounted for by variations in individual head-related transfer functions. Modeling of binaural loudness based...

  18. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is experimentally studied by using the time difference method. It is found that the sound velocities increase with increasing bubble diameter, and asymptotically approach the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for the sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to the scattering of the bubble wall is equivalently described as the effect of an additional length. This simple model reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that the increase of frequency markedly slows down the sound velocity, whereas the latter does not display a strong dependence on the solution concentration.

  19. Hemispheric lateralization in an analysis of speech sounds. Left hemisphere dominance replicated in Japanese subjects.

    Science.gov (United States)

    Koyama, S; Gunji, A; Yabe, H; Oiwa, S; Akahane-Yamada, R; Kakigi, R; Näätänen, R

    2000-09-01

    Evoked magnetic responses to speech sounds [R. Näätänen, A. Lehtokoski, M. Lennes, M. Cheour, M. Huotilainen, A. Iivonen, M. Vainio, P. Alku, R.J. Ilmoniemi, A. Luuk, J. Allik, J. Sinkkonen and K. Alho, Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature, 385 (1997) 432-434.] were recorded from 13 Japanese subjects (right-handed). Infrequently presented vowels ([o]) among repetitive vowels ([e]) elicited the magnetic counterpart of mismatch negativity, MMNm (Bilateral, nine subjects; Left hemisphere alone, three subjects; Right hemisphere alone, one subject). The estimated source of the MMNm was stronger in the left than in the right auditory cortex. The sources were located posteriorly in the left than in the right auditory cortex. These findings are consistent with the results obtained in Finnish [R. Näätänen, A. Lehtokoski, M. Lennes, M. Cheour, M. Huotilainen, A. Iivonen, M.Vainio, P.Alku, R.J. Ilmoniemi, A. Luuk, J. Allik, J. Sinkkonen and K. Alho, Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature, 385 (1997) 432-434.][T. Rinne, K. Alho, P. Alku, M. Holi, J. Sinkkonen, J. Virtanen, O. Bertrand and R. Näätänen, Analysis of speech sounds is left-hemisphere predominant at 100-150 ms after sound onset. Neuroreport, 10 (1999) 1113-1117.] and English [K. Alho, J.F. Connolly, M. Cheour, A. Lehtokoski, M. Huotilainen, J. Virtanen, R. Aulanko and R.J. Ilmoniemi, Hemispheric lateralization in preattentive processing of speech sounds. Neurosci. Lett., 258 (1998) 9-12.] subjects. Instead of the P1m observed in Finnish [M. Tervaniemi, A. Kujala, K. Alho, J. Virtanen, R.J. Ilmoniemi and R. Näätänen, Functional specialization of the human auditory cortex in processing phonetic and musical sounds: A magnetoencephalographic (MEG) study. Neuroimage, 9 (1999) 330-336.] and English [K. Alho, J. F. Connolly, M. Cheour, A. Lehtokoski, M. Huotilainen, J. Virtanen, R. Aulanko

  20. Linear theory of sound waves with evaporation and condensation

    International Nuclear Information System (INIS)

    Inaba, Masashi; Watanabe, Masao; Yano, Takeru

    2012-01-01

    An asymptotic analysis of a boundary-value problem of the Boltzmann equation for small Knudsen number is carried out for the case when an unsteady flow of polyatomic vapour induces reciprocal evaporation and condensation at the interface between the vapour and its liquid phase. The polyatomic version of the Boltzmann equation of the ellipsoidal statistical Bhatnagar–Gross–Krook (ES-BGK) model is used and the asymptotic expansions for small Knudsen numbers are applied on the assumptions that the Mach number is sufficiently small compared with the Knudsen number and the characteristic length scale divided by the characteristic time scale is comparable with the speed of sound in a reference state, as in the case of sound waves. In the leading order of approximation, we derive a set of the linearized Euler equations for the entire flow field and a set of the boundary-layer equations near the boundaries (the vapour–liquid interface and simple solid boundary). The boundary conditions for the Euler and boundary-layer equations are obtained at the same time when the solutions of the Knudsen layers on the boundaries are determined. The slip coefficients in the boundary conditions are evaluated for water vapour. A simple example of the standing sound wave in water vapour bounded by a liquid water film and an oscillating piston is demonstrated and the effect of evaporation and condensation on the sound wave is discussed. (paper)
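
    For orientation, the leading-order system referred to above has the familiar form of the linearized Euler equations of acoustics, written here in standard textbook notation (which may differ from the paper's):

        \frac{\partial \rho'}{\partial t} + \rho_0 \nabla\cdot\mathbf{u} = 0, \qquad
        \rho_0 \frac{\partial \mathbf{u}}{\partial t} + \nabla p' = 0, \qquad
        p' = c_0^2\,\rho' ,

    with the effect of evaporation and condensation entering through the boundary conditions supplied by the Knudsen-layer analysis.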

  1. OMNIDIRECTIONAL SOUND SOURCE

    DEFF Research Database (Denmark)

    1996-01-01

    A sound source comprising a loudspeaker (6) and a hollow coupler (4) with an open inlet which communicates with and is closed by the loudspeaker (6) and an open outlet, said coupler (4) comprising rigid walls which cannot respond to the sound pressures produced by the loudspeaker (6). According...

  2. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods used to determine the velocity are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  3. Product sounds : Fundamentals and application

    NARCIS (Netherlands)

    Ozcan-Vieira, E.

    2008-01-01

    Products are ubiquitous, so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about

  4. Spatial avoidance to experimental increase of intermittent and continuous sound in two captive harbour porpoises.

    Science.gov (United States)

    Kok, Annebelle C M; Engelberts, J Pamela; Kastelein, Ronald A; Helder-Hoek, Lean; Van de Voorde, Shirley; Visser, Fleur; Slabbekoorn, Hans

    2018-02-01

    The continuing rise in underwater sound levels in the oceans leads to disturbance of marine life. It is thought that one of the main impacts of sound exposure is the alteration of foraging behaviour of marine species, for example by deterring animals from a prey location, or by distracting them while they are trying to catch prey. So far, only limited knowledge is available on both mechanisms in the same species. The harbour porpoise (Phocoena phocoena) is a relatively small marine mammal that could quickly suffer fitness consequences from a reduction of foraging success. To investigate effects of anthropogenic sound on their foraging efficiency, we tested whether experimentally elevated sound levels would deter two captive harbour porpoises from a noisy pool into a quiet pool (Experiment 1) and reduce their prey-search performance, measured as prey-search time in the noisy pool (Experiment 2). Furthermore, we tested the influence of the temporal structure and amplitude of the sound on the avoidance response of both animals. Both individuals avoided the pool with elevated sound levels, but they did not show a change in search time for prey when trying to find a fish hidden in one of three cages. The combination of temporal structure and SPL caused variable patterns. When the sound was intermittent, increased SPL caused increased avoidance times. When the sound was continuous, avoidance was equal for all SPLs above a threshold of 100 dB re 1 μPa. Hence, we found no evidence for an effect of sound exposure on search efficiency, but sounds of different temporal patterns did cause spatial avoidance with distinct dose-response patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    Science.gov (United States)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated from an infinite vibrating plate attenuates exponentially with increasing distance from the surface of the vibrating plate. However, a discontinuity of the sound field exists at the edge of the finite vibrating plate in practice, which broadens the wavenumber spectrum. A sound wave radiates over the evanescent sound field because of broadening of the wavenumber spectrum. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on a window function that is utilized in the field of signal analysis for reducing the broadening of the frequency spectrum. The optimization calculation is necessary for the design of a window function suitable for suppressing sound radiation and securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. Therefore, we investigated a suitable method for calculating the sound pressure level at the far field to confirm the variation of the distribution of sound pressure level determined on the basis of the window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinite far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by the method used to calculate the distribution of the sound pressure level at an infinite far field using the wavenumber spectrum on the vibrating plate. According to the result of comparing the distributions of the sound pressure level in the cases with and without the window function, it was confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was
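
    The role of the window function is easiest to see in the wavenumber domain: tapering the particle-velocity distribution at the plate edges suppresses the spectral sidelobes that spill into the propagating region and radiate to the far field. A small numerical sketch follows; the plate size, sampling and the Hann taper are illustrative assumptions, not the optimized window derived in the paper.

        import numpy as np

        L = 0.3                    # plate length (m), assumed
        N = 512                    # samples across the plate
        x = np.linspace(0, L, N, endpoint=False)

        v_rect = np.ones(N)                              # uniform (rectangular) velocity distribution
        v_hann = 0.5 - 0.5 * np.cos(2 * np.pi * x / L)   # Hann-tapered distribution

        def wavenumber_spectrum_db(v):
            """Normalized magnitude (dB) of the spatial spectrum of a velocity distribution."""
            V = np.abs(np.fft.rfft(v, n=8 * N))
            return 20 * np.log10(V / np.max(V) + 1e-12)

        S_rect = wavenumber_spectrum_db(v_rect)
        S_hann = wavenumber_spectrum_db(v_hann)
        # Far from k = 0 the tapered plate leaks much less energy into the
        # propagating (radiating) part of the wavenumber spectrum.
        print(S_rect[200], S_hann[200])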

  6. Assessment of the health effects of low-frequency sounds and infra-sounds from wind farms. ANSES Opinion. Collective expertise report

    International Nuclear Information System (INIS)

    Lepoutre, Philippe; Avan, Paul; Cheveigne, Alain de; Ecotiere, David; Evrard, Anne-Sophie; Hours, Martine; Lelong, Joel; Moati, Frederique; Michaud, David; Toppila, Esko; Beugnet, Laurent; Bounouh, Alexandre; Feltin, Nicolas; Campo, Pierre; Dore, Jean-Francois; Ducimetiere, Pierre; Douki, Thierry; Flahaut, Emmanuel; Gaffet, Eric; Lafaye, Murielle; Martinsons, Christophe; Mouneyrac, Catherine; Ndagijimana, Fabien; Soyez, Alain; Yardin, Catherine; Cadene, Anthony; Merckel, Olivier; Niaudet, Aurelie; Cadene, Anthony; Saddoki, Sophia; Debuire, Brigitte; Genet, Roger

    2017-03-01

    The French Agency for Food, Environmental and Occupational Health and Safety (ANSES) reiterates that wind turbines emit infra-sounds (sound below 20 Hz) and low-frequency sounds. There are also other sources of infra-sound emissions that can be natural (wind in particular) or anthropogenic (heavy-goods vehicles, heat pumps, etc.). The measurement campaigns undertaken during the expert appraisal enabled these emissions from three wind farms to be characterised. In general, only very high intensities of infra-sound can be heard or perceived by humans. At the minimum distance (of 500 metres) separating homes from wind farm sites set out by the regulations, the infra-sounds produced by wind turbines do not exceed hearing thresholds. Therefore, the disturbance related to audible noise potentially felt by people around wind farms mainly relates to frequencies above 50 Hz. The expert appraisal showed that mechanisms for health effects grouped under the term 'vibro-acoustic disease', reported in certain publications, have no serious scientific basis. There have been very few scientific studies on the potential health effects of infra-sounds and low frequencies produced by wind turbines. The review of these experimental and epidemiological data did not find any adequate scientific arguments for the occurrence of health effects related to exposure to noise from wind turbines, other than disturbance related to audible noise and a nocebo effect, which can help explain the occurrence of stress-related symptoms experienced by residents living near wind farms. However, recently acquired knowledge on the physiology of the cochlea-vestibular system has revealed physiological effects in animals induced by exposure to high-intensity infra-sounds. These effects, while plausible in humans, have yet to be demonstrated for exposure to levels comparable to those observed in residents living near wind farms. Moreover, the connection between these physiological effects and the occurrence of

  7. Management implications of broadband sound in modulating wild silver carp (Hypophthalmichthys molitrix) behavior

    Science.gov (United States)

    Vetter, Brooke J.; Calfee, Robin D.; Mensinger, Allen F.

    2017-01-01

    Invasive silver carp (Hypophthalmichthys molitrix) dominate large regions of the Mississippi River drainage, outcompete native species, and are notorious for their prolific and unusual jumping behavior. High densities of juvenile and adult (~25 kg) carp are known to jump up to 3 m above the water surface in response to moving watercraft. Broadband sound recorded from an outboard motor (100 hp at 32 km/hr) can modulate their behavior in captivity; however, the response of wild silver carp to broadband sound has yet to be determined. In this experiment, broadband sound (0.06–10 kHz) elicited jumping behavior from silver carp in the Spoon River near Havana, IL independent of boat movement, indicating acoustic stimulus alone is sufficient to induce jumping. Furthermore, the number of jumping fish decreased with subsequent sound exposures. Understanding silver carp jumping is not only important from a behavioral standpoint, it is also critical to determine effective techniques for controlling this harmful species, such as herding fish into a net for removal.

  8. Investigation of fourth sound propagation in HeII in the presence of superflow

    International Nuclear Information System (INIS)

    Andrei, Y.E.

    1980-01-01

    The temperature dependence of a superflow-induced downshift of the fourth sound velocity in HeII confined in various restrictive media was measured. We found that the magnitude of the downshift strongly depends on the restrictive medium, whereas the temperature dependence is universal. The results are interpreted in terms of local superflow velocities approaching the Landau critical velocity. This model provides an understanding of the nature of the downshift and correctly predicts the temperature dependence. The results show that the Landau excitation model, even when used at high velocities, where interactions between elementary excitations are substantial, yields good agreement with experiment when a first-order correction is introduced to account for these interactions. In a separate series of experiments, fourth sound-like propagation in HeII in a grafoil-filled resonator was observed. The sound velocity was found to be more than an order of magnitude smaller than that of ordinary fourth sound. This significant reduction is explained in terms of a model in which the pore structure in grafoil is pictured as an ensemble of coupled Helmholtz resonators.

  9. 33 CFR 334.410 - Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Albemarle Sound, Pamlico Sound... AND RESTRICTED AREA REGULATIONS § 334.410 Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations. (a) Target areas—(1) North Landing River (Currituck Sound...

  10. Sounding the Alert: Designing an Effective Voice for Earthquake Early Warning

    Science.gov (United States)

    Burkett, E. R.; Given, D. D.

    2015-12-01

    The USGS is working with partners to develop the ShakeAlert Earthquake Early Warning (EEW) system (http://pubs.usgs.gov/fs/2014/3083/) to protect life and property along the U.S. West Coast, where the highest national seismic hazard is concentrated. EEW sends an alert that shaking from an earthquake is on its way (in seconds to tens of seconds) to allow recipients or automated systems to take appropriate actions at their location to protect themselves and/or sensitive equipment. ShakeAlert is transitioning toward a production prototype phase in which test users might begin testing applications of the technology. While a subset of uses will be automated (e.g., opening fire house doors), other applications will alert individuals by radio or cellphone notifications and require behavioral decisions to protect themselves (e.g., "Drop, Cover, Hold On"). The project needs to select and move forward with a consistent alert sound to be widely and quickly recognized as an earthquake alert. In this study we combine EEW science and capabilities with an understanding of human behavior from the social and psychological sciences to provide insight toward the design of effective sounds to help best motivate proper action by alert recipients. We present a review of existing research and literature, compiled as considerations and recommendations for alert sound characteristics optimized for EEW. We do not yet address wording of an audible message about the earthquake (e.g., intensity and timing until arrival of shaking or possible actions), although it will be a future component to accompany the sound. We consider pitch(es), loudness, rhythm, tempo, duration, and harmony. Important behavioral responses to sound to take into account include that people respond to discordant sounds with anxiety, can be calmed by harmony and softness, and are innately alerted by loud and abrupt sounds, although levels high enough to be auditory stressors can negatively impact human judgment.

  11. Demonstration of slow sound propagation and acoustic transparency with a series of detuned resonators

    DEFF Research Database (Denmark)

    Santillan, Arturo Orozco; Bozhevolnyi, Sergey I.

    2014-01-01

    We present experimental results demonstrating the phenomenon of acoustic transparency with a significant slowdown of sound propagation realized with a series of paired detuned acoustic resonators (DAR) side-attached to a waveguide. The phenomenon mimics the electromagnetically induced transparency...... than 20 dB on both sides of the transparency window, and we quantify directly (using a pulse propagation) the acoustic slowdown effect, resulting in the sound group velocity of 9.8 m/s (i.e. in the group refractive index of 35). We find very similar values of the group refractive index by using...

  12. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
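
    As a minimal sketch of the excess-level bookkeeping described above (not code from the article), the excess sound level is simply the level at the receiver minus the free-field level at the same position; when both levels come from simulations, numerical dissipation largely cancels in the difference:

```python
import numpy as np

def spl(p_rms, p_ref=2e-5):
    """Sound pressure level in dB from an RMS pressure in Pa."""
    return 20.0 * np.log10(p_rms / p_ref)

def excess_level(p_rms_scene, p_rms_free):
    """Excess sound level: level in the full scene (ground, barrier, wind)
    minus the free-field level at the same receiver, both taken from
    simulations so that numerical dissipation largely cancels."""
    return spl(p_rms_scene) - spl(p_rms_free)

# Illustrative RMS pressures at one receiver position (made-up numbers)
print(excess_level(0.02, 0.01))  # about +6 dB, e.g. a constructive ground reflection
```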

  13. Synthesis of walking sounds for alleviating gait disturbances in Parkinson's disease.

    Science.gov (United States)

    Rodger, Matthew W M; Young, William R; Craig, Cathy M

    2014-05-01

    Managing gait disturbances in people with Parkinson's disease is a pressing challenge, as symptoms can contribute to injury and morbidity through an increased risk of falls. While drug-based interventions have limited efficacy in alleviating gait impairments, certain nonpharmacological methods, such as cueing, can also induce transient improvements to gait. The approach adopted here is to use computationally generated sounds to help guide and improve walking actions. The first method described uses recordings of force data taken from the steps of a healthy adult, which in turn were used to synthesize realistic gravel-footstep sounds that represented different spatio-temporal parameters of gait, such as step duration and step length. The second method involves sonifying, in real time, the swing phase of gait, using real-time motion-capture data to control a sound synthesis engine. Both approaches explore how simple but rich auditory representations of action-based events can be used by people with Parkinson's to guide and improve the quality of their walking, reducing the risk of falls and injury. Studies with Parkinson's disease patients are reported which show positive results for both techniques in reducing step length variability. Potential future directions for how these sound approaches can be used to manage gait disturbances in Parkinson's are also discussed.

  14. Seafloor environments in the Long Island Sound estuarine system

    Science.gov (United States)

    Knebel, H.J.; Signell, R.P.; Rendigs, R. R.; Poppe, L.J.; List, J.H.

    1999-01-01

    Four categories of modern seafloor sedimentary environments have been identified and mapped across the large, glaciated, topographically complex Long Island Sound estuary by means of an extensive regional set of sidescan sonographs, bottom samples, and video-camera observations and supplemental marine-geologic and modeled physical-oceanographic data. (1) Environments of erosion or nondeposition contain sediments which range from boulder fields to gravelly coarse-to-medium sands and appear on the sonographs either as patterns with isolated reflections (caused by outcrops of glacial drift and bedrock) or as patterns of strong backscatter (caused by coarse lag deposits). Areas of erosion or nondeposition were found across the rugged seafloor at the eastern entrance of the Sound and atop bathymetric highs and within constricted depressions in other parts of the basin. (2) Environments of bedload transport contain mostly coarse-to-fine sand with only small amounts of mud and are depicted by sonograph patterns of sand ribbons and sand waves. Areas of bedload transport were found primarily in the eastern Sound where bottom currents have sculptured the surface of a Holocene marine delta and are moving these sediments toward the WSW into the estuary. (3) Environments of sediment sorting and reworking comprise variable amounts of fine sand and mud and are characterized either by patterns of moderate backscatter or by patterns with patches of moderate-to-weak backscatter that reflect a combination of erosion and deposition. Areas of sediment sorting and reworking were found around the periphery of the zone of bedload transport in the eastern Sound and along the southern nearshore margin. They also are located atop low knolls, on the flanks of shoal complexes, and within segments of the axial depression in the western Sound. (4) Environments of deposition are blanketed by muds and muddy fine sands that produce patterns of uniformly weak backscatter. Depositional areas occupy

  15. Sounding out the logo shot

    OpenAIRE

    Nicolai Jørgensgaard Graakjær

    2013-01-01

    This article focuses on how sound in combination with visuals (i.e. ‘branding by’) may possibly affect the signifying potentials (i.e. ‘branding effect’) of products and corporate brands (i.e. ‘branding of’) during logo shots in television commercials (i.e. ‘branding through’). This particular focus adds both to the understanding of sound in television commercials and to the understanding of sound brands. The article firstly presents a typology of sounds. Secondly, this typology is applied...

  16. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  17. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  18. SoleSound

    DEFF Research Database (Denmark)

    Zanotto, Damiano; Turchet, Luca; Boggs, Emily Marie

    2014-01-01

    This paper introduces the design of SoleSound, a wearable system designed to deliver ecological, audio-tactile, underfoot feedback. The device, which primarily targets clinical applications, uses an audio-tactile footstep synthesis engine informed by the readings of pressure and inertial sensors...... embedded in the footwear to integrate enhanced feedback modalities into the authors' previously developed instrumented footwear. The synthesis models currently implemented in the SoleSound simulate different ground surface interactions. Unlike similar devices, the system presented here is fully portable...

  19. Sound engineering for diesel engines; Sound Engineering an Dieselmotoren

    Energy Technology Data Exchange (ETDEWEB)

    Enderich, A.; Fischer, R. [MAHLE Filtersysteme GmbH, Stuttgart (Germany)

    2006-07-01

    The strong acceptance of vehicles powered by turbo-charged diesel engines encourages several manufacturers to think about sportive diesel concepts. The approach of suppressing unpleasant noise through insulation measures alone is not adequate to satisfy sportive needs, because the acoustics then cannot follow the engine's performance. This report documents that it is possible to give diesel-powered vehicles a sportive sound characteristic by using an advanced MAHLE motor-sound-system with a pressure-resistant membrane and an integrated load-controlled flap. With this, the specific acoustic disadvantages of the diesel engine, like the ''diesel knock'' or a rough engine running, can be masked. However, the application of a motor-sound-system must not negate the original character of the diesel engine concept, but should accentuate its strong torque characteristic in the middle engine speed range. (orig.)

  20. Sonic mediations: body, sound, technology

    NARCIS (Netherlands)

    Birdsall, C.; Enns, A.

    2008-01-01

    Sonic Mediations: Body, Sound, Technology is a collection of original essays that represents an invaluable contribution to the burgeoning field of sound studies. While sound is often posited as having a bridging function, as a passive in-between, this volume invites readers to rethink the concept of

  1. System for actively reducing sound

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2005-01-01

    A system for actively reducing sound from a primary noise source, such as traffic noise, comprising: a loudspeaker connector for connecting to at least one loudspeaker for generating anti-sound for reducing said noisy sound; a microphone connector for connecting to at least a first microphone placed

  2. Effects of multiple congruent cues on concurrent sound segregation during passive and active listening: an event-related potential (ERP) study.

    Science.gov (United States)

    Kocsis, Zsuzsanna; Winkler, István; Szalárdy, Orsolya; Bendixen, Alexandra

    2014-07-01

    In two experiments, we assessed the effects of combining different cues of concurrent sound segregation on the object-related negativity (ORN) and the P400 event-related potential components. Participants were presented with sequences of complex tones, half of which contained some manipulation: one or two harmonic partials were mistuned, delayed, or presented from a different location than the rest. In separate conditions, one, two, or three of these manipulations were combined. Participants watched a silent movie (passive listening) or reported after each tone whether they perceived one or two concurrent sounds (active listening). ORN was found in almost all conditions except for location difference alone during passive listening. Combining several cues or manipulating more than one partial consistently led to sub-additive effects on the ORN amplitude. These results support the view that ORN reflects a combined, feature-unspecific assessment of the auditory system regarding the contribution of two sources to the incoming sound. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Color improves ‘visual’ acuity via sound

    Directory of Open Access Journals (Sweden)

    Shelly Levy-Tzedek

    2014-11-01

    Visual-to-auditory sensory substitution devices (SSDs) convey visual information via sound, with the primary goal of making visual information accessible to blind and visually impaired individuals. We developed the EyeMusic SSD, which transforms shape, location and color information into musical notes. We tested the 'visual' acuity of 23 individuals (13 blind and 10 blindfolded sighted) on the Snellen tumbling-E test with the EyeMusic. Participants were asked to determine the orientation of the letter 'E'. The test was repeated twice: in one test, the letter 'E' was drawn with a single color (white), and in the other test, with two colors (red and white). In the latter case, the vertical line in the letter, when upright, was drawn in red, with the three horizontal lines drawn in white. We found no significant differences in performance between the blind and the sighted groups. We found a significant effect of the added color on the 'visual' acuity. The highest acuity participants reached in the monochromatic test was 20/800, whereas with the added color, acuity doubled to 20/400. We conclude that color improves 'visual' acuity via sound.

  4. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    Sounds in the natural environment form an important class of biologically relevant nonstationary signals. We propose a dynamic spectral measure to characterize the spectral dynamics of such non-stationary sound signals and classify them based on rate of change of spectral dynamics. We categorize sounds with slowly ...

  5. Controlling sound with acoustic metamaterials

    DEFF Research Database (Denmark)

    Cummer, Steven A. ; Christensen, Johan; Alù, Andrea

    2016-01-01

    Acoustic metamaterials can manipulate and control sound waves in ways that are not possible in conventional materials. Metamaterials with zero, or even negative, refractive index for sound offer new possibilities for acoustic imaging and for the control of sound at subwavelength scales....... The combination of transformation acoustics theory and highly anisotropic acoustic metamaterials enables precise control over the deformation of sound fields, which can be used, for example, to hide or cloak objects from incident acoustic energy. Active acoustic metamaterials use external control to create......-scale metamaterial structures and converting laboratory experiments into useful devices. In this Review, we outline the designs and properties of materials with unusual acoustic parameters (for example, negative refractive index), discuss examples of extreme manipulation of sound and, finally, provide an overview...

  6. Persistent Thalamic Sound Processing Despite Profound Cochlear Denervation

    Directory of Open Access Journals (Sweden)

    Anna R. Chambers

    2016-08-01

    Neurons at higher stages of sensory processing can partially compensate for a sudden drop in input from the periphery through a homeostatic plasticity process that increases the gain on weak afferent inputs. Even after a profound unilateral auditory neuropathy where > 95% of synapses between auditory nerve fibers and inner hair cells have been eliminated with ouabain, central gain can restore the cortical processing and perceptual detection of basic sounds delivered to the denervated ear. In this model of profound auditory neuropathy, cortical processing and perception recover despite the absence of an auditory brainstem response (ABR) or brainstem acoustic reflexes, and only a partial recovery of sound processing at the level of the inferior colliculus (IC), an auditory midbrain nucleus. In this study, we induced a profound cochlear neuropathy with ouabain and asked whether central gain enabled a compensatory plasticity in the auditory thalamus comparable to the full recovery of function previously observed in the auditory cortex (ACtx), the partial recovery observed in the IC, or something different entirely. Unilateral ouabain treatment in adult mice effectively eliminated the ABR, yet robust sound-evoked activity persisted in a minority of units recorded from the contralateral medial geniculate body (MGB) of awake mice. Sound-driven MGB units could decode moderate and high-intensity sounds with accuracies comparable to sham-treated control mice, but low-intensity classification was near chance. Pure tone receptive fields and synchronization to broadband pulse trains also persisted, albeit with significantly reduced quality and precision, respectively. MGB decoding of temporally modulated pulse trains and speech tokens were both greatly impaired in ouabain-treated mice. Taken together, the absence of an ABR belied a persistent auditory processing at the level of the MGB that was likely enabled through increased central gain. Compensatory

  7. A novel method for direct localized sound speed measurement using the virtual source paradigm

    DEFF Research Database (Denmark)

    Byram, Brett; Trahey, Gregg E.; Jensen, Jørgen Arendt

    2007-01-01

    ... registered virtual detector. Between a pair of registered virtual detectors a spherical wave is propagated. By beamforming the received data the time of flight between the two virtual sources can be calculated. From this information the local sound speed can be estimated. Validation of the estimator used both phantom and simulation results. The phantom consisted of two wire targets located near the transducer's axis at depths of 17 and 28 mm. Using this phantom the sound speed between the wires was measured for a homogeneous (water) medium and for two inhomogeneous (DB-grade castor oil and water) mediums. The inhomogeneous mediums were arranged as an oil layer, one 6 mm thick and the other 11 mm thick, on top of a water layer. To complement the phantom studies, sources of error for spatial registration of virtual detectors were simulated. The sources of error presented here are multiple sound ...
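
    A minimal sketch of the final estimation step described above (illustrative numbers, not values from the paper): once the time of flight of the spherical wave between two spatially registered virtual detectors is known, the local sound speed is the detector separation divided by that time.

```python
def local_sound_speed(separation_m, time_of_flight_s):
    """Estimate the local speed of sound between two registered
    virtual detectors from the measured time of flight."""
    return separation_m / time_of_flight_s

# Illustrative: detectors 11 mm apart, wavefront arrives 7.4 microseconds later
print(local_sound_speed(0.011, 7.4e-6))  # ~1486 m/s, a water-like value
```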

  8. Mercury in Sediment, Water, and Biota of Sinclair Inlet, Puget Sound, Washington, 1989-2007

    Science.gov (United States)

    Paulson, Anthony J.; Keys, Morgan E.; Scholting, Kelly L.

    2010-01-01

    Historical records of mercury contamination in dated sediment cores from Sinclair Inlet are coincident with activities at the U.S. Navy Puget Sound Naval Shipyard; peak total mercury concentrations occurred around World War II. After World War II, better metallurgical management practices and environmental regulations reduced mercury contamination, but total mercury concentrations in surface sediment of Sinclair Inlet have decreased slowly because of the low rate of sedimentation relative to the vertical mixing within sediment. The slopes of linear regressions between the total mercury and total organic carbon concentrations of sediment offshore of Puget Sound urban areas were the best indicator of general mercury contamination above pre-industrial levels. Prior to the 2000-01 remediation, this indicator placed Sinclair Inlet in the tier of estuaries with the highest level of mercury contamination, along with Bellingham Bay in northern Puget Sound and Elliott Bay near Seattle. This indicator also suggests that the 2000/2001 remediation dredging had a significant positive effect on Sinclair Inlet as a whole. In 2007, about 80 percent of the area of the Bremerton naval complex had sediment total mercury concentrations within about 0.5 milligrams per kilogram of the Sinclair Inlet regression. Three areas adjacent to the waterfront of the Bremerton naval complex have total mercury concentrations above this range and indicate a possible terrestrial source from waterfront areas of the Bremerton naval complex. Total mercury concentrations in unfiltered Sinclair Inlet marine waters are about three times higher than those of central Puget Sound, but the small number of samples and the complex physical and geochemical processes make it difficult to interpret the geographical distribution of mercury in marine waters from Sinclair Inlet. Total mercury concentrations in various biota species were compared among geographical locations and included data of composite samples, individual

  9. Sound intensity as a function of sound insulation partition

    OpenAIRE

    Cvetkovic , S.; Prascevic , R.

    1994-01-01

    In modern engineering practice, the sound insulation of partitions is a synthesis of theory and of experience acquired through field and laboratory measurement procedures. The scientific and research community treats sound insulation in the context of the emission and propagation of acoustic energy in media with different acoustic impedances. In this paper, starting from the essence of the physical concept of intensity as an energy vector, the authors g...

  10. Heart Sound Localization and Reduction in Tracheal Sounds by Gabor Time-Frequency Masking

    OpenAIRE

    SAATCI, Esra; Akan, Aydın

    2018-01-01

    Background and aim: Respiratory sounds, i.e. tracheal and lung sounds, have been of great interest due to their diagnostic values as well as the potential of their use in the estimation of the respiratory dynamics (mainly airflow). Thus the aim of the study is to present a new method to filter the heart sound interference from the tracheal sounds. Materials and methods: Tracheal sounds and airflow signals were collected by using an accelerometer from 10 healthy subjects. Tracheal sounds were then pr...
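
    As a hedged sketch of the general technique class (time-frequency masking; a Gabor transform is closely related to a windowed STFT), heart-sound bursts could be suppressed by attenuating low-frequency time-frequency bins whose energy exceeds a threshold. This illustrates the idea only; the band edge, threshold and attenuation below are assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.signal import stft, istft

def suppress_heart_sounds(x, fs, f_max_heart=150.0, thresh_db=6.0, nperseg=256):
    """Attenuate STFT bins below f_max_heart Hz whose magnitude exceeds
    the low-band median by thresh_db, then resynthesize the signal."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    low = f <= f_max_heart                       # low-frequency rows
    ref = np.median(mag[low]) + 1e-12            # robust reference level
    loud_low = (mag > ref * 10 ** (thresh_db / 20.0)) & low[:, None]
    mask = np.ones_like(mag)
    mask[loud_low] = 0.1                         # ~20 dB attenuation of bursts
    _, y = istft(Z * mask, fs=fs, nperseg=nperseg)
    return y

# Synthetic usage example: 10 s of noise standing in for a tracheal recording
fs = 4000
y = suppress_heart_sounds(np.random.randn(10 * fs), fs)
```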

  11. Interactive physically-based sound simulation

    Science.gov (United States)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously-intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation
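
    One widely used way to synthesize the impact sounds produced by elastic surface vibrations of the kind mentioned above is modal synthesis, where a struck object is modelled as a bank of exponentially damped sinusoids. The sketch below is a generic illustration of that idea with made-up mode parameters, not code from the dissertation.

```python
import numpy as np

def modal_impact(freqs_hz, dampings, gains, fs=44100, duration=1.0):
    """Impact sound as a sum of exponentially damped sinusoids, one per mode."""
    t = np.arange(int(fs * duration)) / fs
    out = np.zeros_like(t)
    for f, d, g in zip(freqs_hz, dampings, gains):
        out += g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))

# Illustrative "metallic" object with a few inharmonic modes
click = modal_impact(freqs_hz=[523.0, 1187.0, 2213.0],
                     dampings=[6.0, 9.0, 14.0],
                     gains=[1.0, 0.6, 0.3])
```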

  12. 27 CFR 9.151 - Puget Sound.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Puget Sound. 9.151 Section... Sound. (a) Name. The name of the viticultural area described in this section is “Puget Sound.” (b) Approved maps. The appropriate maps for determining the boundary of the Puget Sound viticultural area are...

  13. How Pleasant Sounds Promote and Annoying Sounds Impede Health : A Cognitive Approach

    NARCIS (Netherlands)

    Andringa, Tjeerd C.; Lanser, J. Jolie L.

    2013-01-01

    This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of

  14. Of Sound Mind: Mental Distress and Sound in Twentieth-Century Media Culture

    NARCIS (Netherlands)

    Birdsall, C.; Siewert, S.

    2013-01-01

    This article seeks to specify the representation of mental disturbance in sound media during the twentieth century. It engages perspectives on societal and technological change across the twentieth century as crucial for aesthetic strategies developed in radio and sound film production. The analysis

  15. Sounds scary? Lack of habituation following the presentation of novel sounds.

    Directory of Open Access Journals (Sweden)

    Tine A Biedenweg

    BACKGROUND: Animals typically show less habituation to biologically meaningful sounds than to novel signals. We might therefore expect that acoustic deterrents should be based on natural sounds. METHODOLOGY: We investigated responses by western grey kangaroos (Macropus fuliginosus) towards playback of natural sounds (alarm foot stomps and Australian raven (Corvus coronoides) calls) and artificial sounds (faux snake hiss and bull whip crack). We then increased rate of presentation to examine whether animals would habituate. Finally, we varied frequency of playback to investigate optimal rates of delivery. PRINCIPAL FINDINGS: Nine behaviors clustered into five Principal Components. PC factors 1 and 2 (animals alert or looking, or hopping and moving out of area) accounted for 36% of variance. PC factor 3 (eating cessation, taking flight, movement out of area) accounted for 13% of variance. Factors 4 and 5 (relaxing, grooming and walking; 12 and 11% of variation, respectively) discontinued upon playback. The whip crack was most evocative; eating was reduced from 75% of time spent prior to playback to 6% following playback (post alarm stomp: 32%, raven call: 49%, hiss: 75%). Additionally, 24% of individuals took flight and moved out of area (50 m radius) in response to the whip crack (foot stomp: 0%, raven call: 8% and 4%, hiss: 6%). Increasing rate of presentation (12x/min × 2 min) caused 71% of animals to move out of the area. CONCLUSIONS/SIGNIFICANCE: The bull whip crack, an artificial sound, was as effective as the alarm stomp at eliciting aversive behaviors. Kangaroos did not fully habituate despite hearing the signal up to 20x/min. Highest rates of playback did not elicit the greatest responses, suggesting that 'more is not always better'. Ultimately, by utilizing both artificial and biological sounds, predictability may be masked or offset, so that habituation is delayed and more effective deterrents may be produced.

  16. Mapping symbols to sounds: electrophysiological correlates of the impaired reading process in dyslexia

    Directory of Open Access Journals (Sweden)

    Andreas Widmann

    2012-03-01

    Dyslexic and control first grade school children were compared in a Symbol-to-Sound matching test based on a nonlinguistic audiovisual training which is known to have a remediating effect on dyslexia. Visual symbol patterns had to be matched with predicted sound patterns. Sounds incongruent with the corresponding visual symbol (thus not matching the prediction) elicited the N2b and P3a event-related potential (ERP) components relative to congruent sounds in control children. Their ERPs resembled the ERP effects previously reported for healthy adults with this paradigm. In dyslexic children, N2b onset latency was delayed and its amplitude significantly reduced over the left hemisphere, whereas P3a was absent. Moreover, N2b amplitudes significantly correlated with the reading skills. ERPs to sound changes in a control condition were unaffected. In addition, correctly predicted sounds, that is, sounds that are congruent with the visual symbol, elicited an early induced auditory gamma band response (GBR) reflecting synchronization of brain activity in normal-reading children, as previously observed in healthy adults. However, dyslexic children showed no GBR. This indicates that visual symbolic and auditory sensory information are not integrated into a unitary audiovisual object representation in them. Finally, incongruent sounds were followed by a later desynchronization of brain activity in the gamma band in both groups. This desynchronization was significantly larger in dyslexic children. Although both groups accomplished the task successfully, remarkable group differences in brain responses suggest that normal-reading children and dyslexic children recruit (partly) different brain mechanisms when solving the task. We propose that abnormal ERPs and GBRs in dyslexic readers indicate a deficit resulting in a widespread impairment in processing and integrating auditory and visual information and contributing to the reading impairment in dyslexia.

  17. The Effect of Superior Semicircular Canal Dehiscence on Intracochlear Sound Pressures

    Science.gov (United States)

    Nakajima, Hideko Heidi; Pisano, Dominic V.; Merchant, Saumil N.; Rosowski, John J.

    2011-11-01

    Semicircular canal dehiscence (SCD) is a pathological opening in the bony wall of the inner ear that can result in conductive hearing loss. The hearing loss is variable across patients, and the precise mechanism and source of variability are not fully understood. We use intracochlear sound pressure measurements in cadaveric preparations to study the effects of SCD size. Simultaneous measurement of basal intracochlear sound pressures in scala vestibuli (SV) and scala tympani (ST) quantifies the complex differential pressure across the cochlear partition, the stimulus that excites the partition. Sound-induced pressures in SV and ST, as well as stapes velocity and ear-canal pressure, are measured simultaneously for various sizes of SCD, followed by SCD patching. At low frequencies (<600 Hz) our results show that SCD decreases the pressure in both SV and ST, as well as the differential pressure, and these effects become more pronounced as dehiscence size is increased. For frequencies above 1 kHz, the smallest pinpoint dehiscence can have the larger effect on the differential pressure in some ears. These effects due to SCD are reversible by patching the dehiscence.
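
    The quantity driving the cochlear partition in these measurements is the complex difference between the scala vestibuli and scala tympani pressures. A minimal sketch of that bookkeeping, with illustrative phasor values rather than measured data:

```python
import numpy as np

def differential_drive_db(p_sv, p_st, p_ec):
    """Magnitude of the complex differential pressure across the cochlear
    partition, in dB relative to the ear-canal pressure."""
    return 20.0 * np.log10(np.abs(p_sv - p_st) / np.abs(p_ec))

# Illustrative complex pressure phasors (Pa) at a single frequency
p_sv = 1.0 * np.exp(1j * 0.0)
p_st = 0.3 * np.exp(1j * 0.5)
print(differential_drive_db(p_sv, p_st, p_ec=0.05))  # ~23.5 dB re ear canal
```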

  18. The effect of sound speed profile on shallow water shipping sound maps

    NARCIS (Netherlands)

    Sertlek, H.Ö.; Binnerts, B.; Ainslie, M.A.

    2016-01-01

    Sound mapping over large areas can be computationally expensive because of the large number of sources and large source-receiver separations involved. In order to facilitate computation, a simplifying assumption sometimes made is to neglect the sound speed gradient in shallow water. The accuracy of

  19. Seeing the sound after visual loss: functional MRI in acquired auditory-visual synesthesia.

    Science.gov (United States)

    Yong, Zixin; Hsieh, Po-Jang; Milea, Dan

    2017-02-01

    Acquired auditory-visual synesthesia (AVS) is a rare neurological sign, in which specific auditory stimulation triggers visual experience. In this study, we used event-related fMRI to explore the brain regions correlated with acquired monocular sound-induced phosphenes, which occurred 2 months after unilateral visual loss due to an ischemic optic neuropathy. During the fMRI session, 1-s pure tones at various pitches were presented to the patient, who was asked to report occurrence of sound-induced phosphenes by pressing one of the two buttons (yes/no). The brain activation during phosphene-experienced trials was contrasted with non-phosphene trials and compared to results obtained in one healthy control subject who underwent the same fMRI protocol. Our results suggest, for the first time, that acquired AVS occurring after visual impairment is associated with bilateral activation of primary and secondary visual cortex, possibly due to cross-wiring between auditory and visual sensory modalities.

  20. Sound wave transmission (image)

    Science.gov (United States)

    When sound waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain, where they are interpreted by the brain as sound. The hearing mechanisms within the inner ear can ...

  1. Historical trends in the accumulation of chemicals in Puget Sound sediment

    International Nuclear Information System (INIS)

    Crecelius, E.; Cullinan, V.; Lefkovitz, L.; Pevan, C.

    1995-01-01

    As human activity in and around Puget Sound increased, so did the contaminant levels in the sediment. Sediment cores collected in 1982 revealed that inputs of chemicals to the Sound, including lead (Pb), mercury (Hg), silver (Ag), copper (Cu) and petroleum hydrocarbons, began to increase above background in the late 1800s and peaked between 1945 and 1965. Synthetic organic compounds, such as polychlorinated biphenyls (PCBs) and DDT, first appeared in sediments deposited in the 1930s and reached a maximum in the 1960s. The presence of the subsurface maximum concentrations suggests that pollution-control strategies have improved the sediment quality of central Puget Sound. Additional sediment coring was performed in 1991 and samples were collected at six locations in the main basin of Puget Sound. Sediment ages were determined using Pb-210 radioisotope dating. Sedimentation rates were approximately 1 to 2 cm/yr and deposition rates ranged from 480 to 1000 mg/cm2/yr. The contaminant level of many metals has continued to decrease steadily in the last 10 years. The mean concentration of Pb, for example, has decreased upwards of 20% during this period, with an overall drop of about 30% since its maximum concentration in the 1950s and 1960s. Hydrocarbon contamination appears to parallel that of heavy metals. Significant decreases in PCB and DDT concentrations were also observed, with a 2- to 4-fold decrease in surficial sediment concentrations. Concentrations of Ag, As, Cu, Hg, Sb, and Zn have declined significantly in the last 20 years, lending support to the hypothesis that the strengthening of environmental regulations since 1970 has influenced the water quality of Puget Sound.

  2. Sound & The Society

    DEFF Research Database (Denmark)

    Schulze, Holger

    2014-01-01

    How are those sounds you hear right now socially constructed and evaluated, how are they architecturally conceptualized, and how dependent on urban planning, industrial developments and political decisions are they really? How is your ability to hear intertwined with social interactions and their professional design? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Nina Backmann, Jochen Bonz, Stefan Krebs, Esther Schelander & Holger Schulze.

  3. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    1. Introduction
    2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium
    3. Predicting the Acoustical Properties of Outdoor Ground Surfaces
    4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models
    5. Predicting Effects of Source Characteristics on Outdoor Sound
    6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects
    7. Influence of Source Motion on Ground Effect and Diffraction
    8. Predicting Effects of Mixed Impedance Ground
    9. Predicting the Performance of Outdoor Noise Barriers
    10. Predicting Effects of Vegetation, Trees and Turbulence
    11. Analytical Approximations including Ground Effect, Refraction and Turbulence
    12. Prediction Schemes
    13. Predicting Sound in an Urban Environment.

  4. Physical processes in a coupled bay-estuary coastal system: Whitsand Bay and Plymouth Sound

    Science.gov (United States)

    Uncles, R. J.; Stephens, J. A.; Harris, C.

    2015-09-01

    Whitsand Bay and Plymouth Sound are located in the southwest of England. The Bay and Sound are separated by the ∼2-3 km-wide Rame Peninsula and connected by ∼10-20 m-deep English Channel waters. Results are presented from measurements of waves and currents, drogue tracking, surveys of salinity, temperature and turbidity during stratified and unstratified conditions, and bed sediment surveys. 2D and 3D hydrodynamic models are used to explore the generation of tidally- and wind-driven residual currents, flow separation and the formation of the Rame eddy, and the coupling between the Bay and the Sound. Tidal currents flow around the Rame Peninsula from the Sound to the Bay between approximately 3 h before to 2 h after low water and form a transport path between them that conveys lower salinity, higher turbidity waters from the Sound to the Bay. These waters are then transported into the Bay as part of the Bay-mouth limb of the Rame eddy and subsequently conveyed to the near-shore, east-going limb and re-circulated back towards Rame Head. The Simpson-Hunter stratification parameter indicates that much of the Sound and Bay are likely to stratify thermally during summer months. Temperature stratification in both is pronounced during summer and is largely determined by coastal, deeper-water stratification offshore. Small tidal stresses in the Bay are unable to move bed sediment of the observed sizes. However, the Bay and Sound are subjected to large waves that are capable of driving a substantial bed-load sediment transport. Measurements show relatively low levels of turbidity, but these respond rapidly to, and have a strong correlation with, wave height.
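
    For reference, the Simpson-Hunter stratification parameter mentioned above is commonly written (standard textbook form, not a formula quoted from the paper) as

    $$ S = \log_{10}\!\left(\frac{h}{\bar{u}^{3}}\right), $$

    where $h$ is the water depth and $\bar{u}$ is the amplitude of the depth-mean tidal current; large values indicate that tidal stirring is too weak to prevent seasonal thermal stratification, with tidal mixing fronts typically reported near $S \approx 2.5$ to $3$ in SI units.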

  5. An analysis of collegiate band directors' exposure to sound pressure levels

    Science.gov (United States)

    Roebuck, Nikole Moore

    Noise-induced hearing loss (NIHL) is a significant but unfortunately common occupational hazard. The purpose of the current study was to measure the magnitude of sound pressure levels generated within a collegiate band room and determine if those sound pressure levels are of a magnitude that exceeds the policy standards and recommendations of the Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH). In addition, reverberation times were measured and analyzed in order to determine the appropriateness of acoustical conditions for the band rehearsal environment. Sound pressure measurements were taken from the rehearsals of seven collegiate marching bands. Single-sample t tests were conducted to compare the sound pressure levels of all bands to the noise exposure standards of OSHA and NIOSH. Multiple regression analyses were conducted in order to determine the effect of the band room's conditions on the sound pressure levels and reverberation times. Time-weighted averages (TWA), noise percentage doses, and peak levels were also collected. The mean Leq for all band directors was 90.5 dBA. The total accumulated noise percentage dose for all band directors was 77.6% of the maximum allowable daily noise dose under the OSHA standard, and the total calculated TWA for all band directors was 88.2% of the maximum allowable daily noise dose under the OSHA standard. The total accumulated noise percentage dose for all band directors was 152.1% of the maximum allowable daily noise dose under the NIOSH standards, and the total calculated TWA for all band directors was 93 dBA under the NIOSH standard. Multiple regression analysis revealed that the room volume, the level of acoustical treatment, and the mean room reverberation time predicted 80% of the variance in sound pressure levels in this study.
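
    To make the two exposure metrics above concrete, here is a hedged sketch of the standard noise-dose arithmetic: OSHA uses a 90 dBA criterion level with a 5 dB exchange rate, NIOSH an 85 dBA criterion with a 3 dB exchange rate, and the daily dose sums time-at-level against the allowable duration at each level. The exposure segments in the example are illustrative, not data from the study.

```python
def allowed_hours(level_dba, criterion=90.0, exchange=5.0):
    """Permissible daily exposure at a given level
    (OSHA: 90 dBA criterion, 5 dB exchange; NIOSH: 85 dBA, 3 dB)."""
    return 8.0 / 2 ** ((level_dba - criterion) / exchange)

def daily_dose_percent(exposures, criterion=90.0, exchange=5.0):
    """Noise dose as a percentage of the maximum allowable daily dose
    for (level_dBA, hours) segments: D = 100 * sum(C_i / T_i)."""
    return 100.0 * sum(h / allowed_hours(L, criterion, exchange)
                       for L, h in exposures)

# Illustrative rehearsal day: 2 h at 90.5 dBA plus 1 h at 85 dBA
day = [(90.5, 2.0), (85.0, 1.0)]
print(daily_dose_percent(day))                                 # OSHA dose, %
print(daily_dose_percent(day, criterion=85.0, exchange=3.0))   # NIOSH dose, %
```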

  6. Monitoring Anthropogenic Ocean Sound from Shipping Using an Acoustic Sensor Network and a Compressive Sensing Approach †

    Science.gov (United States)

    Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian

    2016-01-01

    Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches. PMID:27011187
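
    A hedged sketch of the sparse-recovery idea described above (not the authors' algorithm): write the sensor readings as a propagation matrix acting on a vector of candidate source strengths that is mostly zero, fit that vector with an L1-penalized least-squares (Lasso) regression, and re-predict the noise map from the recovered sources. The simple geometric-spreading propagation model and all numbers below are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# 50 candidate source positions (km) along a shipping lane, 8 sensors off-lane
src_xy = np.column_stack([np.linspace(0.0, 10.0, 50), np.zeros(50)])
sen_xy = rng.uniform([0.0, 0.5], [10.0, 3.0], size=(8, 2))

# Propagation matrix: squared-pressure contribution with 1/r^2 spreading,
# a crude stand-in for a real underwater propagation model
r = np.linalg.norm(sen_xy[:, None, :] - src_xy[None, :, :], axis=-1)
A = 1.0 / np.maximum(r, 0.05) ** 2

# "True" scene: only 3 of the 50 candidate sources are active (sparse)
x_true = np.zeros(50)
x_true[[5, 22, 40]] = [4.0, 2.5, 3.0]
y = A @ x_true + 1e-3 * rng.standard_normal(8)

# Sparse recovery of source strengths from the 8 readings, then re-prediction
fit = Lasso(alpha=1e-3, positive=True, max_iter=100_000).fit(A, y)
noise_map_estimate = A @ fit.coef_   # evaluate A on a dense grid for a full map
```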

  7. Female listeners’ autonomic responses to dramatic shifts between loud and soft music/sound passages: a study of heavy metal songs

    Directory of Open Access Journals (Sweden)

    Tzu-Han Cheng

    2016-02-01

    Although music and the emotion it conveys unfold over time, little is known about how listeners respond to shifts in musical emotions. A special technique in heavy metal music utilizes dramatic shifts between loud and soft passages. Loud passages are penetrated by distorted sounds conveying aggression, whereas soft passages are often characterized by a clean, calm singing voice and light accompaniment. The present study used heavy metal songs and soft sea sounds to examine how female listeners’ respiration rates and heart rates responded to the arousal changes associated with auditory stimuli. The high-frequency power of heart rate variability (HF-HRV) was used to assess cardiac parasympathetic activity. The results showed that the soft passages of heavy metal songs and soft sea sounds expressed lower arousal and induced significantly higher HF-HRVs than the loud passages of heavy metal songs. Listeners’ respiration rate was determined by the arousal level of the present music passage, whereas the heart rate was dependent on both the present and preceding passages. Compared with soft sea sounds, the loud music passage led to greater deceleration of the heart rate at the beginning of the following soft music passage. The sea sounds delayed the heart rate acceleration evoked by the following loud music passage. The data provide evidence that sound-induced parasympathetic activity affects listener’s heart rate in response to the following music passage. These findings have potential implications for future research of the temporal dynamics of musical emotions.
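
    For readers unfamiliar with the HF-HRV measure used above, a common way to compute it (a generic sketch with the conventional 0.15-0.4 Hz band, not the authors' exact processing chain) is to resample the RR-interval series to an evenly spaced tachogram and integrate its power spectral density over the high-frequency band:

```python
import numpy as np
from scipy.signal import welch

def hf_power(rr_s, fs_interp=4.0, band=(0.15, 0.4)):
    """High-frequency HRV power from RR intervals (in seconds): interpolate
    to an evenly sampled tachogram, estimate the PSD with Welch's method,
    and integrate over the HF band."""
    beat_times = np.cumsum(rr_s)
    t = np.arange(beat_times[0], beat_times[-1], 1.0 / fs_interp)
    tachogram = np.interp(t, beat_times, rr_s)
    f, pxx = welch(tachogram - tachogram.mean(), fs=fs_interp, nperseg=256)
    sel = (f >= band[0]) & (f <= band[1])
    return np.sum(pxx[sel]) * (f[1] - f[0])

# Synthetic example: ~70 bpm with respiratory sinus arrhythmia near 0.25 Hz
beats = np.cumsum(np.full(300, 0.85))
rr = 0.85 + 0.05 * np.sin(2 * np.pi * 0.25 * beats)
print(hf_power(rr))
```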

  8. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as visual phenomena–commercial messages, as for instance banner ads that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents...... an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main...... argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content....

  9. Effects of small variations of speed of sound in optoacoustic tomographic imaging

    International Nuclear Information System (INIS)

    Deán-Ben, X. Luís; Ntziachristos, Vasilis; Razansky, Daniel

    2014-01-01

    Purpose: Speed of sound difference in the imaged object and surrounding coupling medium may reduce the resolution and overall quality of optoacoustic tomographic reconstructions obtained by assuming a uniform acoustic medium. In this work, the authors investigate the effects of acoustic heterogeneities and discuss potential benefits of accounting for those during the reconstruction procedure. Methods: The time shift of optoacoustic signals in an acoustically heterogeneous medium is studied theoretically by comparing different continuous and discrete wave propagation models. A modification of filtered back-projection reconstruction is subsequently implemented by considering a straight acoustic rays model for ultrasound propagation. The results obtained with this reconstruction procedure are compared numerically and experimentally to those obtained assuming a heuristically fitted uniform speed of sound in both full-view and limited-view optoacoustic tomography scenarios. Results: The theoretical analysis showcases that the errors in the time-of-flight of the signals predicted by considering the straight acoustic rays model tend to be generally small. When using this model for reconstructing simulated data, the resulting images accurately represent the theoretical ones. On the other hand, significant deviations in the location of the absorbing structures are found when using a uniform speed of sound assumption. The experimental results obtained with tissue-mimicking phantoms and a mouse postmortem are found to be consistent with the numerical simulations. Conclusions: Accurate analysis of effects of small speed of sound variations demonstrates that accounting for differences in the speed of sound allows improving optoacoustic reconstruction results in realistic imaging scenarios involving acoustic heterogeneities in tissues and surrounding media
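
    The size of the time shifts analysed above can be illustrated with a one-line calculation (illustrative numbers, not the paper's): for a straight ray crossing a layered medium, the arrival time is the sum of path length over speed in each layer, and the error of a uniform-speed assumption is its difference from that sum.

```python
def time_of_flight(segments):
    """Arrival time along a straight acoustic ray through layered media,
    given (path_length_m, speed_m_per_s) for each layer."""
    return sum(d / c for d, c in segments)

# Illustrative: 10 mm of tissue-like medium (1560 m/s) over 20 mm of water (1500 m/s)
layered = time_of_flight([(0.010, 1560.0), (0.020, 1500.0)])
uniform = time_of_flight([(0.030, 1510.0)])   # heuristically fitted uniform speed
print((layered - uniform) * 1e9, "ns arrival-time error")
```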

  10. The Aesthetic Experience of Sound

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2005-01-01

    The use of sound in (3D) computer games basically falls in two. Sound is used as an element in the design of the set and as a narrative. As set design sound stages the nature of the environment, it brings it to life. As a narrative it brings us information that we can choose to or perhaps need to react on. In an ecological understanding of hearing our detection of audible information affords us ways of responding to our environment. In my paper I will address both these ways of using sound in relation to computer games. Since a game player is responsible for the unfolding of the game, his exploration of the virtual space laid out before him is pertinent. In this mood of exploration sound is important and heavily contributing to the aesthetic of the experience.

  11. Principles of underwater sound

    National Research Council Canada - National Science Library

    Urick, Robert J

    1983-01-01

    ... the immediately useful help they need for sonar problem solving. Its coverage is broad, ranging from the basic concepts of sound in the sea to making performance predictions in such applications as depth sounding, fish finding, and submarine detection...

  12. Sounding the field: recent works in sound studies.

    Science.gov (United States)

    Boon, Tim

    2015-09-01

    For sound studies, the publication of a 593-page handbook, not to mention the establishment of at least one society - the European Sound Studies Association - might seem to signify the emergence of a new academic discipline. Certainly, the books under consideration here, alongside many others, testify to an intensification of concern with the aural dimensions of culture. Some of this work comes from HPS and STS, some from musicology and cultural studies. But all of it should concern members of our disciplines, as it represents a long-overdue foregrounding of the aural in how we think about the intersections of science, technology and culture.

  13. Hearing loss in relation to sound exposure of professional symphony orchestra musicians

    DEFF Research Database (Denmark)

    Schmidt, J. H.; Pedersen, E. R.; Paarup, H. M.

    2014-01-01

    OBJECTIVES: The objectives of this study were to: (1) estimate the hearing status of classical symphony orchestra musicians and (2) investigate the hypothesis that occupational sound exposure of symphony orchestra musicians leads to elevated hearing thresholds. DESIGN: The study population compri...... that performing music may induce hearing loss to the same extent as industrial noise....

  14. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which `acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ , with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
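
    The factor referred to above is the usual Lorentz factor with the speed of sound playing the role of the speed of light (standard notation, not quoted from the paper):

    $$ \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, $$

    with $c$ the speed of sound, so a chain of sound clocks moving at half the sound speed would be measured by the stationary chain as length-contracted and time-dilated by $\gamma \approx 1.15$.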

  15. Non-Wovens as Sound Reducers

    Science.gov (United States)

    Belakova, D.; Seile, A.; Kukle, S.; Plamus, T.

    2018-04-01

    Within the present study, the effect of hemp (40 wt%) and polylactide (60 wt%) non-woven surface density, thickness and number of fibre web layers on the sound absorption coefficient and the sound transmission loss in the frequency range from 50 to 5000 Hz is analysed. The sound insulation properties of the experimental samples have been determined, compared to the ones in practical use, and the possible use of the material has been defined. Non-woven materials are ideally suited for use in acoustic insulation products because the arrangement of fibres produces a porous material structure, which leads to a greater interaction between sound waves and fibre structure. Of all the tested samples (A, B and D), the non-woven variant B exceeded the surface density of sample A by 1.22 times and 1.15 times that of sample D. By placing non-wovens one above the other in 2 layers, it is possible to increase the absorption coefficient of the material, which depending on the frequency corresponds to C, D, and E sound absorption classes. Sample A demonstrates the best sound absorption of all the three samples in the frequency range from 250 to 2000 Hz. In the test frequency range from 50 to 5000 Hz, the sound transmission loss varies from 0.76 (Sample D at 63 Hz) to 3.90 (Sample B at 5000 Hz).

  16. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having worldwide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," which is a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  17. Sound Synthesis and Evaluation of Interactive Footsteps and Environmental Sounds Rendering for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-01-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based ...... a soundscape significantly improves the recognition of the simulated environment....

  18. Transient Electromagnetic Soundings Near Great Sand Dunes National Park and Preserve, San Luis Valley, Colorado (2006 Field Season)

    Science.gov (United States)

    Fitterman, David V.; de Sozua Filho, Oderson A.

    2009-01-01

    Time-domain electromagnetic (TEM) soundings were made near Great Sand Dunes National Park and Preserve in the San Luis Valley of southern Colorado to obtain subsurface information of use to hydrologic modeling. Seventeen soundings were made to the east and north of the sand dunes. Using a small loop TEM system, maximum exploration depths of about 75 to 150 m were obtained. In general, layered earth interpretations of the data found that resistivity decreases with depth. Comparison of soundings with geologic logs from nearby wells found that zones logged as having increased clay content usually corresponded with a significant resistivity decrease in the TEM determined model. This result supports the use of TEM soundings to map the location of the top of the clay unit deposited at the bottom of the ancient Lake Alamosa that filled the San Luis Valley from Pliocene to middle Pleistocene time.

  19. Cross-modal selective attention: on the difficulty of ignoring sounds at the locus of visual attention.

    Science.gov (United States)

    Spence, C; Ranson, J; Driver, J

    2000-02-01

    In three experiments, we investigated whether the ease with which distracting sounds can be ignored depends on their distance from fixation and from attended visual events. In the first experiment, participants shadowed an auditory stream of words presented behind their heads, while simultaneously fixating visual lip-read information consistent with the relevant auditory stream, or meaningless "chewing" lip movements. An irrelevant auditory stream of words, which participants had to ignore, was presented either from the same side as the fixated visual stream or from the opposite side. Selective shadowing was less accurate in the former condition, implying that distracting sounds are harder to ignore when fixated. Furthermore, the impairment when fixating toward distractor sounds was greater when speaking lips were fixated than when chewing lips were fixated, suggesting that people find it particularly difficult to ignore sounds at locations that are actively attended for visual lipreading rather than merely passively fixated. Experiments 2 and 3 tested whether these results are specific to cross-modal links in speech perception by replacing the visual lip movements with a rapidly changing stream of meaningless visual shapes. The auditory task was again shadowing, but the active visual task was now monitoring for a specific visual shape at one location. A decrement in shadowing was again observed when participants passively fixated toward the irrelevant auditory stream. This decrement was larger when participants performed a difficult active visual task there versus fixating, but not for a less demanding visual task versus fixation. The implications for cross-modal links in spatial attention are discussed.

  20. Using therapeutic sound with progressive audiologic tinnitus management.

    Science.gov (United States)

    Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A

    2008-09-01

    Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound-broadly categorized as environmental sound, music, and speech-resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).

  1. Active control of turbulent boundary layer sound transmission into a vehicle interior

    International Nuclear Information System (INIS)

    Caiazzo, A; Alujević, N; Pluymers, B; Desmet, W

    2016-01-01

    In high-speed automotive, aerospace, and railway transportation, the turbulent boundary layer (TBL) is one of the most important sources of interior noise. The stochastic pressure distribution associated with the turbulence can significantly excite structural vibration of vehicle exterior panels, which then radiate sound into the vehicle through the interior panels. The air-flow noise therefore becomes very influential in the noise, vibration and harshness assessment of a vehicle, in particular at low frequencies. Normally, passive solutions such as sound-absorbing materials are used to reduce the TBL-induced noise transmission into a vehicle interior, and these generally improve the sound isolation performance of the structure. They can achieve excellent isolation at higher frequencies, but are unable to deal with the low-frequency interior noise components. In this paper, active control of TBL noise transmission through an acoustically coupled double-panel system into a rectangular cavity is examined theoretically. The Corcos model of the TBL pressure distribution is used to model the disturbance. The disturbance is rejected by an active vibration isolation unit reacting between the exterior and the interior panels. Significant reductions of the low-frequency vibrations of the interior panel and of the sound pressure in the cavity are observed. (paper)
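
    The Corcos model mentioned above expresses the wall-pressure cross-spectral density of a turbulent boundary layer as a single-point spectrum multiplied by exponentially decaying coherence in the streamwise and spanwise directions and a convective phase term. The following sketch is one common parameterization, with typical literature values for the decay coefficients; it is an illustration, not the implementation used in the paper.

        import numpy as np

        def corcos_cross_spectrum(phi_pp, omega, xi_x, xi_y, U_c,
                                  alpha_x=0.12, alpha_y=0.7):
            """Corcos wall-pressure cross-spectral density (one common form).

            phi_pp : single-point wall-pressure auto-spectrum at frequency omega
            omega  : angular frequency [rad/s]
            xi_x   : streamwise separation [m]
            xi_y   : spanwise separation [m]
            U_c    : convection velocity [m/s], often ~0.6-0.8 of free-stream speed
            alpha_x, alpha_y : empirical decay coefficients (typical values)
            """
            k_c = omega / U_c  # convective wavenumber
            coherence = (np.exp(-alpha_x * np.abs(k_c * xi_x)) *
                         np.exp(-alpha_y * np.abs(k_c * xi_y)))
            phase = np.exp(-1j * k_c * xi_x)  # convective phase; sign convention varies
            return phi_pp * coherence * phase

        # Example: coherence magnitude between two points 5 cm apart in the flow direction
        print(abs(corcos_cross_spectrum(1.0, 2 * np.pi * 500, 0.05, 0.0, 50.0)))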

  2. Letter-Sound Knowledge: Exploring Gender Differences in Children When They Start School Regarding Knowledge of Large Letters, Small Letters, Sound Large Letters, and Sound Small Letters

    Directory of Open Access Journals (Sweden)

    Hermundur Sigmundsson

    2017-09-01

    Full Text Available This study explored whether there is a gender difference in letter-sound knowledge when children start school. 485 children aged 5–6 years completed an assessment of letter-sound knowledge, i.e., large letters; sound of large letters; small letters; sound of small letters. The findings indicate a significant difference between girls and boys, in favor of the girls, on all four factors tested in this study. There is still no clear explanation for the basis of this presumed gender difference in letter-sound knowledge. That the findings have their origin in neurobiological factors cannot be excluded; however, the fact that girls have probably been exposed to more language experience/stimulation than boys lends support to explanations derived from environmental aspects.

  3. Sound as a supportive design intervention for improving health care experience in the clinical ecosystem: A qualitative study.

    Science.gov (United States)

    Iyendo, Timothy Onosahwo

    2017-11-01

    Most prior hospital noise research deals with sound in its noise facet and is based merely on sound level abatement, rather than treating sound as an informative or orientational element. This paper stimulates scientific research into the effect of sound interventions on physical and mental health care in the clinical environment. Data sources comprised relevant World Health Organization guidelines and the results of a literature search of ISI Web of Science, ProQuest Central, MEDLINE, PubMed, Scopus, JSTOR and Google Scholar. Noise induces stress and impedes the recovery process. Pleasant natural sound interventions, which include singing birds, gentle wind and ocean waves, revealed benefits that contribute to perceived restoration of attention and stress recovery in patients and staff. Clinicians should consider pleasant natural sound perception as a low-risk, non-pharmacological and unobtrusive intervention that should be implemented in their routine care for speedier recovery of patients undergoing medical procedures. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Sound as Popular Culture

    DEFF Research Database (Denmark)

    The wide-ranging texts in this book take as their premise the idea that sound is a subject through which popular culture can be analyzed in an innovative way. From an infant’s gurgles over a baby monitor to the roar of the crowd in a stadium to the sub-bass frequencies produced by sound systems in the disco era, sound—not necessarily aestheticized as music—is inextricably part of the many domains of popular culture. Expanding the view taken by many scholars of cultural studies, the contributors consider cultural practices concerning sound not merely as semiotic or signifying processes but as material, physical, perceptual, and sensory processes that integrate a multitude of cultural traditions and forms of knowledge. The chapters discuss conceptual issues as well as terminologies and research methods; analyze historical and contemporary case studies of listening in various sound cultures; and consider...

  5. Fourth sound in relativistic superfluidity theory

    International Nuclear Information System (INIS)

    Vil'chinskij, S.I.; Fomin, P.I.

    1995-01-01

    The Lorentz-covariant equations describing the propagation of fourth sound in the relativistic theory of superfluidity are derived. Expressions for the velocity of fourth sound are obtained, and the character of the oscillations in the sound is determined.

  6. The science of sound recording

    CERN Document Server

    Kadis, Jay

    2012-01-01

    The Science of Sound Recording will provide you with more than just an introduction to sound and recording; it will allow you to dive right into some of the technical areas that often appear overwhelming to anyone without an electrical engineering or physics background. The Science of Sound Recording helps you build a basic foundation of scientific principles, explaining how recording really works. Packed with valuable must-know information, illustrations and examples of 'worked through' equations, this book introduces the theory behind sound recording practices in a logical and prac

  7. Nuclear sound

    International Nuclear Information System (INIS)

    Wambach, J.

    1991-01-01

    Nuclei, like more familiar mechanical systems, undergo simple vibrational motion. Among these vibrations, sound modes are of particular interest since they reveal important information on the effective interactions among the constituents and, through extrapolation, on the bulk behaviour of nuclear and neutron matter. Sound wave propagation in nuclei shows strong quantum effects familiar from other quantum systems. Microscopic theory suggests that the restoring forces are caused by the complex structure of the many-Fermion wavefunction and, in some cases, have no classical analogue. The damping of the vibrational amplitude is strongly influenced by phase coherence among the particles participating in the motion. (author)

  8. Pressure sound level measurements at an educational environment in Goiânia, Goiás, Brazil

    Science.gov (United States)

    Costa, J. J. L.; do Nascimento, E. O.; de Oliveira, L. N.; Caldas, L. V. E.

    2018-03-01

    In this work, 25 points located on the ground floor of the Federal Institute of Education, Science and Technology of Goiás - IFG - Campus Goiânia were analyzed during the morning periods of two Saturdays. The sound pressure levels were measured in internal and external environments during routine activities, with the aim of performing environmental monitoring at this institution. The initial hypothesis was that an amusement park (Mutirama Park) was responsible for noise pollution in the institute, but the results showed, within the campus environment, sound pressure levels in accordance with the municipal legislation of Goiânia for all points.

  9. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    Science.gov (United States)

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to determine amplitudes the size of an atom and locate the acoustic stimuli with an accuracy of within 13° based on their neuronal anatomy. We present here a prototype sound source localization system, inspired from this impressive performance. The system presented utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization error than those observed in nature.

  10. Students' Learning of a Generalized Theory of Sound Transmission from a Teaching-Learning Sequence about Sound, Hearing and Health

    Science.gov (United States)

    West, Eva; Wallin, Anita

    2013-04-01

    Learning abstract concepts such as sound often involves an ontological shift because to conceptualize sound transmission as a process of motion demands abandoning sound transmission as a transfer of matter. Thus, for students to be able to grasp and use a generalized model of sound transmission poses great challenges for them. This study involved 199 students aged 10-14. Their views about sound transmission were investigated before and after teaching by comparing their written answers about sound transfer in different media. The teaching was built on a research-based teaching-learning sequence (TLS), which was developed within a framework of design research. The analysis involved interpreting students' underlying theories of sound transmission, including the different conceptual categories that were found in their answers. The results indicated a shift in students' understandings from the use of a theory of matter before the intervention to embracing a theory of process afterwards. The described pattern was found in all groups of students irrespective of age. Thus, teaching about sound and sound transmission is fruitful already at the ages of 10-11. However, the older the students, the more advanced is their understanding of the process of motion. In conclusion, the use of a TLS about sound, hearing and auditory health promotes students' conceptualization of sound transmission as a process in all grades. The results also imply some crucial points in teaching and learning about the scientific content of sound.

  11. Digitizing a sound archive

    DEFF Research Database (Denmark)

    Cone, Louise

    2017-01-01

    In 1990 an artist by the name of William Louis Sørensen was hired by the National Gallery of Denmark to collect important works of art – made from sound. His job was to acquire sound art, but also recordings that captured rare artistic occurrences, music, performances and happenings from both Danish and international artists. His methodology left us with a large collection of unique and inspirational time-based media sound artworks that have, until very recently, been inaccessible. Existing on an array of different media formats, such as open reel tapes, 8-track and 4-track cassettes, VHS...

  12. Parallel-plate third sound waveguides with fixed and variable plate spacings for the study of fifth sound in superfluid helium

    International Nuclear Information System (INIS)

    Jelatis, G.J.

    1983-01-01

    Third sound in superfluid helium four films has been investigated using two parallel-plate waveguides. These investigations led to the observation of fifth sound, a new mode of sound propagation. Both waveguides consisted of two parallel pieces of vitreous quartz. The sound speed was obtained by measuring the time-of-flight of pulsed third sound over a known distance. Investigations from 1.0-1.7K were possible with the use of superconducting bolometers, which measure the temperature component of the third sound wave. Observations were initially made with a waveguide having a plate separation fixed at five microns. Adiabatic third sound was measured in the geometry. Isothermal third sound was also observed, using the usual, single-substrate technique. Fifth sound speeds, calculated from the two-fluid theory of helium and the speeds of the two forms of third sound, agreed in size and temperature dependence with theoretical predictions. Nevertheless, only equivocal observations of fifth sound were made. As a result, the film-substrate interaction was examined, and estimates of the Kapitza conductance were made. Assuming the dominance of the effects of this conductance over those due to the ECEs led to a new expression for fifth sound. A reanalysis of the initial data was made, which contained no adjustable parameters. The observation of fifth sound was seen to be consistent with the existence of an anomalously low boundary conductance

  13. Lung sound analysis helps localize airway inflammation in patients with bronchial asthma

    Directory of Open Access Journals (Sweden)

    Shimoda T

    2017-03-01

    Full Text Available Terufumi Shimoda,1 Yasushi Obase,2 Yukio Nagasaka,3 Hiroshi Nakano,1 Akiko Ishimatsu,1 Reiko Kishikawa,1 Tomoaki Iwanaga1 1Clinical Research Center, Fukuoka National Hospital, Fukuoka, 2Second Department of Internal Medicine, School of Medicine, Nagasaki University, Nagasaki, 3Kyoto Respiratory Center, Otowa Hospital, Kyoto, Japan Purpose: Airway inflammation can be detected by lung sound analysis (LSA) at a single point in the posterior lower lung field. We performed LSA at 7 points to examine whether the technique could identify the location of airway inflammation in patients with asthma. Patients and methods: Breath sounds were recorded at 7 points on the body surface of 22 asthmatic subjects. Inspiration sound pressure level (ISPL), expiration sound pressure level (ESPL), and the expiration-to-inspiration sound pressure ratio (E/I) were calculated in 6 frequency bands. The data were analyzed for potential correlation with spirometry, airway hyperresponsiveness (PC20), and fractional exhaled nitric oxide (FeNO). Results: The E/I data in the frequency range of 100–400 Hz (E/I low frequency [LF], E/I mid frequency [MF]) were better correlated with the spirometry, PC20, and FeNO values than were the ISPL or ESPL data. The left anterior chest and left posterior lower recording positions were associated with the best correlations (forced expiratory volume in 1 second/forced vital capacity: r=–0.55 and r=–0.58; logPC20: r=–0.46 and r=–0.45; and FeNO: r=0.42 and r=0.46, respectively). The majority of asthmatic subjects with FeNO ≥70 ppb exhibited high E/I MF levels in all lung fields (excluding the trachea) and V50%pred <80%, suggesting inflammation throughout the airway. Asthmatic subjects with FeNO <70 ppb showed high or low E/I MF levels depending on the recording position, indicating uneven airway inflammation. Conclusion: E/I LF and E/I MF are more useful LSA parameters for evaluating airway inflammation in bronchial asthma; 7-point lung
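
    As a rough illustration of the expiration-to-inspiration sound pressure ratio (E/I) used above, the sketch below band-pass filters a breath-sound recording and compares RMS levels between labelled inspiratory and expiratory segments. The band edges, segmentation, and file names are assumptions for illustration only; the study's exact signal processing is not reproduced here.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def band_level_db(x, fs, f_lo, f_hi):
            """RMS level (dB, arbitrary reference) of x in the band [f_lo, f_hi] Hz."""
            sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
            y = sosfiltfilt(sos, x)
            return 20 * np.log10(np.sqrt(np.mean(y**2)) + 1e-12)

        def e_i_ratio_db(signal, fs, insp_slices, exp_slices, f_lo=100, f_hi=400):
            """E/I expressed as a level difference (dB) in one frequency band."""
            insp = np.concatenate([signal[s] for s in insp_slices])
            exp_ = np.concatenate([signal[s] for s in exp_slices])
            return band_level_db(exp_, fs, f_lo, f_hi) - band_level_db(insp, fs, f_lo, f_hi)

        # Hypothetical usage with a recorded breath sound and hand-labelled phases:
        # fs, x = 11025, np.load("breath.npy")
        # print(e_i_ratio_db(x, fs, [slice(0, 8000)], [slice(8000, 20000)]))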

  14. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds

    Science.gov (United States)

    Dietz, Mathias; Marquardt, Torsten; Salminen, Nelli H.; McAlpine, David

    2013-01-01

    The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues, characteristic to reverberant speech. This stimulus, named amplitude modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of “glimpsing” low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments. PMID:23980161
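
    To make the stimulus idea concrete, an amplitude-modulated binaural beat can be generated from two carriers that differ slightly in frequency between the ears (so the interaural phase of the fine structure sweeps continuously) under a common amplitude-modulation envelope. The sketch below is a generic construction with placeholder carrier, beat, and modulation parameters, not the authors' stimulus code.

        import numpy as np

        def am_binaural_beat(fs=44100, dur=2.0, f_carrier=500.0, f_beat=2.0,
                             f_mod=8.0, mod_depth=1.0):
            """Stereo amplitude-modulated binaural-beat stimulus.

            Left and right carriers differ by f_beat Hz, so the interaural phase of
            the fine structure drifts through a full cycle once per beat period,
            while both ears share the same amplitude-modulation envelope.
            """
            t = np.arange(int(fs * dur)) / fs
            # Raised-sine envelope in [0, 1] for mod_depth = 1
            envelope = 0.5 * (1 + mod_depth * np.sin(2 * np.pi * f_mod * t))
            left = envelope * np.sin(2 * np.pi * f_carrier * t)
            right = envelope * np.sin(2 * np.pi * (f_carrier + f_beat) * t)
            return np.stack([left, right], axis=1)  # shape (samples, 2)

        stimulus = am_binaural_beat()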

  15. Sound propagation in cities

    NARCIS (Netherlands)

    Salomons, E.; Polinder, H.; Lohman, W.; Zhou, H.; Borst, H.

    2009-01-01

    A new engineering model for sound propagation in cities is presented. The model is based on numerical and experimental studies of sound propagation between street canyons. Multiple reflections in the source canyon and the receiver canyon are taken into account in an efficient way, while weak

  16. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  17. Exploring Noise: Sound Pollution.

    Science.gov (United States)

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  18. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V ...) ... An audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  19. Photoacoustic Sounds from Meteors.

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, Richard E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tencer, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sweatt, William C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hogan, Roy E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Spurny, Pavel [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic)

    2015-03-01

    High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer’s ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  20. Cognitive flexibility modulates maturation and music-training-related changes in neural sound discrimination.

    Science.gov (United States)

    Saarikivi, Katri; Putkinen, Vesa; Tervaniemi, Mari; Huotilainen, Minna

    2016-07-01

    Previous research has demonstrated that musicians show superior neural sound discrimination when compared to non-musicians, and that these changes emerge with accumulation of training. Our aim was to investigate whether individual differences in executive functions predict training-related changes in neural sound discrimination. We measured event-related potentials induced by sound changes coupled with tests for executive functions in musically trained and non-trained children aged 9-11 years and 13-15 years. High performance in a set-shifting task, indexing cognitive flexibility, was linked to enhanced maturation of neural sound discrimination in both musically trained and non-trained children. Specifically, well-performing musically trained children already showed large mismatch negativity (MMN) responses at a young age as well as at an older age, indicating accurate sound discrimination. In contrast, the musically trained low-performing children still showed an increase in MMN amplitude with age, suggesting that they were behind their high-performing peers in the development of sound discrimination. In the non-trained group, in turn, only the high-performing children showed evidence of an age-related increase in MMN amplitude, and the low-performing children showed a small MMN with no age-related change. These latter results suggest an advantage in MMN development also for high-performing non-trained individuals. For the P3a amplitude, there was an age-related increase only in the children who performed well in the set-shifting task, irrespective of music training, indicating enhanced attention-related processes in these children. Thus, the current study provides the first evidence that, in children, cognitive flexibility may influence age-related and training-related plasticity of neural sound discrimination. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  1. Urban Sound Interfaces

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2012-01-01

    This paper draws on the theories of Michel de Certeau and Gaston Bachelard to discuss how media architecture, in the form of urban sound interfaces, can help us perceive the complexity of the spaces we inhabit, by exploring the history and the narratives of the places in which we live. In this paper, three sound works are discussed in relation to the iPod, which is considered as a more private way to explore urban environments, and as a way to control the individual perception of urban spaces.

  2. Sound field separation with sound pressure and particle velocity measurements

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-01-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array; thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance...

  3. Electromagnetic sounding of the Earth's interior

    CERN Document Server

    Spichak, Viacheslav V

    2015-01-01

    Electromagnetic Sounding of the Earth's Interior 2nd edition provides a comprehensive up-to-date collection of contributions, covering methodological, computational and practical aspects of electromagnetic sounding of the Earth by different techniques at global, regional and local scales. Moreover, it contains new developments such as the concept of self-consistent tasks of geophysics and 3-D interpretation of the TEM sounding which, so far, have not all been covered by one book. Electromagnetic Sounding of the Earth's Interior 2nd edition consists of three parts: I - EM sounding methods, II - Forward modelling and inversion techniques, and III - Data processing, analysis, modelling and interpretation. The new edition includes brand new chapters on pulse and frequency electromagnetic sounding for hydrocarbon offshore exploration. Additionally all other chapters have been extensively updated to include new developments. Presents recently developed methodological findings of the earth's study, including seism...

  4. 21 CFR 876.4590 - Interlocking urethral sound.

    Science.gov (United States)

    2010-04-01

    21 CFR, Food and Drugs (revised as of 2010-04-01), Medical Devices, Gastroenterology-Urology Devices, Surgical Devices, § 876.4590 Interlocking urethral sound. (a) Identification. An interlocking urethral sound is a device that consists of two metal sounds...

  5. Poetry Pages. Sound Effects.

    Science.gov (United States)

    Fina, Allan de

    1992-01-01

    Explains how elementary teachers can help students understand onomatopoeia, suggesting that they define onomatopoeia, share examples of it, read poems and have students discuss onomatopoeic words, act out common household sounds, write about sound effects, and create choral readings of onomatopoeic poems. Two appropriate poems are included. (SM)

  6. Characterization of Underwater Sounds Produced by a Hydraulic Cutterhead Dredge during Maintenance Dredging in the Stockton Deepwater Shipping Channel, California

    Science.gov (United States)

    2014-03-01

    ... underwater sound had not been linked to dredging projects. However, concerns for negative impacts of underwater noise on aquatic species (e.g. salmon) ... The Port of Stockton is a major inland deepwater port in Stockton, California, located on the San Joaquin River before it joins ... of Cook Inlet, Alaska. The authors reported that ambient sound levels ranged from 95 dB in the Knik Arm to 124 dB near Point Possession on an incoming ...

  7. Mobile sound: media art in hybrid spaces

    OpenAIRE

    Behrendt, Frauke

    2010-01-01

    The thesis explores the relationships between sound and mobility through an examination of sound art. The research engages with the intersection of sound, mobility and art through original empirical work and theoretically through a critical engagement with sound studies. In dialogue with the work of De Certeau, Lefebvre, Huhtamo and Habermas in terms of the poetics of walking, rhythms, media archeology and questions of publicness, I understand sound art as an experimental mobil...

  8. Non-contact test of coating by means of laser-induced ultrasonic excitation and holographic sound representation

    International Nuclear Information System (INIS)

    Crostack, H.A.; Pohl, K.Y.; Radtke, U.

    1991-01-01

    In order to circumvent the problems of introducing and picking up sound that occur in conventional ultrasonic testing, a completely non-contact test process was developed. The ultrasonic surface wave required for the test is generated without contact by absorption of laser beams. The ultrasound is also recorded without contact, by a holographic interferometry technique that permits a large-scale representation of the sound field. Using the example of MCrAlY and ZrO2 layers, the suitability of the process for testing thermally sprayed coatings on metal substrates is demonstrated. The possibilities and limits of the process for the detection and description of delamination and cracks are shown. (orig.) [de]

  9. Sound source measurement by using a passive sound insulation and a statistical approach

    Science.gov (United States)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

    This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves, at low frequencies, on the sound insulation provided by the passive system alone. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method, and on the measurement error related to its application, are reported as well.

  10. Film sound in preservation and presentation

    NARCIS (Netherlands)

    Campanini, S.

    2014-01-01

    What is the nature of film sound? How does it change through time? How can film sound be conceptually defined? To address these issues, this work assumes the perspective of film preservation and presentation practices, describing the preservation of early sound systems, as well as the presentation

  11. Comparison of imaging modalities and source-localization algorithms in locating the induced activity during deep brain stimulation of the STN.

    Science.gov (United States)

    Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M

    2016-08-01

    One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for the placement of the DBS electrodes has become an area of intensive research. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level, validating them using the a priori information about the location of the source, that is, the STN. Secondly, we aim to investigate whether EEG or MEG is best suited to mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) was used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.

  12. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. Those patterning cases emerged mostly due to the difference in the existence of consonant sounds and the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  13. ANALYZING THE PATTERN OF L1 SOUNDS ON L2 SOUNDS PRODUCED BY JAVANESE STUDENTS OF STKIP PGRI JOMBANG

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. Those patterning cases emerged mostly due to the difference in the existence of consonant sounds and the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  14. Development of the sound localization cues in cats

    Science.gov (United States)

    Tollin, Daniel J.

    2004-05-01

    Cats are a common model for developmental studies of the psychophysical and physiological mechanisms of sound localization. Yet there are few studies on the development of the acoustical cues to location in cats. The magnitudes of the three main cues, interaural differences in time (ITDs) and level (ILDs) and monaural spectral shape cues, vary with location in adults. However, the increasing interaural distance associated with a growing head and pinnae during development will result in cues that change continuously until maturation is complete. Here, we report measurements, in cats aged 1 week to adulthood, of the physical dimensions of the head and pinnae and of the localization cues, computed from measurements of directional transfer functions. At 1 week, ILD depended little on azimuth for frequencies ...; large ILDs (>10 dB) shift to lower frequencies, and the maximum ITD increases to nearly 370 μs. Changes in the cues are correlated with the increasing size of the head and pinnae. [Work supported by NIDCD DC05122.]
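
    The growth of the maximum ITD toward roughly 370 μs can be related to the textbook spherical-head (Woodworth) approximation, in which ITD scales with head radius. The sketch below uses that standard approximation with hypothetical radii; it is not a calculation from the measured transfer functions in this study.

        import numpy as np

        def woodworth_itd(theta_deg, head_radius_m, c=343.0):
            """Woodworth spherical-head ITD approximation: ITD = (a/c)(theta + sin theta),
            with theta the source azimuth in radians measured from the median plane."""
            theta = np.radians(theta_deg)
            return head_radius_m / c * (theta + np.sin(theta))

        # Hypothetical head radii: a young kitten vs. an adult cat
        for label, a in [("1-week kitten (~2 cm)", 0.02), ("adult cat (~4 cm)", 0.04)]:
            print(label, f"max ITD ~ {woodworth_itd(90, a) * 1e6:.0f} us")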

  15. Exploring the effect of sound and music on health in hospital settings: A narrative review.

    Science.gov (United States)

    Iyendo, Timothy Onosahwo

    2016-11-01

    ... positive emotion, and decreasing the levels of stressful conditions. Whilst sound holds both negative and positive aspects of the hospital ecosystem and may be stressful, it also possesses a soothing quality that induces positive feelings in patients. Conceptualizing the nature of sound in the hospital context as a soundscape, rather than merely noise, can permit a subtler and socially useful understanding of the role of sound and music in the hospital setting, thereby creating a means for improving the hospital experience for patients and nurses. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Identification of aquifer potential in Karanganyar city by using vertical electrical sounding method

    Science.gov (United States)

    Marfuatik, L.; Koesuma, S.; Legowo, B.; Darsono

    2018-03-01

    The identification of aquifers was done using the Vertical Electrical Sounding (VES) method. This research aims to identify the potential and depth of the aquifers. The surveys were carried out at ten points, namely TS1 (Alastuwo), TS2 (Wonorejo), TS3 (Kaling), TS4 (Kaling), TS5 (Buran), TS6 (Wonolopo), TS7 (Buran), TS8 (Ngijo), TS9 (Jati), and TS10 (Suruhkalang), all located in Karanganyar regency. Each survey line is about 500–600 meters long, which allows the current to penetrate to depths of 100–200 meters. The measurements were made using an OYO McOHM-EL Model 2119C. The geoelectrical data were processed using the Progress version 3.0 software. The interpretation results show that the research locations lie within the Lawu volcanic rock formation, with breccias, lava, and tuff as the constituents. We found an unconfined aquifer at all locations, at varying depths, and a confined aquifer at only 7 locations, starting from a depth of 25.04 meters.
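
    For context, each VES reading is reduced to an apparent resistivity from the injected current, the measured potential difference, and a geometric factor fixed by the electrode spacing, before software such as Progress inverts the resulting curve into a layered model. The sketch below assumes a Schlumberger array, a common but here unconfirmed choice; the example numbers are hypothetical.

        import numpy as np

        def schlumberger_apparent_resistivity(delta_v, current, ab_half, mn):
            """Apparent resistivity (ohm-m) for a Schlumberger array.

            delta_v : measured potential difference between M and N [V]
            current : injected current through A and B [A]
            ab_half : half-distance between current electrodes, AB/2 [m]
            mn      : distance between potential electrodes, MN [m]
            """
            k = np.pi * (ab_half**2 - (mn / 2)**2) / mn  # geometric factor
            return k * delta_v / current

        # Hypothetical reading: AB/2 = 100 m, MN = 10 m, 50 mV at 200 mA
        print(schlumberger_apparent_resistivity(0.05, 0.2, 100.0, 10.0))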

  17. Breaking the Sound Barrier

    Science.gov (United States)

    Brown, Tom; Boehringer, Kim

    2007-01-01

    Students in a fourth-grade class participated in a series of dynamic sound learning centers followed by a dramatic capstone event--an exploration of the amazing Trashcan Whoosh Waves. It's a notoriously difficult subject to teach, but this hands-on, exploratory approach ignited student interest in sound, promoted language acquisition, and built…

  18. Sound therapies for tinnitus management.

    Science.gov (United States)

    Jastreboff, Margaret M

    2007-01-01

    Many people with bothersome (suffering) tinnitus notice that their tinnitus changes in different acoustical surroundings: it is more intrusive in silence and less profound in sound-enriched environments. This observation led to the development of treatment methods for tinnitus utilizing sound. Many of these methods are still under investigation with respect to their specific protocol and effectiveness, and only some have been objectively evaluated in clinical trials. This chapter will review therapies for tinnitus using sound stimulation.

  19. The dissimilar time course of temporary threshold shifts and reduction of inhibition in the inferior colliculus following intense sound exposure.

    Science.gov (United States)

    Heeringa, A N; van Dijk, P

    2014-06-01

    Excessive noise exposure is known to produce an auditory threshold shift, which can be permanent or transient in nature. Recent studies showed that noise-induced temporary threshold shifts are associated with loss of synaptic connections to the inner hair cells and with cochlear nerve degeneration, which is reflected in a decreased amplitude of wave I of the auditory brainstem response (ABR). This suggests that, despite normal auditory thresholds, central auditory processing may be abnormal. We recorded changes in central auditory processing following a sound-induced temporary threshold shift. Anesthetized guinea pigs were exposed for 1 h to a pure tone of 11 kHz (124 dB sound pressure level). Hearing thresholds, amplitudes of ABR waves I and IV, and spontaneous and tone-evoked firing rates in the inferior colliculus (IC) were assessed immediately, one week, two weeks, and four weeks post exposure. Hearing thresholds were elevated immediately following overexposure, but recovered within one week. The amplitude of the ABR wave I was decreased in all sound-exposed animals for all test periods. In contrast, the ABR wave IV amplitude was only decreased immediately after overexposure and recovered within a week. The proportion of IC units that show inhibitory responses to pure tones decreased substantially up to two weeks after overexposure, especially when stimulated with high frequencies. The proportion of excitatory responses to low frequencies was increased. Spontaneous activity was unaffected by the overexposure. Despite rapid normalization of auditory thresholds, our results suggest an increased central gain following sound exposure and an abnormal balance between excitatory and inhibitory responses in the midbrain up to two weeks after overexposure. These findings may be associated with hyperacusis after a sound-induced temporary threshold shift. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  20. The Flooding of Long Island Sound

    Science.gov (United States)

    Thomas, E.; Varekamp, J. C.; Lewis, R. S.

    2007-12-01

    Between the Last Glacial Maximum (22-19 ka) and the Holocene (10 ka) regions marginal to the Laurentide Ice Sheets saw complex environmental changes from moraines to lake basins to dry land to estuaries and marginal ocean basins, as a result of the interplay between the topography of moraines formed at the maximum extent and during stages of the retreat of the ice sheet, regional glacial rebound, and global eustatic sea level rise. In New England, the history of deglaciation and relative sea level rise has been studied extensively, and the sequence of events has been documented in detail. The Laurentide Ice Sheet reached its maximum extent (Long Island) at 21.3-20.4 ka according to radiocarbon dating (calibrated ages), 19.0-18.4 ka according to radionuclide dating. Periglacial Lake Connecticut formed behind the moraines in what is now the Long Island Sound Basin. The lake drained through the moraine at its eastern end. Seismic records show that a fluvial system was cut into the exposed lake beds, and a wave-cut unconformity was produced during the marine flooding, which has been inferred to have occurred at about 15.5 ka (Melt Water Pulse 1A) through correlation with dated events on land. Vibracores from eastern Long Island Sound penetrate the unconformity and contain red, varved lake beds overlain by marine grey sands and silts with a dense concentration of oysters in life position above the erosional contact. The marine sediments consist of intertidal to shallow subtidal deposits with oysters, shallow-water foraminifera and littoral diatoms, overlain by somewhat laminated sandy silts, in turn overlain by coarser-grained, sandy to silty sediments with reworked foraminifera and bivalve fragments. The latter may have been deposited in a sand-wave environment as present today at the core locations. We provide direct age control of the transgression with 30 radiocarbon dates on oysters, and compared the ages with those obtained on macrophytes and bulk organic carbon in

  1. Numerical Model on Sound-Solid Coupling in Human Ear and Study on Sound Pressure of Tympanic Membrane

    Directory of Open Access Journals (Sweden)

    Yao Wen-juan

    2011-01-01

    Full Text Available A three-dimensional finite-element model of the whole auditory system, including the external ear, middle ear, and inner ear, was established. A sound-solid-liquid coupling frequency response analysis of the model was carried out. The correctness of the FE model was verified by comparing the vibration modes of the tympanic membrane and stapes footplate with experimental data. According to the calculation results of the model, we use the least squares method to fit the distribution of sound pressure in the external auditory canal and obtain the sound pressure function on the tympanic membrane, which varies with frequency. Using the sound pressure function, the pressure distribution on the tympanic membrane can be derived directly from the sound pressure at the external auditory canal opening. The sound pressure function makes the boundary conditions of the middle ear structure more accurate in mechanical research and improves on the previous boundary treatment, which only applied a uniform pressure to the tympanic membrane.
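
    The fitting step described above can be illustrated with an ordinary least-squares fit of a frequency-dependent gain from the canal opening to the tympanic membrane. The data points, fitting basis, and function names below are placeholders; the study's actual pressure distribution and fit are not reproduced.

        import numpy as np

        # Hypothetical frequency response data: gain from canal entrance to eardrum (dB)
        freqs_hz = np.array([250, 500, 1000, 2000, 3000, 4000, 6000, 8000], float)
        gain_db = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 8.0, 4.0, 2.0])  # placeholder values

        # Least-squares polynomial fit in log-frequency, as one simple fitting basis
        log_f = np.log10(freqs_hz)
        coeffs = np.polyfit(log_f, gain_db, deg=3)
        fit = np.poly1d(coeffs)

        def eardrum_pressure(p_entrance_pa, f_hz):
            """Estimate eardrum pressure from the canal-entrance pressure at f_hz."""
            return p_entrance_pa * 10 ** (fit(np.log10(f_hz)) / 20)

        print(eardrum_pressure(0.02, 3000))  # 0.02 Pa at the canal opening, 3 kHz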

  2. MP3 player listening sound pressure levels among 10 to 17 year old students.

    Science.gov (United States)

    Keith, Stephen E; Michaud, David S; Feder, Katya; Haider, Ifaz; Marro, Leonora; Thompson, Emma; Marcoux, Andre M

    2011-11-01

    Using a manikin, equivalent free-field sound pressure level measurements were made from the portable digital audio players of 219 subjects, aged 10 to 17 years (93 males) at their typical and "worst-case" volume levels. Measurements were made in different classrooms with background sound pressure levels between 40 and 52 dBA. After correction for the transfer function of the ear, the median equivalent free field sound pressure levels and interquartile ranges (IQR) at typical and worst-case volume settings were 68 dBA (IQR = 15) and 76 dBA (IQR = 19), respectively. Self-reported mean daily use ranged from 0.014 to 12 h. When typical sound pressure levels were considered in combination with the average daily duration of use, the median noise exposure level, Lex, was 56 dBA (IQR = 18) and 3.2% of subjects were estimated to exceed the most protective occupational noise exposure level limit in Canada, i.e., 85 dBA Lex. Under worst-case listening conditions, 77.6% of the sample was estimated to listen to their device at combinations of sound pressure levels and average daily durations for which there is no known risk of permanent noise-induced hearing loss, i.e., ≤  75 dBA Lex. Sources and magnitudes of measurement uncertainties are also discussed.
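
    The noise exposure level Lex reported above normalizes a listener's equivalent sound level to a nominal 8-hour day, so a lower level heard for longer can carry the same exposure as a higher level heard briefly. A minimal sketch of that normalization with the standard 3-dB exchange rate follows; the example inputs are hypothetical rather than subjects from the study.

        import math

        def lex_8h(laeq_dba, hours_per_day, reference_hours=8.0):
            """Daily noise exposure level normalized to an 8-hour day (3-dB exchange rate)."""
            return laeq_dba + 10.0 * math.log10(hours_per_day / reference_hours)

        # Hypothetical listeners:
        print(round(lex_8h(68.0, 2.0), 1))   # typical level, 2 h/day  -> ~62.0 dBA
        print(round(lex_8h(76.0, 12.0), 1))  # worst case, 12 h/day    -> ~77.8 dBA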

  3. Sounds in one-dimensional superfluid helium

    International Nuclear Information System (INIS)

    Um, C.I.; Kahng, W.H.; Whang, E.H.; Hong, S.K.; Oh, H.G.; George, T.F.

    1989-01-01

    The temperature variations of the first-, second-, and third-sound velocities and attenuation coefficients in one-dimensional superfluid helium are evaluated explicitly for very low temperatures and frequencies (ω_sτ ...), and the ratio of second sound to first sound becomes unity as the temperature decreases to absolute zero

  4. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can
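
    The dissertation's segmentation stage tracks changes in frame-level acoustic features with a dynamic Bayesian network. The sketch below conveys only the general idea with a much simpler stand-in (log-energy features and a threshold on their frame-to-frame change), not the DBN model itself; frame length and threshold are arbitrary.

        import numpy as np

        def frame_features(x, fs, frame_ms=50):
            """Split a mono signal into frames and return a log-energy feature per frame."""
            n = int(fs * frame_ms / 1000)
            n_frames = len(x) // n
            frames = x[: n_frames * n].reshape(n_frames, n)
            return 10 * np.log10(np.mean(frames**2, axis=1) + 1e-12)

        def simple_onsets(features, jump_db=10.0):
            """Frame indices where the feature jumps by more than jump_db (crude segmenter)."""
            return np.where(np.diff(features) > jump_db)[0] + 1

        # Hypothetical usage:
        # fs, x = 16000, np.load("recording.npy")
        # print(simple_onsets(frame_features(x, fs)))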

  5. Chronic scream sound exposure alters memory and monoamine levels in female rat brain.

    Science.gov (United States)

    Hu, Lili; Zhao, Xiaoge; Yang, Juan; Wang, Lumin; Yang, Yang; Song, Tusheng; Huang, Chen

    2014-10-01

    Chronic scream sound alters the cognitive performance of male rats and their brain monoamine levels; these stress-induced alterations are sexually dimorphic. To determine the effects of sound stress on female rats, we examined their serum corticosterone levels; their adrenal, splenic, and thymic weights; their cognitive performance; and the levels of monoamine neurotransmitters and their metabolites in the brain. Adult female Sprague-Dawley rats, with and without exposure to scream sound (4 h/day for 21 days), were tested for spatial learning and memory using a Morris water maze. Stress decreased serum corticosterone levels, as well as splenic and adrenal weight. It also impaired spatial memory but did not affect learning ability. Monoamines and metabolites were measured in the prefrontal cortex (PFC), striatum, hypothalamus, and hippocampus. The dopamine (DA) level in the PFC decreased but the homovanillic acid/DA ratio increased. Decreased DA and increased 5-hydroxyindoleacetic acid (5-HIAA) levels were observed in the striatum. Only the 5-HIAA level increased in the hypothalamus. In the hippocampus, stress did not affect the levels of monoamines and metabolites. The results suggest that scream sound stress influences most physiologic parameters, memory, and the levels of monoamine neurotransmitters and their metabolites in female rats. Copyright © 2014. Published by Elsevier Inc.

  6. Sound-Symbolism Boosts Novel Word Learning

    Science.gov (United States)

    Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…

  7. Temporal and Spatial Comparisons of Underwater Sound Signatures of Different Reef Habitats in Moorea Island, French Polynesia.

    Directory of Open Access Journals (Sweden)

    Frédéric Bertucci

    Full Text Available As environmental sounds are used by larval fish and crustaceans to locate and orientate towards habitat during settlement, variations in the acoustic signature produced by habitats could provide valuable information about habitat quality, helping larvae to differentiate between potential settlement sites. However, very little is known about how acoustic signatures differ between proximate habitats. This study described within- and between-site differences in the sound spectra of five contiguous habitats at Moorea Island, French Polynesia: the inner reef crest, the barrier reef, the fringing reef, a pass and a coastal mangrove forest. Habitats with coral (inner, barrier and fringing reefs) were characterized by a similar sound spectrum with average intensities ranging from 70 to 78 dB re 1 μPa·Hz⁻¹. The mangrove forest had a lower sound intensity of 70 dB re 1 μPa·Hz⁻¹, while the pass was characterized by a higher sound level, with an average intensity of 91 dB re 1 μPa·Hz⁻¹. Habitats showed significantly different intensities for most frequencies, and a decreasing intensity gradient was observed from the reef to the shore. While habitats close to the shore showed no significant diel variation in sound intensity, sound levels increased at the pass during the night and at the barrier reef during the day. These two habitats also appeared to be louder in the North than in the West. These findings suggest that daily variations in sound intensity and across-reef sound gradients could be a valuable source of information for settling larvae. They also provide further evidence that closely related habitats, separated by less than 1 km, can differ significantly in their spectral composition and that these signatures might be typical and conserved along the coast of Moorea.

  8. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common that a physical system resonates at a particular frequency that depends on physical parameters which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to utilize standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He II second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He II). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He II. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuations when the tracking system is used.
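
    The lock-in demodulation step described above can be sketched in a few lines: multiply the received signal by in-phase and quadrature references at the drive frequency, low-pass filter, and read off amplitude and phase. This is a generic software lock-in under assumed sample-rate and filter settings, not the authors' instrument.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def lockin(signal, fs, f_ref, cutoff_hz=10.0):
            """Software lock-in: return amplitude and phase of signal at f_ref."""
            t = np.arange(len(signal)) / fs
            i_ref = np.cos(2 * np.pi * f_ref * t)
            q_ref = np.sin(2 * np.pi * f_ref * t)
            sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
            x = sosfiltfilt(sos, signal * i_ref)   # in-phase component
            y = sosfiltfilt(sos, signal * q_ref)   # quadrature component
            amplitude = 2 * np.sqrt(x**2 + y**2)
            phase = np.arctan2(y, x)
            return amplitude, phase

        # Hypothetical test: a 1 kHz tone buried in noise
        fs, f0 = 50_000, 1_000.0
        t = np.arange(fs) / fs
        sig = 0.3 * np.sin(2 * np.pi * f0 * t) + 0.05 * np.random.randn(fs)
        amp, ph = lockin(sig, fs, f0)
        print(amp[fs // 2])  # ~0.3 away from the filter edges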

  9. Otolith research for Puget Sound

    Science.gov (United States)

    Larsen, K.; Reisenbichler, R.

    2007-01-01

    Otoliths are hard structures located in the brain cavity of fish. These structures are formed by a buildup of calcium carbonate within a gelatinous matrix that produces light and dark bands similar to the growth rings in trees. The width of the bands corresponds to environmental factors such as temperature and food availability. As juvenile salmon encounter different environments in their migration to sea, they produce growth increments of varying widths and visible 'checks' corresponding to times of stress or change. The resulting pattern of band variations and check marks leave a record of fish growth and residence time in each habitat type. This information helps Puget Sound restoration by determining the importance of different habitats for the optimal health and management of different salmon populations. The USGS Western Fisheries Research Center (WFRC) provides otolith research findings directly to resource managers who put this information to work.

  10. Underwater Sound Propagation from Marine Pile Driving.

    Science.gov (United States)

    Reyff, James A

    2016-01-01

    Pile driving occurs in a variety of nearshore environments that typically have very shallow-water depths. The propagation of pile-driving sound in water is complex, where sound is directly radiated from the pile as well as through the ground substrate. Piles driven in the ground near water bodies can produce considerable underwater sound energy. This paper presents examples of sound propagation through shallow-water environments. Some of these examples illustrate the substantial variation in sound amplitude over time that can be critical to understand when computing an acoustic-based safety zone for aquatic species.

  11. Sound Source Localization Through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network

    Directory of Open Access Journals (Sweden)

    Christoph Beck

    2016-10-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect displacement amplitudes on the order of the size of an atom and to locate acoustic stimuli with an accuracy within 13°, based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature.
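
    For comparison with the bio-inspired approach, the bearing of a source can also be estimated conventionally from the time difference of arrival (TDOA) between a microphone pair. The sketch below is such a conventional estimate on synthetic data; the pair spacing, sample rate and simulated click are assumptions, and this is not the spiking neural model used in the paper.

```python
# Illustrative TDOA bearing estimate for one opposing microphone pair
# (a conventional baseline, not the paper's spiking neural model).
import numpy as np

fs = 48_000.0          # sample rate (Hz), assumed
c = 343.0              # speed of sound in air (m/s)
d = 0.12               # spacing between the two opposing microphones (m), assumed

# Simulate a broadband click arriving from 30 degrees off the pair's broadside.
true_angle = np.deg2rad(30.0)
delay_s = d * np.sin(true_angle) / c
n_delay = int(round(delay_s * fs))
click = np.random.randn(256)
mic_a = np.concatenate([click, np.zeros(64)])
mic_b = np.concatenate([np.zeros(n_delay), click, np.zeros(64 - n_delay)])

# Cross-correlate and locate the lag of the peak.
corr = np.correlate(mic_b, mic_a, mode="full")
lag = np.argmax(corr) - (len(mic_a) - 1)
tau = lag / fs
est_angle = np.rad2deg(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))
print(f"estimated bearing: {est_angle:.1f} deg (true 30.0 deg)")
```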

  12. Sound topology, duality, coherence and wave-mixing an introduction to the emerging new science of sound

    CERN Document Server

    Deymier, Pierre

    2017-01-01

    This book offers an essential introduction to the notions of sound wave topology, duality, coherence and wave-mixing, which constitute the emerging new science of sound. It includes general principles and specific examples that illuminate new non-conventional forms of sound (sound topology), unconventional quantum-like behavior of phonons (duality), radical linear and nonlinear phenomena associated with loss and its control (coherence), and exquisite effects that emerge from the interaction of sound with other physical and biological waves (wave mixing). The book provides the reader with the foundations needed to master these complex notions through simple yet meaningful examples. General principles for unraveling and describing the topology of acoustic wave functions in the space of their eigenvalues are presented. These principles are then applied to uncover intrinsic and extrinsic approaches to achieving non-conventional topologies by breaking the time-reversal symmetry of acoustic waves. Symmetry brea...

  13. Diffuse sound field: challenges and misconceptions

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho

    2016-01-01

    Diffuse sound field is a popular, yet widely misused concept. Although its definition is relatively well established, acousticians use this term with different meanings. The diffuse sound field is defined by a uniform sound pressure distribution (spatial diffusion or homogeneity) and uniform...... tremendously in different chambers because the chambers are non-diffuse in variously different ways. Therefore, good objective measures that can quantify the degree of diffusion and potentially indicate how to fix such problems in reverberation chambers are needed. Acousticians often blend the concept...... of mixing and diffuse sound field. Acousticians often conflate diffuse reflections from surfaces with diffuseness in rooms, and vice versa. Subjective aspects of diffuseness have not been much investigated. Finally, ways to realize a diffuse sound field in a finite space are discussed....

  14. Hoeren unter Wasser: Absolute Reizschwellen und Richtungswahrnehmung (Underwater Hearing: Absolute Thresholds and Sound Localization),

    Science.gov (United States)

    The article deals first with the theoretical foundations of underwater hearing, and the effects of the acoustical characteristics of water on hearing...lead to the conclusion that, in water, man can locate the direction of sound at low and at very high tonal frequencies of the audio range, but this ability probably vanishes in the middle range of frequencies. (Author)

  15. WODA Technical Guidance on Underwater Sound from Dredging.

    Science.gov (United States)

    Thomsen, Frank; Borsani, Fabrizio; Clarke, Douglas; de Jong, Christ; de Wit, Pim; Goethals, Fredrik; Holtkamp, Martine; Martin, Elena San; Spadaro, Philip; van Raalte, Gerard; Victor, George Yesu Vedha; Jensen, Anders

    2016-01-01

    The World Organization of Dredging Associations (WODA) has identified underwater sound as an environmental issue that needs further consideration. A WODA Expert Group on Underwater Sound (WEGUS) prepared a guidance paper in 2013 on dredging sound, including a summary of potential impacts on aquatic biota and advice on underwater sound monitoring procedures. The paper follows a risk-based approach and provides guidance for standardization of acoustic terminology and methods for data collection and analysis. Furthermore, the literature on dredging-related sounds and the effects of dredging sounds on marine life is surveyed and guidance on the management of dredging-related sound risks is provided.

  16. Detecting change in stochastic sound sequences.

    Directory of Open Access Journals (Sweden)

    Benjamin Skerritt-Davis

    2018-05-01

    Our ability to parse our acoustic environment relies on the brain's capacity to extract statistical regularities from surrounding sounds. Previous work in regularity extraction has predominantly focused on the brain's sensitivity to predictable patterns in sound sequences. However, natural sound environments are rarely completely predictable, often containing some level of randomness, yet the brain is able to effectively interpret its surroundings by extracting useful information from stochastic sounds. It has previously been shown that the brain is sensitive to the marginal lower-order statistics of sound sequences (i.e., mean and variance). In this work, we investigate the brain's sensitivity to higher-order statistics describing temporal dependencies between sound events through a series of change detection experiments, where listeners are asked to detect changes in randomness in the pitch of tone sequences. Behavioral data indicate listeners collect statistical estimates to process incoming sounds, and a perceptual model based on Bayesian inference shows a capacity in the brain to track higher-order statistics. Further analysis of individual subjects' behavior indicates an important role of perceptual constraints in listeners' ability to track these sensory statistics with high fidelity. In addition, the inference model facilitates analysis of neural electroencephalography (EEG) responses, anchoring the analysis relative to the statistics of each stochastic stimulus. This reveals both a deviance response and a change-related disruption in phase of the stimulus-locked response that follow the higher-order statistics. These results shed light on the brain's ability to process stochastic sound sequences.
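
    As a rough illustration of statistical tracking of a tone sequence, the sketch below keeps running estimates of mean and spread over a sliding window and flags tones that are surprising under those estimates. It is a crude stand-in for the Bayesian inference model described in the abstract; the sequence, window length and threshold are invented.

```python
# Toy "surprise" detector for a stochastic tone sequence: flag tones that are
# unlikely under running Gaussian estimates from a sliding window (a crude
# stand-in for the Bayesian inference model described in the abstract).
import numpy as np

rng = np.random.default_rng(0)
# Pitch sequence (MIDI-like units): spread increases after tone 100.
seq = np.concatenate([rng.normal(60, 1.0, 100), rng.normal(60, 5.0, 100)])

window, threshold = 30, 4.0      # history length and z-score threshold (assumed)
flags = []
for i in range(window, len(seq)):
    hist = seq[i - window:i]
    z = abs(seq[i] - hist.mean()) / (hist.std() + 1e-9)
    if z > threshold:
        flags.append(i)
print("tones flagged as surprising:", flags)
```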

  17. Directional sound radiation from substation transformers

    International Nuclear Information System (INIS)

    Maybee, N.

    2009-01-01

    This paper presented the results of a study in which acoustical measurements at two substations were analyzed to investigate the directional behaviour of typical arrays having 2 or 3 transformers. Substation transformers produce a characteristic humming sound that is caused primarily by vibration of the core at twice the frequency of the power supply. The humming noise radiates predominantly from the tank enclosing the core. The main components of the sound are harmonics of 120 Hz. Sound pressure level data were obtained for various directions and distances from the arrays, ranging from 0.5 m to over 100 m. The measured sound pressure levels of the transformer tones displayed substantial positive and negative excursions from the calculated average values for many distances and directions. The results support the concept that the directional effects are associated with constructive and destructive interference of tonal sound waves emanating from different parts of the array. Significant variations in the directional sound pattern can occur in the near field of a single transformer or an array, and the extent of the near field is significantly larger than the scale of the array. Based on typical dimensions for substation sites, the distance to the far field may be much beyond the substation boundary and beyond typical setbacks to the closest dwellings. As such, the directional sound radiation produced by transformer arrays introduces additional uncertainty in the prediction of substation sound levels at dwellings within a few hundred meters of a substation site. 4 refs., 4 figs.
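
    The interference interpretation can be illustrated with a toy model: two coherent 120 Hz point sources produce angle-dependent constructive and destructive interference at a fixed radius. The source spacing, observation radius and equal in-phase source levels below are assumptions, not the measured substation geometry.

```python
# Interference pattern of two coherent 120 Hz point sources, a toy model of
# the directional lobes produced by a transformer array (geometry assumed).
import numpy as np

f = 120.0                      # dominant transformer hum component (Hz)
c = 343.0                      # speed of sound (m/s)
k = 2 * np.pi * f / c          # wavenumber
spacing = 6.0                  # separation between the two sources (m), assumed
r = 50.0                       # observation radius (m), assumed

angles = np.deg2rad(np.arange(0, 360, 5))
src = np.array([[-spacing / 2, 0.0], [spacing / 2, 0.0]])
obs = np.column_stack([r * np.cos(angles), r * np.sin(angles)])

# Sum complex pressures from both sources (equal amplitude, in phase).
p = np.zeros(len(obs), dtype=complex)
for s in src:
    dist = np.linalg.norm(obs - s, axis=1)
    p += np.exp(-1j * k * dist) / dist

level = 20 * np.log10(np.abs(p) / np.abs(p).max())   # relative SPL (dB)
for a, lvl in zip(np.rad2deg(angles)[::6], level[::6]):
    print(f"{a:5.0f} deg  {lvl:6.1f} dB")
```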

  18. Mapping saltwater intrusion in the Biscayne Aquifer, Miami-Dade County, Florida using transient electromagnetic sounding

    Science.gov (United States)

    Fitterman, David V.

    2014-01-01

    Saltwater intrusion in southern Florida poses a potential threat to the public drinking-water supply that is typically monitored using water samples and electromagnetic induction logs collected from a network of wells. Transient electromagnetic (TEM) soundings are a complementary addition to the monitoring program because of their ease of use, low cost, and ability to fill in data gaps between wells. TEM soundings have been used to map saltwater intrusion in the Biscayne aquifer over a large part of south Florida including eastern Miami-Dade County and the Everglades. These two areas are very different with one being urban and the other undeveloped. Each poses different conditions that affect data collection and data quality. In the developed areas, finding sites large enough to make soundings is difficult. The presence of underground pipes further restricts useable locations. Electromagnetic noise, which reduces data quality, is also an issue. In the Everglades, access to field sites is difficult and working in water-covered terrain is challenging. Nonetheless, TEM soundings are an effective tool for mapping saltwater intrusion. Direct estimates of water quality can be obtained from the inverted TEM data using a formation factor determined for the Biscayne aquifer. This formation factor is remarkably constant over Miami-Dade County owing to the uniformity of the aquifer and the absence of clay. Thirty-six TEM soundings were collected in the Model Land area of southeast Miami-Dade County to aid in calibration of a helicopter electromagnetic (HEM) survey. The soundings and HEM survey revealed an area of saltwater intrusion aligned with canals and drainage ditches along U.S. Highway 1 and the Card Sound Road. These canals and ditches likely reduced freshwater levels through unregulated drainage and provided pathways for seawater to flow at least 12.4 km inland.
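
    Converting an inverted bulk resistivity into a pore-water estimate with a formation factor is a one-line calculation of the form rho_w = rho_bulk / F. The sketch below uses an assumed formation factor, cut-off and hypothetical layer resistivities, not the calibrated Biscayne aquifer values.

```python
# Sketch: estimate pore-water resistivity from inverted TEM bulk resistivity
# using a formation factor, rho_w = rho_bulk / F. The formation factor, layer
# values and freshness cut-off are illustrative assumptions only.
F = 4.0                                   # assumed formation factor (dimensionless)
layers = {"10-20 m": 60.0, "20-35 m": 12.0, "35-50 m": 2.5}   # ohm-m, hypothetical

for depth, rho_bulk in layers.items():
    rho_w = rho_bulk / F                  # pore-water resistivity (ohm-m)
    status = "likely saltwater-intruded" if rho_w < 1.5 else "likely fresh"
    print(f"{depth}: rho_w = {rho_w:.2f} ohm-m -> {status}")
```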

  19. New method to improve the accuracy of quench position measurement on a superconducting cavity by a second sound method

    Directory of Open Access Journals (Sweden)

    ZhenChao Liu

    2012-09-01

    Quench is a common phenomenon in a superconducting cavity and often limits the accelerating gradient of the cavity. Accurate location of the quench site, typically located at a material or geometrical defect, is the key to improving the cavity accelerating gradient. Here, second sound propagation in liquid helium II is used to detect the quench location on the cavity. The technique is relatively convenient and complements traditional temperature mapping, which measures the pre-quench temperature rise on the cavity using an array of sensors. The speed of second sound in liquid helium II is roughly 1.7 cm/ms at 2 K, which is sufficiently fast to provide millimeter-scale position resolution. However, the dynamics of the quench at the cavity surface are also found to significantly affect the achievable resolution with real cavities. Here we use a dynamic quench model, based on ANSYS, to calculate the quench area and the temperature distribution on the cavity. The detection error caused by thermal conduction in the niobium was calculated.
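
    Given second sound arrival times at several sensors and a known quench onset time, the quench site can be estimated by least squares. The sketch below assumes straight-line propagation at 1.7 cm/ms and an invented sensor layout; it is not the cavity geometry or detection scheme of the paper.

```python
# Sketch: locate a quench site from second sound arrival times at several
# sensors by least squares, assuming straight-line propagation at 1.7 cm/ms
# and a known quench onset time. Sensor layout and quench site are invented.
import numpy as np
from scipy.optimize import least_squares

c2 = 17.0                                   # second sound speed (mm/ms) at 2 K
sensors = np.array([[0, 0, 0], [120, 0, 0], [0, 120, 0], [60, 60, 90]], float)  # mm
true_site = np.array([80.0, 40.0, 30.0])    # hypothetical quench location (mm)
t_arrival = np.linalg.norm(sensors - true_site, axis=1) / c2   # ms, noise-free

def residuals(p):
    # Difference between predicted and measured arrival times at each sensor.
    return np.linalg.norm(sensors - p, axis=1) / c2 - t_arrival

fit = least_squares(residuals, x0=np.array([60.0, 60.0, 45.0]))
print("estimated quench site (mm):", np.round(fit.x, 1))
```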

  20. On the sound absorption coefficient of porous asphalt pavements for oblique incident sound waves

    NARCIS (Netherlands)

    Bezemer-Krijnen, Marieke; Wijnant, Ysbrand H.; de Boer, Andries; Bekke, Dirk; Davy, J.; Don, Ch.; McMinn, T.; Dowsett, L.; Broner, N.; Burgess, M.

    2014-01-01

    A rolling tyre will radiate noise in all directions. However, conventional measurement techniques for the sound absorption of surfaces only give the absorption coefficient for normal incidence. In this paper, a measurement technique is described with which it is possible to perform in situ sound

  1. Ocean current velocity, temperature and salinity collected during 2010 and 2011 in Vieques Sound and Virgin Passage (NODC Accession 0088063)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Nortek 600kHz Aquadopp acoustic current profilers were deployed between March 2010 and April 2011 on shallow water moorings located in Vieques Sound, Puerto Rico,...

  2. Békésy's contributions to our present understanding of sound conduction to the inner ear.

    Science.gov (United States)

    Puria, Sunil; Rosowski, John J

    2012-11-01

    In our daily lives we hear airborne sounds that travel primarily through the external and middle ear to the cochlear sensory epithelium. We also hear sounds that travel to the cochlea via a second sound-conduction route, bone conduction. This second pathway is excited by vibrations of the head and body that result from substrate vibrations, direct application of vibrational stimuli to the head or body, or vibrations induced by airborne sound. The sensation of bone-conducted sound is affected by the presence of the external and middle ear, but is not completely dependent upon their function. Measurements of the differential sensitivity of patients to airborne sound and direct vibration of the head are part of the routine battery of clinical tests used to separate conductive and sensorineural hearing losses. Georg von Békésy designed a careful set of experiments and pioneered many measurement techniques on human cadaver temporal bones, in physical models, and in human subjects to elucidate the basic mechanisms of air- and bone-conducted sound. Looking back one marvels at the sheer number of experiments he performed on sound conduction, mostly by himself without the aid of students or research associates. Békésy's work had a profound impact on the field of middle-ear mechanics and bone conduction fifty years ago when he received his Nobel Prize. Today many of Békésy's ideas continue to be investigated and extended, some have been supported by new evidence, some have been refuted, while others remain to be tested. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Neuroplasticity beyond sounds

    DEFF Research Database (Denmark)

    Reybrouck, Mark; Brattico, Elvira

    2015-01-01

    Capitalizing from neuroscience knowledge on how individuals are affected by the sound environment, we propose to adopt a cybernetic and ecological point of view on the musical aesthetic experience, which includes subprocesses, such as feature extraction and integration, early affective reactions...... and motor actions, style mastering and conceptualization, emotion and proprioception, evaluation and preference. In this perspective, the role of the listener/composer/performer is seen as that of an active "agent" coping in highly individual ways with the sounds. The findings concerning the neural...

  4. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions however, binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much weaker across frequency channels and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that the real-world sound localization must rely on mechanisms more complex than a mere cue extraction.

  5. Statistics of natural binaural sounds.

    Science.gov (United States)

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions however, binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much weaker across frequency channels and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that the real-world sound localization must rely on mechanisms more complex than a mere cue extraction.
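
    The interaural cues analyzed in this work can be computed per frequency bin from a two-channel signal. The sketch below does so for one synthetic stereo frame, which stands in for the binaural recordings used by the authors.

```python
# Sketch: per-frequency ILD and IPD estimates from one frame of a binaural
# (stereo) signal. The synthetic frame stands in for real binaural recordings.
import numpy as np

fs = 16_000
n = 1024
t = np.arange(n) / fs                            # one 64 ms frame
left = np.sin(2 * np.pi * 500 * t)
right = 0.5 * np.sin(2 * np.pi * 500 * t - 0.4)  # quieter and phase-lagged

L, R = np.fft.rfft(left), np.fft.rfft(right)
freqs = np.fft.rfftfreq(n, 1 / fs)
eps = 1e-12
ild_db = 20 * np.log10((np.abs(R) + eps) / (np.abs(L) + eps))   # level difference
ipd_rad = np.angle(R * np.conj(L))                              # phase difference

bin500 = np.argmin(np.abs(freqs - 500))
print(f"ILD at 500 Hz: {ild_db[bin500]:.1f} dB (expected about -6 dB)")
print(f"IPD at 500 Hz: {ipd_rad[bin500]:.2f} rad (expected about -0.4 rad)")
```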

  6. Moth hearing and sound communication

    DEFF Research Database (Denmark)

    Nakano, Ryo; Takanashi, Takuma; Surlykke, Annemarie

    2015-01-01

    Active echolocation enables bats to orient and hunt the night sky for insects. As a counter-measure against the severe predation pressure many nocturnal insects have evolved ears sensitive to ultrasonic bat calls. In moths bat-detection was the principal purpose of hearing, as evidenced by comparable hearing physiology with best sensitivity in the bat echolocation range, 20–60 kHz, across moths in spite of diverse ear morphology. Some eared moths subsequently developed sound-producing organs to warn/startle/jam attacking bats and/or to communicate intraspecifically with sound. Not only the sounds for interaction with bats, but also mating signals are within the frequency range where bats echolocate, indicating that sound communication developed after hearing by "sensory exploitation". Recent findings on moth sound communication reveal that close-range (~ a few cm) communication with low...

  7. Aperture size, materiality of the secondary room and listener location: Impact on the simulated impulse response of a coupled-volume concert hall

    Science.gov (United States)

    Ermann, Michael; Johnson, Marty E.; Harrison, Byron W.

    2003-04-01

    By adding a second room to a concert hall, and designing doors to control the sonic transparency between the two rooms, designers can create a new, coupled acoustic. Concert halls use coupling to achieve a variable, longer and distinct reverberant quality for their musicians and listeners. For this study, a coupled-volume concert hall based on an existing performing arts center is conceived and computer-modeled. It has a fixed geometric volume, form and primary-room sound absorption. Ray-tracing software simulates impulse responses, varying both aperture size and secondary-room sound absorption level, across a grid of receiver (listener) locations. The results are compared with statistical analysis that suggests a highly sensitive relationship between the double-sloped condition and the architecture of the space. This line of study aims to quantitatively and spatially correlate the double-sloped condition with (1) aperture size exposing the chamber, (2) sound absorptance in the coupled volume, and (3) listener location.
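
    The double-sloped condition studied here can be mimicked by summing a fast main-room energy decay with a weaker, slower coupled-room decay. The decay times and coupling level in the sketch below are illustrative assumptions, not results from the simulated hall.

```python
# Sketch of a double-sloped decay: the sum of a fast main-room decay and a
# slower, weaker coupled-room decay. Decay times and coupling level are
# illustrative assumptions, not values from the simulated hall.
import numpy as np

t = np.linspace(0, 4.0, 4001)          # time (s)
T1, T2 = 1.8, 4.5                      # main-room and coupled-room decay times (s)
coupling_db = -15.0                    # level of coupled-room energy re main room

e_main = 10 ** (-6 * t / T1)           # 60 dB of decay over T1 (linear energy)
e_coupled = 10 ** (coupling_db / 10) * 10 ** (-6 * t / T2)
decay_db = 10 * np.log10(e_main + e_coupled)

for target in (-10, -25, -40):         # sample points along the decay curve
    i = np.argmax(decay_db <= target)
    print(f"decay reaches {target} dB at t = {t[i]:.2f} s")
```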

  8. Using tensorial electrical resistivity survey to locate fault systems

    International Nuclear Information System (INIS)

    Monteiro Santos, Fernando A; Plancha, João P; Marques, Jorge; Perea, Hector; Cabral, João; Massoud, Usama

    2009-01-01

    This paper deals with the use of the tensorial resistivity method for fault orientation and macroanisotropy characterization. The rotational properties of the apparent resistivity tensor are presented using 3D synthetic models representing structures with a dominant direction of low resistivity and vertical discontinuities. It is demonstrated that polar diagrams of the elements of the tensor are effective in delineating those structures. As the apparent resistivity tensor is largely ineffective in investigating the depth of the structures, it is advisable to complement tensorial surveys with other geophysical methods. An experimental example, including tensorial, dipole–dipole and time domain surveys, is presented to illustrate the potential of the method. The dipole–dipole model shows high-resistivity contrasts which were interpreted as corresponding to faults crossing the area. The results from the time domain electromagnetic (TEM) soundings show high-resistivity values down to depths of 40–60 m in the north part of the area. In the southern part of the survey area the soundings show an upper layer with low-resistivity values (around 30 Ω m) followed by a more resistive bedrock (resistivity >100 Ω m) at a depth ranging from 15 to 30 m. The soundings in the central part of the survey area show more variability. A thin conductive overburden is followed by a more resistive layer with resistivity in the range of 80–1800 Ω m. The north and south limits of the central part of the area as revealed by the TEM survey are roughly E–W oriented and coincident with the north fault scarp and the southernmost fault detected by the dipole–dipole survey. The pattern of the polar diagrams calculated from the tensorial resistivity data clearly indicates the presence of a contact between two blocks in the south of the survey area, with the low-resistivity block located southwards. The presence of the other two faults is not so clear from the polar diagram patterns, but

  9. Simulation of sound waves using the Lattice Boltzmann Method for fluid flow: Benchmark cases for outdoor sound propagation

    NARCIS (Netherlands)

    Salomons, E.M.; Lohman, W.J.A.; Zhou, H.

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases:

  10. Beacons of Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2018-01-01

    The chapter discusses expectations and imaginations vis-à-vis the concert hall of the twenty-first century. It outlines some of the central historical implications of western culture’s haven for sounding music. Based on the author’s study of the Icelandic concert-house Harpa, the chapter considers...... how these implications, together with the prime mover’s visions, have been transformed as private investors and politicians took over. The chapter furthermore investigates the objectives regarding musical sound and the far-reaching demands concerning acoustics that modern concert halls are required...

  11. Sound & The Senses

    DEFF Research Database (Denmark)

    Schulze, Holger

    2012-01-01

    How are those sounds you hear right now technically generated and post-produced, how are they aesthetically conceptualized, and how culturally dependent are they really? How is your ability to hear intertwined with all the other senses and their cultural, biographical and technological construction over time? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Jonathan Sterne, AGF a.k.a Antye Greie, Jens Gerrit Papenburg & Holger Schulze.

  12. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensation and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, 133 ms) during three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  13. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

    A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time series event generated by the heart's mechanical system. From analysis of the heart sound S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician diagnose heart problems. The results show that most known heart sounds were successfully detected. There are some murmur cases where the detection failed. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
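
    The authors' method relies on S-transform analysis and deterministic rules; as a much simpler illustration of timing- and amplitude-based detection, the sketch below band-passes a synthetic phonocardiogram, takes its envelope and picks peaks with a minimum spacing. The signal, filter band and thresholds are assumptions, not the paper's parameters.

```python
# Simplified illustration of rule-based heart sound detection: band-pass,
# envelope, then peak picking with a minimum spacing. This is not the authors'
# S-transform method; the signal and thresholds are synthetic assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

fs = 2000
t = np.arange(0, 5, 1 / fs)
# Synthetic PCG: S1/S2-like bursts (~40 Hz) repeating every 0.8 s cardiac cycle.
pcg = np.zeros_like(t)
for beat in np.arange(0.2, 4.8, 0.8):
    for onset, amp in ((beat, 1.0), (beat + 0.3, 0.6)):       # S1 then S2
        idx = (t >= onset) & (t < onset + 0.08)
        pcg[idx] += amp * np.sin(2 * np.pi * 40 * (t[idx] - onset)) * np.hanning(idx.sum())
pcg += 0.05 * np.random.randn(t.size)

b, a = butter(4, [25 / (fs / 2), 150 / (fs / 2)], btype="band")
env = np.abs(hilbert(filtfilt(b, a, pcg)))
peaks, _ = find_peaks(env, height=0.25, distance=int(0.2 * fs))
print("candidate heart sound times (s):", np.round(t[peaks], 2))
```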

  14. Sound For Animation And Virtual Reality

    Science.gov (United States)

    Hahn, James K.; Docter, Pete; Foster, Scott H.; Mangini, Mark; Myers, Tom; Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)

    1995-01-01

    Sound is an integral part of the experience in computer animation and virtual reality. In this course, we will present some of the important technical issues in sound modeling, rendering, and synchronization as well as the "art" and business of sound that are being applied in animations, feature films, and virtual reality. The central theme is to bring leading researchers and practitioners from various disciplines to share their experiences in this interdisciplinary field. The course will give the participants an understanding of the problems and techniques involved in producing and synchronizing sounds, sound effects, dialogue, and music. The problem spans a number of domains including computer animation and virtual reality. Since sound has been an integral part of animations and films much longer than for computer-related domains, we have much to learn from traditional animation and film production. By bringing leading researchers and practitioners from a wide variety of disciplines, the course seeks to give the audience a rich mixture of experiences. It is expected that the audience will be able to apply what they have learned from this course in their research or production.

  15. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    Science.gov (United States)

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  16. High frequency ion sound waves associated with Langmuir waves in type III radio burst source regions

    Directory of Open Access Journals (Sweden)

    G. Thejappa

    2004-01-01

    Short wavelength ion sound waves (2–4 kHz) are detected in association with Langmuir waves (~15–30 kHz) in the source regions of several local type III radio bursts. They are most probably not due to any resonant wave-wave interactions such as the electrostatic decay instability, because their wavelengths are much shorter than those of the Langmuir waves. The Langmuir waves occur as coherent field structures with peak intensities exceeding the Langmuir collapse thresholds. Their scale sizes are of the order of the wavelength of an ion sound wave. These Langmuir wave field characteristics indicate that the observed short wavelength ion sound waves are most probably generated during the thermalization of the burnt-out cavitons left behind by Langmuir collapse. Moreover, the peak intensities of the observed short wavelength ion sound waves are comparable to the expected intensities of ion sound waves radiated by the burnt-out cavitons. However, the speeds of the electron beams derived from the frequency drift of the type III radio bursts are too slow to satisfy the needed adiabatic ion approximation. Therefore, some non-linear process, such as induced scattering on thermal ions, most probably pumps the beam-excited Langmuir waves towards lower wavenumbers, where the adiabatic ion approximation is justified.

  17. Analysis and Synthesis of Musical Instrument Sounds

    Science.gov (United States)

    Beauchamp, James W.

    For synthesizing a wide variety of musical sounds, it is important to understand which acoustic properties of musical instrument sounds are related to specific perceptual features. Some properties are obvious: Amplitude and fundamental frequency easily control loudness and pitch. Other perceptual features are related to sound spectra and how they vary with time. For example, tonal "brightness" is strongly connected to the centroid or tilt of a spectrum. "Attack impact" (sometimes called "bite" or "attack sharpness") is strongly connected to spectral features during the first 20-100 ms of sound, as well as the rise time of the sound. Tonal "warmth" is connected to spectral features such as "incoherence" or "inharmonicity."
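
    The spectral centroid mentioned above as a correlate of "brightness" is straightforward to compute from a magnitude spectrum; the sketch below does so for a synthetic harmonic tone used purely as an illustration.

```python
# Spectral centroid (often correlated with perceived "brightness") of a short
# synthetic harmonic tone; the tone itself is only an illustration.
import numpy as np

fs = 44_100
t = np.arange(0, 0.5, 1 / fs)
# Harmonic tone with amplitudes falling off as 1/n -> moderate brightness.
tone = sum((1 / n) * np.sin(2 * np.pi * 220 * n * t) for n in range(1, 11))

spectrum = np.abs(np.fft.rfft(tone * np.hanning(len(tone))))
freqs = np.fft.rfftfreq(len(tone), 1 / fs)
centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
print(f"spectral centroid: {centroid:.0f} Hz")
```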

  18. Audio-visual interactions in product sound design

    NARCIS (Netherlands)

    Özcan, E.; Van Egmond, R.

    2010-01-01

    Consistent product experience requires congruity between product properties such as visual appearance and sound. Therefore, for designing appropriate product sounds by manipulating their spectral-temporal structure, product sounds should preferably not be considered in isolation but as an integral

  19. Metal Sounds Stiffer than Drums for Ears, but Not Always for Hands: Low-Level Auditory Features Affect Multisensory Stiffness Perception More than High-Level Categorical Information

    Science.gov (United States)

    Liu, Juan; Ando, Hiroshi

    2016-01-01

    Most real-world events stimulate multiple sensory modalities simultaneously. Usually, the stiffness of an object is perceived haptically. However, auditory signals also contain stiffness-related information, and people can form impressions of stiffness from the different impact sounds of metal, wood, or glass. To understand whether there is any interaction between auditory and haptic stiffness perception, and if so, whether the inferred material category is the most relevant auditory information, we conducted experiments using a force-feedback device and the modal synthesis method to present haptic stimuli and impact sound in accordance with participants’ actions, and to modulate low-level acoustic parameters, i.e., frequency and damping, without changing the inferred material categories of sound sources. We found that metal sounds consistently induced an impression of stiffer surfaces than did drum sounds in the audio-only condition, but participants haptically perceived surfaces with modulated metal sounds as significantly softer than the same surfaces with modulated drum sounds, which directly opposes the impression induced by these sounds alone. This result indicates that, although the inferred material category is strongly associated with audio-only stiffness perception, low-level acoustic parameters, especially damping, are more tightly integrated with haptic signals than the material category is. Frequency played an important role in both audio-only and audio-haptic conditions. Our study provides evidence that auditory information influences stiffness perception differently in unisensory and multisensory tasks. Furthermore, the data demonstrated that sounds with higher frequency and/or shorter decay time tended to be judged as stiffer, and contact sounds of stiff objects had no effect on the haptic perception of soft surfaces. We argue that the intrinsic physical relationship between object stiffness and acoustic parameters may be applied as prior
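
    The stimuli were generated with modal synthesis, i.e., sums of exponentially damped sinusoids whose frequency and damping can be manipulated independently of the inferred material. The sketch below shows the idea with invented modal parameters, not the study's stimulus values.

```python
# Minimal modal synthesis of an impact sound: a sum of exponentially damped
# sinusoids. Modal frequencies, dampings and amplitudes are made up to
# illustrate the frequency/damping manipulation described, not the study's values.
import numpy as np

fs = 44_100
t = np.arange(0, 1.0, 1 / fs)

def impact(modes):
    """modes: list of (frequency_hz, damping_per_s, amplitude) tuples."""
    return sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t) for f, d, a in modes)

# Lower damping and higher partials -> more "metal-like"; heavy damping -> "drum-like".
metal_like = impact([(800, 4, 1.0), (2130, 5, 0.6), (4180, 6, 0.4)])
drum_like = impact([(120, 40, 1.0), (310, 60, 0.5), (520, 80, 0.3)])
print("peak amplitudes:", metal_like.max().round(2), drum_like.max().round(2))
```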

  20. Flaw location and characterization in anisotropic materials by ultrasonic spectral analysis

    International Nuclear Information System (INIS)

    Adler, L.; Cook, K.V.; Simpson, W.A.; Lewis, D.K.

    1978-01-01

    A method of quantitatively determining size and location of flaws in anisotropic materials such as stainless steel welds is described. In previous work, it was shown that spectral analysis of a broad band ultrasonic pulse scattered from a defect can be used to determine size and orientation in isotropic materials if the velocity of sound in the material is known. In an anisotropic structural material (stainless steel weld, centrifugal cast pipe), the velocity (both shear and longitudinal) is direction-dependent. When anisotropy is not taken into account, defect location and defect size estimation is misjudged. It will be shown that the effect of this structural variation in materials must be considered to obtain the correct size and location of defects by frequency analysis. A theoretical calculation, including anisotropy, of the scattered field from defects will also be presented

  1. Microflown based monopole sound sources for reciprocal measurements

    NARCIS (Netherlands)

    Bree, H.E. de; Basten, T.G.H.

    2008-01-01

    Monopole sound sources (i.e. omnidirectional sound sources with a known volume velocity) are essential for reciprocal measurements used in vehicle interior panel noise contribution analysis. Until recently, these monopole sound sources used a sound pressure transducer as a reference sensor. A

  2. Reef Sound as an Orientation Cue for Shoreward Migration by Pueruli of the Rock Lobster, Jasus edwardsii.

    Science.gov (United States)

    Hinojosa, Ivan A; Green, Bridget S; Gardner, Caleb; Hesse, Jan; Stanley, Jenni A; Jeffs, Andrew G

    2016-01-01

    The post-larval or puerulus stage of spiny, or rock, lobsters (Palinuridae) swims many kilometres from the open ocean into coastal waters where it subsequently settles. The orientation cues used by the puerulus for this migration are unclear, but are presumed to be critical to finding a place to settle. Understanding this process may help explain the biological processes of dispersal and settlement, and be useful for developing realistic dispersal models. In this study, we examined the use of reef sound as an orientation cue by the puerulus stage of the southern rock lobster, Jasus edwardsii. Experiments were conducted using in situ binary choice chambers together with replayed recordings of underwater reef sound. The experiment was conducted in a sandy lagoon under varying wind conditions. A significant proportion of pueruli (69%) swam towards the reef sound in calm wind conditions. However, in windy conditions (>25 m s⁻¹) the orientation behaviour appeared to be less consistent; including these trials reduced the overall proportion of pueruli that swam towards the reef sound (59.3%). These results resolve previous speculation that underwater reef sound is used as an orientation cue in the shoreward migration of the puerulus of spiny lobsters, and suggest that sea surface winds may moderate the ability of migrating pueruli to use this cue to locate coastal reef habitat in which to settle. Underwater sound may increase the chance of successful settlement and survival of this valuable species.
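
    Whether the 69% observed in calm conditions departs from the 50% expected by chance can be checked with a simple binomial test; the sample size in the sketch below is hypothetical, since the abstract reports only proportions.

```python
# Is 69% of pueruli swimming toward reef sound different from the 50% expected
# by chance? The sample size below is hypothetical; the abstract gives only
# the proportion.
from scipy.stats import binomtest

n_total = 100                       # assumed number of pueruli tested in calm wind
n_toward = 69                       # 69% swam toward the reef sound
result = binomtest(n_toward, n_total, p=0.5, alternative="greater")
print(f"one-sided binomial test p-value: {result.pvalue:.4f}")
```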

  3. Pacific and Atlantic herring produce burst pulse sounds.

    Science.gov (United States)

    Wilson, Ben; Batty, Robert S; Dill, Lawrence M

    2004-02-07

    The commercial importance of Pacific and Atlantic herring (Clupea pallasii and Clupea harengus) has ensured that much of their biology has received attention. However, their sound production remains poorly studied. We describe the sounds made by captive wild-caught herring. Pacific herring produce distinctive bursts of pulses, termed Fast Repetitive Tick (FRT) sounds. These trains of broadband pulses (1.7-22 kHz) lasted between 0.6 s and 7.6 s. Most were produced at night; feeding regime did not affect their frequency, and fish produced FRT sounds without direct access to the air. Digestive gas or gulped air transfer to the swim bladder, therefore, do not appear to be responsible for FRT sound generation. Atlantic herring also produce FRT sounds, and video analysis showed an association with bubble expulsion from the anal duct region (i.e. from the gut or swim bladder). To the best of the authors' knowledge, sound production by such means has not previously been described. The function(s) of these sounds are unknown, but as the per capita rates of sound production by fish at higher densities were greater, social mediation appears likely. These sounds may have consequences for our understanding of herring behaviour and the effects of noise pollution.

  4. 7 CFR 29.6036 - Sound.

    Science.gov (United States)

    2010-01-01

    Title 7 (Agriculture), Vol. 2, 2010-01-01. Section 29.6036 Sound — Agriculture Regulations of the Department of Agriculture, AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing...), INSPECTION Standards Definitions § 29.6036 Sound. Free of damage. (See Rule 4.) ...

  5. Experimental analysis of considering the sound pressure distribution pattern at the ear canal entrance as an unrevealed head-related localization clue

    Institute of Scientific and Technical Information of China (English)

    TONG Xin; QI Na; MENG Zihou

    2018-01-01

    By analyzing the differences between binaural recording and real listening, it was deduced that there are some unrevealed auditory localization clues, and that the sound pressure distribution pattern at the entrance of the ear canal is probably such a clue. It was proved through a listening test, by reductio ad absurdum, that unrevealed auditory localization clues really exist. The effective frequency bands of the unrevealed localization clues were derived and summarized. The results of finite-element-based simulations showed that the pressure distribution at the entrance of the ear canal is non-uniform and that the pattern is related to the direction of the sound source. It was shown that the sound pressure distribution pattern at the entrance of the ear canal carries sound source direction information and can be used as an unrevealed localization clue. The frequency bands in which the sound pressure distribution patterns had significant differences between front and back sound source directions roughly matched the effective frequency bands of unrevealed localization clues obtained from the listening tests. To some extent, this supports the hypothesis that the sound pressure distribution pattern could be a kind of unrevealed auditory localization clue.

  6. Liquid structure and temperature invariance of sound velocity in supercooled Bi melt

    International Nuclear Information System (INIS)

    Emuna, M.; Mayo, M.; Makov, G.; Greenberg, Y.; Caspi, E. N.; Yahel, E.; Beuneu, B.

    2014-01-01

    Structural rearrangement of liquid Bi in the vicinity of the melting point has been proposed on the basis of the unique temperature-invariant sound velocity observed above the melting temperature, the low symmetry of Bi in the solid phase, and the necessity of overheating to achieve supercooling. The existence of this structural rearrangement is examined by measurements on supercooled Bi. The sound velocity of liquid Bi was measured into the supercooled region to high accuracy and was found to be invariant over a temperature range of ∼60°, from 35° above the melting point to ∼25° into the supercooled region. The structural origin of this phenomenon was explored by neutron diffraction measurements in the supercooled temperature range. These measurements indicate a continuous modification of the short range order in the melt. The structure of the liquid is analyzed within a quasi-crystalline model and is found to evolve continuously, similar to other known liquid pnictide systems. The results are discussed in the context of two competing hypotheses proposed to explain the properties of liquid Bi near melting: (i) liquid bismuth undergoes a structural rearrangement slightly above melting, and (ii) liquid Bi exhibits a broad maximum in the sound velocity located incidentally at the melting temperature.

  7. Misconceptions About Sound Among Engineering Students

    Science.gov (United States)

    Pejuan, Arcadi; Bohigas, Xavier; Jaén, Xavier; Periago, Cristina

    2012-12-01

    Our first objective was to detect misconceptions about the microscopic nature of sound among senior university students enrolled in different engineering programmes (from chemistry to telecommunications). We sought to determine how these misconceptions are expressed (qualitative aspect) and, only very secondarily, to gain a general idea of the extent to which they are held (quantitative aspect). Our second objective was to explore other misconceptions about wave aspects of sound. We have also considered the degree of consistency in the model of sound used by each student. Forty students answered a questionnaire including open-ended questions. Based on their free, spontaneous answers, the main results were as follows: a large majority of students answered most of the questions regarding the microscopic model of sound according to the scientifically accepted model; however, only a small number answered consistently. The main model misconception found was the notion that sound is propagated through the travelling of air particles, even in solids. Misconceptions and mental-model inconsistencies tended to depend on the engineering programme in which the student was enrolled. However, students in general were inconsistent also in applying their model of sound to individual sound properties. The main conclusion is that our students have not truly internalised the scientifically accepted model that they have allegedly learnt. This implies a need to design learning activities that take these findings into account in order to be truly efficient.

  8. Preferred retinal location induced by macular occlusion in a target recognition task

    Science.gov (United States)

    Ness, James W.; Zwick, Harry; Molchany, Jerome W.

    1996-04-01

    Laser-induced central retinal damage not only may diminish visual function, but also may diminish the afferent input that provides the ocular motor system with the feedback necessary to move the target to the fovea. Local visual field stabilizations have been used to demonstrate that central artificial occlusions in the normal retina suppress visual function. The purpose of this paper is to evaluate the effect of local field stabilizations on the ocular motor system in a contrast sensitivity task. Five subjects who tested normal in a standard clinical eye exam viewed Landolt rings at varying visual angles under three artificial scotoma conditions and a no-scotoma condition. The scotoma conditions were a 2° and a 5° stabilized central scotoma and a 2° stabilized scotoma positioned 1° nasal to the fovea. A Dual Purkinje Eye-Tracker (SRI, version 5) was used to provide eye-position data and to stabilize the artificial scotoma on the retina. The data showed a consistent preference for placing the target in the superior retina under the 2° and 5° conditions, with a strong positive correlation between visual angle and deflection of the eye position into the superior retina. These data suggest that loss of visual function from laser-induced foveal damage may be due in part to a disruption in the ocular motor system. Thus, even if some function remains in the damage site ophthalmoscopically, the ocular motor system may organize around a nonfoveal retinal location, behaviorally suppressing foveal input.

  9. Neuroanatomic organization of sound memory in humans.

    Science.gov (United States)

    Kraut, Michael A; Pitcock, Jeffery A; Calhoun, Vince; Li, Juan; Freeman, Thomas; Hart, John

    2006-11-01

    The neural interface between sensory perception and memory is a central issue in neuroscience, particularly initial memory organization following perceptual analyses. We used functional magnetic resonance imaging to identify anatomic regions extracting initial auditory semantic memory information related to environmental sounds. Two distinct anatomic foci were detected in the right superior temporal gyrus when subjects identified sounds representing either animals or threatening items. Threatening animal stimuli elicited signal changes in both foci, suggesting a distributed neural representation. Our results demonstrate both category- and feature-specific responses to nonverbal sounds in early stages of extracting semantic memory information from these sounds. This organization allows for these category-feature detection nodes to extract early, semantic memory information for efficient processing of transient sound stimuli. Neural regions selective for threatening sounds are similar to those of nonhuman primates, demonstrating semantic memory organization for basic biological/survival primitives are present across species.

  10. Sound Equipment Fabrication and Values in Nigerian Theatre ...

    African Journals Online (AJOL)

    The main point of this paper is to discover ways of fabricating sound and sound-effects equipment for theatrical productions in Nigeria, which has become essential since most educational theatres cannot afford Western sound and sound-effects equipment. Even when available, they are old fashioned, compared to the ...

  11. Thinking The City Through Sound

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2011-01-01

    In Acoustic Territories: Sound Culture and Everyday Life, Brandon LaBelle sets out to chart an urban topology through sound. Working his way through six acoustic territories – underground, home, sidewalk, street, shopping mall and sky/radio – LaBelle investigates tensions and potentials inherent in mo...

  12. Basic semantics of product sounds

    NARCIS (Netherlands)

    Özcan Vieira, E.; Van Egmond, R.

    2012-01-01

    Product experience is a result of sensory and semantic experiences with product properties. In this paper, we focus on the semantic attributes of product sounds and explore the basic components for product sound related semantics using a semantic differential paradigm and factor analysis. With two

  13. Fourth sound of holographic superfluids

    International Nuclear Information System (INIS)

    Yarom, Amos

    2009-01-01

    We compute fourth sound for superfluids dual to a charged scalar and a gauge field in an AdS4 background. For holographic superfluids with condensates that have a large scaling dimension (greater than approximately two), we find that fourth sound approaches first sound at low temperatures. For condensates that have a small scaling dimension it exhibits non-conformal behavior at low temperatures, which may be tied to the non-conformal behavior of the order parameter of the superfluid. We show that by introducing an appropriate scalar potential, conformal invariance can be enforced at low temperatures.

  14. Cross-Modal Associations between Sounds and Drink Tastes/Textures: A Study with Spontaneous Production of Sound-Symbolic Words.

    Science.gov (United States)

    Sakamoto, Maki; Watanabe, Junji

    2016-03-01

    Many languages have a word class whose speech sounds are linked to sensory experiences. Several recent studies have demonstrated cross-modal associations (or correspondences) between sounds and gustatory sensations by asking participants to match predefined sound-symbolic words (e.g., "maluma/takete") with the taste/texture of foods. Here, we further explore cross-modal associations using the spontaneous production of words and semantic ratings of sensations. In the experiment, after drinking liquids, participants were asked to express their taste/texture using Japanese sound-symbolic words, and at the same time, to evaluate it in terms of criteria expressed by adjectives. Because the Japanese language has a large vocabulary of sound-symbolic words, and Japanese people frequently use them to describe taste/texture, analyzing a variety of Japanese sound-symbolic words spontaneously produced to express taste/textures might enable us to explore the mechanism of taste/texture categorization. A hierarchical cluster analysis based on the relationship between linguistic sounds and taste/texture evaluations revealed the structure of sensation categories. The results indicate that an emotional evaluation like pleasant/unpleasant is the primary cluster in gustation. © The Author 2015. Published by Oxford University Press. All rights reserved.
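
    The reported category structure came from hierarchical clustering of the relationship between word sounds and ratings; the sketch below shows a generic version of such an analysis with invented words and rating vectors, not the study's data.

```python
# Generic sketch of hierarchical clustering on taste/texture rating vectors,
# in the spirit of the analysis described. The words and ratings are invented
# placeholders, not the study's data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

words = ["toro-toro", "sara-sara", "piri-piri", "zara-zara", "puru-puru"]
# Columns: pleasant, thick, sharp (1-7 semantic ratings), invented values.
ratings = np.array([
    [6.1, 5.8, 1.2],
    [5.9, 1.4, 1.5],
    [2.0, 1.1, 6.3],
    [2.4, 3.9, 5.8],
    [6.4, 4.7, 1.9],
])

Z = linkage(ratings, method="ward")              # agglomerative clustering
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into two clusters
for word, label in zip(words, labels):
    print(f"{word}: cluster {label}")
```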

  15. A Fast Algorithm of Cartographic Sounding Selection

    Institute of Scientific and Technical Information of China (English)

    SUI Haigang; HUA Li; ZHAO Haitao; ZHANG Yongli

    2005-01-01

    An effective strategy and framework that adequately integrate automated and manual processes for fast cartographic sounding selection is presented. Important submarine topographic features are extracted for the selection of important soundings, and an improved "influence circle" algorithm is introduced for sounding selection. For automatic configuration of the sounding distribution pattern, a special algorithm considering multiple factors is employed. A semi-automatic method for resolving ambiguous conflicts is described. On the basis of these algorithms and strategies, a system named HGIS for fast cartographic sounding selection was developed and applied at the Chinese Marine Safety Administration Bureau (CMSAB). The application experiments show that the system is effective and reliable. Finally, some conclusions and future work are given.
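
    The abstract does not spell out the improved "influence circle" algorithm. The sketch below shows a generic member of that family, a radius-based greedy selection that keeps the shallowest (navigationally critical) sounding and suppresses neighbours inside its circle; it is an assumption about the general approach, with invented data and radius, not the authors' implementation.

```python
# Generic radius-based ("influence circle") sounding selection: repeatedly keep
# the shallowest remaining sounding and drop all others within its radius.
# Illustrates the family of algorithm only; data and radius are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x, y = rng.uniform(0, 1000, n), rng.uniform(0, 1000, n)   # positions (m)
depth = rng.uniform(2, 50, n)                              # depths (m)
radius = 60.0                                              # influence circle (m)

selected = []
alive = np.ones(n, dtype=bool)
for i in np.argsort(depth):            # shallowest first: most safety-critical
    if not alive[i]:
        continue
    selected.append(i)
    dist = np.hypot(x - x[i], y - y[i])
    alive[dist <= radius] = False      # suppress neighbours inside the circle

print(f"kept {len(selected)} of {n} soundings")
```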

  16. Infrastructure and industrial location : a dual technology approach

    OpenAIRE

    Bjorvatn, Kjetil

    2001-01-01

    The paper investigates how differences in infrastructure quality may affect industrial location between countries. Employing a dual-technology model, the main result of the paper is the somewhat surprising conclusion that an improvement in a country’s infrastructure may weaken its locational advantage and induce a firm to locate production in a country with a less efficient infrastructure.

  17. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    Heart sound is one of the most important physiological signals. However, the acquisition of heart sound signals can be disturbed by many external factors. The recorded heart sound is a weak signal, and even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. As a result, removing the noise mixed with the heart sound is a key step. In this paper, a systematic study of heart sound denoising based on MATLAB is presented. The noisy heart sound signal is first transformed into the wavelet domain with MATLAB's processing functions and decomposed at multiple levels. Soft thresholding is then applied to the detail coefficients to eliminate noise, which significantly improves the denoising result. The denoised signal is reconstructed step by step from the processed coefficients. Finally, 50 Hz power-line interference and 35 Hz electromechanical interference are removed using notch filters.
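
    The pipeline described (multi-level wavelet decomposition, soft thresholding of detail coefficients, reconstruction, then notch filtering) maps directly onto PyWavelets and SciPy. The sketch below is a Python analogue of the MATLAB workflow, with a synthetic signal and an assumed universal-threshold rule rather than the paper's settings.

```python
# Python analogue of the described MATLAB workflow: wavelet decomposition,
# soft thresholding of detail coefficients, reconstruction, then notch filters
# at 50 Hz and 35 Hz. Signal and threshold rule are illustrative assumptions.
import numpy as np
import pywt
from scipy.signal import iirnotch, filtfilt

fs = 1000
t = np.arange(0, 4, 1 / fs)
heart = np.sin(2 * np.pi * 30 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)  # crude S1-like bursts
noisy = heart + 0.2 * np.random.randn(t.size) + 0.3 * np.sin(2 * np.pi * 50 * t)

# 1) Multi-level wavelet decomposition and soft thresholding of detail coefficients.
coeffs = pywt.wavedec(noisy, "db6", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate (universal rule)
thr = sigma * np.sqrt(2 * np.log(noisy.size))
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db6")[: noisy.size]

# 2) Notch out 50 Hz mains and 35 Hz electromechanical interference.
for f0 in (50.0, 35.0):
    b, a = iirnotch(f0, Q=30, fs=fs)
    denoised = filtfilt(b, a, denoised)

print("residual RMS vs clean signal:", np.sqrt(np.mean((denoised - heart) ** 2)).round(3))
```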

  18. PULSAR. MAKING VISIBLE THE SOUND OF STARS

    OpenAIRE

    Lega, Ferran

    2015-01-01

    [EN] "Pulsar, making visible the sound of stars" is a communication based on a sound installation conceived as a site-specific project to show the hidden abilities of sound to generate images and patterns in matter, using the acoustic science of cymatics. The objective of this communication is to show how, through abstract and intangible sounds from celestial bodies of the cosmos (radio waves generated by electromagnetic pulses from the rotation of neutron stars), we can create ar...

  19. 7 CFR 29.2298 - Sound.

    Science.gov (United States)

    2010-01-01

    Title 7 (Agriculture), Vol. 2, 2010-01-01. Section 29.2298 Sound — Agriculture Regulations of the Department of Agriculture, AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing...), INSPECTION Standards, Official Standard Grades for Virginia Fire-Cured Tobacco (U.S. Type 21) § 29.2298 Sound...

  20. Cognitive Control of Involuntary Distraction by Deviant Sounds

    Science.gov (United States)

    Parmentier, Fabrice B. R.; Hebrero, Maria

    2013-01-01

    It is well established that a task-irrelevant sound (deviant sound) departing from an otherwise repetitive sequence of sounds (standard sounds) elicits an involuntary capture of attention and orienting response toward the deviant stimulus, resulting in the lengthening of response times in an ongoing task. Some have argued that this type of…