WorldWideScience

Sample records for sound crossing time

  1. Sound Cross-synthesis and Morphing Using Dictionary-based Methods

    DEFF Research Database (Denmark)

    Collins, Nick; Sturm, Bob L.

    2011-01-01

    Dictionary-based methods (DBMs) provide rich possibilities for new sound transformations; as the analysis dual to granular synthesis, audio signals are decomposed into `atoms', allowing interesting manipulations. We present various approaches to audio signal cross-synthesis and cross-analysis via atomic decomposition using scale-time-frequency dictionaries. DBMs naturally provide high-level descriptions of a signal and its content, which can allow for greater control over what is modified and how. Through these models, we can make one signal decomposition influence that of another to create cross-synthesized sounds. We present several examples of these techniques both theoretically and practically, and present on-going and further work.

  2. Cross-Modal Associations between Sounds and Drink Tastes/Textures: A Study with Spontaneous Production of Sound-Symbolic Words.

    Science.gov (United States)

    Sakamoto, Maki; Watanabe, Junji

    2016-03-01

    Many languages have a word class whose speech sounds are linked to sensory experiences. Several recent studies have demonstrated cross-modal associations (or correspondences) between sounds and gustatory sensations by asking participants to match predefined sound-symbolic words (e.g., "maluma/takete") with the taste/texture of foods. Here, we further explore cross-modal associations using the spontaneous production of words and semantic ratings of sensations. In the experiment, after drinking liquids, participants were asked to express their taste/texture using Japanese sound-symbolic words, and at the same time, to evaluate it in terms of criteria expressed by adjectives. Because the Japanese language has a large vocabulary of sound-symbolic words, and Japanese people frequently use them to describe taste/texture, analyzing a variety of Japanese sound-symbolic words spontaneously produced to express taste/textures might enable us to explore the mechanism of taste/texture categorization. A hierarchical cluster analysis based on the relationship between linguistic sounds and taste/texture evaluations revealed the structure of sensation categories. The results indicate that an emotional evaluation like pleasant/unpleasant is the primary cluster in gustation.

  3. Cross-Modal Correspondence between Brightness and Chinese Speech Sound with Aspiration

    Directory of Open Access Journals (Sweden)

    Sachiko Hirata

    2011-10-01

    Phonetic symbolism is the phenomenon of speech sounds evoking images based on sensory experiences; it is often discussed together with cross-modal correspondence. By using Garner's task, Hirata, Kita, and Ukita (2009) showed the cross-modal congruence between brightness and voiced/voiceless consonants in Japanese speech sounds, which is known as phonetic symbolism. In the present study, we examined the effect of the meaning of mimetics (lexical words whose sound reflects their meaning, like “ding-dong”) in the Japanese language on the cross-modal correspondence. We conducted an experiment with Chinese speech sounds with or without aspiration, with Chinese participants. Chinese vocabulary also contains mimetics, but the existence of aspiration is not related to the meaning of Chinese mimetics. As a result, Chinese speech sounds with aspiration, which resemble voiceless consonants, were matched with white, whereas those without aspiration were matched with black. This result is identical to the pattern found in Japanese participants and consequently suggests that cross-modal correspondence occurs without the effect of the meaning of mimetics. Whether these cross-modal correspondences are based purely on physical properties of speech sounds or are affected by phonetic properties remains a question for further study.

  4. Sound field separation with cross measurement surfaces.

    Directory of Open Access Journals (Sweden)

    Jin Mao

    With conventional near-field acoustical holography, it is impossible to identify sound pressure when the coherent sound sources are located on the same side of the array. This paper proposes a solution, using cross measurement surfaces to separate the sources based on the equivalent source method. Each equivalent source surface is a spherical surface built at the center of the corresponding original source. According to the different transfer matrices between the equivalent sources and points on the holographic surfaces, the weighting of each equivalent source from the coherent sources can be obtained. Numerical and experimental studies have been performed to test the method. For sound pressure contaminated by noise after separation in the experiment, the calculation accuracy can be improved by reconstructing the pressure with Tikhonov regularization and the L-curve method. On the whole, a single source can be effectively separated from coherent sources using cross measurement.
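
    The separation step described above hinges on solving for equivalent-source weights from holographic pressure data with Tikhonov regularization. A minimal sketch of that regularized least-squares step is given below; the geometry, frequency, noise level, and regularization parameter are illustrative assumptions rather than the configuration used in the paper, and the L-curve selection of the parameter is not implemented.

```python
# Minimal sketch (not the authors' code): estimating equivalent-source weights
# from holographic pressure measurements with Tikhonov regularization.
# Geometry, frequency, and the regularization parameter are illustrative choices.
import numpy as np

def greens_matrix(mic_pos, src_pos, k):
    """Free-field Green's functions G[m, s] = exp(-j*k*r) / (4*pi*r)."""
    r = np.linalg.norm(mic_pos[:, None, :] - src_pos[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def tikhonov_weights(G, p, lam):
    """q = argmin ||G q - p||^2 + lam^2 ||q||^2 (closed-form normal equations)."""
    A = G.conj().T @ G + (lam ** 2) * np.eye(G.shape[1])
    return np.linalg.solve(A, G.conj().T @ p)

# Illustrative setup: 64 microphones on a plane, two clusters of equivalent sources.
rng = np.random.default_rng(0)
f, c = 1000.0, 343.0                      # frequency [Hz], speed of sound [m/s]
k = 2 * np.pi * f / c                     # wavenumber
mics = np.c_[rng.uniform(-0.5, 0.5, (64, 2)), np.zeros(64)]          # z = 0 plane
eq_src = np.r_[np.c_[rng.normal(-0.3, 0.02, (20, 2)), -0.4 * np.ones(20)],
               np.c_[rng.normal(+0.3, 0.02, (20, 2)), -0.4 * np.ones(20)]]

G = greens_matrix(mics, eq_src, k)
q_true = rng.normal(size=40) + 1j * rng.normal(size=40)
p = G @ q_true + 0.01 * (rng.normal(size=64) + 1j * rng.normal(size=64))   # noisy data

q_est = tikhonov_weights(G, p, lam=1e-3)   # lambda would come from the L-curve
print("relative error:", np.linalg.norm(q_est - q_true) / np.linalg.norm(q_true))
```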

  5. Sound quality indicators for urban places in Paris cross-validated by Milan data.

    Science.gov (United States)

    Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre

    2015-10-01

    A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross validation of the quality models extracted from Paris data was carried out by conducting the same survey in Milan. The proposed general sound quality model is correlated with the perceived sound quality (72%). Another model without visual amenity and familiarity is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed by Kohonen's Artificial Neural Network algorithm, and seven specific class models were developed. These specific models attribute more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments assessed by Italian people.
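
    The record above builds its indicators from linear regressions of perceived quality on perceptive variables. The sketch below shows the shape of such a fit with ordinary least squares on synthetic data; the predictor names and coefficients are hypothetical and are not the published Paris or Milan models.

```python
# Minimal sketch, not the published Paris model: fitting a linear sound-quality
# indicator from perceptive variables with ordinary least squares.
# The predictor names and the synthetic data below are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 300
# Hypothetical perceptive ratings collected per 10-min measurement point.
loudness_traffic = rng.uniform(0, 10, n)
presence_birds   = rng.uniform(0, 10, n)
visual_amenity   = rng.uniform(0, 10, n)
quality = 8 - 0.5 * loudness_traffic + 0.3 * presence_birds \
            + 0.2 * visual_amenity + rng.normal(0, 1, n)   # synthetic "ground truth"

X = np.column_stack([np.ones(n), loudness_traffic, presence_birds, visual_amenity])
coef, *_ = np.linalg.lstsq(X, quality, rcond=None)
pred = X @ coef
r = np.corrcoef(pred, quality)[0, 1]
print("coefficients:", np.round(coef, 2), " correlation with perceived quality:", round(r, 2))
```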

  6. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  7. Soundness of Timed-Arc Workflow Nets in Discrete and Continuous-Time Semantics

    DEFF Research Database (Denmark)

    Mateo, Jose Antonio; Srba, Jiri; Sørensen, Mathias Grund

    2015-01-01

    Analysis of workflow processes with quantitative aspects like timing is of interest in numerous time-critical applications. We suggest a workflow model based on timed-arc Petri nets and study the foundational problems of soundness and strong (time-bounded) soundness. We first consider the discrete-t...

  8. Environmental Sound Recognition Using Time-Frequency Intersection Patterns

    Directory of Open Access Journals (Sweden)

    Xuan Guo

    2012-01-01

    Environmental sound recognition is an important function of robots and intelligent computer systems. In this research, we use a multistage perceptron neural network system for environmental sound recognition. The input data is a combination of the time-variance pattern of instantaneous powers and a frequency-variance pattern given by the instantaneous spectrum at the power peak, referred to as a time-frequency intersection pattern. Spectra of many environmental sounds change more slowly than those of speech or voice, so the intersectional time-frequency pattern will preserve the major features of environmental sounds but with drastically reduced data requirements. Two experiments were conducted using an original database and an open database created by the RWCP project. The recognition rate for 20 kinds of environmental sounds was 92%. The recognition rate of the new method was about 12% higher than that of methods using only an instantaneous spectrum. The results are also comparable with HMM-based methods, although those methods need to treat the time variance of an input vector series with more complicated computations.
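
    As a rough illustration of the feature described above, the sketch below concatenates a fixed-length power envelope over time with the magnitude spectrum of the frame at the power peak. Frame length, hop size, pattern lengths, and the normalisation are assumptions, not the parameters used in the study.

```python
# Minimal sketch of a "time-frequency intersection pattern" as described above:
# the frame-power envelope over time plus the spectrum of the frame at the power
# peak, concatenated into one feature vector. Frame length, hop and the
# normalisation are assumptions, not the authors' exact parameters.
import numpy as np

def tf_intersection_pattern(x, sr, frame=1024, hop=512, n_time=32, n_freq=64):
    frames = np.lib.stride_tricks.sliding_window_view(x, frame)[::hop]
    power = (frames ** 2).mean(axis=1)                      # instantaneous power per frame
    # time-variance pattern: power envelope resampled to a fixed length
    t_idx = np.linspace(0, len(power) - 1, n_time).astype(int)
    time_pattern = power[t_idx] / (power.max() + 1e-12)
    # frequency-variance pattern: magnitude spectrum of the frame at the power peak
    peak_frame = frames[np.argmax(power)] * np.hanning(frame)
    spec = np.abs(np.fft.rfft(peak_frame))
    f_idx = np.linspace(0, len(spec) - 1, n_freq).astype(int)
    freq_pattern = spec[f_idx] / (spec.max() + 1e-12)
    return np.concatenate([time_pattern, freq_pattern])     # input to the perceptron

# Example: a decaying 440 Hz burst as a stand-in for an environmental sound.
sr = 16000
t = np.arange(sr) / sr
x = np.exp(-3 * t) * np.sin(2 * np.pi * 440 * t)
print(tf_intersection_pattern(x, sr).shape)                 # (96,)
```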

  9. Aurally-adequate time-frequency analysis for scattered sound in auditoria

    Science.gov (United States)

    Norris, Molly K.; Xiang, Ning; Kleiner, Mendel

    2005-04-01

    The goal of this work was to apply an aurally-adequate time-frequency analysis technique to the analysis of sound scattering effects in auditoria. Time-frequency representations were developed in an effort that takes binaural hearing into account, with a specific implementation of an interaural cross-correlation process. A model of the human auditory system was implemented in the MATLAB platform based on two previous models [A. Härmä and K. Palomäki, HUTear, Espoo, Finland; and M. A. Akeroyd, A Binaural Cross-correlogram Toolbox for MATLAB (2001), University of Sussex, Brighton]. These stages include proper frequency selectivity, the conversion of the mechanical motion of the basilar membrane to neural impulses, and binaural hearing effects. The model was then used in the analysis of room impulse responses with varying scattering characteristics. This paper discusses the analysis results using simulated and measured room impulse responses. [Work supported by the Frank H. and Eva B. Buck Foundation.]
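
    The binaural stage mentioned above rests on an interaural cross-correlation computed per frequency band. The sketch below uses a single Butterworth band and half-wave rectification as a simplified stand-in for the HUTear/cross-correlogram stages; band edges and lag range are illustrative.

```python
# Simplified sketch of an interaural cross-correlation stage, as a stand-in for
# the HUTear/cross-correlogram processing described above. A Butterworth band-pass
# and half-wave rectification replace the model's filterbank and neural-transduction
# stages; band edges and lag range are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

def iacc(left, right, sr, band=(400.0, 1200.0), max_lag_ms=1.0):
    b, a = butter(2, [band[0] / (sr / 2), band[1] / (sr / 2)], btype="band")
    l = np.maximum(filtfilt(b, a, left), 0.0)     # half-wave rectified band signal
    r = np.maximum(filtfilt(b, a, right), 0.0)
    max_lag = int(sr * max_lag_ms / 1000)
    lags = np.arange(-max_lag, max_lag + 1)
    norm = np.sqrt(np.sum(l ** 2) * np.sum(r ** 2)) + 1e-12
    # positive lag means the right channel lags the left
    cc = np.array([np.sum(l * np.roll(r, -k)) for k in lags]) / norm
    return lags / sr, cc                          # lag axis [s], correlation

# Example: right channel delayed by 0.5 ms relative to the left.
sr = 48000
rng = np.random.default_rng(2)
x = rng.normal(size=sr // 2)
delay = int(0.0005 * sr)
left, right = x, np.r_[np.zeros(delay), x[:-delay]]
lag, cc = iacc(left, right, sr)
print("estimated ITD [ms]:", 1000 * lag[np.argmax(cc)])
```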

  10. A Real-Time Sound Field Rendering Processor

    Directory of Open Access Journals (Sweden)

    Tan Yiyu

    2017-12-01

    Real-time sound field renderings are computationally intensive and memory-intensive. Traditional rendering systems based on computer simulations are limited by memory bandwidth and arithmetic units. The computation is time-consuming, and the sample rate of the output sound is low because of the long computation time at each time step. In this work, a processor with a hybrid architecture is proposed to speed up computation and improve the sample rate of the output sound, and an interface is developed for system scalability by simply cascading many chips to enlarge the simulated area. To render a three-minute Beethoven wave sound in a small shoe-box room with dimensions of 1.28 m × 1.28 m × 0.64 m, the field-programmable gate array (FPGA)-based prototype machine with the proposed architecture carries out the sound rendering at run-time, while the software simulation with OpenMP parallelization takes about 12.70 min on a personal computer (PC) with 32 GB of random access memory (RAM) and an Intel i7-6800K six-core processor running at 3.4 GHz. The throughput of the software simulation is about 194 M grids/s, while it is 51.2 G grids/s in the prototype machine, even though the clock frequency of the prototype machine is much lower than that of the PC. The rendering processor with one processing element (PE) and interfaces consumes about 238,515 gates when fabricated with the 0.18 µm process technology from ROHM Semiconductor Co., Ltd. (Kyoto, Japan), and the power consumption is about 143.8 mW.

  11. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.

  12. Is 9 louder than 1? Audiovisual cross-modal interactions between number magnitude and judged sound loudness.

    Science.gov (United States)

    Alards-Tomalin, Doug; Walker, Alexander C; Shaw, Joshua D M; Leboe-McGowan, Launa C

    2015-09-01

    The cross-modal impact of number magnitude (i.e. Arabic digits) on perceived sound loudness was examined. Participants compared a target sound's intensity level against a previously heard reference sound (which they judged as quieter or louder). Paired with each target sound was a task irrelevant Arabic digit that varied in magnitude, being either small (1, 2, 3) or large (7, 8, 9). The degree to which the sound and the digit were synchronized was manipulated, with the digit and sound occurring simultaneously in Experiment 1, and the digit preceding the sound in Experiment 2. Firstly, when target sounds and digits occurred simultaneously, sounds paired with large digits were categorized as loud more frequently than sounds paired with small digits. Secondly, when the events were separated, number magnitude ceased to bias sound intensity judgments. In Experiment 3, the events were still separated, however the participants held the number in short-term memory. In this instance the bias returned.

  13. Acoustic Performance of a Real-Time Three-Dimensional Sound-Reproduction System

    Science.gov (United States)

    Faller, Kenneth J., II; Rizzi, Stephen A.; Aumann, Aric R.

    2013-01-01

    The Exterior Effects Room (EER) is a 39-seat auditorium at the NASA Langley Research Center and was built to support psychoacoustic studies of aircraft community noise. The EER has a real-time simulation environment which includes a three-dimensional sound-reproduction system. This system requires real-time application of equalization filters to compensate for spectral coloration of the sound reproduction due to installation and room effects. This paper describes the efforts taken to develop the equalization filters for use in the real-time sound-reproduction system and the subsequent analysis of the system's acoustic performance. The acoustic performance of the compensated and uncompensated sound-reproduction system is assessed for its crossover performance, its performance under stationary and dynamic conditions, the maximum spatialized sound pressure level it can produce from a single virtual source, and for the spatial uniformity of a generated sound field. Additionally, application examples are given to illustrate the compensated sound-reproduction system performance using recorded aircraft flyovers.

  14. Design of Wearable Breathing Sound Monitoring System for Real-Time Wheeze Detection

    Directory of Open Access Journals (Sweden)

    Shih-Hong Li

    2017-01-01

    In the clinic, the wheezing sound is usually considered an indicator symptom reflecting the degree of airway obstruction. Auscultation is the most common way to diagnose wheezing sounds, but it depends subjectively on the experience of the physician. Several previous studies attempted to extract features of breathing sounds to detect wheezing sounds automatically. However, there is still a lack of suitable monitoring systems for real-time wheeze detection in daily life. In this study, a wearable and wireless breathing sound monitoring system for real-time wheeze detection was proposed. Moreover, a breathing sound analysis algorithm was designed to continuously extract and analyze the features of breathing sounds and provide objective, quantitative information about breathing sounds to physicians. Here, normalized spectral integration (NSI) was also designed and applied in wheeze detection. The proposed algorithm requires only short-term data of breathing sounds and low computational complexity to perform real-time wheeze detection, and is suitable for implementation in a commercial portable device with relatively low computing power and memory. The experimental results show that the proposed system performs wheeze detection accurately and might be a useful assisting tool for the analysis of breathing sounds in clinical diagnosis.
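
    The abstract names normalized spectral integration (NSI) but does not define it, so the sketch below uses an assumed reading: the fraction of short-time spectral energy falling inside a wheeze-typical band. Frame length, band edges, and the decision threshold are likewise assumptions and not the authors' parameters.

```python
# Illustrative sketch only: the band-energy ratio below is an assumed reading of
# "normalized spectral integration" (energy in a wheeze-typical band divided by
# total frame energy), not the authors' formula. Frame length, band edges and the
# threshold are assumptions.
import numpy as np

def nsi_like_feature(frame, sr, band=(100.0, 1000.0)):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spec[in_band].sum() / (spec.sum() + 1e-12)

def detect_wheeze(x, sr, frame_len=0.1, threshold=0.6):
    n = int(frame_len * sr)
    frames = [x[i:i + n] for i in range(0, len(x) - n, n)]
    scores = np.array([nsi_like_feature(f, sr) for f in frames])
    return scores > threshold        # per-frame wheeze decision (assumed threshold)

# Example: a 400 Hz "wheeze-like" tone buried in broadband breath noise.
sr = 4000
t = np.arange(2 * sr) / sr
rng = np.random.default_rng(3)
x = 0.3 * rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 400 * t)
print(detect_wheeze(x, sr).mean())   # fraction of frames flagged
```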

  15. Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device.

    Science.gov (United States)

    Hamilton-Fletcher, Giles; Wright, Thomas D; Ward, Jamie

    Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have avoided colour, and when they do encode colour, they have assigned sounds to colours in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users who had their device either coded in line with or opposite to sound–colour correspondences. Users with the correspondence-based mappings showed improved colour memory and made fewer colour errors. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that also featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.

  16. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    Science.gov (United States)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.

  17. Dual-task interference effects on cross-modal numerical order and sound intensity judgments: the more the louder?

    Science.gov (United States)

    Alards-Tomalin, Doug; Walker, Alexander C; Nepon, Hillary; Leboe-McGowan, Launa C

    2017-09-01

    In the current study, cross-task interactions between number order and sound intensity judgments were assessed using a dual-task paradigm. Participants first categorized numerical sequences composed of Arabic digits as either ordered (ascending, descending) or non-ordered. Following each number sequence, participants then had to judge the intensity level of a target sound. Experiment 1 emphasized processing the two tasks independently (serial processing), while Experiments 2 and 3 emphasized processing the two tasks simultaneously (parallel processing). Cross-task interference occurred only when the task required parallel processing and was specific to ascending numerical sequences, which led to a higher proportion of louder sound intensity judgments. In Experiment 4 we examined whether this unidirectional interaction was the result of participants misattributing enhanced processing fluency experienced on ascending sequences as indicating a louder target sound. The unidirectional finding could not be entirely attributed to misattributed processing fluency, and may also be connected to experientially derived conceptual associations between ascending number sequences and greater magnitude, consistent with conceptual mapping theory.

  18. Distraction by novel and pitch-deviant sounds in children

    Directory of Open Access Journals (Sweden)

    Nicole Wetzel

    2016-12-01

    The control of attention is an important part of our executive functions and enables us to focus on relevant information and to ignore irrelevant information. The ability to shield against distraction by task-irrelevant sounds is suggested to mature during school age. The present study investigated the developmental time course of distraction in three groups of children aged 7-10 years. Two different types of distractor sounds that have been frequently used in auditory attention research – novel environmental and pitch-deviant sounds – were presented within an oddball paradigm while children performed a visual categorization task. Reaction time measurements revealed decreasing distractor-related impairment with age. Novel environmental sounds impaired performance in the categorization task more than pitch-deviant sounds. The youngest children showed a pronounced decline of novel-related distraction effects throughout the experimental session. Such a significant decline as a result of practice was not observed in the pitch-deviant condition and not in older children. We observed no correlation between cross-modal distraction effects and performance in standardized tests of concentration and visual distraction. Results of the cross-modal distraction paradigm indicate that separate mechanisms underlying the processing of novel environmental and pitch-deviant sounds develop with different time courses and that these mechanisms develop considerably within a few years in middle childhood.

  19. A Coincidental Sound Track for "Time Flies"

    Science.gov (United States)

    Cardany, Audrey Berger

    2014-01-01

    Sound tracks serve a valuable purpose in film and video by helping tell a story, create a mood, and signal coming events. Holst's "Mars" from "The Planets" yields a coincidental soundtrack to Eric Rohmann's Caldecott-winning book, "Time Flies." This pairing provides opportunities for upper elementary and…

  20. Music and Sound in Time Processing of Children with ADHD.

    Science.gov (United States)

    Carrer, Luiz Rogério Jorgensen

    2015-01-01

    ADHD involves cognitive and behavioral aspects with impairments in many environments of children and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions, can be of great help for studying the aspects of time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from typically developing children in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants aged 6-14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups with 12 children in each. Data were collected through a musical keyboard using Logic Audio Software 9.0 on a computer that recorded the participants' performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. The main results were as follows: (1) performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), the ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group, the duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates the possibility that music can, in some way, positively modulate the symptoms of inattention in ADHD.

  1. Frog sound identification using extended k-nearest neighbor classifier

    Science.gov (United States)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

    Frog sound identification based on vocalization is becoming important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest-neighbor and mutual-sharing-of-neighborhood concepts, with the aim of improving classification performance. It makes a prediction based on which training samples are the nearest neighbors of the testing sample and which consider the testing sample as one of their nearest neighbors. In order to evaluate the classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN), and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds have been segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted by Mel Frequency Cepstrum Coefficients (MFCC). The experimental results show that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers, KNN, FKNN, KGNN, and MKNN, for all cases.
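
    A minimal sketch of the mutual-neighborhood idea described above is given below: the test sample's k nearest training points are pooled with the training points that would count the test sample among their own k nearest neighbors, and the pooled labels are voted. The exact weighting in the published EKNN may differ.

```python
# Minimal sketch of an "extended" nearest-neighbour vote in the spirit described
# above; the published EKNN's exact scoring may differ. Features here are
# synthetic stand-ins for MFCC vectors.
import numpy as np
from collections import Counter

def eknn_predict(X_train, y_train, x_test, k=5):
    d_test = np.linalg.norm(X_train - x_test, axis=1)
    forward = set(np.argsort(d_test)[:k])                 # neighbours of the test sample
    backward = set()
    for i, xi in enumerate(X_train):
        # distances from xi to the other training points; would xi adopt the test sample?
        d_i = np.linalg.norm(X_train - xi, axis=1)
        d_i[i] = np.inf
        kth = np.sort(d_i)[k - 1]
        if np.linalg.norm(xi - x_test) <= kth:
            backward.add(i)
    pool = forward | backward
    return Counter(y_train[j] for j in pool).most_common(1)[0][0]

# Tiny example with two synthetic "species" clusters.
rng = np.random.default_rng(4)
X = np.r_[rng.normal(0, 1, (20, 12)), rng.normal(4, 1, (20, 12))]
y = np.array([0] * 20 + [1] * 20)
print(eknn_predict(X, y, rng.normal(4, 1, 12)))           # expected: 1
```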

  2. Sounds in context

    DEFF Research Database (Denmark)

    Weed, Ethan

    A sound is never just a sound. It is becoming increasingly clear that auditory processing is best thought of not as a one-way afferent stream, but rather as an ongoing interaction between interior processes and the environment. Even the earliest stages of auditory processing in the nervous system... time-course of contextual influence on auditory processing in three different paradigms: a simple mismatch negativity paradigm with tones of differing pitch, a multi-feature mismatch negativity paradigm in which tones were embedded in a complex musical context, and a cross-modal paradigm, in which auditory processing of emotional speech was modulated by an accompanying visual context. I then discuss these results in terms of their implication for how we conceive of the auditory processing stream.

  3. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics.

    Science.gov (United States)

    Sun, Xiuwen; Li, Xiaoling; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun

    2018-01-01

    Based on the existing research on sound symbolism and crossmodal correspondence, this study proposed an extended research on cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Result revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times of sound-hue were a little longer than the reaction times of sound-lightness. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants' cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Result revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. It was also found that there was no significant difference in reaction times and error rates between sound-hue and sound-lightness. The results of Experiment 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiment 1 and 2 is probably owing to the difference in experimental protocol, which indicates that the complexity

  4. Cross-modal selective attention: on the difficulty of ignoring sounds at the locus of visual attention.

    Science.gov (United States)

    Spence, C; Ranson, J; Driver, J

    2000-02-01

    In three experiments, we investigated whether the ease with which distracting sounds can be ignored depends on their distance from fixation and from attended visual events. In the first experiment, participants shadowed an auditory stream of words presented behind their heads, while simultaneously fixating visual lip-read information consistent with the relevant auditory stream, or meaningless "chewing" lip movements. An irrelevant auditory stream of words, which participants had to ignore, was presented either from the same side as the fixated visual stream or from the opposite side. Selective shadowing was less accurate in the former condition, implying that distracting sounds are harder to ignore when fixated. Furthermore, the impairment when fixating toward distractor sounds was greater when speaking lips were fixated than when chewing lips were fixated, suggesting that people find it particularly difficult to ignore sounds at locations that are actively attended for visual lipreading rather than merely passively fixated. Experiments 2 and 3 tested whether these results are specific to cross-modal links in speech perception by replacing the visual lip movements with a rapidly changing stream of meaningless visual shapes. The auditory task was again shadowing, but the active visual task was now monitoring for a specific visual shape at one location. A decrement in shadowing was again observed when participants passively fixated toward the irrelevant auditory stream. This decrement was larger when participants performed a difficult active visual task there versus fixating, but not for a less demanding visual task versus fixation. The implications for cross-modal links in spatial attention are discussed.

  5. Extraction of ground reaction forces for real-time synthesis of walking sounds

    OpenAIRE

    Serafin, Stefania; Turchet, Luca; Nordahl, Rolf

    2009-01-01

    A shoe-independent system to synthesize real-time footstep sounds on different materials has been developed. A footstep sound is considered as the result of an interaction between an exciter (the shoe) and a resonator (the floor). To achieve our goal, we propose two different solutions. The first solution is based on contact microphones attached to the exterior part of each shoe, which capture the sound of a footstep. The second approach consists in using microphones placed on the floor. In both ...

  6. Automatic moment segmentation and peak detection analysis of heart sound pattern via short-time modified Hilbert transform.

    Science.gov (United States)

    Sun, Shuping; Jiang, Zhongwei; Wang, Haibin; Fang, Yu

    2014-05-01

    This paper proposes a novel automatic method for the moment segmentation and peak detection analysis of heart sound (HS) patterns, with special attention to the characteristics of the envelopes of HS and considering the properties of the Hilbert transform (HT). The moment segmentation and peak location are accomplished in two steps. First, by applying the Viola integral waveform method in the time domain, the envelope (E(T)) of the HS signal is obtained with an emphasis on the first heart sound (S1) and the second heart sound (S2). Then, based on the characteristics of the E(T) and the properties of the HT of convex and concave functions, a novel method, the short-time modified Hilbert transform (STMHT), is proposed to automatically locate the moment segmentation and peak points for the HS by the zero crossing points of the STMHT. A fast algorithm for calculating the STMHT of E(T) can be expressed by multiplying the E(T) by an equivalent window (W(E)). According to the range of heart beats and based on the numerical experiments and the important parameters of the STMHT, a moving window width of N=1 s is validated for locating the moment segmentation and peak points for HS. The proposed moment segmentation and peak location procedure is validated by sounds from the Michigan HS database and sounds from clinical heart diseases, such as ventricular septal defect (VSD), atrial septal defect (ASD), Tetralogy of Fallot (TOF), rheumatic heart disease (RHD), and so on. As a result, for the sounds where S2 can be separated from S1, the average accuracies achieved for the peak of S1 (AP₁), the peak of S2 (AP₂), the moment segmentation points from S1 to S2 (AT₁₂), and the cardiac cycle (ACC) are 98.53%, 98.31%, 98.36%, and 97.37%, respectively. For the sounds where S1 cannot be separated from S2, the average accuracies achieved for the peak of S1 and S2 (AP₁₂) and the cardiac cycle (ACC) are 100% and 96.69%.
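
    A rough sketch of the zero-crossing peak-location idea is shown below. The paper's Viola-integral envelope and the exact equivalent window of the STMHT are not reproduced; instead a smoothed energy envelope is used and candidate S1/S2 peaks are taken at positive-to-negative zero crossings of its smoothed derivative, on a synthetic two-burst signal.

```python
# Illustrative sketch only: a smoothed energy envelope stands in for the Viola
# integral, and peaks are located at +/- zero crossings of its smoothed
# derivative, mimicking the zero-crossing peak-location idea described above.
import numpy as np

def envelope(x, sr, win_s=0.05):
    w = np.hanning(max(int(win_s * sr), 3))
    return np.convolve(x ** 2, w / w.sum(), mode="same")          # smoothed energy

def peaks_by_zero_crossing(env, sr, win_s=0.05):
    w = np.hanning(max(int(win_s * sr), 3))
    d = np.convolve(np.gradient(env), w / w.sum(), mode="same")   # smoothed derivative
    zc = np.where((d[:-1] > 0) & (d[1:] <= 0))[0]                 # + to - crossings = maxima
    return zc[env[zc] > 0.2 * env.max()]                          # keep prominent peaks only

# Example: two synthetic "heart sound" bursts per beat (stand-ins for S1 and S2).
sr = 2000
t = np.arange(2 * sr) / sr
x = np.zeros_like(t)
for beat in (0.0, 1.0):
    for onset in (beat + 0.1, beat + 0.45):                       # assumed S1/S2 timing
        idx = (t >= onset) & (t < onset + 0.08)
        x[idx] += np.sin(2 * np.pi * 60 * t[idx]) * np.hanning(idx.sum())
env = envelope(x, sr)
print("peak times [s]:", np.round(peaks_by_zero_crossing(env, sr) / sr, 2))
```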

  7. Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time.

    Science.gov (United States)

    Küssner, Mats B; Tidhar, Dan; Prior, Helen M; Leech-Wilkinson, Daniel

    2014-01-01

    Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures-accounting for the intrinsic link between movement and sound-are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones-continually sounding and concurrently varied in pitch, loudness and tempo-with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising-falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided.

  8. Extraction of ground reaction forces for real-time synthesis of walking sounds

    DEFF Research Database (Denmark)

    Serafin, Stefania; Turchet, Luca; Nordahl, Rolf

    2009-01-01

    A shoe-independent system to synthesize real-time footstep sounds on different materials has been developed. A footstep sound is considered as the result of an interaction between an exciter (the shoe) and a resonator (the floor). To achieve our goal, we propose two different solutions. The first...

  9. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics

    Science.gov (United States)

    Sun, Xiuwen; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun

    2018-01-01

    Based on the existing research on sound symbolism and crossmodal correspondence, this study proposed an extended research on cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Result revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times of sound-hue were a little longer than the reaction times of sound-lightness. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants’ cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Result revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. It was also found that there was no significant difference in reaction times and error rates between sound-hue and sound-lightness. The results of Experiment 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiment 1 and 2 is probably owing to the difference in experimental protocol, which indicates that the

  10. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics

    Directory of Open Access Journals (Sweden)

    Xiuwen Sun

    2018-03-01

    Based on the existing research on sound symbolism and crossmodal correspondence, this study proposed an extended research on cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Result revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times of sound-hue were a little longer than the reaction times of sound-lightness. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants’ cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Result revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. It was also found that there was no significant difference in reaction times and error rates between sound-hue and sound-lightness. The results of Experiment 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiment 1 and 2 is probably owing to the difference in experimental protocol, which indicates ...

  11. How Iconicity Helps People Learn New Words: Neural Correlates and Individual Differences in Sound-Symbolic Bootstrapping

    Directory of Open Access Journals (Sweden)

    Gwilym Lockwood

    2016-07-01

    Sound symbolism is increasingly understood as involving iconicity, or perceptual analogies and cross-modal correspondences between form and meaning, but the search for its functional and neural correlates is ongoing. Here we study how people learn sound-symbolic words, using behavioural, electrophysiological and individual difference measures. Dutch participants learned Japanese ideophones —lexical sound-symbolic words— with a translation of either the real meaning (in which form and meaning show cross-modal correspondences) or the opposite meaning (in which form and meaning show cross-modal clashes). Participants were significantly better at identifying the words they learned in the real condition, correctly remembering the real word pairing 86.7% of the time, but the opposite word pairing only 71.3% of the time. Analysing event-related potentials (ERPs) during the test round showed that ideophones in the real condition elicited a greater P3 component and late positive complex than ideophones in the opposite condition. In a subsequent forced choice task, participants were asked to guess the real translation from two alternatives. They did this with 73.0% accuracy, well above chance level even for words they had encountered in the opposite condition, showing that people are generally sensitive to the sound-symbolic cues in ideophones. Individual difference measures showed that the ERP effect in the test round of the learning task was greater for participants who were more sensitive to sound symbolism in the forced choice task. The main driver of the difference was a lower amplitude of the P3 component in response to ideophones in the opposite condition, suggesting that people who are more sensitive to sound symbolism may have more difficulty suppressing conflicting cross-modal information. The findings provide new evidence that cross-modal correspondences between sound and meaning facilitate word learning, while cross-modal clashes make word ...

  12. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    The relationship between meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages we find commonalities among sound shapes for words referring to same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  13. Heart Sound Localization and Reduction in Tracheal Sounds by Gabor Time-Frequency Masking

    OpenAIRE

    SAATCI, Esra; Akan, Aydın

    2018-01-01

    Background and aim: Respiratory sounds, i.e. tracheal and lung sounds, have been of great interest due to their diagnostic values as well as the potential of their use in the estimation of the respiratory dynamics (mainly airflow). Thus the aim of the study is to present a new method to filter the heart sound interference from the tracheal sounds. Materials and methods: Tracheal sounds and airflow signals were collected by using an accelerometer from 10 healthy subjects. Tracheal sounds were then pr...

  14. Time characteristics of distortion product otoacoustic emissions recovery function after moderate sound exposure

    DEFF Research Database (Denmark)

    de Toro, Miguel Angel Aranda; Ordoñez, Rodrigo Pizarro; Hammershøi, Dorte

    2006-01-01

    Exposure to sound of moderate level temporarily attenuates the amplitude of distortion product otoacoustic emissions (DPOAEs). These changes are similar to the changes observed in absolute hearing thresholds after similar sound exposures. To be able to assess changes over time across a broad...

  15. Time-domain electromagnetic soundings collected in Dawson County, Nebraska, 2007-09

    Science.gov (United States)

    Payne, Jason; Teeple, Andrew

    2011-01-01

    Between April 2007 and November 2009, the U.S. Geological Survey, in cooperation with the Central Platte Natural Resources District, collected time-domain electromagnetic (TDEM) soundings at 14 locations in Dawson County, Nebraska. The TDEM soundings provide information pertaining to the hydrogeology at each of 23 sites at the 14 locations; 30 TDEM surface geophysical soundings were collected at the 14 locations to develop smooth and layered-earth resistivity models of the subsurface at each site. The soundings yield estimates of subsurface electrical resistivity; variations in subsurface electrical resistivity can be correlated with hydrogeologic and stratigraphic units. Results from each sounding were used to calculate resistivity to depths of approximately 90-130 meters (depending on loop size) below the land surface. Geonics Protem 47 and 57 systems, as well as the Alpha Geoscience TerraTEM, were used to collect the TDEM soundings (voltage data from which resistivity is calculated). For each sounding, voltage data were averaged and evaluated statistically before inversion (inverse modeling). Inverse modeling is the process of creating an estimate of the true distribution of subsurface resistivity from the measured apparent resistivity obtained from TDEM soundings. Smooth and layered-earth models were generated for each sounding. A smooth model is a vertical delineation of calculated apparent resistivity that represents a non-unique estimate of the true resistivity. Ridge regression (Interpex Limited, 1996) was used by the inversion software in a series of iterations to create a smooth model consisting of 24-30 layers for each sounding site. Layered-earth models were then generated based on results of smooth modeling. The layered-earth models are simplified (generally 1 to 6 layers) to represent geologic units with depth. Throughout the area, the layered-earth models range from 2 to 4 layers, depending on observed inflections in the raw data and smooth model ...
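
    The ridge-regression inversion mentioned above can be sketched as a damped Gauss-Newton update. The forward model below is a toy stand-in (each sounding value mixes the layer log-resistivities through fixed depth weights), so only the ridge-damped update itself is illustrative; the real TDEM forward response and the Interpex software are not reproduced.

```python
# Sketch of a ridge-damped (Marquardt-style) inversion iteration. "forward" is a
# toy stand-in for the TDEM forward model, not the physics used in the report.
import numpy as np

def forward(log_rho, weights):
    """Toy forward model: each 'sounding value' mixes the layer log-resistivities."""
    return weights @ log_rho

def ridge_inversion(data, weights, n_layers, lam=0.1, n_iter=10):
    m = np.zeros(n_layers)                          # model: log-resistivity per layer
    for _ in range(n_iter):
        J = weights                                 # Jacobian of the linear toy model
        r = data - forward(m, weights)              # data residual
        step = np.linalg.solve(J.T @ J + lam * np.eye(n_layers), J.T @ r)
        m = m + step                                # ridge-damped Gauss-Newton update
    return np.exp(m)                                # back to resistivity [ohm-m]

rng = np.random.default_rng(5)
n_layers, n_times = 24, 30
depth_weights = rng.dirichlet(np.ones(n_layers), size=n_times)   # rows sum to 1
true_rho = np.exp(rng.normal(3, 0.5, n_layers))
data = forward(np.log(true_rho), depth_weights) + rng.normal(0, 0.01, n_times)
print(np.round(ridge_inversion(data, depth_weights, n_layers)[:5], 1))
```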

  16. Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time

    Science.gov (United States)

    Küssner, Mats B.; Tidhar, Dan; Prior, Helen M.; Leech-Wilkinson, Daniel

    2014-01-01

    Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising–falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided. PMID:25120506

  17. Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time

    Directory of Open Access Journals (Sweden)

    Mats B. Küssner

    2014-07-01

    Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked sixty-four musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesised musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e. rising-falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided.

  18. Effects of instructed timing and tempo on snare drum sound in drum kit performance.

    Science.gov (United States)

    Danielsen, Anne; Waadeland, Carl Haakon; Sundt, Henrik G; Witek, Maria A G

    2015-10-01

    This paper reports on an experiment investigating the expressive means with which performers of groove-based musics signal the intended timing of a rhythmic event. Ten expert drummers were instructed to perform a rock pattern in three different tempi and three different timing styles: "laid-back," "on-the-beat," and "pushed." The results show that there were systematic differences in the intensity and timbre (i.e., sound-pressure level, temporal centroid, and spectral centroid) of series of snare strokes played with these different timing styles at the individual level. A common pattern was found across subjects concerning the effect of instructed timing on sound-pressure level: a majority of the drummers played laid-back strokes louder than on-the-beat strokes. Furthermore, when the tempo increased, there was a general increase in sound-pressure level and a decrease in spectral centroid across subjects. The results show that both temporal and sound-related features are important in order to indicate that a rhythmic event has been played intentionally early, late, or on-the-beat, and provide insight into the ways in which musicians communicate at the microrhythmic level in groove-based musics.

  19. Real-Time Detection of Important Sounds with a Wearable Vibration Based Device for Hearing-Impaired People

    Directory of Open Access Journals (Sweden)

    Mete Yağanoğlu

    2018-04-01

    Hearing-impaired people do not hear indoor and outdoor environment sounds, which are important for them both at home and outside. By means of a wearable device that we have developed, a hearing-impaired person is informed of important sounds through vibrations, thereby understanding what kind of sound it is. Our system, which operates in real time, can achieve a success rate of 98% when identifying a doorbell ringing sound, 99% success identifying an alarm sound, 99% success identifying a phone ringing, 91% success identifying honking, 93% success identifying brake sounds, 96% success identifying dog sounds, 97% success identifying human voice, and 96% success identifying other sounds using the audio fingerprint method. An audio fingerprint is a brief summary of an audio file that perceptually summarizes a piece of audio content. In this study, our wearable device was tested 100 times a day for 100 days on five deaf persons and 50 persons with normal hearing whose ears were covered by earphones playing wind sounds. This study aims to improve the quality of life of deaf persons and provide them with a more prosperous life. In the questionnaire, deaf participants rated the clarity of the system at 90%, its usefulness at 97%, and the likelihood of using this device again at 100%.
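
    As a toy illustration of fingerprint matching (not the device's actual algorithm), the sketch below summarises each sound by its sequence of dominant STFT bins and matches a query to the reference whose sequence agrees most often. FFT size, hop, and the two reference sounds are illustrative.

```python
# Toy sketch of an audio-fingerprint match (not the device's actual algorithm):
# each sound is summarised by the sequence of dominant STFT bins, and a query is
# matched to the reference whose peak sequence agrees most often.
import numpy as np

def fingerprint(x, n_fft=512, hop=256):
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
    return spec.argmax(axis=1)                       # dominant bin per frame

def match(query, references):
    scores = {}
    for name, fp in references.items():
        n = min(len(query), len(fp))
        scores[name] = np.mean(query[:n] == fp[:n])  # fraction of agreeing frames
    return max(scores, key=scores.get), scores

# Example: a "doorbell" (two-tone) vs. an "alarm" (single tone), matched noisily.
sr = 8000
t = np.arange(sr) / sr
doorbell = np.sin(2 * np.pi * np.where(t < 0.5, 660, 520) * t)
alarm = np.sin(2 * np.pi * 880 * t)
refs = {"doorbell": fingerprint(doorbell), "alarm": fingerprint(alarm)}
rng = np.random.default_rng(6)
query = fingerprint(doorbell + 0.2 * rng.normal(size=doorbell.size))
print(match(query, refs)[0])                          # expected: "doorbell"
```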

  20. Effects of Interaural Level and Time Differences on the Externalization of Sound

    DEFF Research Database (Denmark)

    Dau, Torsten; Catic, Jasmina; Santurette, Sébastien

    Distant sound sources in our environment are perceived as externalized and are thus properly localized in both direction and distance. This is due to the acoustic filtering by the head, torso, and external ears, which provides frequency dependent shaping of binaural cues, such as interaural level...... differences (ILDs) and interaural time differences (ITDs). Further, the binaural cues provided by reverberation in an enclosed space may also contribute to externalization. While these spatial cues are available in their natural form when listening to real-world sound sources, hearing-aid signal processing...... is consistent with the physical analysis that showed that a decreased distance to the sound source also reduced the fluctuations in ILDs....

  1. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is especially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  2. The effects of a sound-field amplification system on managerial time in middle school physical education settings.

    Science.gov (United States)

    Ryan, Stu

    2009-04-01

    The focus of this research effort was to examine the effect of a sound-field amplification system on managerial time in the beginning of class in a physical education setting. A multiple baseline design across participants was used to measure change in the managerial time of 2 middle school female physical education teachers using a portable sound-field amplification system. Managerial time is defined as the cumulative amount of time that students spend on organizational, transitional, and nonsubject matter tasks in a lesson. The findings showed that the amount of managerial time at the beginning of class clearly decreased when the teacher used sound-field amplification feedback to physical education students. Findings indicate an immediate need for administrators to determine the most appropriate, cost-effective procedure to support sound-field amplification systems in existing physical education settings.

  3. Sound stream segregation: a neuromorphic approach to solve the "cocktail party problem" in real-time.

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and
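
    To make the correlation-based grouping principle described in this abstract concrete, here is a minimal sketch assuming band-limited envelope signals are already available; it is not the authors' FPGA design, and the anchor-channel and threshold choices are hypothetical placeholders.

```python
# Illustrative sketch of the temporal-coherence idea (not the authors' FPGA design):
# group frequency channels whose envelopes correlate positively with an "attended"
# anchor channel, and mask out the rest.
import numpy as np

def coherence_mask(envelopes, anchor_channel, threshold=0.3):
    """envelopes: (channels, time) array of band-passed envelope signals."""
    anchor = envelopes[anchor_channel]
    corr = np.array([np.corrcoef(ch, anchor)[0, 1] for ch in envelopes])
    return (corr > threshold).astype(float)       # 1 = same stream, 0 = background

def reconstruct(band_signals, mask):
    """Sum only the channels assigned to the attended stream."""
    return (band_signals * mask[:, None]).sum(axis=0)
```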

  4. Interactive Sonification of Spontaneous Movement of Children - Cross-modal Mapping and the Perception of Body Movement Qualities through Sound

    Directory of Open Access Journals (Sweden)

    Emma Frid

    2016-11-01

    Full Text Available In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude, and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data, and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed the children's spontaneous movement in terms of energy, smoothness and directness indices. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a cross

  5. An integrative time-varying frequency detection and channel sounding method for dynamic plasma sheath

    Science.gov (United States)

    Shi, Lei; Yao, Bo; Zhao, Lei; Liu, Xiaotong; Yang, Min; Liu, Yanming

    2018-01-01

    The plasma sheath surrounding a hypersonic vehicle is a dynamic, time-varying medium, and it is almost impossible to calculate its time-varying physical parameters directly. In-flight detection of the degree of time variation is important for understanding the dynamic nature of the physical parameters and their effect on re-entry communication. In this paper, a time-varying frequency detection and channel sounding method based on constant envelope zero autocorrelation (CAZAC) sequences is proposed to detect the time-varying behaviour of the plasma sheath electron density and the wireless channel characteristics. The proposed method exploits the excellent autocorrelation and spreading-gain properties of the CAZAC sequence to realize dynamic time-variation detection and channel sounding at low signal-to-noise ratio in the plasma sheath environment. Theoretical simulation under a typical time-varying radio channel shows that the proposed method is capable of detecting time-variation frequencies up to 200 kHz and can trace the channel amplitude and phase in the time domain well at -10 dB. Experimental results obtained in an RF modulation discharge plasma device verified the time-variation detection ability in a practical dynamic plasma sheath. In addition, a nonlinear effect of the dynamic plasma sheath on the communication signal was observed through the channel sounding results.
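
    As background on why CAZAC sequences suit this task, the sketch below generates a Zadoff-Chu sequence (a standard CAZAC family) and estimates one complex channel tap per block by correlation; the sequence length, block structure, and processing here are assumptions for illustration, not the paper's design.

```python
# Sketch of CAZAC-based channel sounding (assumed parameters; the paper's own
# sequence design and processing chain are not reproduced here).
import numpy as np

def zadoff_chu(n_zc=63, root=1):
    """Zadoff-Chu sequence: constant envelope, zero periodic autocorrelation (n_zc odd)."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * root * n * (n + 1) / n_zc)

def estimate_channel(rx, seq):
    """Correlate the received signal with the known sequence, block by block,
    to obtain one complex channel tap (amplitude and phase) per block."""
    taps = []
    for start in range(0, len(rx) - len(seq) + 1, len(seq)):
        block = rx[start:start + len(seq)]
        taps.append(np.vdot(seq, block) / np.vdot(seq, seq))  # least-squares tap
    return np.array(taps)   # abs(tap) tracks amplitude, np.angle(tap) tracks phase
```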

  6. Predicting transmission of structure-borne sound power from machines by including terminal cross-coupling

    DEFF Research Database (Denmark)

    Ohlrich, Mogens

    2011-01-01

    of translational terminals in a global plane. This paired or bi-coupled power transmission represents the simplest case of cross-coupling. The procedure and quality of the predicted transmission using this improved technique are demonstrated experimentally for an electrical motor unit with an integrated radial fan......Structure-borne sound generated by audible vibration of machines in vehicles, equipment and household appliances is often a major cause of noise. Such vibration of complex machines is mostly determined and quantified by measurements. It has been found that characterization of the vibratory source...

  7. A Statistical and Spectral Model for Representing Noisy Sounds with Short-Time Sinusoids

    Directory of Open Access Journals (Sweden)

    Myriam Desainte-Catherine

    2005-07-01

    Full Text Available We propose an original model for noise analysis, transformation, and synthesis: the CNSS model. Noisy sounds are represented with short-time sinusoids whose frequencies and phases are random variables. This spectral and statistical model represents information about the spectral density of frequencies. This perceptually relevant property is modeled by three mathematical parameters that define the distribution of the frequencies. This model also represents the spectral envelope. The mathematical parameters are defined and the analysis algorithms to extract these parameters from sounds are introduced. Then algorithms for generating sounds from the parameters of the model are presented. Applications of this model include tools for composers, psychoacoustic experiments, and pedagogy.
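
    A toy rendition of the underlying idea (short-time sinusoids whose frequencies and phases are random variables drawn from a controlled distribution) is sketched below; the distribution shape and all parameters are placeholders, not the published CNSS analysis and synthesis algorithms.

```python
# Toy synthesis in the spirit of the CNSS idea (not the published algorithm):
# overlap-add short frames, each a sum of sinusoids with random frequencies
# and phases drawn around a target spectral region.
import numpy as np

def synthesize_noise(n_frames=200, frame_len=512, hop=256, sr=16000,
                     centre_hz=2000.0, spread_hz=800.0, partials=30):
    out = np.zeros(n_frames * hop + frame_len)
    window = np.hanning(frame_len)
    t = np.arange(frame_len) / sr
    for f in range(n_frames):
        freqs = np.random.normal(centre_hz, spread_hz, partials)   # frequency distribution
        phases = np.random.uniform(0, 2 * np.pi, partials)         # random phases
        frame = sum(np.cos(2 * np.pi * fq * t + ph) for fq, ph in zip(freqs, phases))
        out[f * hop:f * hop + frame_len] += window * frame / partials
    return out
```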

  8. Results of time-domain electromagnetic soundings in Miami-Dade and southern Broward Counties, Florida

    Science.gov (United States)

    Fitterman, David V.; Prinos, Scott T.

    2011-01-01

    Time-domain electromagnetic (TEM) soundings were made in Miami-Dade and southern Broward Counties to aid in mapping the landward extent of saltwater in the Biscayne aquifer. A total of 79 soundings were collected in settings ranging from urban to undeveloped land, with some of the former posing problems of land access and interference from anthropogenic features. TEM soundings combined with monitoring-well data were used to determine if the saltwater front had moved since the last time it was mapped, to provide additional spatial coverage where existing monitoring wells were insufficient, and to help interpret a previously collected helicopter electromagnetic (HEM) survey flown in the southernmost portion of the study area. TEM soundings were interpreted as layered resistivity-depth models. Using information from well logs and water-quality data, the resistivity of the freshwater saturated Biscayne aquifer is expected to be above 30 ohm-meters, and the saltwater-saturated aquifer will have resistivities of less than 10 ohm-meters allowing determination of water quality from the TEM interpretations. TEM models from 29 soundings were compared to electromagnetic induction logs collected in nearby monitoring wells. In general, the agreement of these results was very good, giving confidence in the use of the TEM data for mapping saltwater encroachment.

  9. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time domain solution of the KZK equation.

  10. Crossmodal deficit in dyslexic children: practice affects the neural timing of letter-speech sound integration

    Directory of Open Access Journals (Sweden)

    Gojko eŽarić

    2015-06-01

    Full Text Available A failure to build solid letter-speech sound associations may contribute to reading impairments in developmental dyslexia. Whether this reduced neural integration of letters and speech sounds changes over time within individual children, and how this relates to behavioral gains in reading skills, remains unknown. In this research, we examined changes in event-related potential (ERP) measures of letter-speech sound integration over a 6-month period during which 9-year-old dyslexic readers (n=17) followed a training in letter-speech sound coupling next to their regular reading curriculum. We presented the Dutch spoken vowels /a/ and /o/ as standard and deviant stimuli in one auditory and two audiovisual oddball conditions. In one audiovisual condition (AV0), the letter ‘a’ was presented simultaneously with the vowels, while in the other (AV200) it preceded vowel onset by 200 ms. Prior to the training (T1), dyslexic readers showed the expected pattern of typical auditory mismatch responses, together with the absence of letter-speech sound effects in a late negativity (LN) window. After the training (T2), our results showed earlier (and enhanced) crossmodal effects in the LN window. Most interestingly, earlier LN latency at T2 was significantly related to higher behavioral accuracy in letter-speech sound coupling. On a more general level, the timing of the earlier mismatch negativity (MMN) in the simultaneous condition (AV0) measured at T1 significantly related to reading fluency at both T1 and T2 as well as to reading gains. Our findings suggest that the reduced neural integration of letters and speech sounds in dyslexic children may show moderate improvement with reading instruction and training, and that behavioral improvements relate especially to individual differences in the timing of this neural integration.

  11. Sound Synthesis Affected by Physical Gestures in Real-Time

    DEFF Research Database (Denmark)

    Graugaard, Lars

    2006-01-01

    Motivation and strategies for affecting electronic music through physical gestures are presented and discussed. Two implementations are presented and experience with their use in performance is reported. A concept of sound shaping and sound colouring that connects an instrumental performer's playing and gestures to sound synthesis is used. The results and future possibilities are discussed....

  12. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    During the recording time of lung sound (LS) signals from the chest wall of a subject, there is always heart sound (HS) signal interfering with it. This obscures the features of lung sound signals and creates confusion on pathological states, if any, of the lungs. A novel method based on empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference from the desired lung sound signals. In this, the mixed signal is split into several components. Some of these components contain larger proportions of interfering signals like heart sound, environmental noise etc. and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart sound and lung sound. The proposed method is found to be superior in terms of time domain, frequency domain, and time-frequency domain representations and also in listening test performed by pulmonologist.
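
    For orientation, a hedged sketch of EMD-based interference reduction follows; it assumes the third-party PyEMD package is available and uses a simple spectral-centroid rule to decide which intrinsic mode functions to discard, which is only a placeholder for the selection criterion used in the paper.

```python
# Hedged sketch of EMD-based heart-sound reduction (assumes the third-party
# PyEMD package; the component-selection rule below is a simple placeholder,
# not the criterion used in the paper).
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

def remove_heart_sound(mixed, sr=4000, cutoff_hz=150.0):
    imfs = EMD().emd(mixed)                       # decompose into intrinsic mode functions
    freqs = np.fft.rfftfreq(len(mixed), 1.0 / sr)
    kept = []
    for imf in imfs:
        spectrum = np.abs(np.fft.rfft(imf))
        centroid = (freqs * spectrum).sum() / max(spectrum.sum(), 1e-12)
        if centroid > cutoff_hz:                  # keep components above the heart-sound band
            kept.append(imf)
    return np.sum(kept, axis=0) if kept else np.zeros_like(mixed)
```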

  13. Time-dependent stochastic inversion in acoustic tomography of the atmosphere with reciprocal sound transmission

    International Nuclear Information System (INIS)

    Vecherin, Sergey N; Ostashev, Vladimir E; Wilson, D Keith; Ziemann, A

    2008-01-01

    Time-dependent stochastic inversion (TDSI) was recently developed for acoustic travel-time tomography of the atmosphere. This type of tomography allows reconstruction of temperature and wind-velocity fields given the location of sound sources and receivers and the travel times between all source–receiver pairs. The quality of reconstruction provided by TDSI depends on the geometry of the transducer array. However, TDSI has not been studied for the geometry with reciprocal sound transmission. This paper is focused on three aspects of TDSI. First, the use of TDSI in reciprocal sound transmission arrays is studied in numerical and physical experiments. Second, efficiency of time-dependent and ordinary stochastic inversion (SI) algorithms is studied in numerical experiments. Third, a new model of noise in the input data for TDSI is developed that accounts for systematic errors in transducer positions. It is shown that (i) a separation of the travel times into temperature and wind-velocity components in tomography with reciprocal transmission does not improve the reconstruction, (ii) TDSI yields a better reconstruction than SI and (iii) the developed model of noise yields an accurate reconstruction of turbulent fields and estimation of errors in the reconstruction

  14. Interactive Sonification of Spontaneous Movement of Children-Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound.

    Science.gov (United States)

    Frid, Emma; Bresin, Roberto; Alborno, Paolo; Elblaus, Ludvig

    2016-01-01

    cross-modal mapping of body motion qualities from bodily movement to sounds. Sound can be translated and understood from bodily motion, conveyed through sound visualizations in the shape of drawings and translated back from sound visualizations to audio. The work underlines the potential of using interactive sonification to communicate high-level features of human movement data.

  15. Interactive Sonification of Spontaneous Movement of Children—Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound

    Science.gov (United States)

    Frid, Emma; Bresin, Roberto; Alborno, Paolo; Elblaus, Ludvig

    2016-01-01

    cross-modal mapping of body motion qualities from bodily movement to sounds. Sound can be translated and understood from bodily motion, conveyed through sound visualizations in the shape of drawings and translated back from sound visualizations to audio. The work underlines the potential of using interactive sonification to communicate high-level features of human movement data. PMID:27891074

  16. Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation

  17. Sound stream segregation: a neuromorphic approach to solve the ‘cocktail party problem’ in real-time

    Directory of Open Access Journals (Sweden)

    Chetan Singh Thakur

    2015-09-01

    Full Text Available The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the ‘cocktail party effect’. It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77 and 55 dB for simple tone, complex tone and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for

  18. Entire Sound Representations Are Time-Compressed in Sensory Memory: Evidence from MMN.

    Science.gov (United States)

    Tamakoshi, Seiji; Minoura, Nanako; Katayama, Jun'ichi; Yagi, Akihiro

    2016-01-01

    In order to examine how partial silence within a sound stimulus is encoded in neural representation, the time flow of sound representations was investigated using mismatch negativity (MMN), an ERP component that reflects neural representation in auditory sensory memory. Previous work suggested that the time flow of auditory stimuli is compressed in neural representations. The stimuli used were a full stimulus of 170 ms duration, an early-gap stimulus with silence for a 20-50 ms segment (i.e., an omitted segment), and a late-gap stimulus with an omitted segment at 110-140 ms. Peak MMNm latencies from oddball sequences of these stimuli, with a 500 ms SOA, did not reflect the time point of the physical gap, suggesting that temporal information can be compressed in sensory memory. However, it was not clear whether the whole stimulus duration or only the omitted segment duration is compressed. Thus, in addition to the gap stimuli, stimuli were used in which the gap was replaced by a tone segment at one quarter of the sound pressure level (filled). Combinations of full stimuli and one of four gapped or filled stimuli (i.e., early gap, late gap, early filled, and late filled) were presented in an oddball sequence (85 vs. 15%). If compression occurs only for the gap duration, MMN latencies for filled stimuli should show a different pattern from those for gap stimuli. MMN latencies for the filled conditions showed the same pattern as those for the gap conditions, indicating that the whole stimulus duration rather than only the gap duration is compressed in the neural representation in sensory memory. These results suggest that temporal aspects of silence are encoded in the same manner as physical sound.

  19. Time and frequency weightings and the assessment of sound exposure

    DEFF Research Database (Denmark)

    Ordoñez, Rodrigo Pizarro; de Toro, Miguel Angel Aranda; Hammershøi, Dorte

    2010-01-01

    Since the development of averaging/integrating sound level meters and frequency weighting networks in the 1950s, measurement of the physical characteristics of sound has not changed a great deal. Advances have occurred in how the measured values are used (day-night averages, limit and action...... of the exposure. This information is being used to investigate metrics that can differentiate temporal characteristics (impulsive, fluctuating) as well as frequency characteristics (narrow-band or tonal dominance) of sound exposures. This presentation gives an overview of the existing sound measurement...... and analysis methods that can provide a better representation of the effects of sound exposures on the hearing system...

  20. Time domain acoustic contrast control implementation of sound zones for low-frequency input signals

    DEFF Research Database (Denmark)

    Schellekens, Daan H. M.; Møller, Martin Bo; Olsen, Martin

    2016-01-01

    Sound zones are two or more regions within a listening space where listeners are provided with personal audio. Acoustic contrast control (ACC) is a sound zoning method that maximizes the average squared sound pressure in one zone constrained to constant pressure in other zones. State-of-the-art time domain broadband acoustic contrast control (BACC) methods are designed for anechoic environments. These methods are not able to realize a flat frequency response in a limited frequency range within a reverberant environment. Sound field control in a limited frequency range is a requirement...... to accommodate the effective working range of the loudspeakers. In this paper, a new BACC method is proposed which results in an implementation realizing a flat frequency response in the target zone. This method is applied in a bandlimited low-frequency scenario where the loudspeaker layout surrounds two
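
    For context, acoustic contrast control at a single frequency reduces to a generalized eigenvalue problem on the spatial correlation matrices of the bright- and dark-zone transfer functions. The sketch below shows only that narrowband formulation, not the broadband time-domain (BACC) method proposed in the paper; matrix names and the regularization constant are assumptions.

```python
# Narrowband acoustic contrast control sketch (the paper proposes a broadband
# time-domain variant; this only illustrates the underlying optimization).
import numpy as np
from scipy.linalg import eigh

def acc_weights(G_bright, G_dark, reg=1e-6):
    """G_bright, G_dark: (mics, speakers) complex transfer matrices at one frequency.
    Returns loudspeaker weights maximizing the bright/dark zone energy ratio."""
    R_b = G_bright.conj().T @ G_bright            # bright-zone spatial correlation
    R_d = G_dark.conj().T @ G_dark                # dark-zone spatial correlation
    R_d = R_d + reg * np.eye(R_d.shape[0])        # regularize to keep it invertible
    eigvals, eigvecs = eigh(R_b, R_d)             # generalized eigenvalue problem
    w = eigvecs[:, -1]                            # eigenvector of the largest eigenvalue
    return w / np.linalg.norm(w)
```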

  1. Self-mixing laser Doppler vibrometry with high optical sensitivity application to real-time sound reproduction

    CERN Document Server

    Abe, K; Ko, J Y

    2003-01-01

    Nanometre vibration measurement of an audio speaker and a highly sensitive sound reproduction experiment have been successfully demonstrated by a self-aligned optical feedback vibrometry technique using the self-mixing modulation effect in a laser-diode-pumped microchip solid-state laser. By applying nanometre vibrations to the speaker, which produced nearly inaudible music below 20 dB (200 μPa) sound pressure level, we could reproduce clear sound in real time by the use of a simple frequency modulated wave demodulation circuit with a -120 dB light-intensity feedback ratio.

  2. Self-mixing laser Doppler vibrometry with high optical sensitivity: application to real-time sound reproduction

    International Nuclear Information System (INIS)

    Abe, Kazutaka; Otsuka, Kenju; Ko, Jing-Yuan

    2003-01-01

    Nanometre vibration measurement of an audio speaker and a highly sensitive sound reproduction experiment have been successfully demonstrated by a self-aligned optical feedback vibrometry technique using the self-mixing modulation effect in a laser-diode-pumped microchip solid-state laser. By applying nanometre vibrations to the speaker, which produced nearly inaudible music below 20 dB (200 μPa) sound pressure level, we could reproduce clear sound in real time by the use of a simple frequency modulated wave demodulation circuit with a -120 dB light-intensity feedback ratio

  3. Self-mixing laser Doppler vibrometry with high optical sensitivity: application to real-time sound reproduction

    Energy Technology Data Exchange (ETDEWEB)

    Abe, Kazutaka [Department of Human and Information Science, Tokai University, 1117 Kitakaname, Hiratsuka, Kanagawa (Japan); Otsuka, Kenju [Department of Human and Information Science, Tokai University, 1117 Kitakaname, Hiratsuka, Kanagawa (Japan); Ko, Jing-Yuan [Department of Physics, Tunghai University, 181 Taichung-kang Road, Section 3, Taichung 407, Taiwan (China)

    2003-01-01

    Nanometre vibration measurement of an audio speaker and a highly sensitive sound reproduction experiment have been successfully demonstrated by a self-aligned optical feedback vibrometry technique using the self-mixing modulation effect in a laser-diode-pumped microchip solid-state laser. By applying nanometre vibrations to the speaker, which produced nearly inaudible music below 20 dB (200 μPa) sound pressure level, we could reproduce clear sound in real time by the use of a simple frequency modulated wave demodulation circuit with a -120 dB light-intensity feedback ratio.

  4. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    Science.gov (United States)

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.

  5. Takete and Maluma in Action: A Cross-Modal Relationship between Gestures and Sounds.

    Directory of Open Access Journals (Sweden)

    Kazuko Shinohara

    Full Text Available Despite Saussure's famous observation that sound-meaning relationships are in principle arbitrary, we now have a substantial body of evidence that sounds themselves can have meanings, patterns often referred to as "sound symbolism". Previous studies have found that particular sounds can be associated with particular meanings, and also with particular static visual shapes. Less well studied is the association between sounds and dynamic movements. Using a free elicitation method, the current experiment shows that several sound symbolic associations between sounds and dynamic movements exist: (1) front vowels are more likely to be associated with small movements than with large movements; (2) front vowels are more likely to be associated with angular movements than with round movements; (3) obstruents are more likely to be associated with angular movements than with round movements; (4) voiced obstruents are more likely to be associated with large movements than with small movements. All of these results are compatible with the results of the previous studies of sound symbolism using static images or meanings. Overall, the current study supports the hypothesis that particular dynamic motions can be associated with particular sounds. Building on the current results, we discuss a possible practical application of these sound symbolic associations in sports instructions.

  6. Takete and Maluma in Action: A Cross-Modal Relationship between Gestures and Sounds.

    Science.gov (United States)

    Shinohara, Kazuko; Yamauchi, Naoto; Kawahara, Shigeto; Tanaka, Hideyuki

    Despite Saussure's famous observation that sound-meaning relationships are in principle arbitrary, we now have a substantial body of evidence that sounds themselves can have meanings, patterns often referred to as "sound symbolism". Previous studies have found that particular sounds can be associated with particular meanings, and also with particular static visual shapes. Less well studied is the association between sounds and dynamic movements. Using a free elicitation method, the current experiment shows that several sound symbolic associations between sounds and dynamic movements exist: (1) front vowels are more likely to be associated with small movements than with large movements; (2) front vowels are more likely to be associated with angular movements than with round movements; (3) obstruents are more likely to be associated with angular movements than with round movements; (4) voiced obstruents are more likely to be associated with large movements than with small movements. All of these results are compatible with the results of the previous studies of sound symbolism using static images or meanings. Overall, the current study supports the hypothesis that particular dynamic motions can be associated with particular sounds. Building on the current results, we discuss a possible practical application of these sound symbolic associations in sports instructions.

  7. Verification of the helioseismology travel-time measurement technique and the inversion procedure for sound speed using artificial data

    Energy Technology Data Exchange (ETDEWEB)

    Parchevsky, K. V.; Zhao, J.; Hartlep, T.; Kosovichev, A. G., E-mail: akosovichev@solar.stanford.edu [Stanford University, HEPL, Stanford, CA 94305 (United States)

    2014-04-10

    We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.

  8. Verification of the helioseismology travel-time measurement technique and the inversion procedure for sound speed using artificial data

    International Nuclear Information System (INIS)

    Parchevsky, K. V.; Zhao, J.; Hartlep, T.; Kosovichev, A. G.

    2014-01-01

    We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.

  9. Smartphone-Based Real-time Assessment of Swallowing Ability From the Swallowing Sound

    Science.gov (United States)

    Ueno, Tomoyuki; Teramoto, Yohei; Nakai, Kei; Hidaka, Kikue; Ayuzawa, Satoshi; Eguchi, Kiyoshi; Matsumura, Akira; Suzuki, Kenji

    2015-01-01

    Dysphagia can cause serious challenges to both physical and mental health. Aspiration due to dysphagia is a major health risk that could cause pneumonia and even death. The videofluoroscopic swallow study (VFSS), which is considered the gold standard for the diagnosis of dysphagia, is not widely available, is expensive, and involves exposure to radiation. The screening tests used for dysphagia need to be carried out by trained staff, and the evaluations are usually non-quantifiable. This paper investigates the development of the Swallowscope, a smartphone-based device and a feasible real-time swallowing sound-processing algorithm for the automatic screening, quantitative evaluation, and visualisation of swallowing ability. The device can be used during activities of daily life with minimal intervention, making it potentially more capable of capturing aspirations and risky swallow patterns through continuous monitoring. It also includes a cloud-based system for server-side analysis and automatic sharing of the swallowing sound. The real-time algorithm we developed for the detection of dry and water swallows is based on a template matching approach. We analyzed the wavelet transformation-based spectral characteristics and the temporal characteristics of simultaneous, synchronised VFSS and swallowing sound recordings of 25% barium-mixed 3-ml water swallows from 70 subjects and the dry or saliva swallowing sounds of 15 healthy subjects to establish the parameters of the template. With this algorithm, we achieved an overall detection accuracy of 79.3% (standard error: 4.2%) for the 92 water swallows; and a precision of 83.7% (range: 66.6%–100%) and a recall of 93.9% (range: 72.7%–100%) for the 71 episodes of dry swallows. PMID:27170905

  10. Smartphone-Based Real-time Assessment of Swallowing Ability From the Swallowing Sound.

    Science.gov (United States)

    Jayatilake, Dushyantha; Ueno, Tomoyuki; Teramoto, Yohei; Nakai, Kei; Hidaka, Kikue; Ayuzawa, Satoshi; Eguchi, Kiyoshi; Matsumura, Akira; Suzuki, Kenji

    2015-01-01

    Dysphagia can cause serious challenges to both physical and mental health. Aspiration due to dysphagia is a major health risk that could cause pneumonia and even death. The videofluoroscopic swallow study (VFSS), which is considered the gold standard for the diagnosis of dysphagia, is not widely available, is expensive, and involves exposure to radiation. The screening tests used for dysphagia need to be carried out by trained staff, and the evaluations are usually non-quantifiable. This paper investigates the development of the Swallowscope, a smartphone-based device and a feasible real-time swallowing sound-processing algorithm for the automatic screening, quantitative evaluation, and visualisation of swallowing ability. The device can be used during activities of daily life with minimal intervention, making it potentially more capable of capturing aspirations and risky swallow patterns through continuous monitoring. It also includes a cloud-based system for server-side analysis and automatic sharing of the swallowing sound. The real-time algorithm we developed for the detection of dry and water swallows is based on a template matching approach. We analyzed the wavelet transformation-based spectral characteristics and the temporal characteristics of simultaneous, synchronised VFSS and swallowing sound recordings of 25% barium-mixed 3-ml water swallows from 70 subjects and the dry or saliva swallowing sounds of 15 healthy subjects to establish the parameters of the template. With this algorithm, we achieved an overall detection accuracy of 79.3% (standard error: 4.2%) for the 92 water swallows; and a precision of 83.7% (range: 66.6%-100%) and a recall of 93.9% (range: 72.7%-100%) for the 71 episodes of dry swallows.
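
    As a rough illustration of template matching on swallowing sounds, the sketch below correlates short-time energy envelopes against a template; the authors' method uses wavelet-based spectral features and tuned thresholds, which are not reproduced here, so all parameters shown are hypothetical.

```python
# Simplified template-matching sketch (normalized correlation of energy envelopes;
# the authors used wavelet-based spectral features, not reproduced here).
import numpy as np

def energy_envelope(x, frame=256, hop=128):
    return np.array([np.sum(x[i:i + frame] ** 2)
                     for i in range(0, len(x) - frame, hop)])

def detect_swallows(stream, template, threshold=0.7):
    env, tmpl = energy_envelope(stream), energy_envelope(template)
    tmpl = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-12)
    hits = []
    for i in range(len(env) - len(tmpl)):
        seg = env[i:i + len(tmpl)]
        seg = (seg - seg.mean()) / (seg.std() + 1e-12)
        if np.dot(seg, tmpl) / len(tmpl) > threshold:     # normalized correlation score
            hits.append(i)
    return hits
```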

  11. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

    This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situated-ness of sound. Existing discourses on “spatial sound” privile...

  12. Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time

    OpenAIRE

    Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation ...

  13. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption denotes the transformation of sound energy into heat. It is, for instance, employed to design the acoustics of rooms. The noise emitted by machinery and plants must be reduced before it reaches a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components on the walls with well-defined absorption characteristics, adjusted to the corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise-intensive environments into the neighbourhood.
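
    The reverberation-time design goal mentioned above is commonly estimated with Sabine's formula, RT60 = 0.161 V / A; the short example below uses hypothetical room dimensions purely to illustrate the calculation.

```python
# Worked example of the design goal mentioned above, using Sabine's formula
# RT60 = 0.161 * V / A (V in m^3, A = total absorption area in m^2 sabins).
# The room dimensions and target value are hypothetical.
def sabine_rt60(volume_m3, absorption_m2):
    return 0.161 * volume_m3 / absorption_m2

def absorption_needed(volume_m3, target_rt60_s):
    return 0.161 * volume_m3 / target_rt60_s

room_volume = 10 * 8 * 3                      # a 10 m x 8 m x 3 m lecture room
print(absorption_needed(room_volume, 0.8))    # ~48 m^2 sabins for RT60 = 0.8 s
```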

  14. Kinetic-sound propagation in dilute gas mixtures

    International Nuclear Information System (INIS)

    Campa, A.; Cohen, E.G.D.

    1989-01-01

    Kinetic sound is predicted in dilute disparate-mass binary gas mixtures, propagating exclusively in the light compound and much faster than ordinary sound. It should be detectable by light-scattering experiments, as an extended shoulder in the scattering cross section for large frequencies. As an example, H2-Ar mixtures are discussed

  15. Sound insulation and reverberation time for classrooms - Criteria in regulations and classification schemes in the Nordic countries

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2016-01-01

    Acoustic regulations or guidelines for schools exist in all five Nordic countries. The acoustic criteria depend on room uses and deal with airborne and impact sound insulation, reverberation time, sound absorption, traffic noise, service equipment noise and other acoustic performance...... have become more extensive and stricter during the last two decades. The paper focuses on comparison of sound insulation and reverberation time criteria for classrooms in regulations and classification schemes in the Nordic countries. Limit values and changes over time will be discussed as well as how...... not identical. The national criteria for quality level C correspond to the national regulations or recommendations for new-build. The quality levels A and B are intended to define better acoustic performance than C, and D lower performance. Typically, acoustic regulations and classification criteria for schools...

  16. Son et lumière: Sound and light effects on spatial distribution and swimming behavior in captive zebrafish.

    Science.gov (United States)

    Shafiei Sabet, Saeed; Van Dooren, Dirk; Slabbekoorn, Hans

    2016-05-01

    Aquatic and terrestrial habitats are heterogeneous by nature with respect to sound and light conditions. Fish may extract signals and exploit cues from both ambient modalities and they may also select their sound and light level of preference in free-ranging conditions. In recent decades, human activities in or near water have altered natural soundscapes and caused nocturnal light pollution to become more widespread. Artificial sound and light may cause anxiety, deterrence, disturbance or masking, but few studies have addressed in any detail how fishes respond to spatial variation in these two modalities. Here we investigated whether sound and light affected spatial distribution and swimming behavior of individual zebrafish that had a choice between two fish tanks: a treatment tank and a quiet and light escape tank. The treatments concerned a 2 × 2 design with noisy or quiet conditions and dim or bright light. Sound and light treatments did not induce spatial preferences for the treatment or escape tank, but caused various behavioral changes in both spatial distribution and swimming behavior within the treatment tank. Sound exposure led to more freezing and less time spent near the active speaker. Dim light conditions led to a lower number of crossings, more time spent in the upper layer and less time spent close to the tube for crossing. No interactions were found between sound and light conditions. This study highlights the potential relevance for studying multiple modalities when investigating fish behavior and further studies are needed to investigate whether similar patterns can be found for fish behavior in free-ranging conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Results of time-domain electromagnetic soundings in Everglades National Park, Florida

    Science.gov (United States)

    Fitterman, D.V.; Deszcz-Pan, Maria; Stoddard, C.E.

    1999-01-01

    This report describes the collection, processing, and interpretation of time-domain electromagnetic soundings from Everglades National Park. The results are used to locate the extent of seawater intrusion in the Biscayne aquifer and to map the base of the Biscayne aquifer in regions where well coverage is sparse. The data show no evidence of fresh ground-water flows at depth into Florida Bay.

  18. Groundwater travel time uncertainty analysis. Sensitivity of results to model geometry, and correlations and cross correlations among input parameters

    International Nuclear Information System (INIS)

    Clifton, P.M.

    1985-03-01

    This study examines the sensitivity of the travel time distribution predicted by a reference case model to (1) scale of representation of the model parameters, (2) size of the model domain, (3) correlation range of log-transmissivity, and (4) cross correlations between transmissivity and effective thickness. The basis for the reference model is the preliminary stochastic travel time model previously documented by the Basalt Waste Isolation Project. Results of this study show the following. The variability of the predicted travel times can be adequately represented when the ratio between the size of the zones used to represent the model parameters and the log-transmissivity correlation range is less than about one-fifth. The size of the model domain and the types of boundary conditions can have a strong impact on the distribution of travel times. Longer log-transmissivity correlation ranges cause larger variability in the predicted travel times. Positive cross correlation between transmissivity and effective thickness causes a decrease in the travel time variability. These results demonstrate the need for a sound conceptual model prior to conducting a stochastic travel time analysis

  19. Interactive physically-based sound simulation

    Science.gov (United States)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes a tenth of the memory of a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation
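
    The synthesis sub-problem described here is often introduced through modal synthesis, where an impact excites a bank of damped sinusoids; the sketch below shows that textbook idea with made-up modal data, not the dissertation's perceptually accelerated pipeline or its ARD propagation solver.

```python
# Minimal modal-synthesis sketch: an impact excites a bank of exponentially
# damped sinusoids (the object's vibration modes). Standard textbook idea only.
import numpy as np

def impact_sound(modes, duration=1.0, sr=44100):
    """modes: list of (frequency_hz, damping_per_s, amplitude) triples."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for freq, damping, amp in modes:
        out += amp * np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)
    return out / max(np.max(np.abs(out)), 1e-12)

# e.g. a small metal-bar-like object (made-up modal data):
# clip = impact_sound([(523.0, 3.0, 1.0), (1307.0, 5.0, 0.6), (2440.0, 9.0, 0.3)])
```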

  20. Introducing the Oxford Vocal (OxVoc) Sounds Database: A validated set of non-acted affective sounds from human infants, adults and domestic animals

    Directory of Open Access Journals (Sweden)

    Christine eParsons

    2014-06-01

    Full Text Available Sound moves us. Nowhere is this more apparent than in our responses to genuine emotional vocalisations, be they heartfelt distress cries or raucous laughter. Here, we present perceptual ratings and a description of a freely available, large database of natural affective vocal sounds from human infants, adults and domestic animals, the Oxford Vocal (OxVoc) Sounds database. This database consists of 173 non-verbal sounds expressing a range of happy, sad and neutral emotional states. Ratings are presented for the sounds on a range of dimensions from a number of independent participant samples. Perceptions related to valence, including distress, vocaliser mood, and listener mood are presented in Study 1. Perceptions of the arousal of the sound, listener motivation to respond and valence (positive, negative) are presented in Study 2. Perceptions of the emotional content of the stimuli in both Study 1 and Study 2 were consistent with the predefined categories (e.g., laugh stimuli perceived as positive). While the adult vocalisations received more extreme valence ratings, rated motivation to respond to the sounds was highest for the infant sounds. The major advantages of this database are the inclusion of vocalisations from naturalistic situations, which represent genuine expressions of emotion, and the inclusion of vocalisations from animals and infants, providing comparison stimuli for use in cross-species and developmental studies. The associated website provides a detailed description of the physical properties of each sound stimulus along with cross-category descriptions.

  1. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals) and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter...... is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many...... of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...
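
    The chapter's point about emitter size relative to wavelength can be made concrete with the elementary relation wavelength = c / f; the numbers below are round examples, not values from the chapter.

```python
# Quick illustration of the size-versus-wavelength point: wavelength = c / f.
speed_in_air = 343.0           # m/s at roughly 20 degrees C
for f in (100.0, 1000.0, 10000.0):
    print(f"{f:>7.0f} Hz -> wavelength {speed_in_air / f:.3f} m")
# A 2 cm emitter is tiny compared with the 3.4 m wavelength at 100 Hz,
# but comparable to the 3.4 cm wavelength at 10 kHz.
```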

  2. Time-domain electromagnetic soundings at the Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Frischknecht, F.C.; Raab, P.V.

    1984-01-01

    Structural discontinuities and variations in the resistivity of near-surface rocks often seriously distort dc resistivity and frequency-domain electromagnetic (FDEM) depth sounding curves. Reliable interpretation of such curves using one-dimensional (1-D) models is difficult or impossible. Short-offset time-domain electromagnetic (TDEM) sounding methods offer a number of advantages over other common geoelectrical sounding methods when working in laterally heterogeneous areas. In order to test the TDEM method in a geologically complex region, measurements were made on the east flank of Yucca Mountain at the Nevada Test Site (NTS). Coincident, offset coincident, single, and central loop configurations with square transmitting loops, either 305 or 152 m on a side, were used. Measured transient voltages were transformed into apparent resistivity values and then inverted in terms of 1-D models. Good fits to all of the offset coincident and single loop data were obtained using three-layer models. In most of the area, two well-defined interfaces were mapped, one which corresponds closely to a contact between stratigraphic units at a depth of about 400 m and another which corresponds to a transition from relatively unaltered to altered volcanic rocks at a depth of about 1000 m. In comparison with the results of a dipole-dipole resistivity survey, the results of the TDEM survey emphasize changes in the geoelectrical section with depth. Nonetheless, discontinuities in the layering mapped with the TDEM method delineated major faults or fault zones along the survey traverse. 5 refs., 10 figs., 1 tab

  3. Time-of-Flight Measurement of the Speed of Sound in a Metal Bar

    Science.gov (United States)

    Ganci, Salvatore

    2016-01-01

    A simple setup was designed for a “time-of-flight” measurement of the sound speed in a metal bar. The experiment requires low-cost components and is very easy for students to understand. It is well suited for use as a demonstration experiment.
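    The underlying arithmetic is simply distance divided by delay; the bar length and measured time below are hypothetical values, not the paper's data:

```python
# Time-of-flight estimate of the speed of sound in a metal bar
bar_length_m = 1.50           # hypothetical bar length
time_of_flight_s = 0.000293   # hypothetical delay between excitation and detection
speed = bar_length_m / time_of_flight_s
print(f"estimated speed of sound: {speed:.0f} m/s")  # ~5100 m/s, typical of aluminium
```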

  4. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    Full Text Available In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  5. Music and Sound Elements in Time Estimation and Production of Children with Attention Deficit/Hyperactivity Disorder (ADHD

    Directory of Open Access Journals (Sweden)

    Luiz Rogerio Jorgensen Carrer

    2015-09-01

    Full Text Available ADHD involves cognitive and behavioral aspects with impairments in many environments of children and their families’ lives. Music, with its playful, spontaneous, affective, motivational, temporal and rhythmic dimensions, can be of great help for studying aspects of time processing in ADHD. In this article we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from typically developing children in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants aged 6 to 14 years, recruited at NANI-Unifesp/SP, sub-divided into three groups of 12 children each. Data were collected through a musical keyboard using Logic Audio Software 9.0 on a computer that recorded the participants' performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. Results: 1. Performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); 2. In the task comparing musical excerpts of the same duration (7 s), the ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group the perceived duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates that music can, in some way, positively modulate the symptoms of inattention in ADHD.

  6. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    Science.gov (United States)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
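    The core idea, solving for all equivalent source strengths jointly and then re-radiating only the strengths belonging to the source of interest, can be sketched in a heavily simplified single-frequency form. The geometry, the monopole propagation model, and the plain least-squares solve below are illustrative assumptions, not the paper's interpolated time-domain formulation:

```python
import numpy as np

def monopole_matrix(mics, srcs, k):
    """Free-space monopole transfer matrix G[m, s] = exp(-j*k*r) / (4*pi*r)."""
    r = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

rng = np.random.default_rng(0)
k = 2 * np.pi * 500.0 / 343.0                      # wavenumber at 500 Hz in air

# Hypothetical equivalent sources on two separate bodies, plus a microphone array
srcs_a = rng.uniform(-0.1, 0.1, (8, 3))            # source A near the origin
srcs_b = rng.uniform(-0.1, 0.1, (8, 3)) + [1.0, 0.0, 0.0]
mics = rng.uniform(-1.0, 2.0, (32, 3)) + [0.0, 0.0, 1.0]

q_true = rng.standard_normal(16) + 1j * rng.standard_normal(16)
G = monopole_matrix(mics, np.vstack([srcs_a, srcs_b]), k)
p_mixed = G @ q_true                               # measured mixture of both sources

q_est, *_ = np.linalg.lstsq(G, p_mixed, rcond=None)  # all strengths solved jointly
p_from_a = G[:, :8] @ q_est[:8]                    # pressure generated by source A alone
```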

  7. A Real Time Differential GPS Tracking System for NASA Sounding Rockets

    Science.gov (United States)

    Bull, Barton; Bauer, Frank (Technical Monitor)

    2000-01-01

    Sounding rockets are suborbital launch vehicles capable of carrying scientific payloads to several hundred miles in altitude. These missions return a variety of scientific data including: chemical makeup and physical processes taking place in the atmosphere, natural radiation surrounding the Earth, data on the Sun, stars, galaxies and many other phenomena. In addition, sounding rockets provide a reasonably economical means of conducting engineering tests for instruments and devices to be used on satellites and other spacecraft prior to their use in these more expensive missions. Typically around thirty of these rockets are launched each year, from established ranges at Wallops Island, Virginia; Poker Flat Research Range, Alaska; White Sands Missile Range, New Mexico; and from a number of ranges outside the United States. Launches are often conducted from temporary launch ranges in remote parts of the world, requiring considerable expense to transport and operate tracking radars. In order to support these missions, an inverse differential GPS system has been developed. The flight system consists of a small, inexpensive receiver, a preamplifier and a wrap-around antenna. A rugged, compact, portable ground station extracts GPS data from the raw payload telemetry stream, performs a real time differential solution and graphically displays the rocket's path relative to a predicted trajectory plot. In addition to generating a real time navigation solution, the system has been used for payload recovery, timing, data time-tagging, precise tracking of multiple payloads and slaving of optical tracking systems for over-the-horizon acquisition. This paper discusses, in detail, the flight and ground hardware, as well as data processing and operational aspects of the system, and provides evidence of the system accuracy.

  8. Energy-based method for near-real time modeling of sound field in complex urban environments.

    Science.gov (United States)

    Pasareanu, Stephanie M; Remillieux, Marcel C; Burdisso, Ricardo A

    2012-12-01

    Prediction of the sound field in large urban environments has been limited thus far by the heavy computational requirements of conventional numerical methods such as boundary element (BE) or finite-difference time-domain (FDTD) methods. Recently, a considerable amount of work has been devoted to developing energy-based methods for this application, and results have shown the potential to compete with conventional methods. However, these developments have been limited to two-dimensional (2-D) studies (along street axes), and no detailed description of the phenomena at issue has been provided. Here the mathematical theory of diffusion is used to predict the sound field in 3-D complex urban environments. A 3-D diffusion equation is implemented by means of a simple finite-difference scheme and applied to two different types of urban configurations. This modeling approach is validated against FDTD and geometrical acoustics (GA) solutions, showing a good overall agreement. The role played by diffraction near building edges close to the source is discussed, and suggestions are made on the possibility of accurately predicting the sound field in complex urban environments in near-real-time simulations.
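    For illustration, a diffusion model of a reverberant urban sound field can be stepped forward with a simple explicit finite-difference scheme. The 2-D sketch below (grid size, diffusion coefficient, absorption term, and periodic boundaries are arbitrary assumptions) only conveys the flavour of such a scheme, not the paper's 3-D implementation:

```python
import numpy as np

# Explicit finite differences for a diffusion model of acoustic energy density w:
#   dw/dt = D * laplacian(w) - sigma * w
nx, ny = 200, 200
dx, dt = 1.0, 1.0e-3        # 1 m grid, 1 ms time step (assumed; dt < dx**2 / (4*D))
D, sigma = 50.0, 0.5        # assumed diffusion and absorption coefficients

w = np.zeros((nx, ny))
w[nx // 2, ny // 2] = 1.0   # impulsive source in the middle of the domain

for _ in range(500):
    lap = (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
           np.roll(w, 1, 1) + np.roll(w, -1, 1) - 4.0 * w) / dx**2
    w += dt * (D * lap - sigma * w)   # periodic boundaries, for brevity only
```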

  9. Beneath sci-fi sound: primer, science fiction sound design, and American independent cinema

    OpenAIRE

    Johnston, Nessa

    2012-01-01

    Primer is a very low budget science-fiction film that deals with the subject of time travel; however, it looks and sounds distinctly different from other films associated with the genre. While Hollywood blockbuster sci-fi relies on “sound spectacle” as a key attraction, in contrast Primer sounds “lo-fi” and screen-centred, mixed to two-channel stereo rather than the now industry-standard 5.1 surround sound. Although this is partly a consequence of the economics of its production, the...

  10. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which 'acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor γ, with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
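    The Lorentz factor referred to above is the standard special-relativistic expression with the invariant speed reinterpreted as the speed of sound c in the medium:

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
\Delta t_{\text{moving}} = \gamma \,\Delta t_{\text{rest}}, \qquad
L_{\text{moving}} = L_{\text{rest}}/\gamma
```

    Here v is the speed of the moving chain of sound clocks relative to the laboratory, i.e. the rest frame of the medium.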

  11. PREFACE: Aerodynamic sound Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated to write the papers by the jet noise emitted by the newly commercialized jet-engined airplanes of that time. The technology of aerodynamic sound is ultimately directed at environmental problems. Therefore the theory should always be applied to newly emerging public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University organized the first symposium. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays and is expected to reform the theoretical model of the generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research. They apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the

  12. Anti-bat tiger moth sounds: Form and function

    Directory of Open Access Journals (Sweden)

    Aaron J. CORCORAN, William E. CONNER, Jesse R. BARBER

    2010-06-01

    Full Text Available The night sky is the venue of an ancient acoustic battle between echolocating bats and their insect prey. Many tiger moths (Lepidoptera: Arctiidae) answer the attack calls of bats with a barrage of high frequency clicks. Some moth species use these clicks for acoustic aposematism and mimicry, and others for sonar jamming; however, most of the work on these defensive functions has been done on individual moth species. Here we analyze the diversity of structure in tiger moth sounds from 26 species collected at three locations in North and South America. A principal components analysis of the anti-bat tiger moth sounds reveals that they vary markedly along three axes: (1) frequency, (2) duty cycle (sound production per unit time) and frequency modulation, and (3) modulation cycle (clicks produced during flexion and relaxation of the sound-producing tymbal structure). Tiger moth species appear to cluster into two distinct groups: one with low duty cycle and few clicks per modulation cycle that supports an acoustic aposematism function, and a second with high duty cycle and many clicks per modulation cycle that is consistent with a sonar jamming function. This is the first evidence from a community-level analysis to support multiple functions for tiger moth sounds. We also provide evidence supporting an evolutionary history for the development of these strategies. Furthermore, cross-correlation and spectrogram correlation measurements failed to support a "phantom echo" mechanism underlying sonar jamming, and instead point towards echo interference [Current Zoology 56 (3): 358–369, 2010].

  13. Multi-feature snore sound analysis in obstructive sleep apnea–hypopnea syndrome

    International Nuclear Information System (INIS)

    Karunajeewa, Asela S; Abeyratne, Udantha R; Hukins, Craig

    2011-01-01

    Snoring is the most common symptom of obstructive sleep apnea-hypopnea syndrome (OSAHS), which is a serious disease with high community prevalence. The standard method of OSAHS diagnosis, known as polysomnography (PSG), is expensive and time consuming. There is evidence suggesting that snore-related sounds (SRS) carry sufficient information to diagnose OSAHS. In this paper we present a technique for diagnosing OSAHS based solely on snore sound analysis. The method comprises a logistic regression model fed with snore parameters derived from features such as the pitch and the total airway response (TAR), estimated using a higher order statistics (HOS)-based algorithm. Pitch represents a time-domain characteristic of the airway vibrations, and the TAR represents the acoustical changes brought about by the collapsing upper airways. The performance of the proposed method was evaluated using K-fold cross validation on a clinical database consisting of overnight snoring sounds of 41 subjects. The method achieved 89.3% sensitivity with 92.3% specificity (the area under the ROC curve was 0.96). These results establish the feasibility of developing a snore-based OSAHS community-screening device, which does not require any contact measurements.
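    The evaluation strategy described above, a logistic regression scored with K-fold cross-validation, can be sketched with scikit-learn. The feature matrix below is random placeholder data standing in for the per-subject pitch and TAR features; it is not the paper's dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((41, 2))   # placeholder [pitch, TAR] features for 41 subjects
y = rng.integers(0, 2, 41)         # placeholder OSAHS / non-OSAHS labels

model = LogisticRegression()
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print("mean AUC over folds:", auc.mean())   # ~0.5 here, since the labels are random
```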

  14. How has the Long Island Sound Seafloor Changed Over Time?

    Science.gov (United States)

    Mayo, E. C.; Nitsche, F. O.

    2016-12-01

    The present Long Island Sound (LIS) was mainly shaped by the last glaciation and the sea level transgression that followed. Today the LIS is an important ecosystem that provides a critical habitat to numerous plant and animal species, and is important to the stability of several economies including fishing, boating, and tourism. Determining where erosion, transportation and deposition of sediment are occurring is important for sustainable development in and around the sound. Calculating the rate of change of the seafloor, identifying the hot spots where the most change is occurring, and determining which processes control the scale of change are important for preserving the economy and ecology that depend on the sound. This is especially true as larger and more frequent storms comparable to hurricane Sandy are anticipated due to climate change. We used older bathymetric data (collected 1990-2001 by the National Oceanic and Atmospheric Administration) and compared them with more recently collected LIS bathymetric data covering the same areas (collected 2012-2014 by a collaborative LIS mapping project with NOAA and the States of New York and Connecticut). Using Geographic Information Systems (GIS) we analyzed and mapped the differences between these two datasets to determine where and by how much the seafloor has changed. The results show observable changes in the LIS seafloor on the scale of 1-2 meters over this 10-20 year period. The scale and type of these changes vary across the sound. The observed rates of change depend on the area of the sound, as different factors control sediment movement in each area. We present results from five areas of the sound that had data from both 1990-2001 and 2012-2014 and that highlight different key processes that change the seafloor. Observed changes in tidal inlets are mostly controlled by existing morphology and near-shore sediment transport. In areas with strong bottom currents the data show migrating
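    The change analysis essentially amounts to differencing co-registered bathymetric grids. A minimal sketch is shown below; the file names and the choice of the rasterio library are assumptions, and a real workflow also has to handle datum, resolution, and uncertainty matching between surveys:

```python
import numpy as np
import rasterio  # assumed raster reader; any gridded bathymetry format would do

with rasterio.open("bathy_1990_2001.tif") as old, \
     rasterio.open("bathy_2012_2014.tif") as new:
    z_old = old.read(1, masked=True)
    z_new = new.read(1, masked=True)

change = z_new - z_old                 # positive = accretion, negative = erosion
hot_spots = np.abs(change) >= 1.0      # cells that changed by 1 m or more
print("cells changed by >= 1 m:", int(hot_spots.filled(False).sum()))
```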

  15. Development of a Finite-Difference Time Domain (FDTD) Model for Propagation of Transient Sounds in Very Shallow Water.

    Science.gov (United States)

    Sprague, Mark W; Luczkovich, Joseph J

    2016-01-01

    This finite-difference time domain (FDTD) model for sound propagation in very shallow water uses pressure and velocity grids with both 3-dimensional Cartesian and 2-dimensional cylindrical implementations. Parameters, including water and sediment properties, can vary in each dimension. Steady-state and transient signals from discrete and distributed sources, such as the surface of a vibrating pile, can be used. The cylindrical implementation uses less computation but requires axial symmetry. The Cartesian implementation allows asymmetry. FDTD calculations compare well with those of a split-step parabolic equation. Applications include modeling the propagation of individual fish sounds, fish aggregation sounds, and distributed sources.
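    The staggered pressure/velocity update at the heart of such a model is easiest to see in one dimension. The grid spacing, time step, and medium properties below are arbitrary (but respect the usual stability limit); this is a sketch of the general scheme, not the paper's 3-D Cartesian or cylindrical code:

```python
import numpy as np

# 1-D acoustic FDTD on a staggered grid:
#   dp/dt = -rho * c**2 * dv/dx,   dv/dt = -(1/rho) * dp/dx
c, rho = 1500.0, 1000.0        # water sound speed (m/s) and density (kg/m^3)
dx = 0.1                       # grid spacing (m)
dt = 0.5 * dx / c              # time step, well under the CFL limit dx/c
n = 1000

p = np.zeros(n)                # pressure at integer grid points
v = np.zeros(n - 1)            # particle velocity at half grid points

for step in range(2000):
    v -= dt / (rho * dx) * np.diff(p)                 # velocity update
    p[1:-1] -= dt * rho * c**2 / dx * np.diff(v)      # pressure update (rigid ends)
    p[n // 2] += np.exp(-((step * dt - 0.01) / 0.002) ** 2)  # Gaussian source pulse
```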

  16. A comparison of ambient casino sound and music: effects on dissociation and on perceptions of elapsed time while playing slot machines.

    Science.gov (United States)

    Noseworthy, Theodore J; Finlay, Karen

    2009-09-01

    This research examined the effects of a casino's auditory character on estimates of elapsed time while gambling. More specifically, this study varied whether the sound heard while gambling was ambient casino sound alone or ambient casino sound accompanied by music. The tempo and volume of both the music and ambient sound were varied to manipulate temporal engagement and introspection. One hundred and sixty (males = 91) individuals played slot machines in groups of 5-8, after which they provided estimates of elapsed time. The findings showed that the typical ambient casino auditive environment, which characterizes the majority of gaming venues, promotes understated estimates of elapsed duration of play. In contrast, when music is introduced into the ambient casino environment, it appears to provide a cue of interval from which players can more accurately reconstruct elapsed duration of play. This is particularly the case when the tempo of the music is slow and the volume is high. Moreover, the confidence with which time estimates are held (as reflected by latency of response) is higher in an auditive environment with music than in an environment that is comprised of ambient casino sounds alone. Implications for casino management are discussed.

  17. Association of Sound movements in space to Takete and Maluma

    DEFF Research Database (Denmark)

    Götzen, Amalia De

    2014-01-01

    the association is not cross-modal since both the stimuli are in the auditory domain, but the connection between words and sound movements is not trivial. A significant preference (twelve out of thirteen subjects) associated “takete” with the jagged sound movement and “maluma” with the round one. Colored noise...

  18. Accuracy of multi-point boundary crossing time analysis

    Directory of Open Access Journals (Sweden)

    J. Vogt

    2011-12-01

    Full Text Available Recent multi-spacecraft studies of solar wind discontinuity crossings using the timing (boundary plane triangulation) method gave boundary parameter estimates that are significantly different from those of the well-established single-spacecraft minimum variance analysis (MVA) technique. A large survey of directional discontinuities in Cluster data turned out to be particularly inconsistent in the sense that multi-point timing analyses did not identify any rotational discontinuities (RDs), whereas the MVA results of the individual spacecraft suggested that RDs form the majority of events. To make multi-spacecraft studies of discontinuity crossings more conclusive, the present report addresses the accuracy of the timing approach to boundary parameter estimation. Our error analysis is based on the reciprocal vector formalism and takes into account uncertainties both in crossing times and in the spacecraft positions. A rigorous error estimation scheme is presented for the general case of correlated crossing time errors and arbitrary spacecraft configurations. Crossing time error covariances are determined through cross correlation analyses of the residuals. The principal influence of the spacecraft array geometry on the accuracy of the timing method is illustrated using error formulas for the simplified case of mutually uncorrelated and identical errors at different spacecraft. The full error analysis procedure is demonstrated for a solar wind discontinuity as observed by the Cluster FGM instrument.
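    The timing method itself reduces to a small linear problem: with spacecraft positions r_i and crossing times t_i, one solves (r_i - r_0) · m = t_i - t_0 for the slowness vector m = n/V, whose direction gives the boundary normal and whose inverse magnitude gives the boundary speed. A minimal sketch with made-up positions and times:

```python
import numpy as np

# Hypothetical positions (km) and crossing times (s) for four spacecraft
r = np.array([[0.0,   0.0,   0.0],
              [100.0, 10.0,  0.0],
              [20.0,  120.0, 15.0],
              [30.0,  25.0,  110.0]])
t = np.array([0.0, 1.9, 0.7, 1.1])

A = r[1:] - r[0]                            # relative separations
b = t[1:] - t[0]                            # relative crossing times
m, *_ = np.linalg.lstsq(A, b, rcond=None)   # slowness vector m = n / V

V = 1.0 / np.linalg.norm(m)                 # boundary speed along its normal (km/s)
n = m * V                                   # unit normal of the boundary plane
print("normal:", n, "speed (km/s):", V)
```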

  19. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Throughout several audio and audio-video tests we have compared both Foley and real sounds originated by an identical action. The main purpose was to evaluate if sound effects...

  20. Sound localization in common vampire bats: Acuity and use of the binaural time cue by a small mammal

    Science.gov (United States)

    Heffner, Rickye S.; Koay, Gimseong; Heffner, Henry E.

    2015-01-01

    Passive sound-localization acuity and the ability to use binaural time and intensity cues were determined for the common vampire bat (Desmodus rotundus). The bats were tested using a conditioned suppression/avoidance procedure in which they drank defibrinated blood from a spout in the presence of sounds from their right, but stopped drinking (i.e., broke contact with the spout) whenever a sound came from their left, thereby avoiding a mild shock. The mean minimum audible angle for three bats for a 100-ms noise burst was 13.1°—within the range of thresholds for other bats and near the mean for mammals. Common vampire bats readily localized pure tones of 20 kHz and higher, indicating they could use interaural intensity-differences. They could also localize pure tones of 5 kHz and lower, thereby demonstrating the use of interaural time-differences, despite their very small maximum interaural distance of 60 μs. A comparison of the use of locus cues among mammals suggests several implications for the evolution of sound localization and its underlying anatomical and physiological mechanisms. PMID:25618037

  1. Sound Effects for Children's Comprehension of Variably-Paced Television Programs.

    Science.gov (United States)

    Calvert, Sandra L.; Scott, M. Catherine

    In this study, children's selective attention to, and comprehension of, variably-paced television programs were examined as a function of sound effects. Sixty-four children, equally distributed by sex and by preschool and fourth grades, were randomly assigned to one of four treatment conditions which crossed two levels of sound effects (presence…

  2. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12-158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long-term authority to import/export natural gas from/to...

  3. The influence of ski helmets on sound perception and sound localisation on the ski slope

    Directory of Open Access Journals (Sweden)

    Lana Ružić

    2015-04-01

    Full Text Available Objectives: The aim of the study was to investigate whether a ski helmet interferes with sound localization and the time of sound perception in the frontal plane. Material and Methods: Twenty-three participants (age 30.7±10.2) were tested on the slope in 2 conditions, with and without a ski helmet, using 6 different spatially distributed sound stimuli per condition. Each subject had to react as soon as possible upon hearing a sound and to indicate the side from which it arrived. Results: The results showed a significant difference in the ability to localize the specific ski sounds: 72.5±15.6% correct answers without a helmet vs. 61.3±16.2% with a helmet (p < 0.01). However, performance on this test did not depend on whether the subjects were used to wearing a helmet (p = 0.89). In identifying the moment at which the sound was first perceived, the results also favored the subjects not wearing a helmet. The subjects reported hearing the ski sound cues at 73.4±5.56 m without a helmet vs. 60.29±6.34 m with a helmet (p < 0.001). In that case the results did depend on previous helmet use (p < 0.05), meaning that regular use of helmets might help to diminish the attenuation of sound identification caused by the helmet. Conclusions: Ski helmets might limit the ability of a skier to localize the direction of sounds of danger and might interfere with the moment at which a sound is first heard.

  4. A Low Cost GPS System for Real-Time Tracking of Sounding Rockets

    Science.gov (United States)

    Markgraf, M.; Montenbruck, O.; Hassenpflug, F.; Turner, P.; Bull, B.; Bauer, Frank (Technical Monitor)

    2001-01-01

    This paper describes the development as well as the on-ground and in-flight evaluation of a low cost Global Positioning System (GPS) system for real-time tracking of sounding rockets. The flight unit comprises a modified ORION GPS receiver and a newly designed switchable antenna system composed of a helical antenna in the rocket tip and a dual-blade antenna combination attached to the body of the service module. Aside from the flight hardware, a PC based terminal program has been developed to monitor the GPS data and graphically display the rocket's path during the flight. In addition, an Instantaneous Impact Point (IIP) prediction is performed based on the received position and velocity information. In preparation for ESA's Maxus-4 mission, a sounding rocket test flight was carried out at Esrange, Kiruna, on 19 Feb. 2001 to validate existing ground facilities and range safety installations. Due to the absence of a dedicated scientific payload, the flight offered the opportunity to test multiple GPS receivers and assess their performance for the tracking of sounding rockets. In addition to the ORION receiver, an Ashtech G12 HDMA receiver and a BAE (Canadian Marconi) Allstar receiver, both connected to a wrap-around antenna, were flown on the same rocket as part of an independent experiment provided by the Goddard Space Flight Center. This allows an in-depth verification and trade-off of different receiver and antenna concepts.

  5. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    Science.gov (United States)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.
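    A simple way to approximate the control signal described above, an estimate of the ground reaction force derived from a microphone picking up real footsteps, is an amplitude envelope follower. The cutoff frequency and the idea of using the envelope directly as a GRF proxy are assumptions made for this sketch, not the paper's estimation procedure:

```python
import numpy as np

def envelope_follower(x, fs, cutoff_hz=20.0):
    """Rectify the footstep signal and smooth it with a one-pole low-pass filter."""
    alpha = np.exp(-2.0 * np.pi * cutoff_hz / fs)
    env = np.zeros_like(x)
    state = 0.0
    for i, sample in enumerate(np.abs(x)):
        state = alpha * state + (1.0 - alpha) * sample
        env[i] = state
    return env

fs = 44100
t = np.arange(fs) / fs
footstep = np.random.randn(fs) * np.exp(-30.0 * t)   # toy impact-like noise burst
grf_proxy = envelope_follower(footstep, fs)           # would drive the synthesis engine
```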

  6. Sound Synthesis and Evaluation of Interactive Footsteps and Environmental Sounds Rendering for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-01-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based ...... a soundscape significantly improves the recognition of the simulated environment....

  7. Prevalence of high frequency hearing loss consistent with noise exposure among people working with sound systems and general population in Brazil: A cross-sectional study

    Directory of Open Access Journals (Sweden)

    Trevisani Virgínia FM

    2008-05-01

    Full Text Available Abstract Background Music is ever present in our daily lives, establishing a link between humans and the arts through the senses and pleasure. Sound technicians are the link between musicians and audiences or consumers. Recently, general concern has arisen regarding occurrences of hearing loss induced by noise from excessively amplified sound-producing activities within leisure and professional environments. Sound technicians' activities expose them to the risk of hearing loss, and consequently put at risk their quality of life, the quality of the musical product and consumers' hearing. The aim of this study was to measure the prevalence of high frequency hearing loss consistent with noise exposure among sound technicians in Brazil and compare this with a control group without occupational noise exposure. Methods This was a cross-sectional study comparing 177 participants in two groups: 82 sound technicians and 95 controls (non-sound technicians). A questionnaire on music listening habits and associated complaints was applied, and data were gathered regarding the professionals' number of working hours per day and both groups' hearing complaints and presence of tinnitus. The participants' ear canals were visually inspected using an otoscope. Hearing assessments were performed (tonal and speech audiometry) using a portable digital AD 229 E audiometer funded by FAPESP. Results There was no statistically significant difference between the sound technicians and controls regarding age and gender. Thus, the study sample was homogeneous and would be unlikely to lead to bias in the results. A statistically significant difference in hearing loss was observed between the groups: 50% among the sound technicians and 10.5% among the controls. The difference could be attributed to high sound levels. Conclusion The sound technicians presented a higher prevalence of high frequency hearing loss consistent with noise exposure than did the general population, although

  8. Analysis and Synthesis of Musical Instrument Sounds

    Science.gov (United States)

    Beauchamp, James W.

    For synthesizing a wide variety of musical sounds, it is important to understand which acoustic properties of musical instrument sounds are related to specific perceptual features. Some properties are obvious: Amplitude and fundamental frequency easily control loudness and pitch. Other perceptual features are related to sound spectra and how they vary with time. For example, tonal "brightness" is strongly connected to the centroid or tilt of a spectrum. "Attack impact" (sometimes called "bite" or "attack sharpness") is strongly connected to spectral features during the first 20-100 ms of sound, as well as the rise time of the sound. Tonal "warmth" is connected to spectral features such as "incoherence" or "inharmonicity."
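    As an example of the spectral features mentioned above, the spectral centroid (commonly associated with perceived brightness) is just the amplitude-weighted mean frequency of a spectrum; a minimal sketch on a synthetic tone:

```python
import numpy as np

def spectral_centroid(x, fs):
    """Amplitude-weighted mean frequency of the magnitude spectrum."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(freqs * mag) / np.sum(mag)

fs = 44100
t = np.arange(fs) / fs
# Synthetic tone: 220 Hz fundamental with progressively weaker harmonics
tone = sum((1.0 / k) * np.sin(2 * np.pi * 220.0 * k * t) for k in range(1, 9))
print(f"spectral centroid: {spectral_centroid(tone, fs):.1f} Hz")
```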

  9. Film sound in preservation and presentation

    NARCIS (Netherlands)

    Campanini, S.

    2014-01-01

    What is the nature of film sound? How does it change through time? How can film sound be conceptually defined? To address these issues, this work assumes the perspective of film preservation and presentation practices, describing the preservation of early sound systems, as well as the presentation

  10. Identification of impact force acting on composite laminated plates using the radiated sound measured with microphones

    Science.gov (United States)

    Atobe, Satoshi; Nonami, Shunsuke; Hu, Ning; Fukunaga, Hisao

    2017-09-01

    Foreign object impact events are serious threats to composite laminates because impact damage leads to significant degradation of the mechanical properties of the structure. Identification of the location and force history of the impact that was applied to the structure can provide useful information for assessing the structural integrity. This study proposes a method for identifying impact forces acting on CFRP (carbon fiber reinforced plastic) laminated plates on the basis of the sound radiated from the impacted structure. Identification of the impact location and force history is performed using the sound pressure measured with microphones. To devise a method for identifying the impact location from the difference in the arrival times of the sound wave detected with the microphones, the propagation path of the sound wave from the impacted point to the sensor is examined. For the identification of the force history, an experimentally constructed transfer matrix is employed to relate the force history to the corresponding sound pressure. To verify the validity of the proposed method, impact tests are conducted by using a CFRP cross-ply laminate as the specimen, and an impulse hammer as the impactor. The experimental results confirm the validity of the present method for identifying the impact location from the arrival time of the sound wave detected with the microphones. Moreover, the results of force history identification show the feasibility of identifying the force history accurately from the measured sound pressure using the experimental transfer matrix.
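    Force-history identification of this kind can be posed as a linear inverse problem: if a transfer matrix H maps a sampled force history f to the measured sound pressure p, then f is recovered by regularized least squares. The impulse response, noise level, and Tikhonov regularization below are made-up assumptions standing in for the experimentally constructed transfer matrix used in the paper:

```python
import numpy as np
from scipy.linalg import toeplitz

fs, n = 10000, 400
t = np.arange(n) / fs

# Toy impulse response standing in for the measured structure-to-microphone transfer
h = np.exp(-200.0 * t) * np.sin(2 * np.pi * 800.0 * t)
H = toeplitz(h, np.zeros(n))                  # convolution written as a matrix

f_true = np.exp(-((t - 0.01) / 0.002) ** 2)   # assumed hammer-like force pulse
p = H @ f_true + 1e-4 * np.random.randn(n)    # "measured" sound pressure with noise

lam = 1e-3                                    # Tikhonov regularization weight
f_est = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ p)
```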

  11. Sounds of silence: How to animate virtual worlds with sound

    Science.gov (United States)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  12. Neuromimetic Sound Representation for Percept Detection and Manipulation

    Directory of Open Access Journals (Sweden)

    Chi Taishih

    2005-01-01

    Full Text Available The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create the sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal). Work on bringing the algorithms into the real-time processing domain is ongoing.

  13. A recognition method research based on the heart sound texture map

    Directory of Open Access Journals (Sweden)

    Huizhong Cheng

    2016-06-01

    Full Text Available In order to improve the heart sound recognition rate and reduce the recognition time, this paper introduces a new method for heart sound pattern recognition using the Heart Sound Texture Map. Based on the heart sound model, we define the heart sound time-frequency diagram and the Heart Sound Texture Map, study the principle and realization of the Heart Sound Window Function, and then discuss how to use the Heart Sound Window Function and the short-time Fourier transform to obtain a two-dimensional heart sound time-frequency diagram. We then propose a corner-correlation recognition algorithm based on the Heart Sound Texture Map according to the characteristics of heart sounds. The simulation results show that the Heart Sound Window Function, compared with traditional window functions, makes the texture of the first (S1) and second (S2) heart sounds clearer, and that the corner-correlation recognition algorithm based on the Heart Sound Texture Map can significantly improve the recognition rate and reduce the computational cost, making it an effective heart sound recognition method.
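    The time-frequency diagram underlying the texture map is, at its core, a short-time Fourier transform of the heart-sound signal. The sketch below uses SciPy's stft on a synthetic signal; the window length and the synthetic S1/S2-like bursts are assumptions, not the paper's specially designed Heart Sound Window Function:

```python
import numpy as np
from scipy.signal import stft

fs = 2000
t = np.arange(2 * fs) / fs
# Synthetic stand-ins for S1 and S2: short low-frequency bursts at 0.2 s and 0.5 s
heart = (np.exp(-((t - 0.2) / 0.020) ** 2) * np.sin(2 * np.pi * 50.0 * t) +
         np.exp(-((t - 0.5) / 0.015) ** 2) * np.sin(2 * np.pi * 80.0 * t))

f, seg_t, Z = stft(heart, fs=fs, nperseg=128)   # time-frequency representation
texture = np.abs(Z)                              # magnitude map used as the "texture"
```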

  14. Analysis of sound pressure levels emitted by children's toys.

    Science.gov (United States)

    Sleifer, Pricila; Gonçalves, Maiara Santos; Tomasi, Marinês; Gomes, Erissandra

    2013-06-01

    To verify the levels of sound pressure emitted by non-certified children's toys. Cross-sectional study of sound-producing toys available at popular retail stores of the so-called informal sector. Electronic, mechanical, and musical toys were analyzed. The measurement of each product was carried out by an acoustic engineer in an acoustically isolated booth, using a decibel meter. To obtain the sound parameters of intensity and frequency, the toys were set to produce sounds at distances of 10 and 50 cm from the researcher's ear. The intensity of sound pressure [dB(A)] and the frequency in hertz (Hz) were measured. Forty-eight toys were evaluated. The mean sound pressure 10 cm from the ear was 102±10 dB(A), and at 50 cm, 94±8 dB(A); the sound pressure of most toys was above 85 dB(A). The frequency ranged from 413 to 6,635 Hz, with 56.3% of toys emitting frequencies higher than 2,000 Hz. The majority of toys assessed in this research emitted a high level of sound pressure.

  15. Sound response of superheated drop bubble detectors to neutrons

    International Nuclear Information System (INIS)

    Gao Size; Chen Zhe; Liu Chao; Ni Bangfa; Zhang Guiying; Zhao Changfa; Xiao Caijin; Liu Cunxiong; Nie Peng; Guan Yongjing

    2012-01-01

    The sound response of bubble detectors to neutrons was studied using a 252Cf neutron source. Sound signals were acquired and filtered with a sound card and a PC. The short-time signal energy, FFT spectrum, power spectrum, and decay time constant were obtained to determine whether a sound signal genuinely corresponds to a bubble. (authors)

  16. Interpretation of time-domain electromagnetic soundings in the Calico Hills area, Nevada Test Site, Nye County, Nevada

    Science.gov (United States)

    Kauahikaua, J.

    A controlled source, time domain electromagnetic (TDEM) sounding survey was conducted in the Calico Hills area of the Nevada Test Site (NTS). The geoelectric structure was determined as an aid in the evaluation of the site for possible future storage of spent nuclear fuel or high level nuclear waste. The data were initially interpreted with a simple scheme that produces an apparent resistivity versus depth curve from the vertical magnetic field data. These curves are qualitatively interpreted much like standard Schlumberger resistivity sounding curves. Final interpretation made use of a layered earth Marquardt inversion computer program. The results combined with those from a set of Schlumberger soundings in the area show that there is a moderately resistive basement at a depth no greater than 800 meters. The basement resistivity is greater than 100 ohm meters.

  17. Cross-sample entropy of foreign exchange time series

    Science.gov (United States)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for a period from 1995 to 2002. Cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The calculation method of confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series in periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before the Asian currency crisis. Comparison with the correlation coefficient shows that cross-SampEn is superior to describe the correlation between time series.
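    For reference, a direct (and deliberately unoptimized) implementation of cross-sample entropy for two standardized series might look like the sketch below; the embedding dimension m = 2 and tolerance r = 0.2 are conventional choices, and the toy series are random placeholders rather than exchange-rate returns:

```python
import numpy as np

def cross_sampen(u, v, m=2, r=0.2):
    """Cross-sample entropy of two standardized series (simple O(n^2) sketch)."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    n = min(len(u), len(v))

    def matches(dim):
        count = 0
        for i in range(n - m):
            template = u[i:i + dim]
            for j in range(n - m):
                if np.max(np.abs(template - v[j:j + dim])) <= r:
                    count += 1
        return count

    b = matches(m)         # template matches of length m
    a = matches(m + 1)     # template matches of length m + 1
    return -np.log(a / b)  # larger value = greater asynchrony between the series

rng = np.random.default_rng(1)
x = rng.standard_normal(300)
y = 0.7 * x + 0.3 * rng.standard_normal(300)   # toy series correlated with x
print(cross_sampen(x, y))
```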

  18. Distorted eikonal cross sections: A time-dependent view

    International Nuclear Information System (INIS)

    Turner, R.E.

    1982-01-01

    For Hamiltonians with two potentials, differential cross sections are written as time-correlation functions of reference and distorted transition operators. Distorted eikonal differential cross sections are defined in terms of straight-line and reference classical trajectories. Both elastic and inelastic results are obtained. Expressions for the inelastic cross sections are presented in terms of time-ordered cosine and sine memory functions through the use of the Zwanzig-Feshbach projection-operator method

  19. Non-Wovens as Sound Reducers

    Science.gov (United States)

    Belakova, D.; Seile, A.; Kukle, S.; Plamus, T.

    2018-04-01

    Within the present study, the effect of hemp (40 wt%) and polylactide (60 wt%) non-woven surface density, thickness and number of fibre web layers on the sound absorption coefficient and the sound transmission loss in the frequency range from 50 to 5000 Hz is analysed. The sound insulation properties of the experimental samples have been determined and compared to those of materials in practical use, and the possible uses of the material have been defined. Non-woven materials are ideally suited for use in acoustic insulation products because the arrangement of fibres produces a porous material structure, which leads to a greater interaction between sound waves and the fibre structure. Of all the tested samples (A, B and D), the non-woven variant B exceeded the surface density of sample A by a factor of 1.22 and that of sample D by a factor of 1.15. By placing non-wovens one above the other in 2 layers, it is possible to increase the absorption coefficient of the material, which, depending on the frequency, corresponds to sound absorption classes C, D, and E. Sample A demonstrates the best sound absorption of all the three samples in the frequency range from 250 to 2000 Hz. In the test frequency range from 50 to 5000 Hz, the sound transmission loss varies from 0.76 (Sample D at 63 Hz) to 3.90 (Sample B at 5000 Hz).

  20. Binaural Sound Reduces Reaction Time in a Virtual Reality Search Task

    DEFF Research Database (Denmark)

    Høeg, Emil Rosenlund; Gerry, Lynda; Thomsen, Lui Albæk

    2017-01-01

    Salient features in a visual search task can direct attention and increase competency on these tasks. Simple cues, such as color change in a salient feature, called the "pop-out effect" can increase task solving efficiency [6]. Previous work has shown that nonspatial auditory signals temporally...... synched with a pop-out effect can improve reaction time in a visual search task, called the "pip and pop effect" [14]. This paper describes a within-group study on the effect of audiospatial attention in virtual reality given a 360-degree visual search. Three cue conditions were compared (no sound, stereo...

  1. Influence of impedance phase angle on sound pressures and reverberation times in a rectangular room

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Lee, Doheon; Santurette, Sébastien

    2014-01-01

    , but with an absorptive ceiling are investigated. The zero phase angle, which has commonly been assumed in practice, is regarded as reference and differences in the sound pressure level and early decay time from the reference are quantified. As expected, larger differences in the room acoustic parameters are found...

  2. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is experimentally studied using the time difference method. It is found that the sound velocity increases with increasing bubble diameter, and asymptotically approaches the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to scattering by the bubble walls is described equivalently as the effect of an additional length. This simple model reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that increasing the frequency markedly slows down the sound velocity, whereas the latter does not display a strong dependence on the solution concentration

  3. [Effect of early scream sound stress on learning and memory in female rats].

    Science.gov (United States)

    Hu, Lili; Han, Bo; Zhao, Xiaoge; Mi, Lihua; Song, Qiang; Huang, Chen

    2015-12-01

    To investigate the effect of early scream sound stress on spatial learning and memory ability, the serum levels of norepinephrine (NE) and corticosterone (CORT), and the morphology of the adrenal gland.
 Female Sprague-Dawley (SD) rats were treated daily with scream sound from postnatal day 1 (P1) for 21 d. The Morris water maze was used to measure spatial learning and memory ability. The levels of serum NE and CORT were determined by radioimmunoassay. Adrenal glands of the SD rats were collected, fixed in formalin, and then embedded in paraffin. The morphology of the adrenal gland was observed by HE staining.
 Exposure to early scream sound decreased escape latency and increased the number of platform crossings in the Morris water maze test. Early scream sound stress can enhance spatial learning and memory ability in adulthood, which is related to activation of the hypothalamo-pituitary-adrenal axis and the sympathetic nervous system.

  4. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  5. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals.

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-Jia; Li, Jian-Jun; Ting, Kin-Hung; Lu, Zhong-Lin; Whitfield-Gabrieli, Susan; Wang, Jun; Lee, Tatia M C

    2017-05-01

    Cross-modal learning requires the use of information from different sensory modalities. This study investigated how the prior visual experience of late blind individuals could modulate neural processes associated with learning of sound localization. Learning was realized by standardized training on sound localization processing, and experience was investigated by comparing brain activations elicited by a sound localization task in individuals with (late blind, LB) and without (early blind, EB) prior visual experience. After the training, EB showed decreased activation in the precuneus, which was functionally connected to a limbic-multisensory network. In contrast, LB showed increased activation of the precuneus. A subgroup of LB participants who demonstrated higher visuospatial working memory capabilities (LB-HVM) exhibited an enhanced precuneus-lingual gyrus network. This differential connectivity suggests that the visuospatial working memory developed through prior visual experience enhanced the learning of sound localization in LB-HVM. Active visuospatial navigation processes could have occurred in LB-HVM, compared to the retrieval of previously bound information from long-term memory in EB. The precuneus appears to play a crucial role in learning of sound localization, regardless of prior visual experience. Prior visual experience, however, could enhance cross-modal learning by extending binding to the integration of unprocessed information, mediated by the cognitive functions that these experiences develop.

  6. Identification of Mobile Phone and Analysis of Original Version of Videos through a Delay Time Analysis of Sound Signals from Mobile Phone Videos.

    Science.gov (United States)

    Hwang, Min Gu; Har, Dong Hwan

    2017-11-01

    This study designs a method of identifying the camera model used to take videos that are distributed through mobile phones and determines the original version of the mobile phone video for use as legal evidence. For this analysis, an experiment was conducted to find the unique characteristics of each mobile phone. The videos recorded by mobile phones were analyzed to establish the delay time of sound signals, and the differences between the delay times of sound signals for different mobile phones were traced by classifying their characteristics. Furthermore, the sound input signals for mobile phone videos used as legal evidence were analyzed to ascertain whether they have the unique characteristics of the original version. The objective of this study was to find a method for validating the use of mobile phone videos as legal evidence using mobile phones through differences in the delay times of sound input signals. © 2017 American Academy of Forensic Sciences.
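    Delay estimation of the kind the study relies on is commonly done by locating the peak of the cross-correlation between a reference signal and the recorded one. The sample rate, signals, and delay below are synthetic placeholders, not the study's test material:

```python
import numpy as np

fs = 8000
ref = np.random.randn(fs)                     # reference sound signal
delay = 37                                    # device-specific delay (samples) to recover
rec = np.concatenate([np.zeros(delay), ref])[:fs] + 0.01 * np.random.randn(fs)

corr = np.correlate(rec, ref, mode="full")
lag = int(np.argmax(corr)) - (len(ref) - 1)   # lag of the correlation peak
print(f"estimated delay: {1000.0 * lag / fs:.3f} ms")
```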

  7. Improved methods for nightside time domain Lunar Electromagnetic Sounding

    Science.gov (United States)

    Fuqua-Haviland, H.; Poppe, A. R.; Fatemi, S.; Delory, G. T.; De Pater, I.

    2017-12-01

    Time Domain Electromagnetic (TDEM) Sounding isolates induced magnetic fields to remotely deduce material properties at depth. The first step of performing TDEM Sounding at the Moon is to fully characterize the dynamic plasma environment, and isolate geophysically induced currents from concurrently present plasma currents. The transfer function method requires a two-point measurement: an upstream reference measuring the pristine solar wind, and one downstream near the Moon. This method was last performed during Apollo assuming the induced fields on the nightside of the Moon expand as in an undisturbed vacuum within the wake cavity [1]. Here we present an approach to isolating induction and performing TDEM with any two point magnetometer measurement at or near the surface of the Moon. Our models include a plasma induction model capturing the kinetic plasma environment within the wake cavity around a conducting Moon, and a geophysical forward model capturing induction in a vacuum. The combination of these two models enable the analysis of magnetometer data within the wake cavity. Plasma hybrid models use the upstream plasma conditions and interplanetary magnetic field (IMF) to capture the wake current systems formed around the Moon. The plasma kinetic equations are solved for ion particles with electrons as a charge-neutralizing fluid. These models accurately capture the large scale lunar wake dynamics for a variety of solar wind conditions: ion density, temperature, solar wind velocity, and IMF orientation [2]. Given the 3D orientation variability coupled with the large range of conditions seen within the lunar plasma environment, we characterize the environment one case at a time. The global electromagnetic induction response of the Moon in a vacuum has been solved numerically for a variety of electrical conductivity models using the finite-element method implemented within the COMSOL software. This model solves for the geophysically induced response in vacuum to

  8. Digitizing a sound archive

    DEFF Research Database (Denmark)

    Cone, Louise

    2017-01-01

    In 1990 an artist by the name of William Louis Sørensen was hired by the National Gallery of Denmark to collect important works of art – made from sound. His job was to acquire sound art, but also recordings that captured rare artistic occurrences, music, performances and happenings from both Danish and international artists. His methodology left us with a large collection of unique and inspirational time-based media sound artworks that have, until very recently, been inaccessible. Existing on an array of different media formats, such as open reel tapes, 8-track and 4-track cassettes, VHS...

  9. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

    Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From the analysis of the heart sound S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician diagnose heart problems. The results show that most known heart sounds were successfully detected. There are some murmur cases in which detection failed. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
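
    The abstract does not give the exact rules, but the idea of a deterministic, timing-based labelling of heart sound components can be sketched as below; the envelope method, peak threshold and the 0.35 s systole/diastole cutoff are illustrative assumptions, not the published algorithm (which works on the S-transform).

```python
# Hypothetical timing-based labelling of S1/S2; thresholds are assumptions.
import numpy as np
from scipy.signal import hilbert, find_peaks

def label_heart_sounds(pcg, fs, peak_threshold=0.2):
    """Label candidate S1/S2 peaks in a phonocardiogram using timing alone."""
    env = np.abs(hilbert(pcg))            # amplitude envelope of the recording
    env /= env.max()
    # Candidate components: envelope peaks at least 0.2 s apart
    peaks, _ = find_peaks(env, height=peak_threshold, distance=int(0.2 * fs))
    labels = []
    for i in range(len(peaks) - 1):
        gap = (peaks[i + 1] - peaks[i]) / fs
        # Systole (S1 -> S2) is shorter than diastole (S2 -> next S1);
        # 0.35 s is an assumed cutoff between the two intervals.
        labels.append("S1" if gap < 0.35 else "S2")
    return peaks, labels
```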

  10. Interpretation of time-domain electromagnetic soundings in the Calico Hills area, Nevada Test Site, Nye County, Nevada

    International Nuclear Information System (INIS)

    Kauahikaua, J.

    1981-01-01

    A controlled source, time-domain electromagnetic (TDEM) sounding survey was conducted in the Calico Hills area of the Nevada Test Site (NTS). The goal of this survey was the determination of the geoelectric structure as an aid in the evaluation of the site for possible future storage of spent nuclear fuel or high-level nuclear waste. The data were initially interpreted with a simple scheme that produces an apparent resistivity versus depth curve from the vertical magnetic field data. These curves can be qualitatively interpreted much like standard Schlumberger resistivity sounding curves. Final interpretation made use of a layered-earth Marquardt inversion computer program (Kauahikaua, 1980). The results combined with those from a set of Schlumberger soundings in the area show that there is a moderately resistive basement at a depth no greater than 800 meters. The basement resistivity is greater than 100 ohm-meters

  11. Noise detection during heart sound recording using periodicity signatures

    International Nuclear Information System (INIS)

    Kumar, D; Carvalho, P; Paiva, R P; Henriques, J; Antunes, M

    2011-01-01

    Heart sound is a valuable biosignal for the diagnosis of a large set of cardiac diseases. Ambient and physiological noise interference is one of the most common and highly probable incidents during heart sound acquisition. It tends to change the morphological characteristics of the heart sound, which may carry important information for heart disease diagnosis. In this paper, we propose a new method, applicable in real time, to detect ambient and internal body noise manifested in heart sound during acquisition. The algorithm is developed on the basis of the periodic nature of heart sounds and physiologically inspired criteria. A small segment of uncontaminated heart sound exhibiting periodicity in time as well as in the time-frequency domain is first detected and applied as a reference signal for discriminating noise from the sound. The proposed technique has been tested with a database of heart sounds collected from 71 subjects with several types of heart disease, with several noises induced during recording. The achieved average sensitivity and specificity are 95.88% and 97.56%, respectively.
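
    As an illustration of the periodicity idea, a hedged sketch is given below: each fixed-length segment is compared against a clean reference cycle by normalized cross-correlation, and segments with low similarity are flagged as contaminated. The segment length, similarity threshold and function names are assumptions, not the published criteria.

```python
# Illustrative noise flagging by similarity to a clean reference cycle (assumed names).
import numpy as np

def flag_noisy_segments(signal, fs, reference, cycle_len_s, sim_threshold=0.6):
    """Return one True/False flag per cycle-length segment (True = likely contaminated)."""
    n = int(cycle_len_s * fs)
    ref = (reference - reference.mean()) / (reference.std() + 1e-12)
    flags = []
    for start in range(0, len(signal) - n + 1, n):
        seg = signal[start:start + n]
        seg = (seg - seg.mean()) / (seg.std() + 1e-12)
        # Peak of the normalized cross-correlation with the reference cycle
        similarity = np.correlate(seg, ref, mode="full").max() / len(ref)
        flags.append(similarity < sim_threshold)
    return flags
```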

  12. Comparison of Travel-Time and Amplitude Measurements for Deep-Focusing Time-Distance Helioseismology

    Science.gov (United States)

    Pourabdian, Majid; Fournier, Damien; Gizon, Laurent

    2018-04-01

    The purpose of deep-focusing time-distance helioseismology is to construct seismic measurements that have a high sensitivity to the physical conditions at a desired target point in the solar interior. With this technique, pairs of points on the solar surface are chosen such that acoustic ray paths intersect at this target (focus) point. Considering acoustic waves in a homogeneous medium, we compare travel-time and amplitude measurements extracted from the deep-focusing cross-covariance functions. Using a single-scattering approximation, we find that the spatial sensitivity of deep-focusing travel times to sound-speed perturbations is zero at the target location and maximum in a surrounding shell. This is unlike the deep-focusing amplitude measurements, which have maximum sensitivity at the target point. We compare the signal-to-noise ratio for travel-time and amplitude measurements for different types of sound-speed perturbations, under the assumption that noise is solely due to the random excitation of the waves. We find that, for highly localized perturbations in sound speed, the signal-to-noise ratio is higher for amplitude measurements than for travel-time measurements. We conclude that amplitude measurements are a useful complement to travel-time measurements in time-distance helioseismology.

  13. New theory on the reverberation of rooms. [considering sound wave travel time

    Science.gov (United States)

    Pujolle, J.

    1974-01-01

    The inadequacy of the various theories which have been proposed for finding the reverberation time of rooms can be explained by an attempt to examine what might occur at a listening point when image sources of determined acoustic power are added to the actual source. The number and locations of the image sources are stipulated. The intensity of sound at the listening point can be calculated by means of approximations whose conditions for validity are given. This leads to the proposal of a new expression for the reverberation time, yielding results which fall between those obtained through use of the Eyring and Millington formulae; these results are made to depend on the shape of the room by means of a new definition of the mean free path.

  14. Cognitive Control of Involuntary Distraction by Deviant Sounds

    Science.gov (United States)

    Parmentier, Fabrice B. R.; Hebrero, Maria

    2013-01-01

    It is well established that a task-irrelevant sound (deviant sound) departing from an otherwise repetitive sequence of sounds (standard sounds) elicits an involuntary capture of attention and orienting response toward the deviant stimulus, resulting in the lengthening of response times in an ongoing task. Some have argued that this type of…

  15. Cross-modal versus within-modal recall: differences in behavioral and brain responses.

    Science.gov (United States)

    Butler, Andrew J; James, Karin H

    2011-10-31

    Although human experience is multisensory in nature, previous research has focused predominantly on memory for unisensory as opposed to multisensory information. In this work, we sought to investigate behavioral and neural differences between the cued recall of cross-modal audiovisual associations versus within-modal visual or auditory associations. Participants were presented with cue-target associations comprised of pairs of nonsense objects, pairs of nonsense sounds, objects paired with sounds, and sounds paired with objects. Subsequently, they were required to recall the modality of the target given the cue while behavioral accuracy, reaction time, and blood oxygenation level dependent (BOLD) activation were measured. Successful within-modal recall was associated with modality-specific reactivation in primary perceptual regions, and was more accurate than cross-modal retrieval. When auditory targets were correctly or incorrectly recalled using a cross-modal visual cue, there was re-activation in auditory association cortex, and recall of information from cross-modal associations activated the hippocampus to a greater degree than within-modal associations. Findings support theories that propose an overlap between regions active during perception and memory, and show that behavioral and neural differences exist between within- and cross-modal associations. Overall the current study highlights the importance of the role of multisensory information in memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Left and right reaction time differences to the sound intensity in normal and AD/HD children.

    Science.gov (United States)

    Baghdadi, Golnaz; Towhidkhah, Farzad; Rostami, Reza

    2017-06-01

    The right hemisphere, which is implicated in sound intensity discrimination, shows abnormalities in people with attention deficit/hyperactivity disorder (AD/HD). However, it has not been studied whether this right-hemisphere defect influences the intensity sensation of AD/HD subjects. In this study, the sensitivity of normal and AD/HD children to sound intensity was investigated. Nineteen normal and fourteen AD/HD children participated in the study and performed a simple auditory reaction time task. Using regression analysis, the sensitivity of the right and left ears to various sound intensity levels was examined. The statistical results showed that the sensitivity of AD/HD subjects to intensity was lower than that of the normal group. Left and right pathways of the auditory system had the same pattern of response in AD/HD subjects (p > 0.05). However, in the control group the left pathway was more sensitive to the sound intensity level than the right one (p = 0.0156). It is probable that the right-hemisphere deficit has influenced the auditory sensitivity of AD/HD children. Possible deficits of other auditory system components, such as the middle ear, inner ear, or the brainstem nuclei involved, may also lead to the observed results. The development of new biomarkers based on the sensitivity of the brain hemispheres to sound intensity is suggested to estimate the risk of AD/HD. Designing new techniques to correct auditory feedback is also proposed for behavioral treatment sessions. Copyright © 2017. Published by Elsevier B.V.

  17. Design and Calibration Tests of an Active Sound Intensity Probe

    Directory of Open Access Journals (Sweden)

    Thomas Kletschkowski

    2008-01-01

    Full Text Available The paper presents an active sound intensity probe that can be used for sound source localization in standing wave fields. The probe consists of a sound hard tube that is terminated by a loudspeaker and an integrated pair of microphones. The microphones are used to decompose the standing wave field inside the tube into its incident and reflected part. The latter is cancelled by an adaptive controller that calculates proper driving signals for the loudspeaker. If the open end of the actively controlled tube is placed close to a vibrating surface, the radiated sound intensity can be determined by measuring the cross spectral density between the two microphones. A one-dimensional free field can be realized effectively, as first experiments performed on a simplified test bed have shown. Further tests proved that a prototype of the novel sound intensity probe can be calibrated.
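
    The cross-spectral step mentioned above corresponds to the standard two-microphone (p-p) intensity estimate, in which the active intensity is proportional to the imaginary part of the cross-spectral density divided by angular frequency and microphone spacing. The sketch below assumes this textbook formula and generic parameter values; it is not the authors' calibration procedure.

```python
# Two-microphone (p-p) intensity estimate; rho and nperseg are generic assumptions.
import numpy as np
from scipy.signal import csd

def intensity_spectrum(p1, p2, fs, mic_spacing, rho=1.21):
    """Active sound intensity per frequency bin from two closely spaced pressure signals."""
    f, G12 = csd(p1, p2, fs=fs, nperseg=4096)
    omega = 2 * np.pi * np.maximum(f, 1e-6)   # avoid dividing by zero at DC
    # I(f) = -Im{G12} / (rho * omega * dr); the sign depends on the mic ordering convention
    return f, -np.imag(G12) / (rho * omega * mic_spacing)
```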

  18. Underwater Sound Propagation from Marine Pile Driving.

    Science.gov (United States)

    Reyff, James A

    2016-01-01

    Pile driving occurs in a variety of nearshore environments that typically have very shallow-water depths. The propagation of pile-driving sound in water is complex, where sound is directly radiated from the pile as well as through the ground substrate. Piles driven in the ground near water bodies can produce considerable underwater sound energy. This paper presents examples of sound propagation through shallow-water environments. Some of these examples illustrate the substantial variation in sound amplitude over time that can be critical to understand when computing an acoustic-based safety zone for aquatic species.

  19. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.

  20. Applications of Hilbert Spectral Analysis for Speech and Sound Signals

    Science.gov (United States)

    Huang, Norden E.

    2003-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed, and its natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMFs). An IMF is defined as any function having the same number of zero-crossings and extrema, and having symmetric envelopes defined by the local maxima and minima, respectively. An IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identifications of embedded structures. This invention can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound signal enhancement and filtering. Additionally, the acoustic signals from machinery are essentially the way machines talk to us: the signals, whether transmitted through the air or as vibration on the machines themselves, can tell us the operating conditions of the machines. Thus, we can use acoustic signals to diagnose machine problems.
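
    After EMD has produced an IMF, the instantaneous frequency follows from the analytic signal; a minimal sketch of that step (using SciPy's Hilbert transform, with a toy chirp as input) is shown below. The sifting loop of EMD itself is not reproduced here.

```python
# Instantaneous frequency of an IMF via the analytic signal (SciPy Hilbert transform).
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(imf, fs):
    analytic = hilbert(imf)                    # analytic signal x + i*H[x]
    phase = np.unwrap(np.angle(analytic))      # continuous instantaneous phase
    return np.diff(phase) * fs / (2 * np.pi)   # Hz; one sample shorter than the input

# Toy check: a quadratic-phase chirp should show a linearly rising frequency.
fs = 8000
t = np.arange(0, 1, 1 / fs)
imf = np.sin(2 * np.pi * (100 * t + 200 * t ** 2))   # instantaneous frequency 100 + 400 t Hz
f_inst = instantaneous_frequency(imf, fs)
```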

  1. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    [Only text fragments and figure captions are available: specialized regions of the brain analyse different types of sounds [1]; figure 1 shows examples of sound-pressure waveforms and their spectrographic representation using a 45 Hz ...; plot of SFM(t) vs. time for different environmental sounds.]

  2. Classification of Real and Imagined Sounds in Early Visual Cortex

    Directory of Open Access Journals (Sweden)

    Petra Vetter

    2011-10-01

    Full Text Available Early visual cortex has been thought to be mainly involved in the detection of low-level visual features. Here we show that complex natural sounds can be decoded from early visual cortex activity, in the absence of visual stimulation, both when sounds are actually presented and when they are merely imagined. Blindfolded subjects listened to three complex natural sounds (bird singing, people talking, traffic noise; Exp. 1) or received word cues (“forest”, “people”, “traffic”; Exp. 2) to imagine the associated scene. fMRI BOLD activation patterns from retinotopically defined early visual areas were fed into a multivariate pattern classification algorithm (a linear support vector machine). Actual sounds were discriminated above chance in V2 and V3, and imagined sounds were decoded in V1. Cross-classification, i.e., training the classifier on real sounds and testing it on imagined sounds and vice versa, was also successful. Two further experiments showed that an orthogonal working memory task does not interfere with sound classification in early visual cortex (Exp. 3); however, an orthogonal visuo-spatial imagery task does (Exp. 4). These results demonstrate that early visual cortex activity contains content-specific information from hearing and from imagery, challenging the view of a strict modality-specific function of early visual cortex.
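
    A hedged sketch of the cross-classification step is given below: a linear SVM is trained on voxel patterns from one condition and tested on the other. The arrays, labels and pipeline are hypothetical stand-ins; the study's preprocessing and exact classifier settings are not reproduced.

```python
# Hypothetical cross-classification: train on real-sound patterns, test on imagined ones.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Stand-ins for (n_trials, n_voxels) BOLD patterns and three sound categories (0, 1, 2)
X_real, y_real = rng.normal(size=(60, 500)), np.repeat([0, 1, 2], 20)
X_imagined, y_imagined = rng.normal(size=(60, 500)), np.repeat([0, 1, 2], 20)

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_real, y_real)                      # train on real sounds
acc = clf.score(X_imagined, y_imagined)      # test on imagined sounds
print(f"cross-classification accuracy: {acc:.2f} (chance = 0.33)")
```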

  3. Phonaesthemes and sound symbolism in Swedish brand names

    Directory of Open Access Journals (Sweden)

    Åsa Abelin

    2015-01-01

    Full Text Available This study examines the prevalence of sound symbolism in Swedish brand names. A general principle of brand name design is that effective names should be distinctive, recognizable, easy to pronounce and meaningful. Much money is invested in designing powerful brand names, where the emotional impact of the names on consumers is also relevant and it is important to avoid negative connotations. Customers prefer brand names which say something about the product, as this reduces product uncertainty (Klink, 2001). Therefore, consumers might prefer sound symbolic names. It has been shown that people associate the sounds of the nonsense words maluma and takete with round and angular shapes, respectively. By extension, more complex shapes and textures might activate words containing certain sounds. This study focuses on semantic dimensions expected to be relevant to product names, such as mobility, consistency, texture and shape. These dimensions are related to the senses of sight, hearing and touch and are also interesting from a cognitive linguistic perspective. Cross-modal assessment and priming experiments with pictures and written words were performed and the results analysed in relation to brand name databases and to sound symbolic sound combinations in Swedish (Abelin, 1999). The results show that brand names virtually never contain pejorative, i.e. depreciatory, consonant clusters, and that certain sounds and sound combinations are overrepresented in certain content categories. Assessment tests show correlations between pictured objects and phoneme combinations in newly created words (non-words). The priming experiment shows that object images prime newly created words as expected, based on the presence of compatible consonant clusters.

  4. Sound & The Senses

    DEFF Research Database (Denmark)

    Schulze, Holger

    2012-01-01

    How are those sounds you hear right now technically generated and post-produced, how are they aesthetically conceptualized, and how culturally dependent are they really? How is your ability to hear intertwined with all the other senses and their cultural, biographical and technological construction... over time? And how are listening and sounding deeply social activities – constructing our way of living together in cities as well as in apartment houses? A radio feature with Jonathan Sterne, AGF a.k.a. Antye Greie, Jens Gerrit Papenburg & Holger Schulze.

  5. Interactive Sound Propagation using Precomputation and Statistical Approximations

    Science.gov (United States)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance, and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at different ends of a space of interactive sound propagation techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.

  6. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  7. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    46 CFR 7.20 (Shipping, subpart Atlantic Coast): Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY.

  8. Concepts for evaluation of sound insulation of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Rindel, Jens Holger

    2005-01-01

    Legal sound insulation requirements have existed for more than 50 years in some countries, and single-number quantities for evaluation of sound insulation have existed nearly as long. However, the concepts have changed considerably over time from simple arithmetic averaging of frequency bands... A comparison of requirements and classification schemes revealed significant differences of concepts. The paper summarizes the history of concepts, the disadvantages of the present chaos and the benefits of consensus concerning concepts for airborne and impact sound insulation between dwellings and airborne sound insulation of facades... Such demands, with a trend towards light-weight constructions, are contradictory and challenging. This calls for exchange of data and experience, implying a need for harmonized concepts, including the use of spectrum adaptation terms. The paper will provide input for future discussions in EAA TC-RBA WG4: "Sound insulation...

  9. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

    Full Text Available OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and frequencies among normal hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between the speakers' axes in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p < 0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  10. Measuring cross-border travel times for freight : Otay Mesa international border crossing.

    Science.gov (United States)

    2010-09-01

    Cross border movement of people and goods is a vital part of the North American economy. Accurate real-time data on travel times along the US-Mexico border can help generate a range of tangible benefits covering improved operations and security, lowe...

  11. Models for Pooled Time-Series Cross-Section Data

    Directory of Open Access Journals (Sweden)

    Lawrence E Raffalovich

    2015-07-01

    Full Text Available Several models are available for the analysis of pooled time-series cross-section (TSCS) data, defined as “repeated observations on fixed units” (Beck and Katz 1995). In this paper, we run the following models: (1) a completely pooled model, (2) fixed effects models, and (3) multi-level/hierarchical linear models. To illustrate these models, we use a Generalized Least Squares (GLS) estimator with cross-section weights and panel-corrected standard errors (with EViews 8) on the cross-national homicide trends data of forty countries from 1950 to 2005, which we source from published research (Messner et al. 2011). We describe and discuss the similarities and differences between the models, and what information each can contribute to help answer substantive research questions. We conclude with a discussion of how the models we present may help to mitigate validity threats inherent in pooled time-series cross-section data analysis.
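
    The three model families can be approximated in open-source tools; the sketch below uses statsmodels rather than the EViews GLS estimator with panel-corrected standard errors described in the paper, substituting country-clustered standard errors, country dummies and a random-intercept model. The file name and column names are assumptions.

```python
# Sketch of the three model families with statsmodels (hypothetical file/column names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("homicide_panel.csv")   # long format: one row per country-year

# (1) Completely pooled model with country-clustered (panel-robust) standard errors
pooled = smf.ols("homicide ~ x + year", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]})

# (2) Fixed-effects model via country dummies (least-squares dummy variables)
fixed = smf.ols("homicide ~ x + year + C(country)", data=df).fit()

# (3) Multi-level / hierarchical model with a random intercept per country
multilevel = smf.mixedlm("homicide ~ x + year", data=df, groups=df["country"]).fit()

print(pooled.params["x"], fixed.params["x"], multilevel.params["x"])
```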

  12. First and second sound in He films

    International Nuclear Information System (INIS)

    Oh, H.G.; Um, C.I.; Kahng, W.H.; Isihara, A.

    1986-01-01

    In consideration of a collision integral in the Boltzmann equation and with use of kinetic and hydrodynamical equations, the velocities of first and second sound in liquid ⁴He films are evaluated as functions of temperature, and the attenuation coefficients are obtained. The second-sound velocity is 2^(-1/2) times the first-sound velocity in the low-temperature and low-frequency limit.

  13. A Model to Determine the Level of Serum Aldosterone in the Workers Attributed to the Combined Effects of Sound Pressure Level, Exposure Time and Serum Potassium Level: A Field-Based Study

    Directory of Open Access Journals (Sweden)

    Parvin Nassiri

    2016-09-01

    Full Text Available Background Occupational exposure to excessive noise is one of the biggest work-related challenges in the world. This phenomenon causes the release of stress-related hormones, which in turn negatively affect cardiovascular risk factors. Objectives The current study aimed to determine the level of workers’ serum aldosterone in light of the combined effect of sound pressure level, exposure time and serum potassium level. Methods This cross-sectional, descriptive, analytical study was conducted on 45 workers of Gol-Gohar Mining and Industrial Company in the fall of 2014. The subjects were divided into three groups (one control and two case groups, each including 15 workers). Participants in the control group were selected from workers with administrative jobs (exposure to background noise). Participants in the case groups were selected from the concentrator and pelletizing factories, exposed to excessive noise. Serum aldosterone and potassium levels of participants were assessed at three different time intervals: at the beginning of the shift and before exposure to noise (7:30 - 8:00 AM), during exposure to noise (10:00 - 10:30 AM), and during continuous exposure (1:30 - 2:00 PM). The obtained data were transferred into SPSS ver. 18. Repeated measures analysis of variance (ANOVA) was used to develop the statistical model of workers’ aldosterone level in light of the combined effect of sound pressure level, exposure time, and serum potassium level. Results The final statistical model for determining the level of serum aldosterone based on the combined effect of sound pressure level, exposure time and serum potassium level indicated that the sound pressure level had a significant influence on serum aldosterone level (P = 0.04). In addition, the effects of exposure time and serum potassium on aldosterone level were statistically significant, with P-values of 0.008 and 0.001, respectively. Conclusions

  14. Sound topology, duality, coherence and wave-mixing an introduction to the emerging new science of sound

    CERN Document Server

    Deymier, Pierre

    2017-01-01

    This book offers an essential introduction to the notions of sound wave topology, duality, coherence and wave-mixing, which constitute the emerging new science of sound. It includes general principles and specific examples that illuminate new non-conventional forms of sound (sound topology), unconventional quantum-like behavior of phonons (duality), radical linear and nonlinear phenomena associated with loss and its control (coherence), and exquisite effects that emerge from the interaction of sound with other physical and biological waves (wave mixing). The book provides the reader with the foundations needed to master these complex notions through simple yet meaningful examples. General principles for unraveling and describing the topology of acoustic wave functions in the space of their eigenvalues are presented. These principles are then applied to uncover intrinsic and extrinsic approaches to achieving non-conventional topologies by breaking the time reversal symmetry of acoustic waves. Symmetry brea...

  15. Cross-correlation time-of-flight analysis of molecular beam scattering

    International Nuclear Information System (INIS)

    Nowikow, C.V.; Grice, R.

    1979-01-01

    The theory of the cross-correlation method of time-of-flight analysis is presented in a form which highlights its formal similarity to the conventional method. A time-of-flight system for the analysis of crossed molecular beam scattering is described, which is based on a minicomputer interface and can operate in both the cross-correlation and conventional modes. The interface maintains the synchronisation of chopper disc rotation and channel advance indefinitely in the cross-correlation method and can acquire data in phase with the beam modulation in both methods. The shutter function of the cross-correlation method is determined and the deconvolution analysis of the data is discussed. (author)
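
    The core of the cross-correlation method can be illustrated with a toy calculation: the detected count stream is (approximately) the circular convolution of the true time-of-flight spectrum with the pseudo-random chopper sequence, and correlating the stream with that sequence recovers the spectrum up to a flat pedestal. The sequence and spectrum below are synthetic assumptions, not data from the instrument described.

```python
# Toy demonstration of recovering a TOF spectrum by circular cross-correlation.
import numpy as np

rng = np.random.default_rng(1)
n_bins = 255
chopper = rng.integers(0, 2, n_bins)          # pseudo-random open/closed slot pattern
tof_true = np.exp(-0.5 * ((np.arange(n_bins) - 80) / 10.0) ** 2)   # assumed TOF peak

# Detected counts: every open chopper slot launches a copy of the TOF spectrum
detected = np.real(np.fft.ifft(np.fft.fft(chopper) * np.fft.fft(tof_true)))
detected += rng.normal(scale=0.05, size=n_bins)          # counting noise

# Circular cross-correlation with the chopper sequence recovers the spectrum
recovered = np.real(np.fft.ifft(np.fft.fft(detected) * np.conj(np.fft.fft(chopper))))
recovered -= recovered.min()                  # remove the flat pedestal of a 0/1 sequence
```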

  16. Comprehensive measures of sound exposures in cinemas using smart phones.

    Science.gov (United States)

    Huth, Markus E; Popelka, Gerald R; Blevins, Nikolas H

    2014-01-01

    Sensorineural hearing loss from sound overexposure has a considerable prevalence. Identification of sound hazards is crucial, as prevention, due to a lack of definitive therapies, is the sole alternative to hearing aids. One subjectively loud, yet little studied, potential sound hazard is movie theaters. This study uses smart phones to evaluate their applicability as a widely available, validated sound pressure level (SPL) meter. Therefore, this study measures sound levels in movie theaters to determine whether sound levels exceed safe occupational noise exposure limits and whether sound levels in movie theaters differ as a function of movie, movie theater, presentation time, and seat location within the theater. Six smart phones with an SPL meter software application were calibrated with a precision SPL meter and validated as an SPL meter. Additionally, three different smart phone generations were measured in comparison to an integrating SPL meter. Two different movies, an action movie and a children's movie, were measured six times each in 10 different venues (n = 117). To maximize representativeness, movies were selected focusing on large release productions with probable high attendance. Movie theaters were selected in the San Francisco, CA, area based on whether they screened both chosen movies and to represent the largest variety of theater proprietors. Measurements were analyzed in regard to differences between theaters, location within the theater, movie, as well as presentation time and day as indirect indicator of film attendance. The smart phone measurements demonstrated high accuracy and reliability. Overall, sound levels in movie theaters do not exceed safe exposure limits by occupational standards. Sound levels vary significantly across theaters and demonstrated statistically significant higher sound levels and exposures in the action movie compared to the children's movie. Sound levels decrease with distance from the screen. However, no influence on

  17. Toward Inverse Control of Physics-Based Sound Synthesis

    Science.gov (United States)

    Pfalz, A.; Berdahl, E.

    2017-05-01

    Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.
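
    A hypothetical sketch of the inverse-control idea is given below: an LSTM is trained to map frames of the synthesizer's output sound back to the gesture signal that produced them. The feature dimensions, network size and training data are placeholders; the paper's actual models and synthesizers are not reproduced.

```python
# Placeholder inverse-control model: sound features in, gesture trajectory out.
import torch
import torch.nn as nn

class InverseController(nn.Module):
    def __init__(self, n_sound_feats=64, n_gesture_dims=2, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_sound_feats, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_gesture_dims)

    def forward(self, sound_frames):              # (batch, time, n_sound_feats)
        h, _ = self.lstm(sound_frames)
        return self.out(h)                        # predicted gesture per frame

model = InverseController()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-ins for (synthesizer output features, input gestures)
sound = torch.randn(8, 100, 64)
gesture = torch.randn(8, 100, 2)
for _ in range(10):                               # tiny illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(sound), gesture)
    loss.backward()
    optimizer.step()
```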

  18. A framework for automatic heart sound analysis without segmentation

    Directory of Open Access Journals (Sweden)

    Tungpimolrut Kanokvate

    2011-02-01

    Full Text Available Abstract Background A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using auto-correlation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise up to 0.3 s in duration. Conclusion The proposed method showed promising results and high noise robustness for a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients and further evaluating the method on this new training set.
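
    The cycle-length step described above can be approximated as follows: take the envelope of the recording and read the cardiac cycle duration off the first prominent peak of its autocorrelation within a plausible heart-rate range. The envelope method and rate limits are assumptions, not the published implementation.

```python
# Approximate cardiac cycle length from the autocorrelation of the envelope.
import numpy as np
from scipy.signal import hilbert, find_peaks

def cardiac_cycle_length(pcg, fs, min_bpm=40, max_bpm=200):
    env = np.abs(hilbert(pcg))
    env -= env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]   # one-sided autocorrelation
    lo, hi = int(fs * 60 / max_bpm), int(fs * 60 / min_bpm)   # plausible lag range
    peaks, _ = find_peaks(ac[lo:hi])
    if len(peaks) == 0:
        return None
    best = peaks[np.argmax(ac[lo:hi][peaks])] + lo            # strongest periodicity
    return best / fs                                          # cycle length in seconds
```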

  19. A unified approach for the spatial enhancement of sound

    Science.gov (United States)

    Choi, Joung-Woo; Jang, Ji-Ho; Kim, Yang-Hann

    2005-09-01

    This paper aims to control the sound field spatially, so that the desired or target acoustic variable is enhanced within a zone where a listener is located. This is somewhat analogous to having manipulators that can draw sounds in any place. This also means that one can somehow see the controlled shape of sound in frequency or in real time. The former assures its practical applicability, for example, listening zone control for music. The latter provides a mean of analyzing sound field. With all these regards, a unified approach is proposed that can enhance selected acoustic variables using multiple sources. Three kinds of acoustic variables that have to do with magnitude and direction of sound field are formulated and enhanced. The first one, which has to do with the spatial control of acoustic potential energy, enables one to make a zone of loud sound over an area. Otherwise, one can control directional characteristic of sound field by controlling directional energy density, or one can enhance the magnitude and direction of sound at the same time by controlling acoustic intensity. Throughout various examples, it is shown that these acoustic variables can be controlled successfully by the proposed approach.

  20. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common for a physical system to resonate at a particular frequency that depends on physical parameters which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to use standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He II second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He II). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, second sound was used to probe turbulent decay in He II. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.

  1. Zero-crossing statistics for non-Markovian time series.

    Science.gov (United States)

    Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias

    2018-03-01

    In applications spanning from image analysis and speech recognition to energy dissipation in turbulence and time-to failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula that gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.
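
    A quick numerical check of the kind of quantity discussed here: for a stationary Gaussian AR(1) sequence the expected zero-crossing rate per step is arccos(ρ₁)/π, where ρ₁ is the lag-1 autocorrelation (the discrete-time counterpart of Rice's result). The sketch below compares that formula with simulation; it does not implement the paper's independent interval approximation.

```python
# Compare the simulated zero-crossing rate of an AR(1) sequence with arccos(rho_1)/pi.
import numpy as np

rng = np.random.default_rng(2)
a, n, reps = 0.7, 10_000, 200    # AR(1) coefficient (= lag-1 autocorrelation), length, runs
rates = []
for _ in range(reps):
    x = np.zeros(n)
    noise = rng.normal(size=n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + noise[t]
    crossings = np.count_nonzero(np.diff(np.sign(x)) != 0)
    rates.append(crossings / (n - 1))

print(f"simulated rate:    {np.mean(rates):.4f}")
print(f"arccos(rho_1)/pi:  {np.arccos(a) / np.pi:.4f}")
```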

  3. Second harmonic sound field after insertion of a biological tissue sample

    Science.gov (United States)

    Zhang, Dong; Gong, Xiu-Fen; Zhang, Bo

    2002-01-01

    Second harmonic sound field after inserting a biological tissue sample is investigated by theory and experiment. The sample is inserted perpendicular to the sound axis, whose acoustical properties are different from those of surrounding medium (distilled water). By using the superposition of Gaussian beams and the KZK equation in quasilinear and parabolic approximations, the second harmonic field after insertion of the sample can be derived analytically and expressed as a linear combination of self- and cross-interaction of the Gaussian beams. Egg white, egg yolk, porcine liver, and porcine fat are used as the samples and inserted in the sound field radiated from a 2 MHz uniformly excited focusing source. Axial normalized sound pressure curves of the second harmonic wave before and after inserting the sample are measured and compared with the theoretical results calculated with 10 items of Gaussian beam functions.

  4. Parallel-plate third sound waveguides with fixed and variable plate spacings for the study of fifth sound in superfluid helium

    International Nuclear Information System (INIS)

    Jelatis, G.J.

    1983-01-01

    Third sound in superfluid helium four films has been investigated using two parallel-plate waveguides. These investigations led to the observation of fifth sound, a new mode of sound propagation. Both waveguides consisted of two parallel pieces of vitreous quartz. The sound speed was obtained by measuring the time-of-flight of pulsed third sound over a known distance. Investigations from 1.0-1.7K were possible with the use of superconducting bolometers, which measure the temperature component of the third sound wave. Observations were initially made with a waveguide having a plate separation fixed at five microns. Adiabatic third sound was measured in the geometry. Isothermal third sound was also observed, using the usual, single-substrate technique. Fifth sound speeds, calculated from the two-fluid theory of helium and the speeds of the two forms of third sound, agreed in size and temperature dependence with theoretical predictions. Nevertheless, only equivocal observations of fifth sound were made. As a result, the film-substrate interaction was examined, and estimates of the Kapitza conductance were made. Assuming the dominance of the effects of this conductance over those due to the ECEs led to a new expression for fifth sound. A reanalysis of the initial data was made, which contained no adjustable parameters. The observation of fifth sound was seen to be consistent with the existence of an anomalously low boundary conductance

  5. Sounds scary? Lack of habituation following the presentation of novel sounds.

    Directory of Open Access Journals (Sweden)

    Tine A Biedenweg

    Full Text Available BACKGROUND: Animals typically show less habituation to biologically meaningful sounds than to novel signals. We might therefore expect that acoustic deterrents should be based on natural sounds. METHODOLOGY: We investigated responses by western grey kangaroos (Macropus fuliginosus) towards playback of natural sounds (alarm foot stomps and Australian raven (Corvus coronoides) calls) and artificial sounds (faux snake hiss and bull whip crack). We then increased the rate of presentation to examine whether animals would habituate. Finally, we varied the frequency of playback to investigate optimal rates of delivery. PRINCIPAL FINDINGS: Nine behaviors clustered into five Principal Components. PC factors 1 and 2 (animals alert or looking, or hopping and moving out of area) accounted for 36% of variance. PC factor 3 (eating cessation, taking flight, movement out of area) accounted for 13% of variance. Factors 4 and 5 (relaxing, grooming and walking; 12 and 11% of variation, respectively) discontinued upon playback. The whip crack was most evocative; eating was reduced from 75% of time spent prior to playback to 6% following playback (post alarm stomp: 32%, raven call: 49%, hiss: 75%). Additionally, 24% of individuals took flight and moved out of the area (50 m radius) in response to the whip crack (foot stomp: 0%, raven call: 8% and 4%, hiss: 6%). Increasing the rate of presentation (12×/min × 2 min) caused 71% of animals to move out of the area. CONCLUSIONS/SIGNIFICANCE: The bull whip crack, an artificial sound, was as effective as the alarm stomp at eliciting aversive behaviors. Kangaroos did not fully habituate despite hearing the signal up to 20×/min. The highest rates of playback did not elicit the greatest responses, suggesting that 'more is not always better'. Ultimately, by utilizing both artificial and biological sounds, predictability may be masked or offset, so that habituation is delayed and more effective deterrents may be produced.

  6. Patterns and risk factors associated with speech sounds and language disorders in pakistan

    International Nuclear Information System (INIS)

    Arshad, H.; Ghayas, M.S.; Madiha, A.

    2013-01-01

    To observe the patterns of speech sound and language disorders and to identify their associated risk factors. Background: Communication is the very essence of modern society, and communication disorders impact quality of life. Patterns and factors associated with speech sound and language impairments were explored, and their association with different environmental factors was examined. Methodology: The study included 200 patients, aged between two and sixteen years, who presented to the speech therapy clinic of the outpatient department of Mayo Hospital. A cross-sectional survey questionnaire assessed each patient's biodata, socioeconomic background, family history of communication disorders and bilingualism. It was a descriptive study conducted through a cross-sectional survey. Data were analysed using SPSS version 16. Results: Language disorders were relatively more prevalent in males than speech sound disorders. Bilingualism was found to have an insignificant effect on these disorders. It was concluded from this study that socioeconomic status and family history were significant risk factors. Conclusion: Gender, socioeconomic status and family history can act as risk factors for developing speech sound and language disorders. There is a grave need to understand the patterns of communication disorders in the light of Pakistani society and culture. Further studies are recommended to determine the risk factors and patterns of these impairments. (author)

  7. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques, we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  8. A stethoscope with wavelet separation of cardiac and respiratory sounds for real time telemedicine implemented on field-programmable gate array

    Science.gov (United States)

    Castro, Víctor M.; Muñoz, Nestor A.; Salazar, Antonio J.

    2015-01-01

    Auscultation is one of the most utilized physical examination procedures for listening to lung, heart and intestinal sounds during routine consults and emergencies. Heart and lung sounds overlap in the thorax. An algorithm was used to separate them based on the discrete wavelet transform with multi-resolution analysis, which decomposes the signal into approximations and details. The algorithm was implemented in software and in hardware to achieve real-time signal separation. The heart signal was found in detail eight and the lung signal in approximation six. The hardware separated the signals with a delay of 256 ms. Sending the wavelet decomposition data - instead of the full separated signal - allows telemedicine applications to function in real time over low-bandwidth communication channels.
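
    The separation idea can be sketched with PyWavelets: decompose the thoracic signal, then reconstruct the heart component from the level-8 detail and the lung component from the level-6 approximation, as stated in the abstract. The wavelet family ('db4') and the implicit sampling rate are assumptions; the FPGA implementation is not reproduced.

```python
# Reconstruct heart (detail level 8) and lung (approximation level 6) components.
import numpy as np
import pywt

def separate_heart_lung(thorax_signal, wavelet="db4"):
    # Heart: keep only the level-8 detail coefficients and reconstruct
    c8 = pywt.wavedec(thorax_signal, wavelet, level=8)      # [cA8, cD8, cD7, ..., cD1]
    heart_coeffs = [np.zeros_like(c) for c in c8]
    heart_coeffs[1] = c8[1]                                  # cD8
    heart = pywt.waverec(heart_coeffs, wavelet)

    # Lung: keep only the level-6 approximation coefficients and reconstruct
    c6 = pywt.wavedec(thorax_signal, wavelet, level=6)       # [cA6, cD6, ..., cD1]
    lung_coeffs = [np.zeros_like(c) for c in c6]
    lung_coeffs[0] = c6[0]                                   # cA6
    lung = pywt.waverec(lung_coeffs, wavelet)

    n = len(thorax_signal)                                   # waverec may pad by one sample
    return heart[:n], lung[:n]
```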

  9. Sound propagation in the steam generator - A theoretical approach

    International Nuclear Information System (INIS)

    Heckl, M.

    1990-01-01

    In order to assess the suitability of acoustic tomography in the steam generator, detailed information on its acoustic transmission properties is needed. We have developed a model which allows one to calculate the sound field produced by an incident wave in the steam generator. In our model we consider the steam generator as a medium consisting of a two-dimensional array of infinitely long cylindrical tubes. They are thin-walled, made of metal and are immersed in a liquid. Inside them there is a liquid or a gas. The incident wave is plane and perpendicular to the cylindrical tubes. When a sound wave crosses the tube bundle, each individual tube is exposed to a fluctuating pressure field and scatters sound which, together with the incident wave, influences the pressure at all surrounding tubes. The motion of an individual tube is given by differential equations (Heckl 1989) and the pressure difference between inside and outside. The interaction of a tube wall with the fluid inside and outside is treated by imposing suitable boundary conditions. Since the cylinder array is periodic, it can be considered as consisting of a large number of tube rows with a constant distance between adjacent cylinders within a row and constant spacing of the rows. The sound propagates from row to row, each time getting partly transmitted and partly reflected. A single row is similar to a diffraction grating known from optics. The transmission properties of one row or grating depend on the ratio between spacing and wavelength. If the wavelength is larger than the spacing, then the wave is transmitted only in the original direction. However, for wavelengths smaller than the spacing, the transmitted wave has components travelling in several discrete directions. The response of one row to sound scattered from a neighbouring row is calculated from Kirchhoff's theorem. An iteration scheme has been developed to take the reflection and transmission at several rows into account. 7 refs, figs and
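
    The row-to-row picture can be illustrated with a generic composition of complex reflection and transmission coefficients that accounts for the multiple reflections trapped between adjacent rows; this is a standard layered-medium calculation, not the author's Kirchhoff-theorem scheme, and the single-row coefficients below are toy values.

```python
# Generic composition of row reflection/transmission coefficients (normal incidence).
import numpy as np

def compose(a, b, phase):
    """Combine elements a, b = (r_front, r_back, t) separated by a one-way propagation phase."""
    ra_f, ra_b, ta = a
    rb_f, rb_b, tb = b
    p = np.exp(1j * phase)
    denom = 1 - ra_b * rb_f * p ** 2          # geometric series over inter-row reflections
    t = ta * tb * p / denom
    r_front = ra_f + ta ** 2 * rb_f * p ** 2 / denom
    r_back = rb_b + tb ** 2 * ra_b * p ** 2 / denom
    return (r_front, r_back, t)

# Example: ten identical rows, each with assumed (energy-conserving) reflection r and
# transmission t, spaced half a wavelength apart (one-way phase = pi between rows).
r, t = 0.3j, np.sqrt(1 - 0.09)
row = (r, r, t)
stack = row
for _ in range(9):
    stack = compose(stack, row, phase=np.pi)
print("total |T|^2 through 10 rows:", abs(stack[2]) ** 2)
```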

  10. Cognitive Bias for Learning Speech Sounds From a Continuous Signal Space Seems Nonlinguistic

    Directory of Open Access Journals (Sweden)

    Sabine van der Ham

    2015-10-01

    Full Text Available When learning language, humans have a tendency to produce more extreme distributions of speech sounds than those observed most frequently: In rapid, casual speech, vowel sounds are centralized, yet cross-linguistically, peripheral vowels occur almost universally. We investigate whether adults’ generalization behavior reveals selective pressure for communication when they learn skewed distributions of speech-like sounds from a continuous signal space. The domain-specific hypothesis predicts that the emergence of sound categories is driven by a cognitive bias to make these categories maximally distinct, resulting in more skewed distributions in participants’ reproductions. However, our participants showed more centered distributions, which goes against this hypothesis, indicating that there are no strong innate linguistic biases that affect learning these speech-like sounds. The centralization behavior can be explained by a lack of communicative pressure to maintain categories.

  11. Time measurements with a mobile device using sound

    Science.gov (United States)

    Wisman, Raymond F.; Spahn, Gabriel; Forinash, Kyle

    2018-05-01

    Data collection is a fundamental skill in science education, one that students generally practice in a controlled setting using equipment only available in the classroom laboratory. However, using smartphones with their built-in sensors and often free apps, many fundamental experiments can be performed outside the laboratory. Taking advantage of these tools often require creative approaches to data collection and exploring alternative strategies for experimental procedures. As examples, we present several experiments using smartphones and apps that record and analyze sound to measure a variety of physical properties.

  12. Sample path analysis and distributions of boundary crossing times

    CERN Document Server

    Zacks, Shelemyahu

    2017-01-01

    This monograph is focused on the derivations of exact distributions of first boundary crossing times of Poisson processes, compound Poisson processes, and more general renewal processes. The content is limited to the distributions of first boundary crossing times and their applications to various stochastic models. This book provides the theory and techniques for exact computations of distributions and moments of level crossing times. In addition, these techniques could replace simulations in many cases, thus providing more insight about the phenomena studied. This book takes a general approach to studying telegraph processes and is based on nearly thirty published papers by the author and collaborators over the past twenty-five years. No prior knowledge of advanced probability is required, making the book widely accessible to students and researchers in applied probability, operations research, applied physics, and applied mathematics.

  13. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound... that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology...

  14. Profile temperature, salinity, and hydrostatic pressure from CTD casts in McMurdo Sound from 2011-11-26 to 2011-12-03 (NCEI Accession 0131073)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Full-depth CTD profiles taken on along-sound and cross-sound transects of McMurdo Sound. Eleven stations with six independent sites were visited.

  15. Prototype electronic stethoscope vs. conventional stethoscope for auscultation of heart sounds.

    Science.gov (United States)

    Kelmenson, Daniel A; Heath, Janae K; Ball, Stephanie A; Kaafarani, Haytham M A; Baker, Elisabeth M; Yeh, Daniel D; Bittner, Edward A; Eikermann, Matthias; Lee, Jarone

    2014-08-01

    In an effort to decrease the spread of hospital-acquired infections, many hospitals currently use disposable plastic stethoscopes in patient rooms. As an alternative, this study examines a prototype electronic stethoscope that does not break the isolation barrier between clinician and patient and may also improve the diagnostic accuracy of the stethoscope exam. This study aimed to investigate whether the new prototype electronic stethoscope improved auscultation of heart sounds compared to the standard conventional isolation stethoscope. In a controlled, non-blinded, cross-over study, clinicians were randomized to identify heart sounds with both the prototype electronic stethoscope and a conventional stethoscope. The primary outcome was the score on a 10-question heart sound identification test. In total, 41 clinicians completed the study. Subjects performed significantly better in the identification of heart sounds when using the prototype electronic stethoscope (median = 9 [7-10] vs. 8 [6-9] points). Clinicians using the new prototype electronic stethoscope thus achieved greater accuracy in identification of heart sounds and also universally favoured the new device, compared to the conventional stethoscope.

  16. A Mixed-Methods Trial of Broad Band Noise and Nature Sounds for Tinnitus Therapy: Group and Individual Responses Modeled under the Adaptation Level Theory of Tinnitus.

    Science.gov (United States)

    Durai, Mithila; Searchfield, Grant D

    2017-01-01

    Objectives: A randomized cross-over trial in 18 participants tested the hypothesis that nature sounds, with unpredictable temporal characteristics and high valence would yield greater improvement in tinnitus than constant, emotionally neutral broadband noise. Study Design: The primary outcome measure was the Tinnitus Functional Index (TFI). Secondary measures were: loudness and annoyance ratings, loudness level matches, minimum masking levels, positive and negative emotionality, attention reaction and discrimination time, anxiety, depression and stress. Each sound was administered using MP3 players with earbuds for 8 continuous weeks, with a 3 week wash-out period before crossing over to the other treatment sound. Measurements were undertaken for each arm at sound fitting, 4 and 8 weeks after administration. Qualitative interviews were conducted at each of these appointments. Results: From a baseline TFI score of 41.3, sound therapy resulted in TFI scores at 8 weeks of 35.6; broadband noise resulted in significantly greater reduction (8.2 points) after 8 weeks of sound therapy use than nature sounds (3.2 points). The positive effect of sound on tinnitus was supported by secondary outcome measures of tinnitus, emotion, attention, and psychological state, but not interviews. Tinnitus loudness level match was higher for BBN at 8 weeks; while there was little change in loudness level matches for nature sounds. There was no change in minimum masking levels following sound therapy administration. Self-reported preference for one sound over another did not correlate with changes in tinnitus. Conclusions: Modeled under an adaptation level theory framework of tinnitus perception, the results indicate that the introduction of broadband noise shifts internal adaptation level weighting away from the tinnitus signal, reducing tinnitus magnitude. Nature sounds may modify the affective components of tinnitus via a secondary, residual pathway, but this appears to be less important

  17. Time course of the influence of musical expertise on the processing of vocal and musical sounds.

    Science.gov (United States)

    Rigoulot, S; Pell, M D; Armony, J L

    2015-04-02

    Previous functional magnetic resonance imaging (fMRI) studies have suggested that different cerebral regions preferentially process human voice and music. Yet, little is known on the temporal course of the brain processes that decode the category of sounds and how the expertise in one sound category can impact these processes. To address this question, we recorded the electroencephalogram (EEG) of 15 musicians and 18 non-musicians while they were listening to short musical excerpts (piano and violin) and vocal stimuli (speech and non-linguistic vocalizations). The task of the participants was to detect noise targets embedded within the stream of sounds. Event-related potentials revealed an early differentiation of sound category, within the first 100 ms after the onset of the sound, with mostly increased responses to musical sounds. Importantly, this effect was modulated by the musical background of participants, as musicians were more responsive to music sounds than non-musicians, consistent with the notion that musical training increases sensitivity to music. In late temporal windows, brain responses were enhanced in response to vocal stimuli, but musicians were still more responsive to music. These results shed new light on the temporal course of neural dynamics of auditory processing and reveal how it is impacted by the stimulus category and the expertise of participants. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. The NASA Sounding Rocket Program and space sciences

    Science.gov (United States)

    Gurkin, L. W.

    1992-01-01

    High altitude suborbital rockets (sounding rockets) have been extensively used for space science research in the post-World War II period; the NASA Sounding Rocket Program has been ongoing since the inception of the Agency and supports all space science disciplines. In recent years, sounding rockets have been utilized to provide a low gravity environment for materials processing research, particularly in the commercial sector. Sounding rockets offer unique features as a low gravity flight platform. Quick response and low cost combine to provide more frequent spaceflight opportunities. Suborbital spacecraft design practice has achieved a high level of sophistication which optimizes the limited available flight times. High data-rate telemetry, real-time ground up-link command and down-link video data are routinely used in sounding rocket payloads. Standard, off-the-shelf, active control systems are available which limit payload body rates such that the gravitational environment remains less than 10^-4 g during the control period. Operational launch vehicles are available which can provide up to 7 minutes of experiment time for experiment weights up to 270 kg. Standard payload recovery systems allow soft impact retrieval of payloads. When launched from White Sands Missile Range, New Mexico, payloads can be retrieved and returned to the launch site within hours.

  19. Enhancement of acoustical performance of hollow tube sound absorber

    International Nuclear Information System (INIS)

    Putra, Azma; Khair, Fazlin Abd; Nor, Mohd Jailani Mohd

    2016-01-01

    This paper presents the acoustical performance of hollow structures that utilize recycled lollipop sticks as acoustic absorbers. The hollow cross section of the structures is arranged facing the sound incidence. The effects of different stick lengths and of an air gap on the acoustical performance are studied. The absorption coefficient was measured using the impedance tube method. It is found that the sound absorption performance is improved by inserting natural kapok fiber into the voids between the hollow structures: both the absorption bandwidth and the absorption coefficient increase. For the test sample backed by a rigid surface, the best sound absorption is obtained with fibers inserted at both the front and back sides of the absorber; for the test sample with an air gap, it is obtained with fibers introduced only at the back side of the absorber.

  20. Enhancement of acoustical performance of hollow tube sound absorber

    Energy Technology Data Exchange (ETDEWEB)

    Putra, Azma, E-mail: azma.putra@utem.edu.my; Khair, Fazlin Abd, E-mail: fazlinabdkhair@student.utem.edu.my; Nor, Mohd Jailani Mohd, E-mail: jai@utem.edu.my [Centre for Advanced Research on Energy, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, Durian Tunggal Melaka 76100 Malaysia (Malaysia)

    2016-03-29

    This paper presents the acoustical performance of hollow structures that utilize recycled lollipop sticks as acoustic absorbers. The hollow cross section of the structures is arranged facing the sound incidence. The effects of different stick lengths and of an air gap on the acoustical performance are studied. The absorption coefficient was measured using the impedance tube method. It is found that the sound absorption performance is improved by inserting natural kapok fiber into the voids between the hollow structures: both the absorption bandwidth and the absorption coefficient increase. For the test sample backed by a rigid surface, the best sound absorption is obtained with fibers inserted at both the front and back sides of the absorber; for the test sample with an air gap, it is obtained with fibers introduced only at the back side of the absorber.

  1. Differential presence of anthropogenic compounds dissolved in the marine waters of Puget Sound, WA and Barkley Sound, BC.

    Science.gov (United States)

    Keil, Richard; Salemme, Keri; Forrest, Brittany; Neibauer, Jaqui; Logsdon, Miles

    2011-11-01

    Organic compounds were evaluated in March 2010 at 22 stations in Barkley Sound, Vancouver Island, Canada and at 66 locations in Puget Sound. Of 37 compounds, 15 were xenobiotics, 8 were determined to have an anthropogenic imprint over natural sources, and 13 were presumed to be of natural or mixed origin. The three most frequently detected compounds were salicylic acid, vanillin and thymol. The three most abundant compounds were diethylhexyl phthalate (DEHP), ethyl vanillin and benzaldehyde (∼600 ng L^-1 on average). Concentrations of xenobiotics were 10-100 times higher in Puget Sound relative to Barkley Sound. Three compound couplets are used to illustrate the influence of human activity on marine waters: vanillin and ethyl vanillin, salicylic acid and acetylsalicylic acid, and cinnamaldehyde and cinnamic acid. Ratios indicate that anthropogenic activities are the predominant source of these chemicals in Puget Sound. Published by Elsevier Ltd.

  2. Sound Synthesis of Objects Swinging through Air Using Physical Models

    Directory of Open Access Journals (Sweden)

    Rod Selfridge

    2017-11-01

    Full Text Available A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.
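
    One building block of such fluid-dynamics-based models is the Aeolian tone produced by vortex shedding behind a cylindrical object, whose frequency follows the Strouhal relation f = St*v/d with St of roughly 0.2 over a wide range of Reynolds numbers. The sketch below evaluates only that relation; it is a hedged illustration of the underlying physics, not the authors' synthesis model, and the numerical values are arbitrary.

        import math

        def aeolian_tone_hz(speed_ms, diameter_m, strouhal=0.2):
            """Approximate vortex-shedding (Aeolian tone) frequency for a
            cylindrical object moving through air: f = St * v / d."""
            return strouhal * speed_ms / diameter_m

        # Illustrative values only: a 10 mm rod swung at 15 m/s.
        print(f"{aeolian_tone_hz(15.0, 0.010):.0f} Hz")   # ~300 Hz whoosh component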

  3. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

    The method presented in this thesis combines ultrasound techniques with magnetic-resonance tomography (MRT). In absorbing media, an ultrasonic wave generates a static force in the sound-propagation direction. At sound intensities of a few W/cm^2 and sound frequencies in the lower MHz range, this force produces a tissue shift in the micrometer range. The tissue shift depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence from Siemens Healthcare was modified so that it measures the tissue shift (indirectly), encodes it as grey values, and presents it as a 2D image. The grey values allow the course of the sound beam in the tissue to be visualized, so that sound obstacles (changes of the sound impedance) can also be detected. From the acquired MRT images, spatial changes of the tissue parameters sound absorption and elasticity can be detected. The thesis presents measurements that show the feasibility and future prospects of this method, especially for breast-cancer diagnostics.

  4. Zero sound and quasiwave: separation in the magnetic field

    International Nuclear Information System (INIS)

    Bezuglyj, E.V.; Bojchuk, A.V.; Burma, N.G.; Fil', V.D.

    1995-01-01

    Theoretical and experimental results on the behavior of the longitudinal and transverse electron sound in a weak magnetic field are presented. It is shown theoretically that the effects of the magnetic field on zero sound velocity and ballistic transfer are opposite in sign and have sufficiently different dependences on the sample width, excitation frequency and relaxation time. This permits us to separate experimentally the Fermi-liquid and ballistic contributions in the electron sound signals. For the first time the ballistic transfer of the acoustic excitation by the quasiwave has been observed in zero magnetic field

  5. Acoustic analysis of trill sounds.

    Science.gov (United States)

    Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri

    2012-04-01

    In this paper, the acoustic-phonetic characteristics of steady apical trills--trill sounds produced by the periodic vibration of the apex of the tongue--are studied. Signal processing methods, namely, zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect the effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are glottal epochs, strength of impulses at the glottal epochs, and instantaneous fundamental frequency of the glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of vocal tract system during the production of trill sounds. Qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help spotting trills in continuous speech are discussed.
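
    Zero-frequency filtering, as published by Murty and Yegnanarayana, passes the differenced speech signal through a cascade of two resonators centred at 0 Hz and then removes the slowly varying trend; negative-to-positive zero crossings of the result approximate the glottal epochs. The sketch below is a minimal, hedged rendering of that idea for orientation only; the window length, the number of trend-removal passes, and the NumPy formulation are assumptions rather than the code used in this study.

        import numpy as np

        def zero_frequency_filter(s, fs, pitch_period_s=0.005):
            """Minimal zero-frequency filtering: difference the signal, pass it
            through two 0-Hz resonators (each equivalent to a double integration),
            then subtract a moving-average trend whose window is about 1.5 times
            the average pitch period. Negative-to-positive zero crossings of the
            result approximate the glottal epochs."""
            x = np.diff(s, prepend=s[0]).astype(float)
            y = x
            for _ in range(2):                      # cascade of two 0-Hz resonators
                y = np.cumsum(np.cumsum(y))         # each resonator ~ double integration
            w = max(3, int(1.5 * pitch_period_s * fs) | 1)   # odd window length
            win = np.ones(w) / w
            for _ in range(3):                      # repeated mean subtraction suppresses
                y = y - np.convolve(y, win, mode="same")     # the polynomial trend
            epochs = np.where((y[:-1] < 0) & (y[1:] >= 0))[0]
            return y, epochs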

  6. An experimental study on the aeroacoustics of wall-bounded flows : Sound emission from a wall-mounted cavity, coupling of time-resolved PIV and acoustic analogies

    NARCIS (Netherlands)

    Koschatzky, V.

    2011-01-01

    This thesis deals with the problem of noise. Sound is a constant presence in our lives. Most of the time it is something wanted and it serves a purpose, such as communication through speech or entertainment by listening to music. On the other hand, quite often sound is an annoying and unwanted

  7. Sound of Paddle Wheel on Sea Bass Growth

    Directory of Open Access Journals (Sweden)

    Jafri Din

    2009-04-01

    Full Text Available The objective of this research is to examine the effect of sound on sea bass (Cynoscion nobilis) in brackish water. A breeding farm (25x100 m, 2 m deep) with six paddle wheels that generate the sound was available for the research. The sound profile was measured to investigate the amplitude at various measurement points and depths using a Cetacean C304 hydrophone, and the hydrophone output was analyzed using the SpectraPlus software. For the second measurement, two cages of size 3x3 m were used as fish habitats: fish were placed in the edge cage (20), the center cage (20), and outside the cages (12500). The sound profile was measured by position (edge/center cage), time (morning/noon/evening), and measurement point, and the time series, frequency spectrum, and phase were analyzed. Fish growth was measured monthly in every cage. Fish in the cages grew linearly, while fish outside the cages grew exponentially; the size and weight of fish in both cages were lower than outside the cages. This research concludes that the sound has no significant effect on fish growth; limited mobility in searching for food, and stress, influence fish growth more than the sound does.

  8. Visualization of Broadband Sound Sources

    OpenAIRE

    Sukhanov Dmitry; Erzakova Nadezhda

    2016-01-01

    In this paper, a method for imaging wideband audio sources is proposed, based on 2D microphone array measurements of the sound field recorded simultaneously at all microphones. The designed microphone array consists of 160 microphones and digitizes signals at a frequency of 7200 Hz. The measured signals are processed using a special algorithm that makes it possible to obtain a flat image of wideband sound sources. It is shown experimentally that the visualization is not dependent on the...

  9. Seeing the sound after visual loss: functional MRI in acquired auditory-visual synesthesia.

    Science.gov (United States)

    Yong, Zixin; Hsieh, Po-Jang; Milea, Dan

    2017-02-01

    Acquired auditory-visual synesthesia (AVS) is a rare neurological sign, in which specific auditory stimulation triggers visual experience. In this study, we used event-related fMRI to explore the brain regions correlated with acquired monocular sound-induced phosphenes, which occurred 2 months after unilateral visual loss due to an ischemic optic neuropathy. During the fMRI session, 1-s pure tones at various pitches were presented to the patient, who was asked to report occurrence of sound-induced phosphenes by pressing one of the two buttons (yes/no). The brain activation during phosphene-experienced trials was contrasted with non-phosphene trials and compared to results obtained in one healthy control subject who underwent the same fMRI protocol. Our results suggest, for the first time, that acquired AVS occurring after visual impairment is associated with bilateral activation of primary and secondary visual cortex, possibly due to cross-wiring between auditory and visual sensory modalities.

  10. Sound Performance – Experience and Event

    DEFF Research Database (Denmark)

    Holmboe, Rasmus

    . The present paper draws on examples from my ongoing PhD-project, which is connected to Museum of Contemporary Art in Roskilde, Denmark, where I curate a sub-programme at ACTS 2014 – a festival for performative arts. The aim is to investigate, how sound performance can be presented and represented - in real....... In itself – and as an artistic material – sound is always already process. It involves the listener in a situation that is both filled with elusive presence and one that evokes rooted memory. At the same time sound is bodily, social and historical. It propagates between individuals and objects, it creates...

  11. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For a more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of laminated structures and handy ...

  12. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  13. Automatic Bowel Motility Evaluation Technique for Noncontact Sound Recordings

    Directory of Open Access Journals (Sweden)

    Ryunosuke Sato

    2018-06-01

    Full Text Available Information on bowel motility can be obtained via magnetic resonance imaging (MRI) and X-ray imaging. However, these approaches require expensive medical instruments and are unsuitable for frequent monitoring. Bowel sounds (BS) can be conveniently obtained using electronic stethoscopes and have recently been employed for the evaluation of bowel motility. More recently, our group proposed a novel method to evaluate bowel motility on the basis of BS acquired using a noncontact microphone. However, the method required manually detecting BS in the sound recordings, and manual segmentation is inconvenient and time consuming. To address this issue, herein, we propose a new method to automatically evaluate bowel motility for noncontact sound recordings. Using simulations for the sound recordings obtained from 20 human participants, we showed that the proposed method achieves an accuracy of approximately 90% in automatic bowel sound detection when the acoustic features, power-normalized cepstral coefficients, are used as inputs to artificial neural networks. Furthermore, we showed that bowel motility can be evaluated based on the three acoustic features in the time domain extracted by our method: BS per minute, signal-to-noise ratio, and sound-to-sound interval. The proposed method has the potential to contribute towards the development of noncontact evaluation methods for bowel motility.
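
    A hedged, simplified sketch of this kind of pipeline (frame-level acoustic features followed by a small neural-network classifier) is given below. It substitutes MFCCs for the power-normalized cepstral coefficients used in the study and invents the file names and label arrangement, so it illustrates the general approach rather than reproducing the authors' method.

        import numpy as np
        import librosa
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        def frame_features(path, sr=16000, n_mfcc=13):
            """Mean MFCC vector per 1-s frame (stand-in for the PNCC features)."""
            audio, _ = librosa.load(path, sr=sr)
            feats = []
            for start in range(0, len(audio) - sr, sr):
                mfcc = librosa.feature.mfcc(y=audio[start:start + sr], sr=sr, n_mfcc=n_mfcc)
                feats.append(mfcc.mean(axis=1))
            return np.array(feats)

        # Hypothetical data: frames from two recordings, 1 = bowel sound present.
        X = np.vstack([frame_features("recording_01.wav"),
                       frame_features("recording_02.wav")])
        y = np.load("frame_labels.npy")            # hypothetical per-frame labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))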

  14. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  15. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2009-09-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  16. Ultrahromatizm as a Sound Meditation

    Directory of Open Access Journals (Sweden)

    Zaytseva Marina

    2016-08-01

    Full Text Available The article scientifically substantiates insights into the theory and practice of using microchromatics in modern musical art, and defines the compositional and expressive possibilities of the microtonal system in the works of composers of the XXI century. It justifies the author's interpretation of the concept of “ultrahromatizm” as a principle of musical thinking connected with the conception of sound space as a space-time continuum. The paper identifies the correlation between the notions “microchromatism” and “ultrahromatizm”. If microchromatism is understood, first and foremost, as the technique of dividing sound into microparticles, ultrahromatizm is interpreted as a principle of musical and artistic consciousness: the focusing of musical consciousness on forming a specific model of sound meditation and understanding of the world.

  17. Cell type-specific suppression of mechanosensitive genes by audible sound stimulation.

    Science.gov (United States)

    Kumeta, Masahiro; Takahashi, Daiji; Takeyasu, Kunio; Yoshimura, Shige H

    2018-01-01

    Audible sound is a ubiquitous environmental factor in nature that transmits oscillatory compressional pressure through the substances. To investigate the property of the sound as a mechanical stimulus for cells, an experimental system was set up using 94.0 dB sound which transmits approximately 10 mPa pressure to the cultured cells. Based on research on mechanotransduction and ultrasound effects on cells, gene responses to the audible sound stimulation were analyzed by varying several sound parameters: frequency, wave form, composition, and exposure time. Real-time quantitative PCR analyses revealed a distinct suppressive effect for several mechanosensitive and ultrasound-sensitive genes that were triggered by sounds. The effect was clearly observed in a wave form- and pressure level-specific manner, rather than the frequency, and persisted for several hours. At least two mechanisms are likely to be involved in this sound response: transcriptional control and RNA degradation. ST2 stromal cells and C2C12 myoblasts exhibited a robust response, whereas NIH3T3 cells were partially and NB2a neuroblastoma cells were completely insensitive, suggesting a cell type-specific response to sound. These findings reveal a cell-level systematic response to audible sound and uncover novel relationships between life and sound.

  18. Examination of Cross-Scale Coupling During Auroral Events using RENU2 and ISINGLASS Sounding Rocket Data.

    Science.gov (United States)

    Kenward, D. R.; Lessard, M.; Lynch, K. A.; Hysell, D. L.; Hampton, D. L.; Michell, R.; Samara, M.; Varney, R. H.; Oksavik, K.; Clausen, L. B. N.; Hecht, J. H.; Clemmons, J. H.; Fritz, B.

    2017-12-01

    The RENU2 sounding rocket (launched from Andoya rocket range on December 13th, 2015) observed Poleward Moving Auroral Forms within the dayside cusp. The ISINGLASS rockets (launched from Poker Flat rocket range on February 22, 2017 and March 2, 2017) both observed aurora during a substorm event. Despite observing very different events, both campaigns witnessed a high degree of small scale structuring within the larger auroral boundary, including Alfvenic signatures. These observations suggest a method of coupling large-scale energy input to fine scale structures within aurorae. During RENU2, small (sub-km) scale drivers persist for long (10s of minutes) time scales and result in large scale ionospheric (thermal electron) and thermospheric response (neutral upwelling). ISINGLASS observations show small scale drivers, but with short (minute) time scales, with ionospheric response characterized by the flight's thermal electron instrument (ERPA). The comparison of the two flights provides an excellent opportunity to examine ionospheric and thermospheric response to small scale drivers over different integration times.

  19. Decay of reverberant sound in a spherical enclosure

    International Nuclear Information System (INIS)

    Carroll, M.M.; Chien, C.F.

    1977-01-01

    The assumption of diffuse reflection (Lambert's Law) leads to integral equations for the wall intensity in a reverberant sound field in the steady state and during decay. The latter equation, in the special case of a spherical enclosure with uniformly absorbent walls and uniform wall intensity, allows exponential decay with a decay time which agrees closely with the Norris--Eyring prediction. The sound-intensity and sound-energy density in the medium, during decay, are also calculated
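
    For reference, the Norris--Eyring prediction mentioned above gives the 60 dB reverberation time of an enclosure of volume V (m^3), total wall area S (m^2), and average absorption coefficient alpha-bar in the standard textbook form below (in LaTeX notation); it is quoted here only to make the comparison in the abstract concrete, not taken from the paper itself.

        T_{60} \;=\; \frac{0.161\, V}{-\,S \,\ln\!\left(1-\bar{\alpha}\right)}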

  20. Problems in nonlinear acoustics: Pulsed finite amplitude sound beams, nonlinear acoustic wave propagation in a liquid layer, nonlinear effects in asymmetric cylindrical sound beams, effects of absorption on the interaction of sound beams, and parametric receiving arrays

    Science.gov (United States)

    Hamilton, Mark F.

    1990-12-01

    This report discusses five projects, all of which involve basic theoretical research in nonlinear acoustics: (1) pulsed finite amplitude sound beams are studied with a recently developed time domain computer algorithm that solves the KZK nonlinear parabolic wave equation; (2) nonlinear acoustic wave propagation in a liquid layer is a study of harmonic generation and acoustic soliton formation in a liquid between a rigid and a free surface; (3) nonlinear effects in asymmetric cylindrical sound beams is a study of source asymmetries and scattering of sound by sound at high intensity; (4) effects of absorption on the interaction of sound beams is a completed study of the role of absorption in second harmonic generation and scattering of sound by sound; and (5) parametric receiving arrays is a completed study of parametric reception in a reverberant environment.
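
    For orientation, the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear parabolic wave equation referred to in project (1) is commonly written, in a retarded time frame tau = t - z/c_0 and in a standard textbook notation (sound pressure p, small-signal sound speed c_0, ambient density rho_0, sound diffusivity delta, coefficient of nonlinearity beta), as:

        \frac{\partial^2 p}{\partial z\,\partial\tau}
          \;=\; \frac{c_0}{2}\,\nabla_{\!\perp}^2 p
          \;+\; \frac{\delta}{2c_0^3}\,\frac{\partial^3 p}{\partial\tau^3}
          \;+\; \frac{\beta}{2\rho_0 c_0^3}\,\frac{\partial^2 p^2}{\partial\tau^2}

    The three right-hand terms account for diffraction, thermoviscous absorption, and quadratic nonlinearity, respectively; this is the conventional form of the equation, not necessarily the exact notation used in the report.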

  1. Behavioral response of manatees to variations in environmental sound levels

    Science.gov (United States)

    Miksis-Olds, Jennifer L.; Wagner, Tyler

    2011-01-01

    Florida manatees (Trichechus manatus latirostris) inhabit coastal regions because they feed on the aquatic vegetation that grows in shallow waters, which are the same areas where human activities are greatest. Noise produced from anthropogenic and natural sources has the potential to affect these animals by eliciting responses ranging from mild behavioral changes to extreme aversion. Sound levels were calculated from recordings made throughout behavioral observation periods. An information theoretic approach was used to investigate the relationship between behavior patterns and sound level. Results indicated that elevated sound levels affect manatee activity and are a function of behavioral state. The proportion of time manatees spent feeding and milling changed in response to sound level. When ambient sound levels were highest, more time was spent in the directed, goal-oriented behavior of feeding, whereas less time was spent engaged in undirected behavior such as milling. This work illustrates how shifts in activity of individual manatees may be useful parameters for identifying impacts of noise on manatees and might inform population level effects.

  2. Task-irrelevant novel sounds improve attentional performance in children with and without ADHD

    Directory of Open Access Journals (Sweden)

    Jana eTegelbeckers

    2016-01-01

    Full Text Available Task-irrelevant salient stimuli involuntarily capture attention and can lead to distraction from an ongoing task, especially in children with ADHD. However, there has been tentative evidence that the presentation of novel sounds can have beneficial effects on cognitive performance. In the present study, we aimed to investigate the influence of novel sounds compared to no sound and a repeatedly presented standard sound on attentional performance in children and adolescents with and without ADHD. We therefore had 32 patients with ADHD and 32 typically developing children and adolescents (8 to 13 years) execute a flanker task in which each trial was preceded either by a repeatedly presented standard sound (33%), an unrepeated novel sound (33%), or no auditory stimulation (33%). Task-irrelevant novel sounds facilitated attentional performance similarly in children with and without ADHD, as indicated by reduced omission error rates, reaction times, and reaction time variability without compromising performance accuracy. By contrast, standard sounds, while also reducing omission error rates and reaction times, led to increased commission error rates. Therefore, the beneficial effect of novel sounds exceeds cueing of the target display by potentially increased alerting and/or enhanced behavioral control.

  3. Influence of auditory spatial attention on cross-modal semantic priming effect: evidence from N400 effect.

    Science.gov (United States)

    Wang, Hongyan; Zhang, Gaoyan; Liu, Baolin

    2017-01-01

    Semantic priming is an important research topic in the field of cognitive neuroscience. Previous studies have shown that the uni-modal semantic priming effect can be modulated by attention. However, the influence of attention on cross-modal semantic priming is unclear. To investigate this issue, the present study combined a cross-modal semantic priming paradigm with an auditory spatial attention paradigm, presenting the visual pictures as the prime stimuli and the semantically related or unrelated sounds as the target stimuli. Event-related potentials results showed that when the target sound was attended to, the N400 effect was evoked. The N400 effect was also observed when the target sound was not attended to, demonstrating that the cross-modal semantic priming effect persists even though the target stimulus is not focused on. Further analyses revealed that the N400 effect evoked by the unattended sound was significantly lower than the effect evoked by the attended sound. This contrast provides new evidence that the cross-modal semantic priming effect can be modulated by attention.

  4. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  5. Letter-sound processing deficits in children with developmental dyslexia: An ERP study.

    Science.gov (United States)

    Moll, Kristina; Hasko, Sandra; Groth, Katharina; Bartling, Jürgen; Schulte-Körne, Gerd

    2016-04-01

    The time course during letter-sound processing was investigated in children with developmental dyslexia (DD) and typically developing (TD) children using electroencephalography. Thirty-eight children with DD and 25 TD children participated in a visual-auditory oddball paradigm. Event-related potentials (ERPs) elicited by standard and deviant stimuli in an early (100-190 ms) and late (560-750 ms) time window were analysed. In the early time window, ERPs elicited by the deviant stimulus were delayed and less left lateralized over fronto-temporal electrodes for children with DD compared to TD children. In the late time window, children with DD showed higher amplitudes extending more over right frontal electrodes. Longer latencies in the early time window and stronger right hemispheric activation in the late time window were associated with slower reading and naming speed. Additionally, stronger right hemispheric activation in the late time window correlated with poorer phonological awareness skills. Deficits in early stages of letter-sound processing influence later more explicit cognitive processes during letter-sound processing. Identifying the neurophysiological correlates of letter-sound processing and their relation to reading related skills provides insight into the degree of automaticity during letter-sound processing beyond behavioural measures of letter-sound-knowledge. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  6. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic sounds, and the presence or absence of an abnormality is judged from the magnitude of these synchronized components. A synchronized-component sampling means can remove resonance sounds and other acoustic sounds that are not generated synchronously with the rotation, based on the knowledge that the acoustic components generated in a normal state are a kind of resonance sound and are not precisely synchronized with the rotation speed. Abnormal sounds of a rotating body, on the other hand, are often caused by a compulsory force accompanying the rotation, so the abnormal sounds can be detected by extracting only the rotation-synchronized components. Since the normal acoustic components are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing is avoided and, as a result, the abnormal sound detection sensitivity is improved. Further, since the occurrence of abnormal sound is discriminated from the actually detected sounds, other frequency components that are predicted but not actually generated are not removed, which further improves the detection sensitivity. (N.H.)

  7. SOUND-SPEED INVERSION OF THE SUN USING A NONLOCAL STATISTICAL CONVECTION THEORY

    International Nuclear Information System (INIS)

    Zhang Chunguang; Deng Licai; Xiong Darun; Christensen-Dalsgaard, Jørgen

    2012-01-01

    Helioseismic inversions reveal a major discrepancy in sound speed between the Sun and the standard solar model just below the base of the solar convection zone. We demonstrate that this discrepancy is caused by the inherent shortcomings of the local mixing-length theory adopted in the standard solar model. Using a self-consistent nonlocal convection theory, we construct an envelope model of the Sun for sound-speed inversion. Our solar model has a very smooth transition from the convective envelope to the radiative interior, and the convective energy flux changes sign crossing the boundaries of the convection zone. It shows evident improvement over the standard solar model, with a significant reduction in the discrepancy in sound speed between the Sun and local convection models.

  8. Effect which environmental sound causes for memory retrieval

    OpenAIRE

    武良,徹文

    1999-01-01

    This study examines the effect of hearing a pleasant or an unpleasant sound on memory retrieval when the prime stimulus and the target stimulus are either related or unrelated. The purpose is to examine this influence using reaction time and response error rate as indices of task performance. The subjects were 50 students, ranging from first-year university students to graduate students. The subjects were distributed as follows: 21 to a pleasant sound condition group, 10 to an unpleasant sound condition gro...

  9. Seismic and Biological Sources of Ambient Ocean Sound

    Science.gov (United States)

    Freeman, Simon Eric

    Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional `image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. A two-dimensional `map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array in the third case. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed

  10. [A focused sound field measurement system by LabVIEW].

    Science.gov (United States)

    Jiang, Zhan; Bai, Jingfeng; Yu, Ying

    2014-05-01

    In this paper, to meet the requirements of focused sound field measurement, a measurement system was established on the LabVIEW virtual instrument platform. The system automatically searches for the focus position of the sound field and adjusts the scanning path according to the size of the focal region. Three-dimensional sound field scanning time was reduced from 888 hours with a uniform step to 9.25 hours with a variable step, improving the efficiency of the focused sound field measurement. There is a certain deviation between the measured and theoretically calculated results: the focal-plane -6 dB width differed by 3.691%, and the beam-axis -6 dB length differed by 12.937%.

  11. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
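
    As a hedged illustration of the peak-picking idea (not the authors' exact pipeline, which also used an auditory, cochlea-like spectrogram), the sketch below keeps only the strongest time-frequency bins of an ordinary Fourier spectrogram, at a rate of roughly a few features per second, and discards everything else. SciPy, the window parameters, and the global top-k selection are assumptions; the published algorithm picks local peaks.

        import numpy as np
        from scipy.signal import spectrogram

        def acoustic_sketch_mask(x, fs, peaks_per_second=10):
            """Return a spectrogram and a sparse mask keeping only the strongest
            bins, roughly `peaks_per_second` features per second of signal."""
            f, t, S = spectrogram(x, fs, nperseg=1024, noverlap=512)
            n_keep = max(1, int(peaks_per_second * (t[-1] - t[0])))
            flat = S.ravel()
            keep = np.argsort(flat)[-n_keep:]          # indices of the largest bins
            mask = np.zeros_like(flat, dtype=bool)
            mask[keep] = True
            return f, t, S, mask.reshape(S.shape)

        # Example: sketch of 2 s of a 440 Hz tone in noise (illustrative only).
        fs = 16000
        x = np.sin(2 * np.pi * 440 * np.arange(2 * fs) / fs)
        x = x + 0.1 * np.random.randn(2 * fs)
        f, t, S, mask = acoustic_sketch_mask(x, fs)
        print(mask.sum(), "features kept out of", mask.size)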

  12. Sound modes in hot nuclear matter

    International Nuclear Information System (INIS)

    Kolomietz, V. M.; Shlomo, S.

    2001-01-01

    The propagation of the isoscalar and isovector sound modes in a hot nuclear matter is considered. The approach is based on the collisional kinetic theory and takes into account the temperature and memory effects. It is shown that the sound velocity and the attenuation coefficient are significantly influenced by the Fermi surface distortion (FSD). The corresponding influence is much stronger for the isoscalar mode than for the isovector one. The memory effects cause a nonmonotonous behavior of the attenuation coefficient as a function of the relaxation time leading to a zero-to-first sound transition with increasing temperature. The mixing of both the isoscalar and the isovector sound modes in an asymmetric nuclear matter is evaluated. The condition for the bulk instability and the instability growth rate in the presence of the memory effects is studied. It is shown that both the FSD and the relaxation processes lead to a shift of the maximum of the instability growth rate to the longer-wavelength region

  13. The Voice of the Heart: Vowel-Like Sound in Pulmonary Artery Hypertension

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    2018-04-01

    Full Text Available Increased blood pressure in the pulmonary artery is referred to as pulmonary hypertension and is often linked to loud pulmonic valve closures. For the purpose of this paper, it was hypothesized that pulmonary circulation vibrations would create sounds similar to sounds created by vocal cords during speech, and that subjects with pulmonary artery hypertension (PAH) could have unique sound signatures across four auscultatory sites. Using a digital stethoscope, heart sounds were recorded at the cardiac apex, 2nd left intercostal space (2LICS), 2nd right intercostal space (2RICS), and 4th left intercostal space (4LICS) in subjects undergoing simultaneous cardiac catheterization. From the collected heart sounds, relative power of the frequency band, energy of the sinusoid formants, and entropy were extracted. PAH subjects were differentiated by applying linear discriminant analysis with leave-one-out cross-validation. The entropy of the first sinusoid formant decreased significantly in subjects with a mean pulmonary artery pressure (mPAp) ≥ 25 mmHg versus subjects with a mPAp < 25 mmHg, with a sensitivity of 84% and specificity of 88.57%, within a 10-s optimized window length for heart sounds recorded at the 2LICS. First sinusoid formant entropy reduction of heart sounds in PAH subjects suggests the existence of a vowel-like pattern. Pattern analysis revealed a unique sound signature, which could be used in non-invasive screening tools.
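
    The classification step described above, linear discriminant analysis with leave-one-out cross-validation on the extracted acoustic features, can be sketched with scikit-learn as follows. The feature matrix and labels are random placeholders, so the fragment shows only the validation scheme, not the study's data, features, or results.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut

        # Placeholder features: rows = subjects, columns = e.g. band power,
        # formant energies, entropy of the first sinusoid formant.
        X = np.random.randn(30, 3)                  # hypothetical feature matrix
        y = np.random.randint(0, 2, size=30)        # 1 = mPAp >= 25 mmHg, 0 = below

        correct = 0
        for train_idx, test_idx in LeaveOneOut().split(X):
            clf = LinearDiscriminantAnalysis().fit(X[train_idx], y[train_idx])
            correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

        print("leave-one-out accuracy:", correct / len(y))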

  14. Red light crossing, transportation time and attitudes in crossing with intelligent green light for pedestrians

    DEFF Research Database (Denmark)

    Øhlenschlæger, Rasmus; Tønning, Charlotte; Andersen, Camilla Sloth

    2018-01-01

    In order to increase mobility and promote modal shift to walking, intersections in the city of Aarhus, Denmark, have been equipped with intelligent management of green light for pedestrians. This allows adjustment of green time based on radar detection of pedestrians in the crossing...... and prolongation of the green time for the pedestrians if required. The effect is examined in a before/after study of a two-stage pedestrian crossing with a centre refuge island in an intersection of four-lane roads. The data consists of responses from an on-site questionnaire including 72+53 individuals and 266...

  15. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...

  16. The organization of words and environmental sounds in memory.

    Science.gov (United States)

    Hendrickson, Kristi; Walenski, Matthew; Friend, Margaret; Love, Tracy

    2015-03-01

    In the present study we used event-related potentials to compare the organization of linguistic and meaningful nonlinguistic sounds in memory. We examined N400 amplitudes as adults viewed pictures presented with words or environmental sounds that matched the picture (Match), that shared semantic features with the expected match (Near Violation), and that shared relatively few semantic features with the expected match (Far Violation). Words demonstrated incremental N400 amplitudes based on featural similarity from 300-700ms, such that both Near and Far Violations exhibited significant N400 effects, however Far Violations exhibited greater N400 effects than Near Violations. For environmental sounds, Far Violations but not Near Violations elicited significant N400 effects, in both early (300-400ms) and late (500-700ms) time windows, though a graded pattern similar to that of words was seen in the mid-latency time window (400-500ms). These results indicate that the organization of words and environmental sounds in memory is differentially influenced by featural similarity, with a consistently fine-grained graded structure for words but not sounds. Published by Elsevier Ltd.

  17. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  18. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can

  19. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  20. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Full Text Available Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  1. Damage Detection Based on Cross-Term Extraction from Bilinear Time-Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Ma Yuchao

    2014-01-01

    Full Text Available The bilinear time-frequency distribution of structural dynamic signals contains abundant damage information, which can provide effective support for structural damage identification. Signal time-frequency analysis methods are reviewed, and the characteristics of linear time-frequency distributions and bilinear time-frequency distributions, typified by the Wigner-Ville distribution, are compared. The existence of the cross-term and its application in structural damage detection are demonstrated. A method of extracting the dominant term is proposed that combines the short-time Fourier spectrum and the Wigner-Ville distribution; a two-dimensional time-frequency transformation matrix is then constructed, and finally the complete cross-term is extracted, whose distribution characteristics can be applied to structural damage identification. Through theoretical analysis, a model experiment, and numerical simulation of a girder structure, the change rate of the cross-term amplitude is validated as an indicator of damage location and degree. The effectiveness of the cross-term of the bilinear time-frequency distribution for damage detection is confirmed, providing an analytical damage-identification method usable in structural engineering.
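
    To make the role of the cross-term concrete: for a two-component signal, the Wigner-Ville distribution contains, besides the two auto-terms, an oscillatory interference term located midway between them, and it is this structure that carries the extra information exploited here. The sketch below is a generic textbook implementation of a discrete Wigner-Ville distribution applied to a toy two-tone signal; it is not the authors' combined STFT/Wigner-Ville procedure, and the signal parameters are arbitrary.

        import numpy as np
        from scipy.signal import hilbert

        def wigner_ville(x):
            """Discrete Wigner-Ville distribution of a real signal.
            Rows index frequency (0 .. fs/2), columns index time."""
            z = hilbert(x)                      # analytic signal
            N = len(z)
            W = np.zeros((N, N))
            for n in range(N):
                lag_max = min(n, N - 1 - n)
                tau = np.arange(-lag_max, lag_max + 1)
                kernel = np.zeros(N, dtype=complex)
                kernel[tau % N] = z[n + tau] * np.conj(z[n - tau])
                W[:, n] = np.real(np.fft.fft(kernel))
            return W

        # Two tones at 50 Hz and 150 Hz: auto-terms appear at those frequencies,
        # while the cross-term oscillates around the mid frequency (100 Hz).
        fs, T = 1000, 0.256
        t = np.arange(0, T, 1 / fs)
        x = np.cos(2 * np.pi * 50 * t) + np.cos(2 * np.pi * 150 * t)
        W = wigner_ville(x)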

  2. Theoretical study on the sound absorption of electrolytic solutions. I. Theoretical formulation

    Science.gov (United States)

    Yamaguchi, T.; Matsuoka, T.; Koda, S.

    2007-04-01

    A theory is formulated that describes the sound absorption of electrolytic solutions due to the relative motion of ions, including the formation of ion pairs. The theory is based on the Kubo-Green formula for the bulk viscosity. The time correlation function of the pressure is projected onto the bilinear product of the density modes of ions. The time development of the product of density modes is described by the diffusive limit of the generalized Langevin equation, and approximate expressions for the three- and four-body correlation functions required are given with the hypernetted-chain integral equation theory. Calculations on the aqueous solutions of model electrolytes are performed. It is demonstrated that the theory describes both the activated barrier crossing between contact and solvent-separated ion pairs and the Coulombic correlation between ions.

  3. Measurement of sound velocity profiles in fluids for process monitoring

    International Nuclear Information System (INIS)

    Wolf, M; Kühnicke, E; Lenz, M; Bock, M

    2012-01-01

    In ultrasonic measurements, the time of flight to the object interface is often the only information that is analysed. Conventionally, it is only possible to determine distances or sound velocities if the other value is known. The current paper deals with a novel method to measure the sound propagation path length and the sound velocity in media with moving scattering particles simultaneously. Since the focal position also depends on the sound velocity, it can be used as a second parameter. Via calibration curves it is possible to determine the focal position and sound velocity from the measured time of flight to the focus, which is correlated with the maximum of the averaged echo signal amplitude. To move the focal position along the acoustic axis, an annular array is used. This allows the sound velocity to be measured with local resolution, without any prior knowledge of the acoustic medium and without a reference reflector. In previous publications the functional efficiency of this method was shown for media with constant velocities. In this work the accuracy of these measurements is improved. Furthermore, first measurements and simulations are introduced for non-homogeneous media. To this end, an experimental set-up was created to generate a linear temperature gradient, which also causes a gradient in sound velocity.

  4. Variation of the Korotkoff Stethoscope Sounds During Blood Pressure Measurement: Analysis Using a Convolutional Neural Network.

    Science.gov (United States)

    Pan, Fan; He, Peiyu; Liu, Chengyu; Li, Taiyong; Murray, Alan; Zheng, Dingchang

    2017-11-01

    Korotkoff sounds are known to change their characteristics during blood pressure (BP) measurement, resulting in some uncertainties for systolic and diastolic pressure (SBP and DBP) determinations. The aim of this study was to assess the variation of Korotkoff sounds during BP measurement by examining all stethoscope sounds associated with each heartbeat from above systole to below diastole during linear cuff deflation. Three repeat BP measurements were taken from 140 healthy subjects (age 21 to 73 years; 62 female and 78 male) by a trained observer, giving 420 measurements. During the BP measurements, the cuff pressure and stethoscope signals were simultaneously recorded digitally to a computer for subsequent analysis. Heartbeats were identified from the oscillometric cuff pressure pulses. The presence of each beat was used to create a time window (1 s, 2000 samples) centered on the oscillometric pulse peak for extracting beat-by-beat stethoscope sounds. A time-frequency two-dimensional matrix was obtained for the stethoscope sounds associated with each beat, and all beats between the manually determined SBPs and DBPs were labeled as "Korotkoff." A convolutional neural network was then used to analyze consistency in sound patterns that were associated with Korotkoff sounds. A 10-fold cross-validation strategy was applied to the stethoscope sounds from all 140 subjects, with the data from ten groups of 14 subjects being analyzed separately, allowing consistency to be evaluated between groups. Next, within-subject variation of the Korotkoff sounds analyzed from the three repeats was quantified, separately for each stethoscope sound beat. There was consistency between folds with no significant differences between groups of 14 subjects (P = 0.09 to P = 0.62). Our results showed that 80.7% beats at SBP and 69.5% at DBP were analyzed as Korotkoff sounds, with significant differences between adjacent beats at systole (13.1%, P = 0.001) and diastole (17.4%, P < 0
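
    A central preprocessing step in this record is cutting a fixed window around each oscillometric pulse peak. A minimal sketch of that step is given below; the 1 s / 2000-sample window is taken from the abstract, while the de-trending and peak-detection parameters are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import find_peaks

def beat_windows(stethoscope, cuff_pressure, fs=2000, win_samples=2000):
    """Extract a fixed-length stethoscope window centred on each
    oscillometric pulse peak found in the cuff-pressure signal."""
    # Oscillometric pulses ride on the slowly deflating cuff pressure;
    # remove the deflation trend with a coarse moving average.
    trend = np.convolve(cuff_pressure, np.ones(fs) / fs, mode="same")
    pulses = cuff_pressure - trend
    peaks, _ = find_peaks(pulses, distance=int(0.4 * fs))  # beats >= 0.4 s apart
    half = win_samples // 2
    windows = []
    for p in peaks:
        if p - half >= 0 and p + half <= len(stethoscope):
            windows.append(stethoscope[p - half:p + half])
    return np.asarray(windows), peaks

if __name__ == "__main__":
    fs = 2000
    t = np.arange(30 * fs) / fs
    cuff = 180 - 2 * t + 1.5 * np.sin(2 * np.pi * 1.2 * t)   # deflation + toy pulses
    steth = 0.1 * np.random.randn(len(t))                     # placeholder sounds
    w, pk = beat_windows(steth, cuff, fs)
    print(w.shape)   # (n_beats, 2000)
```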

  5. Discrimination of musical instrument sounds resynthesized with simplified spectrotemporal parameters.

    Science.gov (United States)

    McAdams, S; Beauchamp, J W; Meneguzzi, S

    1999-02-01

    The perceptual salience of several outstanding features of quasiharmonic, time-variant spectra was investigated in musical instrument sounds. Spectral analyses of sounds from seven musical instruments (clarinet, flute, oboe, trumpet, violin, harpsichord, and marimba) produced time-varying harmonic amplitude and frequency data. Six basic data simplifications and five combinations of them were applied to the reference tones: amplitude-variation smoothing, coherent variation of amplitudes over time, spectral-envelope smoothing, forced harmonic-frequency variation, frequency-variation smoothing, and harmonic-frequency flattening. Listeners were asked to discriminate sounds resynthesized with simplified data from reference sounds resynthesized with the full data. Averaged over the seven instruments, the discrimination was very good for spectral envelope smoothing and amplitude envelope coherence, but was moderate to poor in decreasing order for forced harmonic frequency variation, frequency variation smoothing, frequency flattening, and amplitude variation smoothing. Discrimination of combinations of simplifications was equivalent to that of the most potent constituent simplification. Objective measurements were made on the spectral data for harmonic amplitude, harmonic frequency, and spectral centroid changes resulting from simplifications. These measures were found to correlate well with discrimination results, indicating that listeners have access to a relatively fine-grained sensory representation of musical instrument sounds.

  6. Heart sound segmentation of pediatric auscultations using wavelet analysis.

    Science.gov (United States)

    Castro, Ana; Vinhoza, Tiago T V; Mattos, Sandra S; Coimbra, Miguel T

    2013-01-01

    Auscultation is widely applied in clinical activity; nonetheless, sound interpretation is dependent on clinician training and experience. Heart sound features such as spatial loudness, relative amplitude, murmurs, and localization of each component may be indicative of pathology. In this study we propose a segmentation algorithm to extract the heart sound components (S1 and S2) based on their time and frequency characteristics. This algorithm takes advantage of the knowledge of the heart cycle times (systolic and diastolic periods) and of the spectral characteristics of each component, through wavelet analysis. Data collected in a clinical environment and annotated by a clinician were used to assess the algorithm's performance. Heart sound components were correctly identified in 99.5% of the annotated events. S1 and S2 detection rates were 90.9% and 93.3% respectively. The median difference between annotated and detected events was 33.9 ms.
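
    The segmentation described above combines wavelet analysis with knowledge of the heart cycle timing. The sketch below shows one simplified ingredient, a wavelet-band envelope of the phonocardiogram from which S1/S2 candidates can be peak-picked, using PyWavelets; the wavelet family, kept decomposition levels, and smoothing are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def heart_sound_envelope(pcg, fs, wavelet="db6", level=5, keep_levels=(3, 4)):
    """Reconstruct the phonocardiogram from a few detail bands (roughly the
    S1/S2 frequency range) and return a smoothed envelope."""
    coeffs = pywt.wavedec(pcg, wavelet, level=level)
    # coeffs = [cA_level, cD_level, ..., cD_1]; zero all but the kept detail bands
    kept = [coeffs[0] * 0]
    for i, d in enumerate(coeffs[1:], start=1):
        band = level - i + 1          # detail level this entry corresponds to
        kept.append(d if band in keep_levels else d * 0)
    band_signal = pywt.waverec(kept, wavelet)[: len(pcg)]
    env = np.abs(band_signal)
    k = max(1, int(0.05 * fs))        # 50 ms moving-average smoothing
    return np.convolve(env, np.ones(k) / k, mode="same")

if __name__ == "__main__":
    fs = 2000
    pcg = 0.02 * np.random.randn(4 * fs)          # placeholder recording
    env = heart_sound_envelope(pcg, fs)
    peaks, _ = find_peaks(env, distance=int(0.2 * fs))
    print(len(peaks), "candidate S1/S2 events")
```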

  7. A three-dimensional integrated nanogenerator for effectively harvesting sound energy from the environment

    Science.gov (United States)

    Liu, Jinmei; Cui, Nuanyang; Gu, Long; Chen, Xiaobo; Bai, Suo; Zheng, Youbin; Hu, Caixia; Qin, Yong

    2016-02-01

    An integrated triboelectric nanogenerator (ITNG) with a three-dimensional structure benefiting sound propagation and adsorption is demonstrated to more effectively harvest sound energy with improved output performance. With different multifunctional integrated layers working harmonically, it could generate a short-circuit current up to 2.1 mA, an open-circuit voltage up to 232 V and the maximum charging rate can reach 453 μC s-1 for a 1 mF capacitor, which are 4.6 times, 2.6 times and 7.4 times the highest reported values, respectively. Further study shows that the ITNG works well under sound in a wide range of sound intensity levels (SILs) and frequencies, and its output is sensitive to the SIL and frequency of the sound, which reveals that the ITNG can act as a self-powered active sensor for real-time noise surveillance and health care. Moreover, this generator can be used to directly power the Fe(OH)3 sol electrophoresis and shows great potential as a wireless power supply in the electrochemical industry.

  8. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  9. Phonological Encoding in Speech-Sound Disorder: Evidence from a Cross-Modal Priming Experiment

    Science.gov (United States)

    Munson, Benjamin; Krause, Miriam O. P.

    2017-01-01

    Background: Psycholinguistic models of language production provide a framework for determining the locus of language breakdown that leads to speech-sound disorder (SSD) in children. Aims: To examine whether children with SSD differ from their age-matched peers with typical speech and language development (TD) in the ability phonologically to…

  10. A CROSS-COUNTRY ANALYSIS OF THE BANKS’ FINANCIAL SOUNDNESS: THE CASE OF THE CEE-3 COUNTRIES

    Directory of Open Access Journals (Sweden)

    Sargu Alina Camelia

    2013-07-01

    Full Text Available The European integration process has a direct impact on all the components of the macroeconomic environment. The existence of a well-functioning and sound banking sector is of great importance for the integration process, as the European Union economy is financed especially through this channel. The banking sectors of the new EU member countries have undergone tremendous changes in the last decade, both from an ownership and from a business strategy point of view, and these changes have had a direct impact on their financial soundness. Thus, the aim of our research is to empirically examine the financial soundness of the banks operating in Bulgaria, the Czech Republic and Romania, three EU member countries from Central and Eastern Europe (CEE-3). In order to achieve this we have employed a combined quantitative analysis based on the CAMELS framework (namely Capital Adequacy, Asset quality, Management soundness, Earnings, Liquidity, Sensitivity to market risk) and the Z-score, thus being able to underline simultaneously the financial soundness and the possibility of default for the banks in our sample. The analysed period is 2004-2011, providing an evaluation of the impact that EU accession and the global financial crisis had on the financial soundness of the analysed banks. Our sample is composed of 40 commercial banks that operate in Bulgaria, the Czech Republic and Romania and that together hold over 75% of total banking assets, making this study one of the most comprehensive undertaken to date. The data employed in our research are obtained from the Bureau Van Dijk Bankscope database and the annual financial statements of the banks in our sample. Through its original dual approach, the paper contributes to the academic debate by providing not only insight into the financial soundness of the banks operating in the CEE-3 countries but also underlining their financial strength through the usage of the Z
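
    The Z-score used alongside the CAMELS framework above is a standard distance-to-default measure: Z = (mean ROA + equity/assets) / σ(ROA), so higher values indicate lower insolvency risk. A minimal sketch of the computation, with made-up numbers rather than the study's data, follows.

```python
import numpy as np

def bank_z_score(roa_series, equity_to_assets):
    """Z-score = (mean ROA + equity/assets) / std(ROA). Higher values mean
    more standard deviations of ROA losses are needed to exhaust the bank's
    capital, i.e. lower default risk."""
    roa = np.asarray(roa_series, dtype=float)
    return (roa.mean() + equity_to_assets) / roa.std(ddof=1)

if __name__ == "__main__":
    # Hypothetical bank: yearly return on assets for 2004-2011 and an
    # average equity-to-assets ratio of 9 %.
    roa_2004_2011 = [0.012, 0.015, 0.017, 0.014, -0.002, 0.004, 0.006, 0.008]
    print(round(bank_z_score(roa_2004_2011, 0.09), 2))
```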

  11. Recognition and characterization of unstructured environmental sounds

    Science.gov (United States)

    Chu, Selina

    2011-12-01

    be used for realistic environmental sound. Natural unstructured environmental sounds contain a large variety of sounds, which are in fact noise-like and are not effectively modeled by Mel-frequency cepstral coefficients (MFCCs) or other commonly used audio features, e.g. energy, zero-crossing rate, etc. Due to the lack of appropriate features suitable for environmental audio, and to achieve a more effective representation, I proposed a specialized feature extraction algorithm for environmental sounds that utilizes the matching pursuit (MP) algorithm to learn the inherent structure of each type of sound, which we called MP-features. MP-features have been shown to capture and represent sounds from different sources and different ranges where frequency-domain features (e.g., MFCCs) fail, and can be advantageous when combined with MFCCs to improve the overall performance. The third component leads to our investigation of modeling and detecting the background audio. One of the goals of this research is to characterize an environment. Since many events blend into the background, I wanted to look for a way to achieve a general model for any particular environment. Once we have an idea of the background, it will enable us to identify foreground events even if we haven't seen these events before. Therefore, the next step is to investigate learning the audio background model for each environment type, despite the occurrences of different foreground events. In this work, I presented a framework for robust audio background modeling, which includes learning models for prediction, data knowledge and persistent characteristics of the environment. This approach has the ability to model the background and detect foreground events, as well as the ability to verify whether the predicted background is indeed the background or a foreground event that persists for a longer period of time. In this work, I also investigated the use of a semi-supervised learning technique to
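
    The MP-features mentioned above are built on a matching pursuit decomposition. The following sketch shows plain matching pursuit against a small Gabor-style dictionary; the dictionary construction and stopping rule are illustrative assumptions and do not reproduce the author's MP-feature extraction.

```python
import numpy as np

def gabor_dictionary(n, scales=(32, 64, 128), freqs=np.linspace(0.01, 0.45, 20)):
    """Build a small dictionary of unit-norm Gabor-like atoms (windowed cosines)."""
    atoms = []
    for s in scales:
        for f in freqs:
            for start in range(0, n - s + 1, s // 2):
                a = np.zeros(n)
                t = np.arange(s)
                a[start:start + s] = np.hanning(s) * np.cos(2 * np.pi * f * t)
                norm = np.linalg.norm(a)
                if norm > 0:
                    atoms.append(a / norm)
    return np.array(atoms)

def matching_pursuit(x, dictionary, n_atoms=10):
    """Greedy MP: repeatedly pick the atom with the largest inner product
    with the residual; return (atom index, coefficient) pairs and the residual."""
    residual = x.astype(float).copy()
    decomposition = []
    for _ in range(n_atoms):
        corr = dictionary @ residual
        k = int(np.argmax(np.abs(corr)))
        decomposition.append((k, corr[k]))
        residual -= corr[k] * dictionary[k]
    return decomposition, residual

if __name__ == "__main__":
    n = 512
    rng = np.random.default_rng(0)
    x = np.sin(2 * np.pi * 0.1 * np.arange(n)) + 0.1 * rng.standard_normal(n)
    D = gabor_dictionary(n)
    dec, res = matching_pursuit(x, D, n_atoms=5)
    print("selected atoms:", [k for k, _ in dec],
          "residual energy:", round(float(np.sum(res ** 2)), 2))
```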

  12. Combined multibeam and bathymetry data from Rhode Island Sound and Block Island Sound: a regional perspective

    Science.gov (United States)

    Poppe, Lawrence J.; McMullen, Katherine Y.; Danforth, William W.; Blankenship, Mark R.; Clos, Andrew R.; Glomb, Kimberly A.; Lewit, Peter G.; Nadeau, Megan A.; Wood, Douglas A.; Parker, Castleton E.

    2014-01-01

    Detailed bathymetric maps of the sea floor in Rhode Island and Block Island Sounds are of great interest to the New York, Rhode Island, and Massachusetts research and management communities because of this area's ecological, recreational, and commercial importance. Geologically interpreted digital terrain models from individual surveys provide important benthic environmental information, yet many applications of this information require a geographically broader perspective. For example, individual surveys are of limited use for the planning and construction of cross-sound infrastructure, such as cables and pipelines, or for the testing of regional circulation models. To address this need, we integrated 14 contiguous multibeam bathymetric datasets that were produced by the National Oceanic and Atmospheric Administration during charting operations into one digital terrain model that covers much of Block Island Sound and extends eastward across Rhode Island Sound. The new dataset, which covers over 1244 square kilometers, is adjusted to mean lower low water, gridded to 4-meter resolution, and provided in Universal Transverse Mercator Zone 19, North American Datum of 1983 and geographic World Geodetic Survey of 1984 projections. This resolution is adequate for sea-floor feature and process interpretation but is small enough to be queried and manipulated with standard Geographic Information System programs and to allow for future growth. Natural features visible in the data include boulder lag deposits of winnowed Pleistocene strata, sand-wave fields, and scour depressions that reflect the strength of oscillating tidal currents and scour by storm-induced waves. Bedform asymmetry allows interpretations of net sediment transport. Anthropogenic features visible in the data include shipwrecks and dredged channels. Together the merged data reveal a larger, more continuous perspective of bathymetric topography than previously available, providing a fundamental framework for

  13. Phonemic versus allophonic status modulates early brain responses to language sounds: an MEG/ERF study

    DEFF Research Database (Denmark)

    Nielsen, Andreas Højlund; Gebauer, Line; Mcgregor, William

    allophonic sound contrasts. So far this has only been attested between languages. In the present study we wished to investigate this effect within the same language: Does the same sound contrast that is phonemic in one environment, but allophonic in another, elicit different MMNm responses in native...... ‘that’). This allowed us to manipulate the phonemic/allophonic status of exactly the same sound contrast (/t/-/d/) by presenting it in different immediate phonetic contexts (preceding a vowel (CV) versus following a vowel (VC)), in order to investigate the auditory event-related fields of native Danish...... listeners to a sound contrast that is both phonemic and allophonic within Danish. Methods: Relevant syllables were recorded by a male native Danish speaker. The stimuli were then created by cross-splicing the sounds so that the same vowel [æ] was used for all syllables, and the same [t] and [d] were used...

  14. 12 CFR 30.4 - Filing of safety and soundness compliance plan.

    Science.gov (United States)

    2010-01-01

    ... steps the bank will take to correct the deficiency and the time within which those steps will be taken. (c) Review of safety and soundness compliance plans. Within 30 days after receiving a safety and... AND SOUNDNESS STANDARDS § 30.4 Filing of safety and soundness compliance plan. (a) Schedule for filing...

  15. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the everyday understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  16. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  17. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    properties can be modified by sound absorption, refraction, and interference from multi paths caused by reflections.The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms to influence...... communication sounds for airborne acoustics and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals....

  18. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related...... to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy...... of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences....

  19. The organization of words and environmental sounds in memory

    Science.gov (United States)

    Hendrickson, Kristi; Walenski, Matthew; Friend, Margaret; Love, Tracy

    2015-01-01

    In the present study we used event-related potentials to compare the organization of linguistic and meaningful nonlinguistic sounds in memory. We examined N400 amplitudes as adults viewed pictures presented with words or environmental sounds that matched the picture (Match), that shared semantic features with the expected match (Near Violation), and that shared relatively few semantic features with the expected match (Far Violation). Words demonstrated incremental N400 amplitudes based on featural similarity from 300–700 ms, such that both Near and Far Violations exhibited significant N400 effects; however, Far Violations exhibited greater N400 effects than Near Violations. For environmental sounds, Far Violations but not Near Violations elicited significant N400 effects, in both early (300–400 ms) and late (500–700 ms) time windows, though a graded pattern similar to that of words was seen in the midlatency time window (400–500 ms). These results indicate that the organization of words and environmental sounds in memory is differentially influenced by featural similarity, with a consistently fine-grained graded structure for words but not sounds. PMID:25624059

  20. Cascaded Amplitude Modulations in Sound Texture Perception

    Directory of Open Access Journals (Sweden)

    Richard McWalter

    2017-09-01

    Full Text Available Sound textures, such as crackling fire or chirping crickets, represent a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that the perception of texture is mediated by time-averaged summary statistics measured from early auditory representations. In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as “beating” in the envelope-frequency domain. We developed an auditory texture model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures—stimuli generated using time-averaged statistics measured from real-world textures. In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model deviants that lacked second-order modulation rate sensitivity. Lastly, the discriminability of textures that included second-order amplitude modulations appeared to be perceived using a time-averaging process. Overall, our results demonstrate that the inclusion of second-order modulation analysis generates improvements in the perceived quality of synthetic textures compared to the first-order modulation analysis considered in previous approaches.

  1. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  2. Memory for product sounds: the effect of sound and label type.

    Science.gov (United States)

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between auditory, visual, and the semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative for memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinders the memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that the memory performances for product sounds are task-dependent.

  3. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false In Prince William Sound: Prince William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST....1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  4. Numerical investigations on interactions between tangles of quantized vortices and second sound

    International Nuclear Information System (INIS)

    Penz, H.; Aarts, R.; de Waele, F.

    1995-01-01

    The reconnecting vortex-tangle model is used to investigate the interaction of tangles of quantized vortices with second sound. This interaction can be expressed in terms of an effective line-length density, which depends on the direction of the second-sound wave. By comparing the effective line-length densities in various directions the tangle structure can be examined. Simulations were done for flow channels with square and circular cross sections as well as for slits. The results show that in all these cases the tangles are inhomogeneous in direction as well as in space. The calculated inhomogeneities are in agreement with experiment

  5. Locating and classification of structure-borne sound occurrence using wavelet transformation

    International Nuclear Information System (INIS)

    Winterstein, Martin; Thurnreiter, Martina

    2011-01-01

    For the surveillance of nuclear facilities with respect to detached or loose parts within the pressure boundary, structure-borne sound detector systems are used. The impact of a loose part on the wall causes an energy transfer to the wall that is measured as a so-called singular sound event. The run-time differences of the sound signals allow a rough localization of the loose part. The authors performed a finite-element-based simulation of structure-borne sound measurements using real geometries. New knowledge on sound wave propagation, signal analysis and processing, neuronal networks and hidden Markov models was considered. Using the wavelet transformation it is possible to improve the localization of structure-borne sound events.
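
    In its simplest one-dimensional form, the rough localization from run-time differences mentioned above amounts to triangulating the impact point from arrival times at two sensors. A toy sketch follows; the geometry and wave speed are assumptions, and real structure-borne waves are dispersive, which is why the record's simulation and wavelet-based approach are needed.

```python
def locate_impact_1d(dt, sensor_distance, wave_speed):
    """Estimate the impact position between two structure-borne sound sensors
    from the arrival-time difference dt = t_sensor2 - t_sensor1.
    Returns the distance from sensor 1 along the line joining the sensors."""
    # Arrival times: t1 = x / c, t2 = (L - x) / c  ->  dt = (L - 2x) / c
    return (sensor_distance - wave_speed * dt) / 2.0

if __name__ == "__main__":
    L = 4.0          # m between sensors (assumed)
    c = 3000.0       # m/s structure-borne wave speed (assumed, dispersive in reality)
    dt = -0.0004     # s: sensor 2 heard the impact 0.4 ms before sensor 1
    print(f"impact at {locate_impact_1d(dt, L, c):.2f} m from sensor 1")
```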

  6. Sound segregation via embedded repetition is robust to inattention.

    Science.gov (United States)

    Masutomi, Keiko; Barascud, Nicolas; Kashino, Makio; McDermott, Josh H; Chait, Maria

    2016-03-01

    The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process is dependent on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual task design in which stimuli to be segregated were presented along with stimuli for a "decoy" task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention. (c) 2016 APA, all rights reserved).

  7. Efficient sound barrier calculations with the BEM

    DEFF Research Database (Denmark)

    Juhl, Peter Møller; Cutanda Henriquez, Vicente

    2018-01-01

    The Boundary Element Method has been used for calculating the effect of introducing sound barriers for some decades. The method has also been used for optimizing the shape of the barrier and in some cases the effects of introducing sound absorption. However, numerical calculations are still quite...... time consuming and inconvenient to use, which limits their use for many practical problems. Moreover, measurements are mostly taken in one-third or full octave bands, as opposed to the numerical computations at specific frequencies, which then have to be conducted with a fine frequency resolution.... This paper addresses some of the challenges and possible solutions for developing BEM into a more efficient tool for sound barrier calculations.

  8. Concepts for evaluation of sound insulation of dwellings - from chaos to consensus?

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Rindel, Jens Holger

    2005-01-01

    Legal sound insulation requirements have existed for more than 50 years in some countries, and single-number quantities for the evaluation of sound insulation have existed nearly as long. However, the concepts have changed considerably over time from simple arithmetic averaging of frequency bands......ments and classification schemes revealed significant differences between concepts. The paper summarizes the history of concepts, the disadvantages of the present chaos and the benefits of consensus concerning concepts for airborne and impact sound insulation between dwellings and airborne sound insulation of facades...... with a trend towards light-weight constructions are contradictory and challenging. This calls for an exchange of data and experience, implying a need for harmonized concepts, including the use of spectrum adaptation terms. The paper will provide input for future discussions in EAA TC-RBA WG4: "Sound insulation...

  9. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  10. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: First, the filtered responses should generate...... an acoustic separation between the control regions. Secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...

  11. A cross-language study of the speech sounds in Yorùbá and Malay: Implications for Second Language Acquisition

    Directory of Open Access Journals (Sweden)

    Boluwaji Oshodi

    2013-07-01

    Full Text Available Acquiring a language begins with the knowledge of its sound system, which falls under the branch of linguistics known as phonetics. The knowledge of the sound system is very important for prospective learners, particularly L2 learners whose L1 exhibits different sounds and features from the target L2, because this knowledge is vital in order to internalise the correct pronunciation of words. This study examined and contrasted the sound system of Yorùbá, a Niger-Congo language spoken in Nigeria, with that of Malay (Peninsular variety), an Austronesian language spoken in Malaysia, with emphasis on the areas of difference. The data for this study were collected from ten participants: five female native Malay speakers who are married to Yorùbá native speakers but live in Malaysia, and five Yorùbá native speakers who reside in Nigeria. The findings revealed that speakers on both sides have difficulties with sounds and features of the L2 that are not attested in their L1, and tend to substitute them with similar ones from their L1 through transfer. This confirms that asymmetry between the sound systems of the L1 and L2 is a major source of error in L2 acquisition.

  12. Can road traffic mask sound from wind turbines? Response to wind turbine sound at different levels of road traffic sound

    International Nuclear Information System (INIS)

    Pedersen, Eja; Berg, Frits van den; Bakker, Roel; Bouma, Jelte

    2010-01-01

    Wind turbines are favoured in the switch-over to renewable energy. Suitable sites for further developments could be difficult to find, as the sound emitted from the rotor blades calls for a sufficient distance to residents to avoid negative effects. The aim of this study was to explore whether road traffic sound could mask wind turbine sound or, in contrast, increase annoyance due to wind turbine noise. Annoyance of road traffic and wind turbine noise was measured in the WINDFARMperception survey in the Netherlands in 2007 (n=725) and related to calculated levels of sound. The presence of road traffic sound did not in general decrease annoyance with wind turbine noise, except when levels of wind turbine sound were moderate (35-40 dB(A) Lden) and the road traffic sound level exceeded that level by at least 20 dB(A). Annoyance with both noises was intercorrelated, but this correlation was probably due to the influence of individual factors. Furthermore, visibility and attitude towards wind turbines were significantly related to noise annoyance of modern wind turbines. The results can be used for the selection of suitable sites, possibly favouring already noise-exposed areas if wind turbine sound levels are sufficiently low.

  13. Visualization of Broadband Sound Sources

    Directory of Open Access Journals (Sweden)

    Sukhanov Dmitry

    2016-01-01

    Full Text Available In this paper, a method for imaging wideband audio sources is proposed, based on 2D microphone-array measurements of the sound field taken simultaneously at all microphones. The designed microphone array consists of 160 microphones, allowing signals to be digitized at a sampling frequency of 7200 Hz. The measured signals are processed using a special algorithm that makes it possible to obtain a flat image of wideband sound sources. It is shown experimentally that the visualization does not depend on the waveform but is determined by the bandwidth. The developed system can visualize sources with a resolution of up to 10 cm.
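
    The record does not detail the imaging algorithm, but the classic way to form an image of a broadband source from simultaneous microphone-array recordings is delay-and-sum beamforming: for each candidate source position, align the channels by their propagation delays, sum them, and map the resulting power. The sketch below illustrates that idea for a small 2D array; the array geometry, steering grid, and source position are assumptions, not the authors' 160-microphone setup.

```python
import numpy as np

def delay_and_sum_image(signals, mic_xy, grid_xy, fs, c=343.0, z=1.0):
    """Broadband delay-and-sum: for every grid point, delay each channel by
    the propagation time from that point to the microphone (integer-sample
    approximation), sum the aligned channels and store the summed power."""
    n_mics, n_samp = signals.shape
    image = np.zeros(len(grid_xy))
    for g, (gx, gy) in enumerate(grid_xy):
        dists = np.sqrt((mic_xy[:, 0] - gx) ** 2 + (mic_xy[:, 1] - gy) ** 2 + z ** 2)
        delays = np.round((dists - dists.min()) / c * fs).astype(int)
        aligned = np.zeros(n_samp)
        for m in range(n_mics):
            d = delays[m]
            aligned[: n_samp - d] += signals[m, d:]
        image[g] = np.mean(aligned ** 2)
    return image

if __name__ == "__main__":
    fs, c = 7200, 343.0
    mic_xy = np.array([[x, y] for x in np.linspace(-0.3, 0.3, 4)
                               for y in np.linspace(-0.3, 0.3, 4)])   # 16-mic demo array
    grid = np.array([[x, 0.0] for x in np.linspace(-1.0, 1.0, 21)])    # steering line 1 m away
    rng = np.random.default_rng(1)
    src = rng.standard_normal(fs)                                       # broadband source
    src_pos = np.array([0.5, 0.0, 1.0])                                 # true position (assumed)
    sig = np.zeros((len(mic_xy), fs))
    for m, (mx, my) in enumerate(mic_xy):
        d = np.linalg.norm(src_pos - np.array([mx, my, 0.0]))
        n = int(round(d / c * fs))
        sig[m, n:] = src[: fs - n]                                      # propagation delay per mic
    img = delay_and_sum_image(sig, mic_xy, grid, fs)
    print("estimated source x =", grid[np.argmax(img), 0])              # should be near 0.5 m
```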

  14. Abrupt uplift within the past 1700 years at Southern Puget Sound, Washington

    Science.gov (United States)

    Bucknam, R.C.; Hemphill-Haley, E.; Leopold, E.B.

    1992-01-01

    Shorelines rose as much as 7 meters along southern Puget Sound and Hood Canal between 500 and 1700 years ago. Evidence for this uplift consists of elevated wave-cut shore platforms near Seattle and emerged, peat-covered tidal flats as much as 60 kilometers to the southwest. The uplift was too rapid for waves to leave intermediate shorelines on even the best preserved platform. The tidal flats also emerged abruptly; they changed into freshwater swamps and meadows without first becoming tidal marshes. Where uplift was greatest, it adjoined an inferred fault that crosses Puget Sound at Seattle and it probably accompanied reverse slip on that fault 1000 to 1100 years ago. The uplift and probable fault slip show that the crust of the North America plate contains potential sources of damaging earthquakes in the Puget Sound region.

  15. Towards a Synesthesia Laboratory: Real-time Localization and Visualization of a Sound Source for Virtual Reality Applications

    OpenAIRE

    Kose, Ahmet; Tepljakov, Aleksei; Astapov, Sergei; Draheim, Dirk; Petlenkov, Eduard; Vassiljeva, Kristina

    2018-01-01

    In this paper, we present our findings related to the problem of localization and visualization of a sound source placed in the same room as the listener. The particular effect that we aim to investigate is called synesthesia—the act of experiencing one sense modality as another, e.g., a person may vividly experience flashes of colors when listening to a series of sounds. Towards that end, we apply a series of recently developed methods for detecting sound source in a three-dimensional space ...

  16. Low frequency sound field control for loudspeakers in rectangular rooms using CABS (Controlled Acoustical Bass System)

    DEFF Research Database (Denmark)

    Nielsen, Sofus Birkedal; Celestinos, Adrian

    2010-01-01

    Rectangular rooms are the most common shape for sound reproduction, but at low frequencies the reflections from the boundaries of the room cause large spatial variations in the sound pressure level. Variations up to 30 dB are normal, not only at the room modes, but basically at all frequencies.... As sound propagates in time, it seems natural that the problems can best be analyzed and solved in the time domain. A time-based room correction system named CABS (Controlled Acoustical Bass System) has been developed for sound reproduction in rectangular listening rooms. It can control the sound...... sound field in the whole room, and short impulse response. In a standard listening room (180 m3) only 4 loudspeakers are needed, 2 more than a traditional stereo setup. CABS is controlled by a developed DSP system. The time-based approach might help with the understanding of sound field control

  17. Dissonance: scientific paradigms underpinning the study of sound in geography

    Directory of Open Access Journals (Sweden)

    Daniel Paiva

    2018-05-01

    Full Text Available The objective of this article is to approach the different conceptions of sound – and its relations to the underlying scientific paradigms – that emerged throughout the history of geography. There has been a growing interest among geographers in understanding the spatialities of sound, and geographies of sound have become an emerging subfield of the discipline. For this reason, it is the right time to address how the discipline has approached sound throughout its history. Several theoretical perspectives influenced geography in the twentieth century, changing its methodologies and how its subjects were conceived. Sound, like other subjects, has been conceived very differently by geographers of competing paradigms. Concepts such as noise, soundscape, or sound as affect, among others, have dominated geographies of sound at specific periods. Due to the marginality of the subject in the discipline, assessments of these conceptual shifts are rare. I tackle this issue in this article as I provide a first attempt of writing a history of sound in geography. The article reviews debates regarding the name of the subfield, and the conceptions of sound in the successive and competing scientific paradigms in geography.

  18. Constructions complying with tightened Danish sound insulation requirements for new housing

    OpenAIRE

    Rasmussen, Birgit; Hoffmeyer, Dan

    2010-01-01

    New sound insulation requirements in Denmark in 2008: New Danish Building Regulations with tightened sound insulation requirements were introduced in 2008 (and in 2010 with unchanged acoustic requirements). Compared to the Building Regulations from 1995, the airborne sound insulation requirements were 2–3 dB stricter and the impact sound insulation requirements 5 dB stricter. The limit values are given using the descriptors R’w and L’n,w as before. For the first time, acoustic requirements fo...

  19. Time-frequency peak filtering for random noise attenuation of magnetic resonance sounding signal

    Science.gov (United States)

    Lin, Tingting; Zhang, Yang; Yi, Xiaofeng; Fan, Tiehu; Wan, Ling

    2018-05-01

    When measuring in a geomagnetic field, the method of magnetic resonance sounding (MRS) is often limited because of the notably low signal-to-noise ratio (SNR). Most current studies focus on discarding spiky noise and power-line harmonic noise cancellation. However, the effects of random noise should not be underestimated. The common method for random noise attenuation is stacking, but collecting multiple recordings merely to suppress random noise is time-consuming. Moreover, stacking is insufficient to suppress high-level random noise. Here, we propose the use of time-frequency peak filtering for random noise attenuation, which is performed after the traditional de-spiking and power-line harmonic removal method. By encoding the noisy signal with frequency modulation and estimating the instantaneous frequency using the peak of the time-frequency representation of the encoded signal, the desired MRS signal can be acquired from only one stack. The performance of the proposed method is tested on synthetic envelope signals and field data from different surveys. Good estimations of the signal parameters are obtained at different SNRs. Moreover, an attempt to use the proposed method to handle a single recording provides better results compared to 16 stacks. Our results suggest that the number of stacks can be appropriately reduced to shorten the measurement time and improve the measurement efficiency.
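
    Time-frequency peak filtering encodes the noisy signal as the instantaneous frequency of a unit-amplitude frequency-modulated signal and reads the filtered signal back from the peak of a short-window time-frequency representation. The sketch below illustrates the idea with an STFT; the window length and amplitude scaling are assumptions, and the synthetic test signal is not an MRS envelope.

```python
import numpy as np
from scipy.signal import stft

def tfpf(noisy, window_samples=65):
    """Time-frequency peak filtering: encode amplitudes as instantaneous
    frequency of an FM signal, take a short-window STFT, and recover the
    filtered signal from the spectral-peak frequency of every time slice."""
    x = np.asarray(noisy, dtype=float)
    lo, hi = x.min(), x.max()
    # map amplitudes into a safe normalised-frequency band (0.1..0.4 cycles/sample)
    scaled = 0.1 + 0.3 * (x - lo) / (hi - lo)
    analytic = np.exp(2j * np.pi * np.cumsum(scaled))
    f, t, Z = stft(analytic, fs=1.0, nperseg=window_samples,
                   noverlap=window_samples - 1, return_onesided=False,
                   boundary=None, padded=False)
    peak_freq = f[np.argmax(np.abs(Z), axis=0)]        # IF estimate per time slice
    return (peak_freq - 0.1) / 0.3 * (hi - lo) + lo    # decode back to amplitudes

if __name__ == "__main__":
    fs = 1000
    t = np.arange(2 * fs) / fs
    clean = np.exp(-t / 0.5) * np.cos(2 * np.pi * 20 * t)   # toy decaying signal
    noisy = clean + 0.5 * np.random.randn(len(t))
    denoised = tfpf(noisy)
    print(len(denoised), "samples recovered")
```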

  20. Timing of translation in cross-language qualitative research.

    Science.gov (United States)

    Santos, Hudson P O; Black, Amanda M; Sandelowski, Margarete

    2015-01-01

    Although there is increased understanding of language barriers in cross-language studies, the point at which language transformation processes are applied in research is inconsistently reported, or treated as a minor issue. Differences in translation timeframes raise methodological issues related to the material to be translated, as well as for the process of data analysis and interpretation. In this article we address methodological issues related to the timing of translation from Portuguese to English in two international cross-language collaborative research studies involving researchers from Brazil, Canada, and the United States. One study entailed late-phase translation of a research report, whereas the other study involved early phase translation of interview data. The timing of translation in interaction with the object of translation should be considered, in addition to the language, cultural, subject matter, and methodological competencies of research team members. © The Author(s) 2014.

  1. Structure-borne sound structural vibrations and sound radiation at audio frequencies

    CERN Document Server

    Cremer, L; Petersson, Björn AT

    2005-01-01

    "Structure-Borne Sound" is a thorough introduction to structural vibrations with emphasis on audio frequencies and the associated radiation of sound. The book presents in-depth discussions of fundamental principles and basic problems, in order to enable the reader to understand and solve his own problems. It includes chapters dealing with measurement and generation of vibrations and sound, various types of structural wave motion, structural damping and its effects, impedances and vibration responses of the important types of structures, as well as with attenuation of vibrations, and sound radi

  2. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  3. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  4. Synchronization and phonological skills: precise auditory timing hypothesis (PATH

    Directory of Open Access Journals (Sweden)

    Adam eTierney

    2014-11-01

    Full Text Available Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The precise auditory timing hypothesis predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  5. Beat-to-beat systolic time-interval measurement from heart sounds and ECG

    International Nuclear Information System (INIS)

    Paiva, R P; Carvalho, P; Couceiro, R; Henriques, J; Antunes, M; Quintal, I; Muehlsteff, J

    2012-01-01

    Systolic time intervals are highly correlated to fundamental cardiac functions. Several studies have shown that these measurements have significant diagnostic and prognostic value in heart failure condition and are adequate for long-term patient follow-up and disease management. In this paper, we investigate the feasibility of using heart sound (HS) to accurately measure the opening and closing moments of the aortic heart valve. These moments are crucial to define the main systolic timings of the heart cycle, i.e. pre-ejection period (PEP) and left ventricular ejection time (LVET). We introduce an algorithm for automatic extraction of PEP and LVET using HS and electrocardiogram. PEP is estimated with a Bayesian approach using the signal's instantaneous amplitude and patient-specific time intervals between atrio-ventricular valve closure and aortic valve opening. As for LVET, since the aortic valve closure corresponds to the start of the S2 HS component, we base LVET estimation on the detection of the S2 onset. A comparative assessment of the main systolic time intervals is performed using synchronous signal acquisitions of the current gold standard in cardiac time-interval measurement, i.e. echocardiography, and HS. The algorithms were evaluated on a healthy population, as well as on a group of subjects with different cardiovascular diseases (CVD). In the healthy group, from a set of 942 heartbeats, the proposed algorithm achieved 7.66 ± 5.92 ms absolute PEP estimation error. For LVET, the absolute estimation error was 11.39 ± 8.98 ms. For the CVD population, 404 beats were used, leading to 11.86 ± 8.30 and 17.51 ± 17.21 ms absolute PEP and LVET errors, respectively. The results achieved in this study suggest that HS can be used to accurately estimate LVET and PEP. (paper)
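
    The PEP estimate above relies on the heart sound's instantaneous amplitude, and the intervals themselves are simple time differences between valve events. The sketch below obtains the envelope with the Hilbert transform and turns annotated event times into PEP and LVET; the event times are made up, and using the ECG R peak as the electrical reference is a simplification of the usual Q-wave onset.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_amplitude(heart_sound):
    """Envelope of the phonocardiogram via the analytic signal."""
    return np.abs(hilbert(heart_sound))

def systolic_intervals(r_peak_t, aortic_open_t, aortic_close_t):
    """PEP = aortic valve opening minus ECG R peak (electrical-onset proxy);
    LVET = aortic valve closing minus opening. All times in seconds."""
    pep = aortic_open_t - r_peak_t
    lvet = aortic_close_t - aortic_open_t
    return pep, lvet

if __name__ == "__main__":
    fs = 2000
    t = np.arange(fs) / fs
    pcg = np.sin(2 * np.pi * 60 * t) * np.exp(-((t - 0.12) / 0.02) ** 2)  # toy S1 burst
    env = instantaneous_amplitude(pcg)
    pep, lvet = systolic_intervals(r_peak_t=0.02, aortic_open_t=0.10, aortic_close_t=0.41)
    print(f"PEP = {pep*1000:.0f} ms, LVET = {lvet*1000:.0f} ms, "
          f"peak envelope = {env.max():.2f}")
```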

  6. [Synchronous playing and acquiring of heart sounds and electrocardiogram based on labVIEW].

    Science.gov (United States)

    Dan, Chunmei; He, Wei; Zhou, Jing; Que, Xiaosheng

    2008-12-01

    This paper describes a comprehensive system that acquires heart sounds and the electrocardiogram (ECG) in parallel, synchronizes their display with heart-sound playback, and thereby allows auscultation and phonocardiogram inspection to be combined. The hardware system, built around a C8051F340 microcontroller, acquires the heart sounds and ECG synchronously and then sends them to the respective indicators. Heart sounds are displayed and played simultaneously by controlling the moments of writing to the indicator and to the sound output device. In clinical testing, heart sounds could be successfully located with respect to the ECG and played in real time.

  7. Estimating valence from the sound of a word : Computational, experimental, and cross-linguistic evidence

    NARCIS (Netherlands)

    Louwerse, Max; Qu, Zhan

    2017-01-01

    It is assumed linguistic symbols must be grounded in perceptual information to attain meaning, because the sound of a word in a language has an arbitrary relation with its referent. This paper demonstrates that a strong arbitrariness claim should be reconsidered. In a computational study, we showed

  8. Structure and petroleum potential of the continental margin between Cross Sound and Icy Bay, northern Gulf of Alaska

    Science.gov (United States)

    Bruns, T.R.

    1982-01-01

    Major structural features of the Yakutat segment, the segment of the continental margin between Cross Sound and Icy Bay, northern Gulf of Alaska, are delineated by multichannel seismic reflection data. A large structural high is centered on Fairweather Ground and lies generally at the edge of the shelf from Cross Sound to west of the Alsek Valley. A basement uplift, the Dangerous River zone, along which the seismic acoustic basement shallows by up to two kilometers, extends north from the western edge of Fairweather Ground towards the mouth of the Dangerous River. The Dangerous River zone separates the Yakutat segment into two distinct subbasins. The eastern subbasin has a maximum sediment thickness of about 4 km, and the axis of the basin is near and parallel to the coast. Strata in this basin are largely of late Cenozoic age (Neogene and Quaternary) and approximately correlate with the onshore Yakataga Formation. The western subbasin has a maximum of at least 9 km of sediment, comprised of a thick (greater than 4.5 km) Paleogene section overlain by late Cenozoic strata. The Paleogene section is truncated along the Dangerous River zone by a combination of erosion, faulting, and onlap onto the acoustic basement. Within the western subbasin, the late Cenozoic basin axis is near and parallel to the coast, but the Paleogene basin axis appears to trend in a northwest direction diagonally across the shelf. Sedimentary strata throughout the Yakutat shelf show regional subsidence and only minor deformation except in the vicinity of the Fairweather Ground structural high, near and along the Dangerous River zone, and at the shoreline near Lituya Bay. Seismic data across the continental slope and adjacent deep ocean show truncation at the continental slope of Paleogene strata, the presence of a thick (to 6 km) undeformed or mildly deformed abyssal sedimentary section at the base of the slope that in part onlaps the slope, and a relatively narrow zone along the slope or at

  9. Real-time Pedestrian Crossing Recognition for Assistive Outdoor Navigation.

    Science.gov (United States)

    Fontanesi, Simone; Frigerio, Alessandro; Fanucci, Luca; Li, William

    2015-01-01

    Navigation in urban environments can be difficult for people who are blind or visually impaired. In this project, we present a system and algorithms for recognizing pedestrian crossings in outdoor environments. Our goal is to provide navigation cues for crossing the street and reaching an island or sidewalk safely. Using a state-of-the-art Multisense S7S sensor, we collected 3D pointcloud data for real-time detection of pedestrian crossings and generation of directional guidance. We demonstrate improvements over a baseline, monocular-camera-based system by integrating 3D spatial prior information extracted from the pointcloud. Our system's parameters can be set to the actual dimensions of real-world settings, which enables robustness to occlusion and perspective transformation. The system works especially well in non-occlusion situations and is reasonably accurate under different kinds of conditions. In addition, our large dataset of pedestrian crossings, organized by different types and situations of pedestrian crossings in order to reflect real-world environments, is publicly available in a commonly used format (ROS bagfiles) for further research.

  10. What is Sensory about Multi-Sensory Enhancement of Vision by Sounds?

    Directory of Open Access Journals (Sweden)

    Alexis Pérez-Bellido

    2011-10-01

    Full Text Available Can auditory input influence the sensory processing of visual information? Many studies have reported cross-modal enhancement in visual tasks, but the nature of such gain is still unclear. Some authors argue for ‘high-order’ expectancy or attention effects, whereas others propose ‘low-order’ stimulus-driven multisensory integration. The present study applies a psychophysical analysis of reaction time distributions in order to disentangle sensory changes from other kinds of high-order (not sensory-specific) effects. Observers performed a speeded simple detection task on Gabor patches of different spatial frequencies and contrasts, with and without accompanying sounds. The data were fitted using chronometric functions in order to separate changes in sensory evidence from changes in decision or motor times. The results supported the existence of a stimulus-unspecific auditory-induced enhancement in RTs across all types of visual stimuli, probably mediated by higher-order effects (e.g., reduction of temporal uncertainty). Critically, we also singled out a sensory gain that was selective to low spatial frequency stimuli, highlighting the role of the magnocellular visual pathway in multisensory integration for fast detection. The present findings help clarify previous mixed findings in the area, and introduce a novel way to evaluate cross-modal enhancement.

  11. Sound absorption study on acoustic panel from kapok fiber and egg tray

    Science.gov (United States)

    Kaamin, Masiri; Mahir, Nurul Syazwani Mohd; Kadir, Aslila Abd; Hamid, Nor Baizura; Mokhtar, Mardiha; Ngadiman, Norhayati

    2017-12-01

    Noise is sound, especially sound that is loud, unpleasant, or disruptive. The level of noise can be reduced by using sound absorption panels. Sound absorption panels currently on the market use synthetic fibers that can be harmful to consumers' health. Growing awareness of natural materials has drawn attention to natural fibers as sound-absorbing materials. Therefore, this study was conducted to investigate the potential of a sound absorption panel made from egg trays and kapok fibers. The study used an impedance tube test to obtain the sound absorption coefficient (SAC). The results showed good sound absorption at low frequencies from 0 Hz up to 900 Hz, where the maximum absorption coefficient was 0.950, while the maximum absorption at high frequencies was 0.799. The material achieved a noise reduction coefficient (NRC) of 0.57, indicating that it is highly absorbing. In addition, a reverberation room test was carried out to obtain the reverberation time (RT) in seconds. Overall, the panel showed good results at low frequencies between 0 Hz and 1500 Hz. In that frequency range, the maximum reverberation time for the panel was 3.784 seconds, compared with 5.798 seconds for an empty room. This study indicates that kapok fiber and egg trays have potential as cheap, environmentally friendly materials for absorbing sound at low frequencies.
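
    By the usual ASTM C423 convention, the noise reduction coefficient reported above is the arithmetic mean of the absorption coefficients at 250, 500, 1000 and 2000 Hz, rounded to the nearest 0.05. A minimal sketch of that calculation follows; the coefficient values are illustrative placeholders, not the panel's measured data.

```python
# Minimal sketch: noise reduction coefficient (NRC) from sound absorption
# coefficients (SAC), following the common ASTM C423 definition.
def nrc(sac_by_freq):
    """Average SAC at 250, 500, 1000, 2000 Hz, rounded to the nearest 0.05."""
    bands = (250, 500, 1000, 2000)
    mean = sum(sac_by_freq[f] for f in bands) / len(bands)
    return round(mean * 20) / 20  # round to the nearest 0.05

# Illustrative placeholder values (not the measured kapok/egg-tray data).
example_sac = {250: 0.80, 500: 0.65, 1000: 0.45, 2000: 0.35}
print(nrc(example_sac))  # -> 0.55
```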

  12. How do auditory cortex neurons represent communication sounds?

    Science.gov (United States)

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Mapping groundwater reserves in northwestern Cambodia with the combined use of data from lithologs and time-domain-electromagnetic and magnetic-resonance soundings

    Science.gov (United States)

    Valois, Remi; Vouillamoz, Jean-Michel; Lun, Sambo; Arnout, Ludovic

    2018-01-01

    Lack of access to water is the primary constraint to development in rural areas of northwestern Cambodia. Communities lack water for both domestic and irrigation purposes. To provide access to drinking water, governmental and aid agencies have focused on drilling shallow boreholes but they have not had a clear understanding of groundwater potential. The goal of this study has been to improve hydrogeological knowledge of two districts in Oddar Meanchey Province by analyzing borehole lithologs and geophysical data sets. The comparison of 55 time-domain electromagnetic (TEM) soundings and lithologs, as well as 66 magnetic-resonance soundings (MRS) with TEM soundings, allows a better understanding of the links between geology, electrical resistivity and hydrogeological parameters such as the specific yield (S y) derived from MRS. The main findings are that water inflow and S y are more related to electrical resistivity and elevation than to the litholog description. Indeed, conductive media are associated with a null value of S y, whereas resistive rocks at low elevation are always linked to strictly positive S y. A new methodology was developed to create maps of groundwater reserves based on 612 TEM soundings and the observed relationship between resistivity and S y. TEM soundings were inverted using a quasi-3D modeling approach called `spatially constrained inversion'. Such maps will, no doubt, be very useful for borehole siting and in the economic development of the province because they clearly distinguish areas of high groundwater-reserves potential from areas that lack reserves.

  14. Pervasive Sound Sensing: A Weakly Supervised Training Approach.

    Science.gov (United States)

    Kelly, Daniel; Caulfield, Brian

    2016-01-01

    Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and time-consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework, to identify a target sound, and a bag trimming algorithm, which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one and results show that classifiers, trained using the automatically segmented training sets, were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving an average F-measure of 0.969 and 0.87 for two weakly supervised datasets.
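
    The F-measure quoted above combines precision and recall into a single score. A minimal sketch of how it is computed from raw detection counts (the counts below are illustrative, not the study's data):

```python
# Minimal sketch: F-measure from true positives, false positives and false negatives.
def f_measure(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only.
print(round(f_measure(tp=90, fp=5, fn=8), 3))
```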

  15. Stationary echo canceling in velocity estimation by time-domain cross-correlation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1993-01-01

    The application of stationary echo canceling to ultrasonic estimation of blood velocities using time-domain cross-correlation is investigated. Expressions are derived that show the influence from the echo canceler on the signals that enter the cross-correlation estimator. It is demonstrated...
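
    As a rough illustration of the two processing stages discussed in the record, the sketch below removes a stationary echo by subtracting consecutive RF lines and then estimates the inter-line time shift from the peak of a time-domain cross-correlation. It is a schematic toy example on synthetic data under arbitrary parameters, not Jensen's derivation.

```python
# Toy sketch: stationary echo canceling followed by time-shift estimation
# via time-domain cross-correlation of successive ultrasound lines.
import numpy as np

fs = 20e6                                   # assumed RF sampling rate [Hz]
n = 2048
t = np.arange(n) / fs
moving = np.exp(-((t - 50e-6) / 2e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
stationary = 0.9 * np.exp(-((t - 30e-6) / 2e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)

true_shift = 12                             # samples the blood echo moves per line
lines = [stationary + 0.1 * np.roll(moving, k * true_shift) for k in range(3)]

# Stationary echo canceler: subtract consecutive lines, so the static echo cancels.
d1 = lines[1] - lines[0]
d2 = lines[2] - lines[1]

# Cross-correlate the echo-canceled lines: the lag of the peak estimates the
# inter-line shift, from which the blood velocity follows.
xcorr = np.correlate(d2, d1, mode="full")
lag = np.argmax(xcorr) - (n - 1)
print("estimated shift [samples]:", lag)    # expected: 12
```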

  16. Generation of sound zones in 2.5 dimensions

    DEFF Research Database (Denmark)

    Jacobsen, Finn; Olsen, Martin; Møller, Martin

    2011-01-01

    A method for generating sound zones with different acoustic properties in a room is presented. The method is an extension of the two-dimensional multi-zone sound field synthesis technique recently developed by Wu and Abhayapala; the goal is, for example, to generate a plane wave that propagates in a certain direction within a certain region of a room and at the same time suppress sound in another region. The method is examined through simulations and experiments. For comparison, a simpler method based on the idea of maximising the ratio of the potential acoustic energy in an ensonified zone to the potential acoustic energy in a quiet zone is also examined.

  17. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear, but our uses for light and sound go far beyond simply seeing a photo or hearing a song. Lasers, concentrated beams of light, are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  18. Visualization of the hot chocolate sound effect by spectrograms

    Science.gov (United States)

    Trávníček, Z.; Fedorchenko, A. I.; Pavelka, M.; Hrubý, J.

    2012-12-01

    We present an experimental and a theoretical analysis of the hot chocolate effect. The sound effect is evaluated using time-frequency signal processing, resulting in a quantitative visualization by spectrograms. This method allows us to capture the whole phenomenon, namely to quantify the dynamics of the rising pitch. A general form of the time dependence of the bubble volume fraction is proposed. We show that the effect occurs due to the nonlinear dependence of the speed of sound in the gas/liquid mixture on the volume fraction of the bubbles and the nonlinear time dependence of the volume fraction of the bubbles.
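
    A minimal sketch of the kind of time-frequency visualization used in the paper, applied here to a synthetic signal whose pitch rises over time; the sampling rate, duration and frequency sweep are arbitrary placeholders, not the authors' recordings.

```python
# Minimal sketch: spectrogram of a synthetic rising-pitch signal, the same kind of
# time-frequency picture used to visualize the hot chocolate effect.
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs = 8000                                    # arbitrary sampling rate [Hz]
t = np.arange(0, 5.0, 1 / fs)
f_inst = 300 + 200 * t                       # instantaneous pitch rising from 300 Hz
x = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=768)
plt.pcolormesh(tt, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.xlabel("time [s]")
plt.ylabel("frequency [Hz]")
plt.title("rising pitch (synthetic analogue of the effect)")
plt.show()
```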

  19. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with the sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material reveal the form... geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form.

  20. Aerodynamic sound of flow past an airfoil

    Science.gov (United States)

    Wang, Meng

    1995-01-01

    The long term objective of this project is to develop a computational method for predicting the noise of turbulence-airfoil interactions, particularly at the trailing edge. We seek to obtain the energy-containing features of the turbulent boundary layers and the near-wake using Navier-Stokes Simulation (LES or DNS), and then to calculate the far-field acoustic characteristics by means of acoustic analogy theories, using the simulation data as acoustic source functions. Two distinct types of noise can be emitted from airfoil trailing edges. The first, a tonal or narrowband sound caused by vortex shedding, is normally associated with blunt trailing edges, high angles of attack, or laminar flow airfoils. The second source is of broadband nature arising from the aeroacoustic scattering of turbulent eddies by the trailing edge. Due to its importance to airframe noise, rotor and propeller noise, etc., trailing edge noise has been the subject of extensive theoretical (e.g. Crighton & Leppington 1971; Howe 1978) as well as experimental investigations (e.g. Brooks & Hodgson 1981; Blake & Gershfeld 1988). A number of challenges exist concerning acoustic analogy based noise computations. These include the elimination of spurious sound caused by vortices crossing permeable computational boundaries in the wake, the treatment of noncompact source regions, and the accurate description of wave reflection by the solid surface and scattering near the edge. In addition, accurate turbulence statistics in the flow field are required for the evaluation of acoustic source functions. Major efforts to date have been focused on the first two challenges. To this end, a paradigm problem of laminar vortex shedding, generated by a two dimensional, uniform stream past a NACA0012 airfoil, is used to address the relevant numerical issues. Under the low Mach number approximation, the near-field flow quantities are obtained by solving the incompressible Navier-Stokes equations numerically at chord

  1. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  2. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  3. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  4. Effect of nocturnal sound reduction on the incidence of delirium in intensive care unit patients: An interrupted time series analysis.

    Science.gov (United States)

    van de Pol, Ineke; van Iterson, Mat; Maaskant, Jolanda

    2017-08-01

    Delirium in critically-ill patients is a common multifactorial disorder that is associated with various negative outcomes. It is assumed that sleep disturbances can result in an increased risk of delirium. This study hypothesized that implementing a protocol that reduces overall nocturnal sound levels improves quality of sleep and reduces the incidence of delirium in Intensive Care Unit (ICU) patients. This interrupted time series study was performed in an adult mixed medical and surgical 24-bed ICU. A pre-intervention group of 211 patients was compared with a post-intervention group of 210 patients after implementation of a nocturnal sound-reduction protocol. Primary outcome measures were incidence of delirium, measured by the Intensive Care Delirium Screening Checklist (ICDSC), and quality of sleep, measured by the Richards-Campbell Sleep Questionnaire (RCSQ). Secondary outcome measures were use of sleep-inducing medication, delirium treatment medication, and patient-perceived nocturnal noise. A significant difference in slope in the percentage of delirium was observed between the pre- and post-intervention periods (-3.7% per time period, p=0.02). Quality of sleep was unaffected (0.3 per time period, p=0.85). The post-intervention group used significantly less sleep-inducing medication after implementation of the nocturnal sound-reduction protocol. However, reported sleep quality did not improve. Copyright © 2017. Published by Elsevier Ltd.
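
    Interrupted time series analyses of this kind are often fitted as a segmented regression with terms for the baseline trend, the level change at the intervention, and the slope change after it. The sketch below shows such a fit with ordinary least squares on illustrative simulated data, not the study's data; the number of periods and the intervention point are arbitrary assumptions.

```python
# Hedged sketch: segmented regression for an interrupted time series, with terms
# for baseline trend, level change and slope change (illustrative data only).
import numpy as np

rng = np.random.default_rng(0)
n = 24                                    # e.g. 24 observation periods (assumed)
t = np.arange(n, dtype=float)
post = (t >= 12).astype(float)            # 1 after the (assumed) intervention point
t_post = post * (t - 12)                  # time elapsed since the intervention

# Illustrative outcome with a downward slope change after the intervention.
y = 30 + 0.5 * t - 1.0 * t_post + rng.normal(0, 1.5, n)

X = np.column_stack([np.ones(n), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["baseline level", "baseline slope", "level change", "slope change"], beta):
    print(f"{name:>14s}: {b:6.2f}")
```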

  5. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  6. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    Science.gov (United States)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, obtaining a corrected acoustic propagation time delay and path. The corrected time delay and path, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers an alternative to numerical TR for reconstructing sound source signals in 3D space in an environment with airflow. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a comparison, both theoretical and experimental, between AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.

  7. Sound localization in the presence of one or two distracters

    NARCIS (Netherlands)

    Langendijk, E.H.A.; Kistler, D.J.; Wightman, F.L

    2001-01-01

    Localizing a target sound can be a challenge when one or more distracter sounds are present at the same time. This study measured the effect of distracter position on target localization for one distracter (17 positions) and two distracters (21 combinations of 17 positions). Listeners were

  8. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity... A European COST Action TU0901, "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings, and to initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24...

  9. Remembering that big things sound big: Sound symbolism and associative memory.

    Science.gov (United States)

    Preziosi, Melissa A; Coane, Jennifer H

    2017-01-01

    According to sound symbolism theory, individual sounds or clusters of sounds can convey meaning. To examine the role of sound symbolic effects on processing and memory for nonwords, we developed a novel set of 100 nonwords to convey largeness (nonwords containing plosive consonants and back vowels) and smallness (nonwords containing fricative consonants and front vowels). In Experiments 1A and 1B, participants rated the size of the 100 nonwords and provided definitions to them as if they were products. Nonwords composed of fricative/front vowels were rated as smaller than those composed of plosive/back vowels. In Experiment 2, participants studied sound symbolic congruent and incongruent nonword and participant-generated definition pairings. Definitions paired with nonwords that matched the size and participant-generated meanings were recalled better than those that did not match. When the participant-generated definitions were re-paired with other nonwords, this mnemonic advantage was reduced, although still reliable. In a final free association study, the possibility that plosive/back vowel and fricative/front vowel nonwords elicit sound symbolic size effects due to mediation from word neighbors was ruled out. Together, these results suggest that definitions that are sound symbolically congruent with a nonword are more memorable than incongruent definition-nonword pairings. This work has implications for the creation of brand names and how to create brand names that not only convey desired product characteristics, but also are memorable for consumers.

  10. An Anthropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, for a talk about anthropology of sound, sound studies, musical canons and ideology.

  11. Sound classification of dwellings in the Nordic countries

    DEFF Research Database (Denmark)

    Rindel, Jens Holger; Turunen-Rise, Iiris

    1997-01-01

    A draft standard INSTA 122:1997 on sound classification of dwellings is for voting as a common national standard in the Nordic countries (Denmark, Norway, Sweden, Finland, Iceland) and in Estonia. The draft standard specifies a sound classification system with four classes A, B, C and D, where class C is proposed as the future minimum requirements for new dwellings. The classes B and A define criteria for dwellings with improved or very good acoustic conditions, whereas class D may be used for older, renovated dwellings in which the acoustic quality level of a new dwelling cannot reasonably be met. The classification system is based on limit values for airborne sound insulation, impact sound pressure level, reverberation time and indoor and outdoor noise levels. The purpose of the standard is to offer a tool for specification of a standardised acoustic climate and to promote constructors...

  12. Electromagnetic Sampo monitoring soundings at Olkiluoto 2010

    International Nuclear Information System (INIS)

    Korhonen, K.; Korpisalo, A.; Ojamo, H.

    2010-12-01

    The Geological Survey of Finland has carried out electromagnetic frequency-domain depth soundings at fixed measurement stations in Olkiluoto annually since 2004. The purpose of the soundings is to monitor the groundwater conditions in the vicinity of the ONKALO rock characterization facility which will ultimately be part of the final nuclear waste disposal facility for the Finnish nuclear power companies. A new monitoring survey was carried out at the turn of May-June 2010. The survey resulted in 38 successfully performed soundings at 10 stations. The data set spanning the time period of 2004 to 2010 was interpreted with layered-earth models. Most of the interpretations indicate no systematic changes in the level of deep saline groundwater. However, at one station there are indications of a systematic rise in the groundwater level. (orig.)

  13. Effects of capacity limits, memory loss, and sound type in change deafness.

    Science.gov (United States)

    Gregg, Melissa K; Irsik, Vanessa C; Snyder, Joel S

    2017-11-01

    Change deafness, the inability to notice changes to auditory scenes, has the potential to provide insights about sound perception in busy situations typical of everyday life. We determined the extent to which change deafness to sounds is due to the capacity of processing multiple sounds and the loss of memory for sounds over time. We also determined whether these processing limitations work differently for varying types of sounds within a scene. Auditory scenes composed of naturalistic sounds, spectrally dynamic unrecognizable sounds, tones, and noise rhythms were presented in a change-detection task. On each trial, two scenes were presented that were same or different. We manipulated the number of sounds within each scene to measure memory capacity and the silent interval between scenes to measure memory loss. For all sounds, change detection was worse as scene size increased, demonstrating the importance of capacity limits. Change detection to the natural sounds did not deteriorate much as the interval between scenes increased up to 2,000 ms, but it did deteriorate substantially with longer intervals. For artificial sounds, in contrast, change-detection performance suffered even for very short intervals. The results suggest that change detection is generally limited by capacity, regardless of sound type, but that auditory memory is more enduring for sounds with naturalistic acoustic structures.

  14. Efficient techniques for wave-based sound propagation in interactive applications

    Science.gov (United States)

    Mehra, Ravish

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical harmonic-based representation of source directivity can support analytical, data
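
    The directivity representation described above, a linear combination of elementary spherical harmonic sources, can be sketched with scipy's spherical harmonics. This is only an illustration of the representation; the coefficients below are arbitrary placeholders, not those used in the dissertation.

```python
# Minimal sketch: evaluating a directivity pattern expressed as a linear
# combination of spherical harmonics Y_l^m (coefficients are placeholders).
import numpy as np
from scipy.special import sph_harm

def directivity(theta, phi, coeffs):
    """coeffs: dict mapping (l, m) -> complex weight; theta azimuth, phi polar angle."""
    d = np.zeros_like(theta, dtype=complex)
    for (l, m), a in coeffs.items():
        d += a * sph_harm(m, l, theta, phi)   # scipy argument order: (m, n, azimuth, polar)
    return d

coeffs = {(0, 0): 1.0, (1, 0): 0.6, (2, 1): 0.3j}   # arbitrary example weights
theta = np.linspace(0, 2 * np.pi, 8)                # sample directions in azimuth
phi = np.full_like(theta, np.pi / 2)                # horizontal plane
print(np.abs(directivity(theta, phi, coeffs)).round(3))
```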

  15. Cross-Term Suppression in Time Order Distribution for AWGN Signal

    Directory of Open Access Journals (Sweden)

    WAQAS MAHMOOD

    2017-04-01

    Full Text Available A technique for cross-term suppression in the WD (Wigner Distribution) of a multi-component signal embedded in WGN (White Gaussian Noise) is proposed. In this technique, an optimized algorithm is developed for time-varying noisy signals and a CAD (Computer Aided Design) simulator is designed for numerical simulations of a synthetic signal. In the proposed technique, signal components are localized in the t-f (time-frequency) plane by the STFT (Short Time Fourier Transform). The rectified STFT is computed and Spectral Kurtosis is used to separate signal components from noise in the t-f plane. The t-f plane is segmented and the signal components are then filtered out by the FrFT (Fractional Fourier Transform). Finally, the WD (free of cross-terms) of each isolated signal component is computed to obtain high resolution in the t-f plane.
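
    The first two steps of the proposed chain, computing an STFT and using spectral kurtosis to flag frequency bins dominated by a non-Gaussian (signal) component rather than WGN, can be sketched as follows. This is a schematic illustration on a synthetic transient tone in noise, not the authors' optimized algorithm; the signal parameters and the detection threshold are arbitrary.

```python
# Schematic sketch: STFT of a noisy signal and per-bin spectral kurtosis,
# used to separate signal-carrying bins from white Gaussian noise.
import numpy as np
from scipy.signal import stft

fs = 1000
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 60 * t) * ((t > 1) & (t < 2))     # transient 60 Hz tone
x += np.random.default_rng(1).normal(0, 1, t.size)        # additive WGN

f, tt, Z = stft(x, fs=fs, nperseg=128)
mag2 = np.abs(Z) ** 2
# Spectral kurtosis: ~0 for stationary Gaussian noise, positive for components
# confined to part of the record (transients, intermittent tones).
sk = np.mean(mag2 ** 2, axis=1) / np.mean(mag2, axis=1) ** 2 - 2.0

signal_bins = f[sk > 1.0]                                  # loose illustrative threshold
print("candidate signal frequencies [Hz]:", np.round(signal_bins, 1))
```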

  16. A New Approach to Physiologic Triggering in Medical Imaging Using Multiple Heart Sounds Alone.

    Science.gov (United States)

    Groch, Mark Walter

    A new method for physiological synchronization of medical image acquisition using both the first and second heart sound has been developed. Heart sounds gating (HSG) circuitry has been developed which identifies, individually, both the first (S1) and second (S2) heart sounds from their timing relationship alone, and provides two synchronization points during the cardiac cycle. Identification of first and second heart sounds from their timing relationship alone and application to medical imaging has, heretofore, not been performed in radiology or nuclear medicine. The heart sounds are obtained as conditioned analog signals from a piezoelectric transducer microphone placed on the patient's chest. The timing relationships between the S1 to S2 pulses and the S2 to S1 pulses are determined using a logic scheme capable of distinguishing the S1 and S2 pulses from the heart sounds themselves, using their timing relationships, and the assumption that initially the S1-S2 interval will be shorter than the S2-S1 interval. Digital logic circuitry is utilized to continually track the timing intervals and extend the S1/S2 identification to heart rates up to 200 beats per minute (where the S1-S2 interval is not shorter than the S2-S1 interval). Clinically, first heart sound gating may be performed to assess the systolic ejection portion of the cardiac cycle, with S2 gating utilized for reproduction of the diastolic filling portion of the cycle. One application of HSG used for physiologic synchronization is in multigated blood pool (MGBP) imaging in nuclear medicine. Heart sounds gating has been applied to twenty patients who underwent analysis of ventricular function in Nuclear Medicine, and compared to conventional ECG gated MGBP. Left ventricular ejection fractions calculated from MGBP studies using a S1 and a S2 heart sound trigger correlated well with conventional ECG gated acquisitions in patients adequately gated by HSG and ECG. Heart sounds gating provided superior
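
    The timing rule described above, that at rest the S1-to-S2 (systolic) interval is shorter than the S2-to-S1 (diastolic) interval, can be sketched in a few lines. This is a simplified software analogue of the logic, not the original digital circuitry; the pulse times below are illustrative (a roughly 67 bpm rhythm).

```python
# Simplified sketch of the S1/S2 identification rule: the interval following S1
# (systole) is assumed shorter than the interval following S2 (diastole).
def label_heart_sounds(pulse_times):
    """pulse_times: detected heart-sound onsets [s]; returns a list of 'S1'/'S2' labels."""
    if len(pulse_times) < 3:
        return []
    intervals = [b - a for a, b in zip(pulse_times, pulse_times[1:])]
    # Bootstrap: the first pulse is S1 if the interval after it is the shorter one.
    labels = ["S1" if intervals[0] < intervals[1] else "S2"]
    for _ in pulse_times[1:]:
        labels.append("S2" if labels[-1] == "S1" else "S1")
    return labels

# Illustrative onsets: 0.3 s systolic and 0.6 s diastolic intervals.
times = [0.00, 0.30, 0.90, 1.20, 1.80, 2.10, 2.70]
print(list(zip(times, label_heart_sounds(times))))
```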

  17. Mars SubsurfAce Sounding by Time-Domain Electromagnetic MeasuRements

    Science.gov (United States)

    Tacconi, G.; Minna, L.; Pagnan, S.; Tacconi, M.

    1999-09-01

    MASTER (Mars subsurfAce Sounding by Time-domain Electromagnetic measuRements) is an experimental project proposed to fly aboard the Italian Drill (DEEDRI) payload for the Mars Surveyor Program 2003. MASTER will offer the scientific community the first opportunity to scan the Martian subsurface structure by means of the time-domain electromagnetic measurement technique (TDEM). Experiments proposed to date for scanning the Martian subsurface have focused on exploring the crust of the planet to depths of a few meters, while MASTER will explore electrical structures and related soil characteristics and processes at depths reaching at least hundreds of meters. TDEM is an active remote-sensing technique and will be used much like a ULF/ELF/VLF "radar". If a certain volumetric zone has a different electrical conductivity, the current in the sampled volume will vary, generating a secondary scattered electromagnetic field containing information about the explored volume. The volumetric mean value of the conductivity will be estimated according to the implicit near-field e.m. propagation conditions, considering the skin depth (d) and the apparent resistivity (ra) as the most representative and critical parameters. As with any active remote-sensing measurement, the TDEM system behaves like a "bistatic" communication channel, and it is mandatory to investigate the characteristics of the background noise at the receiver site. The MASTER system can also operate as a passive listening device for the possible electromagnetic background noise on the Mars surface in the ULF/ELF/VLF bands. The present paper describes in detail the application of the TDEM method as well as the approaches to the detection and estimation of the e.m. BGN on the Mars surface, in terms of man-made and natural BGN and the intrinsic noise of the sensors and electronic systems. The electromagnetic background noise detection/estimation represents, by itself, a no-cost experiment and the first of its type on Mars.
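
    The skin depth mentioned above is the depth at which an electromagnetic field in a conductive half-space decays by 1/e, and it follows the standard relation δ = √(2/(μ₀σω)), roughly 503·√(ρ/f) metres for resistivity ρ in Ω·m and frequency f in Hz. A small sketch of that arithmetic; the resistivity and frequency values are illustrative, not MASTER mission parameters.

```python
# Minimal sketch: electromagnetic skin depth in a conductive half-space,
# delta = sqrt(2 / (mu0 * sigma * omega)) ~= 503 * sqrt(rho / f) metres.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [H/m]

def skin_depth(resistivity_ohm_m, frequency_hz):
    sigma = 1.0 / resistivity_ohm_m
    omega = 2.0 * math.pi * frequency_hz
    return math.sqrt(2.0 / (MU0 * sigma * omega))

# Illustrative values only (not actual Martian subsurface parameters).
for rho in (100.0, 1000.0):           # resistivity [ohm.m]
    for f in (10.0, 1000.0):          # sounding frequency [Hz]
        print(f"rho={rho:6.0f} ohm.m, f={f:6.0f} Hz -> skin depth ~ {skin_depth(rho, f):8.0f} m")
```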

  18. Effects of musical training on sound pattern processing in high-school students.

    Science.gov (United States)

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

    Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited under different stimulus onset asynchrony (SOA) conditions in non-musicians than in musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training facilitates detection of auditory patterns, allowing automatic recognition of sequential sound patterns over longer time periods than in non-musical counterparts.

  19. The sound of friction: Real-time models, playability and musical applications

    Science.gov (United States)

    Serafin, Stefania

    Friction, the tangential force between objects in contact, in most engineering applications needs to be removed as a source of noise and instabilities. In musical applications, friction is a desirable component, being the sound production mechanism of different musical instruments such as bowed strings, musical saws, rubbed bowls and any other sonority produced by interactions between rubbed dry surfaces. The goal of the dissertation is to simulate different instruments whose main excitation mechanism is friction. An efficient yet accurate model of a bowed string instrument, which combines the latest results in violin acoustics with the efficient digital waveguide approach, is provided. In particular, the bowed string physical model proposed uses a thermodynamic friction model in which the finite width of the bow is taken into account; this solution is compared to the recently developed elasto-plastic friction models used in haptics and robotics. Different solutions are also proposed to model the body of the instrument. Other less common instruments driven by friction are also proposed, and the elasto-plastic model is used to provide audio-visual simulations of everyday friction sounds such as squeaking doors and rubbed wine glasses. Finally, playability evaluations and musical applications in which the models have been used are discussed.

  20. Autocorrelation and cross-correlation in time series of homicide and attempted homicide

    Science.gov (United States)

    Machado Filho, A.; da Silva, M. F.; Zebende, G. F.

    2014-04-01

    We propose in this paper to establish the relationship between homicides and attempted homicides by a non-stationary time-series analysis. This analysis is carried out with Detrended Fluctuation Analysis (DFA), Detrended Cross-Correlation Analysis (DCCA), and the DCCA cross-correlation coefficient, ρ(n). Through this analysis we can identify a positive cross-correlation between homicides and attempted homicides. At the same time, looked at from the point of view of autocorrelation (DFA), this analysis can be more informative depending on the time scale. For short scales (days) we cannot identify autocorrelations; on the scale of weeks, DFA presents anti-persistent behavior; and for long time scales (n > 90 days), DFA presents persistent behavior. Finally, the application of this new type of statistical analysis proved to be efficient and, in this sense, this paper can contribute to more accurate descriptive statistics of crime.
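
    The DCCA cross-correlation coefficient referred to above is commonly defined as ρ(n) = F²_DCCA(n) / (F_DFA,x(n)·F_DFA,y(n)), the detrended covariance normalized by the two detrended variances at scale n. The sketch below is a schematic, simplified implementation (non-overlapping boxes, a single scale) on synthetic coupled series, not the authors' code or data.

```python
# Schematic sketch of the DCCA cross-correlation coefficient
# rho(n) = F2_DCCA(n) / (F_DFA_x(n) * F_DFA_y(n)) at a single scale n.
import numpy as np

def _detrended_residuals(profile, n):
    """Residuals of linear fits in consecutive non-overlapping boxes of size n."""
    n_boxes = len(profile) // n
    t = np.arange(n)
    res = []
    for i in range(n_boxes):
        seg = profile[i * n:(i + 1) * n]
        coef = np.polyfit(t, seg, 1)
        res.append(seg - np.polyval(coef, t))
    return np.concatenate(res)

def dcca_rho(x, y, n):
    px, py = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())    # integrated profiles
    rx, ry = _detrended_residuals(px, n), _detrended_residuals(py, n)
    f2_dcca = np.mean(rx * ry)                                   # detrended covariance
    f_x, f_y = np.sqrt(np.mean(rx ** 2)), np.sqrt(np.mean(ry ** 2))
    return f2_dcca / (f_x * f_y)

# Synthetic, positively coupled series (illustrative only).
rng = np.random.default_rng(42)
common = rng.normal(size=2000)
x = common + 0.5 * rng.normal(size=2000)
y = common + 0.5 * rng.normal(size=2000)
print(round(dcca_rho(x, y, n=30), 2))    # expected: clearly positive
```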

  1. Temporal Organization of Sound Information in Auditory Memory

    OpenAIRE

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed ...

  2. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound......-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound...... from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  3. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound

  4. Sound Symbolism in Infancy: Evidence for Sound-Shape Cross-Modal Correspondences in 4-Month-Olds

    Science.gov (United States)

    Ozturk, Ozge; Krehm, Madelaine; Vouloumanos, Athena

    2013-01-01

    Perceptual experiences in one modality are often dependent on activity from other sensory modalities. These cross-modal correspondences are also evident in language. Adults and toddlers spontaneously and consistently map particular words (e.g., "kiki") to particular shapes (e.g., angular shapes). However, the origins of these systematic mappings…

  5. Physically based sound synthesis and control of jumping sounds on an elastic trampoline

    DEFF Research Database (Denmark)

    Turchet, Luca; Pugliese, Roberto; Takala, Tapio

    2013-01-01

    This paper describes a system to interactively sonify the foot-floor contacts resulting from jumping on an elastic trampoline. The sonification was achieved by means of a synthesis engine based on physical models reproducing the sounds of jumping on several surface materials. The engine was controlled in real-time by processing the signal captured by a contact microphone which was attached to the membrane of the trampoline in order to detect each jump. A user study was conducted to evaluate the quality of the interactive sonification. Results proved the success of the proposed algorithms...

  6. Metrics for Polyphonic Sound Event Detection

    Directory of Open Access Journals (Sweden)

    Annamaria Mesaros

    2016-05-01

    Full Text Available This paper presents and discusses various metrics proposed for evaluation of polyphonic sound event detection systems used in realistic situations where there are typically multiple sound sources active simultaneously. The system output in this case contains overlapping events, marked as multiple sounds detected as being active at the same time. The polyphonic system output requires a suitable procedure for evaluation against a reference. Metrics from neighboring fields such as speech recognition and speaker diarization can be used, but they need to be partially redefined to deal with the overlapping events. We present a review of the most common metrics in the field and the way they are adapted and interpreted in the polyphonic case. We discuss segment-based and event-based definitions of each metric and explain the consequences of instance-based and class-based averaging using a case study. In parallel, we provide a toolbox containing implementations of presented metrics.
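
    In the segment-based formulation discussed above, the reference annotation and the system output are compared in fixed-length segments, and each segment contributes per-class counts of true positives, false positives and false negatives. A minimal sketch of that book-keeping follows; the event lists are illustrative, and for real evaluations the toolbox released with the paper implements these metrics in full.

```python
# Minimal sketch: segment-based F-score for polyphonic sound event detection.
# Events are (onset, offset, label) tuples; activity is compared per 1 s segment.
def segment_activity(events, n_segments, seg_len=1.0):
    active = [set() for _ in range(n_segments)]
    for onset, offset, label in events:
        first, last = int(onset // seg_len), int((offset - 1e-9) // seg_len)
        for k in range(first, min(last, n_segments - 1) + 1):
            active[k].add(label)
    return active

def segment_f_score(reference, estimated, n_segments):
    ref, est = (segment_activity(e, n_segments) for e in (reference, estimated))
    tp = sum(len(r & e) for r, e in zip(ref, est))
    fp = sum(len(e - r) for r, e in zip(ref, est))
    fn = sum(len(r - e) for r, e in zip(ref, est))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Illustrative annotations: overlapping "speech" and "car" events over 4 seconds.
reference = [(0.0, 2.0, "speech"), (1.0, 4.0, "car")]
estimated = [(0.0, 1.0, "speech"), (1.0, 4.0, "car"), (3.0, 4.0, "bird")]
print(round(segment_f_score(reference, estimated, n_segments=4), 2))  # -> 0.8
```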

  7. Effects of interaural level differences on the externalization of sound

    DEFF Research Database (Denmark)

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2012-01-01

    Distant sound sources in our environment are perceived as externalized and are thus properly localized in both direction and distance. This is due to the acoustic filtering by the head, torso, and external ears, which provides frequency-dependent shaping of binaural cues such as interaural level...... differences (ILDs) and interaural time differences (ITDs). In rooms, the sound reaching the two ears is further modified by reverberant energy, which leads to increased fluctuations in short-term ILDs and ITDs. In the present study, the effect of ILD fluctuations on the externalization of sound......, for sounds that contain frequencies above about 1 kHz the ILD fluctuations were found to be an essential cue for externalization....

  8. Study of water flowrate using time transient and cross-correlation techniques with 82Br radiotracer

    International Nuclear Information System (INIS)

    Salgado, William L.; Brandao, Luiz E.B.

    2013-01-01

    This paper aims to determine the water flowrate using Time Transient and Cross-Correlation techniques. The detection system uses two NaI(Tl) detectors adequately positioned on the outside of the pipe and a gamma-ray source (82Br radiotracer). The water flowrate measurements using the Time Transient and Cross-Correlation techniques were compared to invasive conventional measurements from a flowmeter previously installed in the pipeline. Discrepancies between the Time Transient and Cross-Correlation flowrate values and the conventional ones were found to be less than 3%. (author)
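
    The cross-correlation technique referred to above estimates the tracer transit time between the two detectors from the lag of the peak of their cross-correlation; combined with the detector spacing and the pipe cross-section, this gives the volumetric flowrate. A schematic sketch on synthetic count-rate signals; the geometry and numbers are illustrative, not the experiment's.

```python
# Schematic sketch: volumetric flowrate from the cross-correlation transit time
# between two gamma detectors mounted a known distance apart along the pipe.
import numpy as np

fs = 100.0                       # sampling rate of the count-rate signals [Hz]
distance = 0.50                  # detector spacing along the pipe [m] (illustrative)
pipe_area = np.pi * 0.05 ** 2    # cross-section of an assumed 10 cm bore pipe [m^2]

t = np.arange(0, 30, 1 / fs)
true_delay = 2.0                 # tracer transit time between detectors [s]
rng = np.random.default_rng(0)
det1 = np.exp(-((t - 10.0) / 1.5) ** 2) + 0.05 * rng.normal(size=t.size)
det2 = np.exp(-((t - 10.0 - true_delay) / 1.6) ** 2) + 0.05 * rng.normal(size=t.size)

xcorr = np.correlate(det2 - det2.mean(), det1 - det1.mean(), mode="full")
delay = (np.argmax(xcorr) - (t.size - 1)) / fs     # transit time [s]

velocity = distance / delay                        # mean flow velocity [m/s]
flowrate = velocity * pipe_area * 3600             # volumetric flowrate [m^3/h]
print(f"transit time {delay:.2f} s, flowrate {flowrate:.2f} m^3/h")
```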

  9. Artificial neural networks for breathing and snoring episode detection in sleep sounds

    International Nuclear Information System (INIS)

    Emoto, Takahiro; Akutagawa, Masatake; Kinouchi, Yohsuke; Abeyratne, Udantha R; Chen, Yongjian; Kawata, Ikuji

    2012-01-01

    Obstructive sleep apnea (OSA) is a serious disorder characterized by intermittent events of upper airway collapse during sleep. Snoring is the most common nocturnal symptom of OSA. Almost all OSA patients snore, but not all snorers have the disease. Recently, researchers have attempted to develop automated snore analysis technology for the purpose of OSA diagnosis. These technologies commonly require, as the first step, the automated identification of snore/breathing episodes (SBE) in sleep sound recordings. Snore intensity may occupy a wide dynamic range (>95 dB) spanning from the barely audible to loud sounds. Low-intensity SBE sounds are sometimes seen buried within the background noise floor, even in high-fidelity sound recordings made within a sleep laboratory. The complexity of SBE sounds makes it a challenging task to develop automated snore segmentation algorithms, especially in the presence of background noise. In this paper, we propose a fundamentally novel approach based on artificial neural network (ANN) technology to detect SBEs. Working on clinical data, we show that the proposed method can detect SBE at a sensitivity and specificity exceeding 0.892 and 0.874 respectively, even when the signal is completely buried in background noise (SNR <0 dB). We compare the performance of the proposed technology with those of the existing methods (short-term energy, zero-crossing rates) and illustrate that the proposed method vastly outperforms conventional techniques. (paper)
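
    The conventional techniques against which the proposed ANN detector is compared, short-term energy and zero-crossing rate, are simple frame-wise features. A minimal sketch of computing them; the frame length and the synthetic audio are placeholders, not the clinical recordings.

```python
# Minimal sketch: frame-wise short-term energy and zero-crossing rate, the
# conventional features used for snore/breathing episode detection.
import numpy as np

def frame_features(x, frame_len=1024, hop=512):
    energies, zcrs = [], []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len]
        energies.append(np.mean(frame ** 2))                        # short-term energy
        zcrs.append(np.mean(np.abs(np.diff(np.sign(frame))) > 0))   # zero-crossing rate
    return np.array(energies), np.array(zcrs)

# Placeholder audio: a low-frequency "breathing" burst embedded in background noise.
fs = 8000
t = np.arange(0, 3, 1 / fs)
x = 0.02 * np.random.default_rng(0).normal(size=t.size)
x += np.sin(2 * np.pi * 150 * t) * ((t > 1.0) & (t < 1.8))

energy, zcr = frame_features(x)
print("frames above the median energy:", int(np.sum(energy > np.median(energy))))
```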

  10. A data-assimilative ocean forecasting system for the Prince William Sound and an evaluation of its performance during Sound Predictions 2009

    Science.gov (United States)

    Farrara, John D.; Chao, Yi; Li, Zhijin; Wang, Xiaochun; Jin, Xin; Zhang, Hongchun; Li, Peggy; Vu, Quoc; Olsson, Peter Q.; Schoch, G. Carl; Halverson, Mark; Moline, Mark A.; Ohlmann, Carter; Johnson, Mark; McWilliams, James C.; Colas, Francois A.

    2013-07-01

    The development and implementation of a three-dimensional ocean modeling system for the Prince William Sound (PWS) is described. The system consists of a regional ocean model component (ROMS) forced by output from a regional atmospheric model component (the Weather Research and Forecasting Model, WRF). The ROMS ocean model component has a horizontal resolution of 1 km within PWS and utilizes a recently-developed multi-scale 3DVAR data assimilation methodology along with freshwater runoff from land obtained via real-time execution of a digital elevation model. During the Sound Predictions Field Experiment (July 19-August 3, 2009) the system was run in real-time to support operations and incorporated all available real-time streams of data. Nowcasts were produced every 6 h and a 48-h forecast was performed once a day. In addition, a sixteen-member ensemble of forecasts was executed on most days. All results were published at a web portal (http://ourocean.jpl.nasa.gov/PWS) in real time to support decision making. The performance of the system during Sound Predictions 2009 is evaluated. The ROMS results are first compared with the assimilated data as a consistency check. RMS differences of about 0.7°C were found between the ROMS temperatures and the observed vertical profiles of temperature that are assimilated. The ROMS salinities show greater discrepancies, tending to be too salty near the surface. The overall circulation patterns observed throughout the Sound are qualitatively reproduced, including the following evolution in time. During the first week of the experiment, the weather was quite stormy with strong southeasterly winds. This resulted in strong north to northwestward surface flow in much of the central PWS. Both the observed drifter trajectories and the ROMS nowcasts showed strong surface inflow into the Sound through the Hinchinbrook Entrance and strong generally northward to northwestward flow in the central Sound that was exiting through the Knight

  11. Sparse representation of Gravitational Sound

    Science.gov (United States)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

    Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local sparsity measure is shown to render meaningful information about the variation of a signal over time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to disseminate Gravitational Sound in the form of a ring tone.

  12. Combined multibeam and LIDAR bathymetry data from eastern Long Island Sound and westernmost Block Island Sound-A regional perspective

    Science.gov (United States)

    Poppe, L.J.; Danforth, W.W.; McMullen, K.Y.; Parker, Castle E.; Doran, E.F.

    2011-01-01

    Detailed bathymetric maps of the sea floor in Long Island Sound are of great interest to the Connecticut and New York research and management communities because of this estuary's ecological, recreational, and commercial importance. The completed, geologically interpreted digital terrain models (DTMs), ranging in area from 12 to 293 square kilometers, provide important benthic environmental information, yet many applications require a geographically broader perspective. For example, individual surveys are of limited use for the planning and construction of cross-sound infrastructure, such as cables and pipelines, or for the testing of regional circulation models. To address this need, we integrated 12 multibeam and 2 LIDAR (Light Detection and Ranging) contiguous bathymetric DTMs, produced by the National Oceanic and Atmospheric Administration during charting operations, into one dataset that covers much of eastern Long Island Sound and extends into westernmost Block Island Sound. The new dataset is adjusted to mean lower low water, is gridded to 4-meter resolution, and is provided in UTM Zone 18 NAD83 and geographic WGS84 projections. This resolution is adequate for sea floor-feature and process interpretation but is small enough to be queried and manipulated with standard Geographic Information System programs and to allow for future growth. Natural features visible in the grid include exposed bedrock outcrops, boulder lag deposits of submerged moraines, sand-wave fields, and scour depressions that reflect the strength of the oscillating and asymmetric tidal currents. Bedform asymmetry allows interpretations of net sediment transport. Anthropogenic artifacts visible in the bathymetric data include a dredged channel, shipwrecks, dredge spoils, mooring anchors, prop-scour depressions, buried cables, and bridge footings. Together the merged data reveal a larger, more continuous perspective of bathymetric topography than previously available, providing a fundamental

  13. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

    Few studies have dealt with students’ preconceptions of sounds. The current research employs Reiner et al. (2000) substance schema to reveal new insights about students’ difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students’ thinking, but also examines how students use the schema’s properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more global coherent consistencies among the properties that the students use to explain the sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students’ conceptions of sound do not seem to fit Reiner et al.’s schema in all respects. Our results also indicate that students’ conceptualization of sound lack internal consistency. Analyzing our results with respect to local and global coherence, we found students’ conception of sound is close to diSessa’s “loosely connected, fragmented collection of ideas.” The notion that sound is perceived only as a “sort of a material,” we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion concerning the implications of the results for instruction.

  14. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

    The question whether there is a natural connection between sound and meaning or if they are related only by convention has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming more and more clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441 For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.

  15. Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words

    Science.gov (United States)

    Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard

    2016-01-01

    Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…

  16. Temporal Organization of Sound Information in Auditory Memory

    Directory of Open Access Journals (Sweden)

    Kun Song

    2017-06-01

    Full Text Available Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporal reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation.

  17. Temporal Organization of Sound Information in Auditory Memory.

    Science.gov (United States)

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory-transferring study, combining a previously developed unsupervised white noise memory paradigm with a reversed-sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally time-reversed version on various temporal scales in seven experiments. We demonstrate a U-shaped memory-transferring pattern with the minimum value around a temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulated temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured into discrete temporal chunks in long-term auditory memory representation.

  18. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and more varied forms of modulation: modulation was shorter-lived, and the effects of sound-by-sound electrical stimulation varied between different acoustic stimuli, including different male calls, suggesting that modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  19. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions in which integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.
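
    A rough sketch of the kind of manipulation described for Exp. 1b, in which the amplitude envelope of the spoken word is imposed on the background sound so that the two are heard as integral (Python with NumPy/SciPy is assumed; the function name, the 30 Hz envelope cutoff and the synthetic signals are illustrative assumptions, not the study's materials):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def impose_envelope(word, background, fs, cutoff_hz=30.0):
    """Modulate 'background' by the smoothed amplitude envelope of 'word' (same fs)."""
    n = min(len(word), len(background))
    word, background = word[:n], background[:n]
    envelope = np.abs(hilbert(word))                    # instantaneous amplitude of the word
    b, a = butter(2, cutoff_hz / (fs / 2), btype='low')
    envelope = filtfilt(b, a, envelope)                 # smooth the envelope
    envelope = np.clip(envelope, 0.0, None) / (envelope.max() + 1e-12)
    modulated = background * envelope
    return modulated / (np.max(np.abs(modulated)) + 1e-12)

# Stand-ins for a recorded word and a background sound (white noise here)
fs = 16000
word = np.random.randn(fs)
background = np.random.randn(fs)
stimulus = word + 0.5 * impose_envelope(word, background, fs)
```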

  20. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

    Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking, with several examples shown. Various aspects of musical sound spectra such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis, "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment", are covered. Methods for timbre morphing are given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.
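
    As an illustration of two of the spectral descriptors mentioned above, the following sketch computes a frame-wise spectral centroid and spectral flux from an STFT magnitude (Python with NumPy is assumed; frame length, hop size and the test signal are arbitrary, and the book's own definitions may differ in normalization details).

```python
import numpy as np

def spectral_centroid_and_flux(x, fs, n_fft=1024, hop=512):
    """Frame-wise spectral centroid (Hz) and spectral flux of a mono signal."""
    window = np.hanning(n_fft)
    frames = np.array([x[i:i + n_fft] * window
                       for i in range(0, len(x) - n_fft + 1, hop)])
    mags = np.abs(np.fft.rfft(frames, axis=1))          # STFT magnitude, one row per frame
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    centroid = (mags * freqs).sum(axis=1) / (mags.sum(axis=1) + 1e-12)
    flux = np.sqrt((np.diff(mags, axis=0) ** 2).sum(axis=1))  # frame-to-frame spectral change
    return centroid, flux

# Example: a 440 Hz tone with a weaker octave partial
fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
centroid, flux = spectral_centroid_and_flux(tone, fs)
```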

  1. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to discuss analytically spatial aspects of recorded sound William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  2. Crossing rule for a PT-symmetric two-level time-periodic system

    International Nuclear Information System (INIS)

    Moiseyev, Nimrod

    2011-01-01

    For a two-level system in a time-periodic field, we show that in the non-Hermitian PT case the levels that cross belong to two quasistationary states with the same dynamical symmetry property. At the field parameters where two levels with the same dynamical symmetry cross, the corresponding quasienergy states coalesce and a self-orthogonal state is obtained. This situation is very different from the Hermitian case, where a crossing of two quasienergy levels happens only when the corresponding two quasistationary states have different dynamical symmetry properties and, unlike the situation in the non-Hermitian case, the spectrum remains complete also when the two levels cross.

  3. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

    consumer videos in conjunction with user studies. We model the soundtrack of each video, regardless of its original duration, as a fixed-sized clip-level summary feature. For each concept, an SVM-based classifier is trained according to three distance measures (Kullback-Leibler, Bhattacharyya, and Mahalanobis distance). Detecting the time of occurrence of a local object (for instance, a cheering sound) embedded in a longer soundtrack is useful and important for applications such as search and retrieval in consumer video archives. We finally present a Markov-model based clustering algorithm able to identify and segment consistent sets of temporal frames into regions associated with different ground-truth labels, and at the same time to exclude a set of uninformative frames shared in common from all clips. The labels are provided at the clip level, so this refinement of the time axis represents a variant of Multiple-Instance Learning (MIL). Quantitative evaluation shows that the performance of our proposed approaches tested on the 60h personal audio archives or 1900 YouTube video clips is significantly better than existing algorithms for detecting these useful concepts in real-world personal audio recordings.
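
    The clip-level classification sketched above compares Gaussian summaries of soundtrack features under several probabilistic distances. As a hedged illustration of one of them, the Bhattacharyya distance between two diagonal-covariance Gaussian clip models can be computed as follows (Python with NumPy is assumed; the diagonal-covariance simplification, the feature dimensionality and the random data are assumptions, not the thesis's actual features or kernel).

```python
import numpy as np

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two diagonal-covariance Gaussian clip models."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    var = 0.5 * (var1 + var2)                            # average (diagonal) covariance
    mean_term = 0.125 * np.sum((mu1 - mu2) ** 2 / var)
    cov_term = 0.5 * np.sum(np.log(var / np.sqrt(var1 * var2)))
    return mean_term + cov_term

# Example: two clips summarized by the per-dimension mean/variance of frame features
rng = np.random.default_rng(0)
clip_a = rng.normal(size=(200, 13))                      # 200 frames, 13-dim features
clip_b = rng.normal(loc=0.5, size=(200, 13))
d = bhattacharyya_gaussian(clip_a.mean(0), clip_a.var(0), clip_b.mean(0), clip_b.var(0))
```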

  4. Multiple target sound quality balance for hybrid electric powertrain noise

    Science.gov (United States)

    Mosquera-Sánchez, J. A.; Sarrazin, M.; Janssens, K.; de Oliveira, L. P. R.; Desmet, W.

    2018-01-01

    The integration of the electric motor into the powertrain in hybrid electric vehicles (HEVs) presents acoustic stimuli that elicit new perceptions. The large number of spectral components, as well as the wider bandwidth of this sort of noise, pose new challenges to current noise, vibration and harshness (NVH) approaches. This paper presents a framework for enhancing the sound quality (SQ) of the hybrid electric powertrain noise perceived inside the passenger compartment. Compared with current active sound quality control (ASQC) schemes, where the SQ improvement is just an effect of the control actions, the proposed technique features an optimization stage, which enables the NVH specialist to actively implement the amplitude balance of the tones that best fits the auditory expectations. Since Loudness, Roughness, Sharpness and Tonality are the most relevant SQ metrics for interior HEV noise, they are used as performance metrics in the concurrent optimization analysis, which, eventually, drives the control design method. Thus, multichannel active sound profiling systems that feature cross-channel compensation schemes are guided by the multi-objective optimization stage, by means of optimal sets of amplitude gain factors that can be implemented at each single sensor location, while minimizing cross-channel effects that can either degrade the original SQ condition, or even hinder the implementation of independent SQ targets. The proposed framework is verified experimentally, with realistic stationary hybrid electric powertrain noise, showing SQ enhancement for multiple locations within a scaled vehicle mock-up. The results show total success rates in excess of 90%, which indicate that the proposed method is promising, not only for the improvement of the SQ of HEV noise, but also for a variety of periodic disturbances with similar features.

  5. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetical and practical possibilities for sounds (sound design in video games and interactive applications. Outlines the key features of the game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in study of game audio, as well as identifies significant aesthetic differences between film sounds and sounds in video game projects. It is an attempt to determine the techniques of art analysis for the approaches in study of video games including aesthetics of their sounds. The article offers a range of research methods, considering the video game scoring as a contemporary creative practice.

  6. Delimbing and Cross-cutting of Coniferous Trees–Time Consumption, Work Productivity and Performance

    Directory of Open Access Journals (Sweden)

    Arcadie Ciubotaru

    2018-04-01

    Full Text Available This research established the time consumption, work time structure, and productivity for primary processing in felling areas of coniferous trees felled with a chainsaw. Delimbing and partial cross-cutting were taken into consideration. The research was conducted in a mixed spruce and fir stand situated in the Carpathian Mountains. The team of workers consisted of a chainsaw operator and an assistant with over 10 years of experience. The results indicated a total time of 536.32 s·m−3 (1145.26 s·tree−1), a work performance (including delays) of 6.716 m3·h−1 (3.14 tree·h−1), and a work productivity (without delays) of 35.459 m3·h−1 (16.58 tree·h−1). The chainsaw productivity during tree cross-cutting was 82.29 cm2·s−1. Delimbing accounted for 96.18% of the real work time, while cross-cutting accounted for 3.82%. The time consumption for delimbing and cross-cutting, as well as the work productivity and performance in the primary processing of coniferous trees in the felling area, were influenced by the breast height diameter, stem length, and tree volume, while the chainsaw productivity was influenced by the diameter of the cross-cut sections. The relationships between the aforementioned dependent and independent variables were determined by simple and multiple linear regression equations.

  7. An analysis of collegiate band directors' exposure to sound pressure levels

    Science.gov (United States)

    Roebuck, Nikole Moore

    Noise-induced hearing loss (NIHL) is a significant but unfortunately common occupational hazard. The purpose of the current study was to measure the magnitude of sound pressure levels generated within a collegiate band room and to determine whether those sound pressure levels exceed the policy standards and recommendations of the Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH). In addition, reverberation times were measured and analyzed in order to determine the appropriateness of acoustical conditions for the band rehearsal environment. Sound pressure measurements were taken from the rehearsals of seven collegiate marching bands. Single-sample t tests were conducted to compare the sound pressure levels of all bands to the noise exposure standards of OSHA and NIOSH. Multiple regression analyses were conducted in order to determine the effect of the band room's conditions on the sound pressure levels and reverberation times. Time-weighted averages (TWA), noise percentage doses, and peak levels were also collected. The mean Leq for all band directors was 90.5 dBA. The total accumulated noise percentage dose for all band directors was 77.6% of the maximum allowable daily noise dose under the OSHA standard, and the total calculated TWA corresponded to 88.2% of the maximum allowable daily noise dose under the OSHA standard. Under the NIOSH standard, the total accumulated noise percentage dose for all band directors was 152.1% of the maximum allowable daily noise dose, and the total calculated TWA was 93 dBA. Multiple regression analysis revealed that the room volume, the level of acoustical treatment, and the mean room reverberation time predicted 80% of the variance in sound pressure levels in this study.
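
    The dose and TWA figures above follow the standard accumulation formulas used in occupational noise assessment. A hedged sketch (Python with NumPy is assumed; the OSHA 90 dBA criterion with a 5 dB exchange rate and the NIOSH 85 dBA criterion with a 3 dB exchange rate are the commonly cited parameters, and the exposure segment in the example is illustrative, not the study's measurements):

```python
import numpy as np

def noise_dose(levels_dba, durations_h, criterion_db, exchange_db):
    """Percent noise dose for a set of exposure segments (level in dBA, duration in hours)."""
    levels = np.asarray(levels_dba, dtype=float)
    durations = np.asarray(durations_h, dtype=float)
    allowed_h = 8.0 / 2.0 ** ((levels - criterion_db) / exchange_db)  # permissible time at each level
    return 100.0 * np.sum(durations / allowed_h)

def dose_to_twa(dose_percent, criterion_db, exchange_db):
    """8-hour time-weighted average (dBA) equivalent to a given percent dose."""
    return criterion_db + (exchange_db / np.log10(2.0)) * np.log10(dose_percent / 100.0)

# Illustrative example: 2.5 h of rehearsal at 90.5 dBA and a quiet rest of the day
osha_dose = noise_dose([90.5], [2.5], criterion_db=90.0, exchange_db=5.0)
niosh_dose = noise_dose([90.5], [2.5], criterion_db=85.0, exchange_db=3.0)
print(osha_dose, dose_to_twa(osha_dose, 90.0, 5.0))
print(niosh_dose, dose_to_twa(niosh_dose, 85.0, 3.0))
```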

  8. Ultrasound sounding in air by fast-moving receiver

    Science.gov (United States)

    Sukhanov, D.; Erzakova, N.

    2018-05-01

    A method of ultrasound imaging in air for a fast-moving receiver is presented. The case considered is that in which the speed of movement of the receiver cannot be neglected with respect to the speed of sound. In this case the Doppler effect is significant, making matched filtering of the backscattered signal difficult. The proposed method does not use a continuous repetitive noise-sounding signal. A generalized approach applies spatial matched filtering in the time domain to recover the ultrasonic tomographic images.

  9. When Distance Matters: Perceptual Bias and Behavioral Response for Approaching Sounds in Peripersonal and Extrapersonal Space

    NARCIS (Netherlands)

    Camponogara, I.; Komeilipoor, N.; Cesari, P.

    2015-01-01

    Studies on sound perception show a tendency to overestimate the distance of an approaching sound source, leading to a faster reaction time compared to a receding sound source. Nevertheless, it is unclear whether motor preparation and execution change according to the perceived sound direction and

  10. New developments in the surveillance and diagnostics technology for vibration, structure-borne sound and leakage monitoring systems

    International Nuclear Information System (INIS)

    Gloth, Gerrit

    2009-01-01

    Monitoring and diagnostic systems are of major importance for the safe and efficient operation of nuclear power plants. The author describes new developments with respect to vibration monitoring with a functional extension in the time domain for the secondary circuit, the development of a local system for the surveillance of rotating machines, structure-borne sound monitoring with improved event analysis, especially loose-parts location, leakage monitoring with a complete system for humidity measurement, and the development of a common platform for all monitoring and diagnostic systems that allows efficient access for comparison and cross-referencing.

  11. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade an increased attention...... has been paid to, for instance, a category such as ‘sound art’ together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term as musical sound – a recurring example being ‘noise’....

  12. Correspondence between sound propagation in discrete and continuous random media with application to forest acoustics.

    Science.gov (United States)

    Ostashev, Vladimir E; Wilson, D Keith; Muhlestein, Michael B; Attenborough, Keith

    2018-02-01

    Although sound propagation in a forest is important in several applications, there are currently no rigorous yet computationally tractable prediction methods. Due to the complexity of sound scattering in a forest, it is natural to formulate the problem stochastically. In this paper, it is demonstrated that the equations for the statistical moments of the sound field propagating in a forest have the same form as those for sound propagation in a turbulent atmosphere if the scattering properties of the two media are expressed in terms of the differential scattering and total cross sections. Using the existing theories for sound propagation in a turbulent atmosphere, this analogy enables the derivation of several results for predicting forest acoustics. In particular, the second-moment parabolic equation is formulated for the spatial correlation function of the sound field propagating above an impedance ground in a forest with micrometeorology. Effective numerical techniques for solving this equation have been developed in atmospheric acoustics. In another example, formulas are obtained that describe the effect of a forest on the interference between the direct and ground-reflected waves. The formulated correspondence between wave propagation in discrete and continuous random media can also be used in other fields of physics.

  13. The generation of sound by vorticity waves in swirling duct flows

    Science.gov (United States)

    Howe, M. S.; Liu, J. T. C.

    1977-01-01

    Swirling flow in an axisymmetric duct can support vorticity waves propagating parallel to the axis of the duct. When the cross-sectional area of the duct changes a portion of the wave energy is scattered into secondary vorticity and sound waves. Thus the swirling flow in the jet pipe of an aeroengine provides a mechanism whereby disturbances produced by unsteady combustion or turbine blading can be propagated along the pipe and subsequently scattered into aerodynamic sound. In this paper a linearized model of this process is examined for low Mach number swirling flow in a duct of infinite extent. It is shown that the amplitude of the scattered acoustic pressure waves is proportional to the product of the characteristic swirl velocity and the perturbation velocity of the vorticity wave. The sound produced in this way may therefore be of more significance than that generated by vorticity fluctuations in the absence of swirl, for which the acoustic pressure is proportional to the square of the perturbation velocity. The results of the analysis are discussed in relation to the problem of excess jet noise.

  14. Thinking soap But Speaking ‘oaps’. The Sound Preparation Period: Backward Calculation From Utterance to Muscle Innervation

    Directory of Open Access Journals (Sweden)

    Nora Wiedenmann

    2010-04-01

    Full Text Available In this article's model of speech and of speech errors, dyscoordinations, and disorders, the time-course from the muscle innervation impetuses to the utterance of sounds as intended for canonical speech sound sequences is calculated backward. This time-course is shown as the sum of all the known physiological durations of speech sounds and speech gestures that are necessary to produce an utterance. The model introduces two internal clocks, based on positive or negative factors, representing certain physiologically based time-courses during the sound preparation period (Lautvorspann). The use of these internal clocks shows that speech gestures, like other motor activities, work according to a simple serialization principle: under non-default conditions, alterations of the time-courses may cause speech errors of sound serialization, dyscoordinations of sounds as observed during first language acquisition, or speech disorders as pathological cases. These alterations of the time-course are modelled by varying the two internal-clock factors. The calculation of time-courses uses as default values the sound durations of the context-dependent Munich PHONDAT Database of Spoken German (see Appendix 4). As a new, human approach, this calculation agrees mathematically with the approach of Linear Programming / Operations Research. This work gives strong support to the fairly old suspicion (of 1908) of the famous Austrian speech error scientist Meringer [15], namely that one mostly thinks and articulates in a different serialization than is audible from one's uttered sound sequences.

  15. Multifractal detrended cross-correlation analysis on gold, crude oil and foreign exchange rate time series

    Science.gov (United States)

    Pal, Mayukha; Madhusudana Rao, P.; Manimaran, P.

    2014-12-01

    We apply the recently developed multifractal detrended cross-correlation analysis method to investigate the cross-correlation behavior and fractal nature between two non-stationary time series. We analyze the daily return price of gold, West Texas Intermediate and Brent crude oil, foreign exchange rate data, over a period of 18 years. The cross correlation has been measured from the Hurst scaling exponents and the singularity spectrum quantitatively. From the results, the existence of multifractal cross-correlation between all of these time series is found. We also found that the cross correlation between gold and oil prices possess uncorrelated behavior and the remaining bivariate time series possess persistent behavior. It was observed for five bivariate series that the cross-correlation exponents are less than the calculated average generalized Hurst exponents (GHE) for q0 and for one bivariate series the cross-correlation exponent is greater than GHE for all q values.
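
    For readers who want to experiment with the idea, the following is a much-simplified sketch of the detrended cross-correlation fluctuation function underlying the method (Python with NumPy is assumed; this is essentially the q = 2 special case with a first-order polynomial detrend, not the full multifractal q-dependent procedure of the paper, and the scales and test series are illustrative).

```python
import numpy as np

def dcca_fluctuation(x, y, scales, order=1):
    """Detrended cross-correlation fluctuation F(s) between two equal-length series."""
    x = np.cumsum(np.asarray(x, float) - np.mean(x))     # profile of series x
    y = np.cumsum(np.asarray(y, float) - np.mean(y))     # profile of series y
    F = []
    for s in scales:
        covariances = []
        for i in range(len(x) // s):
            idx = slice(i * s, (i + 1) * s)
            t = np.arange(s)
            # detrend each non-overlapping segment with a polynomial fit
            rx = x[idx] - np.polyval(np.polyfit(t, x[idx], order), t)
            ry = y[idx] - np.polyval(np.polyfit(t, y[idx], order), t)
            covariances.append(np.mean(rx * ry))
        F.append(np.sqrt(np.mean(np.abs(covariances))))
    return np.array(F)

# The cross-correlation scaling exponent is the slope of log F(s) versus log s
rng = np.random.default_rng(1)
x, y = rng.normal(size=4096), rng.normal(size=4096)
scales = np.array([16, 32, 64, 128, 256])
F = dcca_fluctuation(x, y, scales)
exponent = np.polyfit(np.log(scales), np.log(F), 1)[0]   # ~0.5 for uncorrelated noise
```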

  16. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between...... dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity...... is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  17. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  18. Classical pooling of cross-section and time series data

    International Nuclear Information System (INIS)

    Nuamah, N.N.N.N.

    2000-04-01

    This paper discusses the classical pooling of cross-section and time series data. The re-expressions of the normal equations of this model are given to indicate the source of the paradox that arises in the estimation of the regression coefficient. (author)

  19. Zero cross over timing with coaxial Ge(Li) detectors

    International Nuclear Information System (INIS)

    El-Ibiary, M.Y.

    1979-07-01

    The performance of zero cross over timing systems of the constant fraction or amplitude rise time compensated type using coaxial Ge(Li) detectors is analyzed, with special attention to conditions that compromise their energy-independence advantage. The outcome is verified against existing experimental results, and the parameters that lead to minimum dispersion, as well as the value of the dispersion to be expected, are given by a series of charts.

  20. Numerical Analysis of Indoor Sound Quality Evaluation Using Finite Element Method

    Directory of Open Access Journals (Sweden)

    Yu-Tuan Chou

    2013-01-01

    Full Text Available Indoor sound field distribution is important in room acoustics, but the field suffers from numerous problems, for example multipath propagation and scattering, as well as sound absorption by furniture and other aspects of décor. Generally, an ideal interior space must have a sound field of clear quality. This provides both the speaker and the listener with a pleasant conversational environment. This investigation uses the Finite Element Method to assess the acoustic distribution based on the indoor space and chamber volume. A fixed sound source at different frequencies is used to simulate the acoustic characteristics of the indoor space. The method takes into account the sound-absorbing materials of furniture and decoration, and thus different sound absorption coefficients and configurations. The preliminary numerical simulation provides a method that can forecast the distribution of sound in an indoor room in complex situations. Consequently, it is possible to arrange interior furnishings and appliances to optimize acoustic distribution and environmental friendliness. Additionally, the analytical results can also be used to calculate the reverberation time and speech intelligibility for a specified indoor space.
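
    The reverberation time mentioned at the end can also be estimated with the classical Sabine relation, independently of the finite element model. A hedged sketch (Python; the room dimensions and absorption coefficients are illustrative assumptions, not values from the study):

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine reverberation time RT60 = 0.161 * V / A, where A is the total absorption
    area (sum of surface area times absorption coefficient)."""
    absorption_area = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption_area

# Example: a 5 m x 4 m x 3 m room with lightly absorbing walls/ceiling and a carpeted floor
walls_and_ceiling = 2 * (5 * 3 + 4 * 3) + 5 * 4       # area in m^2
floor = 5 * 4
rt60 = sabine_rt60(volume_m3=5 * 4 * 3,
                   surfaces=[(walls_and_ceiling, 0.05),   # alpha ~ 0.05 (plaster)
                             (floor, 0.30)])              # alpha ~ 0.30 (carpet)
```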

  1. Active low frequency sound field control in a listening room using CABS (Controlled Acoustic Bass System) will also reduce the sound transmitted to neighbour rooms

    DEFF Research Database (Denmark)

    Nielsen, Sofus Birkedal; Celestinos, Adrian

    2012-01-01

    Sound in rooms and transmission of sound between rooms give the biggest problems at low frequencies. Rooms with rectangular boundaries have strong resonance frequencies and will give large spatial variations in sound pressure level (SPL) in the source room, and an increase in SPL of 20 dB at a wall...... Bass System) is a time-based room correction system for reproduced sound using loudspeakers. The system can remove room modes at low frequencies by actively cancelling the reflection from the rear wall in a normal stereo setup. Measurements in a source room using CABS and in two neighbour rooms have...... shown a reduction in sound transmission of up to 10 dB at resonance frequencies and a reduction in broadband noise of 3–5 dB at frequencies up to 100 Hz. The ideas and understanding of the CABS system will also be given....

  2. Sound insulation between dwellings - Descriptors applied in building regulations in Europe

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Rindel, Jens Holger

    2010-01-01

    Regulatory sound insulation requirements for dwellings have existed since the 1950s in some countries and descriptors for evaluation of sound insulation have existed for nearly as long. However, the descriptors have changed considerably over time, from simple arithmetic averaging of frequency bands...... was carried out of legal sound insulation requirements in 24 countries in Europe. The comparison of requirements for sound insulation between dwellings revealed significant differences in descriptors as well as levels. This paper focuses on descriptors and summarizes the history of descriptors, the problems...... of the present situation and the benefits of consensus concerning descriptors for airborne and impact sound insulation between dwellings. The descriptors suitable for evaluation should be well-defined under practical situations in buildings and be measurable. Measurement results should be reproducible...

  3. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

    and combine theories from several fields. Aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces: Visual, auditory, performative, social, spatial and durational dimensions become......This article is an analysis of two sound art performances that took place June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback...... Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors of this article suggest exploring the events through approaching them as ‘situations’ (Doherty 2009). With this approach it becomes possible to engage...

  4. Determining the speed of sound in the air by sound wave interference

    Science.gov (United States)

    Silva, Abel A.

    2017-07-01

    Mechanical waves propagate through material media. Sound is an example of a mechanical wave. In fluids like air, sound waves propagate through successive longitudinal perturbations of compression and decompression. Audible sound frequencies for human ears range from 20 to 20 000 Hz. In this study, the speed of sound v in air is determined by identifying maxima of interference between two synchronous waves at frequency f. The values of v were corrected to 0 °C. The experimental average value v_exp = 336 ± 4 m s−1 was found, which is 1.5% larger than the reference value. The standard deviation of 4 m s−1 (1.2% of v_exp) is an improved value obtained by use of the central limit theorem. The proposed procedure to determine the speed of sound in air is intended as an academic activity for physics classes in scientific and technological courses at college level.
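
    A minimal numerical sketch of the underlying relation v = f·λ, with the wavelength taken from the spacing of successive interference maxima and a correction to 0 °C (Python with NumPy; the frequency, path-difference readings and room temperature below are illustrative assumptions, not the article's measurements):

```python
import numpy as np

# Successive interference maxima occur where the path difference between the two
# synchronous sources changes by one wavelength, so v = f * lambda.
f = 5000.0                                                   # source frequency in Hz (illustrative)
path_differences = np.array([0.000, 0.069, 0.137, 0.206])   # metres at successive maxima
wavelength = np.mean(np.diff(path_differences))              # ~0.0687 m
v_room = f * wavelength                                      # ~343 m/s at room temperature

# Correct to 0 degrees C using v(T) = v0 * sqrt(1 + T/273.15)
T_celsius = 24.0
v_0C = v_room / np.sqrt(1.0 + T_celsius / 273.15)            # ~329 m/s
```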

  5. Fluid Sounds

    DEFF Research Database (Denmark)

Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970s, developed into various established research and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field, embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation...

  6. Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration

    OpenAIRE

    Park, Saebyul; Ban, Seonghoon; Hong, Dae Ryong; Yeo, Woon Seung

    2013-01-01

    SSN (Sound Surfing Network) is a performance system that provides a new musical experience by incorporating mobile phone-based spatial sound control into collaborative music performance. SSN enables both the performer and the audience to manipulate the spatial distribution of sound using the smartphones of the audience as a distributed speaker system. Proposing a new perspective on the social aspect of music appreciation, SSN will provide a new possibility for mobile music performances in the context of in...

  7. Sound Exposure of Symphony Orchestra Musicians

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Pedersen, Ellen Raben; Juhl, Peter Møller

    2011-01-01

    dBA and their left ear was exposed 4.6 dB more than the right ear. Percussionists were exposed to high sound peaks >115 dBC but less continuous sound exposure was observed in this group. Musicians were exposed up to LAeq8h of 92 dB and a majority of musicians were exposed to sound levels exceeding......Background: Assessment of sound exposure by noise dosimetry can be challenging especially when measuring the exposure of classical orchestra musicians where sound originate from many different instruments. A new measurement method of bilateral sound exposure of classical musicians was developed...... and used to characterize sound exposure of the left and right ear simultaneously in two different symphony orchestras.Objectives: To measure binaural sound exposure of professional classical musicians and to identify possible exposure risk factors of specific musicians.Methods: Sound exposure was measured...

  8. Time-varying and time-invariant dimensions of depression in children and adolescents: Implications for cross-informant agreement.

    Science.gov (United States)

    Cole, David A; Martin, Joan M; Jacquez, Farrah M; Tram, Jane M; Zelkowitz, Rachel; Nick, Elizabeth A; Rights, Jason D

    2017-07-01

    The longitudinal structure of depression in children and adolescents was examined by applying a Trait-State-Occasion structural equation model to 4 waves of self, teacher, peer, and parent reports in 2 age groups (9 to 13 and 13 to 16 years old). Analyses revealed that the depression latent variable consisted of 2 longitudinal factors: a time-invariant dimension that was completely stable over time and a time-varying dimension that was not perfectly stable over time. Different sources of information were differentially sensitive to these 2 dimensions. Among adolescents, self- and parent reports better reflected the time-invariant aspects. For children and adolescents, peer and teacher reports better reflected the time-varying aspects. Relatively high cross-informant agreement emerged for the time-invariant dimension in both children and adolescents. Cross-informant agreement for the time-varying dimension was high for adolescents but very low for children. Implications emerge for theoretical models of depression and for its measurement, especially when attempting to predict changes in depression in the context of longitudinal studies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    Science.gov (United States)

    Wolf, Gail Marie

    2016-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…

  10. Cross-training workers in dual resource constrained systems with heterogeneous processing times

    NARCIS (Netherlands)

    Bokhorst, J. A. C.; Gaalman, G. J. C.

    2009-01-01

    In this paper, we explore the effect of cross-training workers in Dual Resource Constrained (DRC) systems with machines having different mean processing times. By means of queuing and simulation analysis, we show that the detrimental effects of pooling (cross-training) previously found in single

  11. Development of an Amplifier for Electronic Stethoscope System and Heart Sound Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. J.; Kang, D. K. [Chongju University, Chongju (Korea)

    2001-05-01

    The conventional stethoscope cannot store its stethoscopic sounds. A doctor therefore diagnoses a patient from the instantaneous stethoscopic sounds heard at that moment and cannot recall the state of the patient's stethoscopic sounds at the next examination. This prevents accurate and objective diagnosis. If an electronic stethoscope that can store the stethoscopic sound is developed, auscultation will be greatly improved. This study describes an amplifier for an electronic stethoscope system that can extract the heart sounds of a fetus as well as an adult and allows us to hear and record the sounds. Using the developed stethoscopic amplifier, clean heart sounds of fetus and adult could be heard in noisy environments, such as a consultation room of a university hospital or a laboratory of a university. Surprisingly, the heart sound of a 22-week fetus was heard through the developed electronic stethoscope. Pitch detection experiments using the detected heart sounds showed that the signal exhibits distinct periodicity. It can be expected that the developed electronic stethoscope can substitute for conventional stethoscopes and that, if a proper analysis method for the stethoscopic signal is developed, a good electronic stethoscope system can be produced. (author). 17 refs., 6 figs.
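
    The periodicity reported in the pitch-detection experiments can be illustrated with a simple autocorrelation estimate of the beat-to-beat period (Python with NumPy is assumed; the sampling rate, search range and the crude synthetic pulse train are assumptions, not the stethoscope signals analysed in the paper):

```python
import numpy as np

def dominant_period(x, fs, min_s=0.4, max_s=1.5):
    """Estimate the beat-to-beat period of a heart-sound recording from its autocorrelation."""
    x = np.asarray(x, float) - np.mean(x)
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]     # autocorrelation at non-negative lags
    lo, hi = int(min_s * fs), int(max_s * fs)             # plausible range of cardiac periods
    lag = lo + int(np.argmax(ac[lo:hi]))
    return lag / fs                                        # seconds per beat

# Example: a crude periodic pulse train at 72 beats per minute (1.2 Hz)
fs = 1000
t = np.arange(0, 10, 1 / fs)
pulses = (np.sin(2 * np.pi * 1.2 * t) > 0.99).astype(float)
period_s = dominant_period(pulses, fs)                     # ~0.83 s, i.e. ~72 bpm
```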

  12. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

    The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using...... both physical and digital models at three distinct scales. The results suggest hyperboloid geometry, while difficult to fabricate, facilitates sound scattering....

  13. Classification of lung sounds using higher-order statistics: A divide-and-conquer approach.

    Science.gov (United States)

    Naves, Raphael; Barbosa, Bruno H G; Ferreira, Danton D

    2016-06-01

    Lung sound auscultation is one of the most commonly used methods to evaluate respiratory diseases. However, the effectiveness of this method depends on the physician's training. If the physician does not have the proper training, he/she will be unable to distinguish between normal and abnormal sounds generated by the human body. Thus, the aim of this study was to implement a pattern recognition system to classify lung sounds. We used a dataset composed of five types of lung sounds: normal, coarse crackle, fine crackle, monophonic and polyphonic wheezes. We used higher-order statistics (HOS) to extract features (second-, third- and fourth-order cumulants), Genetic Algorithms (GA) and Fisher's Discriminant Ratio (FDR) to reduce dimensionality, and k-Nearest Neighbors and Naive Bayes classifiers to recognize the lung sound events in a tree-based system. We used the cross-validation procedure to analyze the classifiers' performance and Tukey's Honestly Significant Difference criterion to compare the results. Our results showed that the Genetic Algorithms outperformed the Fisher's Discriminant Ratio for feature selection. Moreover, each lung class had a different signature pattern according to its cumulants, showing that HOS is a promising feature extraction tool for lung sounds. In addition, the proposed divide-and-conquer approach can accurately classify different types of lung sounds. The classification accuracy obtained by the best tree-based classifier was 98.1% on training data and 94.6% on validation data. The proposed approach achieved good results even using only one feature extraction tool (higher-order statistics). Additionally, the implementation of the proposed classifier in an embedded system is feasible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
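
    As a hedged illustration of the feature-extraction step, the zero-lag estimates of the second-, third- and fourth-order cumulants of a signal frame can be computed as below (Python with NumPy is assumed; the published method may use lag-dependent cumulants and real auscultation frames rather than this synthetic example).

```python
import numpy as np

def hos_cumulants(frame):
    """Zero-lag estimates of the 2nd-, 3rd- and 4th-order cumulants of a zero-mean frame."""
    x = np.asarray(frame, float)
    x = x - x.mean()
    c2 = np.mean(x ** 2)                    # variance
    c3 = np.mean(x ** 3)                    # related to skewness
    c4 = np.mean(x ** 4) - 3.0 * c2 ** 2    # excess-kurtosis term (zero for a Gaussian)
    return c2, c3, c4

# Example: a heavier-tailed-than-Gaussian frame gives a clearly positive c4
rng = np.random.default_rng(2)
synthetic_frame = rng.laplace(size=4096)
features = hos_cumulants(synthetic_frame)
```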

  14. Sound propagation in narrow tubes including effects of viscothermal and turbulent damping with application to charge air coolers

    Science.gov (United States)

    Knutsson, Magnus; Åbom, Mats

    2009-02-01

    Charge air coolers (CACs) are used on turbocharged internal combustion engines to enhance the overall gas-exchange performance. The cooling of the charged air results in higher density and thus volumetric efficiency. It is also important for petrol engines that the knock margin increases with reduced charge air temperature. A property that is still not very well investigated is the sound transmission through a CAC. The losses, due to viscous and thermal boundary layers as well as turbulence, in the narrow cooling tubes result in frequency dependent attenuation of the transmitted sound that is significant and dependent on the flow conditions. Normally, the cross-sections of the cooling tubes are neither circular nor rectangular, which is why no analytical solution accounting for a superimposed mean flow exists. The cross-dimensions of the connecting tanks, located on each side of the cooling tubes, are large compared to the diameters of the inlet and outlet ducts. Three-dimensional effects will therefore be important at frequencies significantly lower than the cut-on frequencies of the inlet/outlet ducts. In this study the two-dimensional finite element solution scheme for sound propagation in narrow tubes, including the effect of viscous and thermal boundary layers, originally derived by Astley and Cummings [Wave propagation in catalytic converters: Formulation of the problem and finite element scheme, Journal of Sound and Vibration 188 (5) (1995) 635-657] is used to extract two-ports to represent the cooling tubes. The approximate solutions for sound propagation, accounting for viscothermal and turbulent boundary layers derived by Dokumaci [Sound transmission in narrow pipes with superimposed uniform mean flow and acoustic modelling of automobile catalytic converters, Journal of Sound and Vibration 182 (5) (1995) 799-808] and Howe [The damping of sound by wall turbulent shear layers, Journal of the Acoustical Society of America 98 (3) (1995) 1723-1730], are

  15. Detecting PM2.5's Correlations between Neighboring Cities Using a Time-Lagged Cross-Correlation Coefficient.

    Science.gov (United States)

    Wang, Fang; Wang, Lin; Chen, Yuming

    2017-08-31

    In order to investigate the time-dependent cross-correlations of fine particulate matter (PM2.5) series among neighboring cities in Northern China, in this paper we propose a new cross-correlation coefficient, the time-lagged q-L dependent height cross-correlation coefficient (denoted by ρ_q(τ, L)), which incorporates the time-lag factor and the fluctuation amplitude information into the analogous height cross-correlation analysis coefficient. Numerical tests are performed to illustrate that the newly proposed coefficient ρ_q(τ, L) can be used to detect cross-correlations between two series with time lags and to identify different ranges of fluctuations at which two series possess cross-correlations. Applying the new coefficient to analyze the time-dependent cross-correlations of PM2.5 series between Beijing and the three neighboring cities of Tianjin, Zhangjiakou, and Baoding, we find that time lags between the PM2.5 series with larger fluctuations are longer than those between PM2.5 series with smaller fluctuations. Our analysis also shows that cross-correlations between the PM2.5 series of two neighboring cities are significant and that the time lags between two PM2.5 series of neighboring cities are significantly non-zero. These findings provide new scientific support for the view that air pollution in neighboring cities can affect one another not simultaneously but with a time lag.
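
    The proposed coefficient ρ_q(τ, L) is not reproduced here, but the basic idea of scanning time lags for the strongest dependence can be sketched with an ordinary lagged Pearson correlation (Python with NumPy is assumed; this plain correlation ignores the q-dependent fluctuation filtering that distinguishes the new coefficient, and the test series are synthetic).

```python
import numpy as np

def lagged_cross_correlation(x, y, max_lag):
    """Pearson correlation between x(t) and y(t + lag) for lags 0..max_lag."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    corrs = []
    for lag in range(max_lag + 1):
        a = x[:len(x) - lag] if lag else x
        b = y[lag:]
        corrs.append(np.corrcoef(a, b)[0, 1])
    corrs = np.array(corrs)
    return int(np.argmax(corrs)), corrs        # lag of strongest correlation, full profile

# Example: y is x delayed by 3 samples plus noise, so the peak should sit near lag 3
rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = np.roll(x, 3) + 0.1 * rng.normal(size=500)
best_lag, profile = lagged_cross_correlation(x, y, max_lag=10)
```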

  16. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain.

    Science.gov (United States)

    Lu, Kai; Vicario, David S

    2014-10-07

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used in human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.

  17. 77 FR 37318 - Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort...

    Science.gov (United States)

    2012-06-21

    ...-AA00 Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort... Coast Guard will enforce a Safety Zone for the Sound of Independence event in the Santa Rosa Sound, Fort... during the Sound of Independence. During the enforcement period, entry into, transiting or anchoring in...

  18. Sound lateralization test in adolescent blind individuals.

    Science.gov (United States)

    Yabe, Takao; Kaga, Kimitaka

    2005-06-21

    Blind individuals need to compensate for the lack of visual information with other sensory inputs. In particular, auditory inputs are crucial to such individuals. To investigate whether blind individuals localize sound in space better than sighted individuals, we tested the auditory ability of adolescent blind individuals using a sound lateralization method. The interaural time difference discrimination thresholds of blind individuals were statistically significantly shorter than those of blind individuals with residual vision and controls. These findings suggest that blind individuals have better auditory spatial ability than individuals with visual cues; therefore, some perceptual compensation occurred in the former.

  19. Sound Localization Strategies in Three Predators

    DEFF Research Database (Denmark)

    Carr, Catherine E; Christensen-Dalsgaard, Jakob

    2015-01-01

    . Despite the similar organization of their auditory systems, archosaurs and lizards use different strategies for encoding the ITDs that underlie localization of sound in azimuth. Barn owls encode ITD information using a place map, which is composed of neurons serving as labeled lines tuned for preferred......In this paper, we compare some of the neural strategies for sound localization and encoding interaural time differences (ITDs) in three predatory species of Reptilia, alligators, barn owls and geckos. Birds and crocodilians are sister groups among the extant archosaurs, while geckos are lepidosaurs...... spatial locations, while geckos may use a meter strategy or population code composed of broadly sensitive neurons that represent ITD via changes in the firing rate....

  20. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
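
    A hedged sketch of the stimulus-preparation step described above, convolving a mono sound with a left/right head-related impulse response pair (Python with NumPy/SciPy is assumed; the placeholder delay-and-attenuation impulse responses below stand in for the measured non-individual HRTFs used in the study).

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono sound at the position encoded by an HRIR pair (returns a stereo array)."""
    left = fftconvolve(mono, hrir_left, mode='full')
    right = fftconvolve(mono, hrir_right, mode='full')
    stereo = np.stack([left, right], axis=1)
    return stereo / (np.max(np.abs(stereo)) + 1e-12)

# Placeholder impulse responses: a pure delay-and-attenuation pair (not measured HRIRs)
fs = 44100
click = np.zeros(256); click[0] = 1.0       # Delta-like sound
hrir_l = np.zeros(128); hrir_l[0] = 1.0     # nearer ear: earlier and louder
hrir_r = np.zeros(128); hrir_r[30] = 0.6    # farther ear: ~0.7 ms later and softer
binaural_click = spatialize(click, hrir_l, hrir_r)
```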

  1. 12 CFR 308.303 - Filing of safety and soundness compliance plan.

    Science.gov (United States)

    2010-01-01

    ... time within which those steps will be taken. (c) Review of safety and soundness compliance plans... PRACTICE RULES OF PRACTICE AND PROCEDURE Submission and Review of Safety and Soundness Compliance Plans and... compliance plan. (a) Schedule for filing compliance plan—(1) In general. A bank shall file a written safety...

  2. 12 CFR 263.303 - Filing of safety and soundness compliance plan.

    Science.gov (United States)

    2010-01-01

    ... member bank will take to correct the deficiency and the time within which those steps will be taken. (c... FEDERAL RESERVE SYSTEM RULES OF PRACTICE FOR HEARINGS Submission and Review of Safety and Soundness... safety and soundness compliance plan. (a) Schedule for filing compliance plan—(1) In general. A State...

  3. The frequency range of TMJ sounds.

    Science.gov (United States)

    Widmalm, S E; Williams, W J; Djurdjanovic, D; McKay, D C

    2003-04-01

    There are conflicting opinions about the frequency range of temporomandibular joint (TMJ) sounds. Some authors claim that the upper limit is about 650 Hz. The aim was to test the hypothesis that TMJ sounds may contain frequencies well above 650 Hz but that significant amounts of their energy are lost if the vibrations are recorded using contact sensors and/or travel far through the head tissues. Time-frequency distributions of 172 TMJ clickings (three subjects) were compared between recordings with one microphone in the ear canal and a skin contact transducer above the clicking joint and between recordings from two microphones, one in each ear canal. The energy peaks of the clickings recorded with a microphone in the ear canal on the clicking side were often well above 650 Hz and always in a significantly higher area (range 117-1922 Hz, P 375 Hz) or in microphone recordings from the opposite ear canal (range 141-703 Hz). Future studies are required to establish normative frequency range values of TMJ sounds but need methods also capable of recording the high frequency vibrations.

  4. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  5. Constructions complying with tightened Danish sound insulation requirements for new housing

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Hoffmeyer, Dan

    New sound insulation requirements in Denmark in 2008: new Danish Building Regulations with tightened sound insulation requirements were introduced in 2008 (and in 2010 with unchanged acoustic requirements). Compared to the Building Regulations from 1995, the airborne sound insulation requirements … were 2-3 dB stricter and the impact sound insulation requirements 5 dB stricter. The limit values are given using the descriptors R’w and L’n,w as before. For the first time, acoustic requirements for dwellings are not found as figures in the Building Regulations. Instead, it is stated …), Denmark. [2] "Lydisolering mellem boliger – Nybyggeri" (Sound insulation between dwellings – Newbuild). Publication expected in April 2011. The guideline is part of a series of seven new SBi acoustic guidelines. Project leader Birgit Rasmussen. The series shall replace the existing guidelines from 1984...

  6. Context effects on processing widely deviant sounds in newborn infants

    Directory of Open Access Journals (Sweden)

    Gábor Péter Háden

    2013-09-01

    Full Text Available Detecting and orienting towards sounds carrying new information is a crucial feature of the human brain that supports adaptation to the environment. Rare, acoustically widely deviant sounds presented amongst frequent tones elicit large event-related brain potentials (ERPs) in neonates. Here we tested whether these discriminative ERP responses reflect only the activation of fresh afferent neuronal populations (i.e., neuronal circuits not affected by the tones) or whether they also index the processing of contextual mismatch between the rare and the frequent sounds. In two separate experiments, we presented sleeping newborns with 150 different environmental sounds and the same number of white noise bursts. Both sounds served either as deviants in an oddball paradigm with a tone as the frequent standard stimulus (Novel/Noise deviant), or as the standard stimulus with the tone as deviant (Novel/Noise standard), or they were delivered alone with the same timing as the deviants in the oddball condition (Novel/Noise alone). Whereas the noise deviants elicited responses similar to those evoked by the same sound presented alone, the responses elicited by environmental sounds in the corresponding conditions differed morphologically from each other. Thus, whereas the ERP response to the noise sounds can be explained by the different refractory state of stimulus-specific neuronal populations, the ERP response to environmental sounds indicated context-sensitive processing. These results provide evidence for an innate tendency towards context-dependent auditory processing as well as a basis for the different developmental trajectories of processing acoustical deviance and contextual novelty.

  7. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  8. Intelligent Systems Approaches to Product Sound Quality Analysis

    Science.gov (United States)

    Pietila, Glenn M.

    As a product market becomes more competitive, consumers become more discriminating in the way in which they differentiate between engineered products. The consumer often makes a purchasing decision based on the sound emitted from the product during operation by using the sound to judge quality or annoyance. Therefore, in recent years, many sound quality analysis tools have been developed to evaluate the consumer preference as it relates to a product sound and to quantify this preference based on objective measurements. This understanding can be used to direct a product design process in order to help differentiate the product from competitive products or to establish an impression on consumers regarding a product's quality or robustness. The sound quality process is typically a statistical tool that is used to model subjective preference, or merit score, based on objective measurements, or metrics. In this way, new product developments can be evaluated in an objective manner without the laborious process of gathering a sample population of consumers for subjective studies each time. The most common model used today is the Multiple Linear Regression (MLR), although recently non-linear Artificial Neural Network (ANN) approaches are gaining popularity. This dissertation will review publicly available published literature and present additional intelligent systems approaches that can be used to improve on the current sound quality process. The focus of this work is to address shortcomings in the current paired comparison approach to sound quality analysis. This research will propose a framework for an adaptive jury analysis approach as an alternative to the current Bradley-Terry model. The adaptive jury framework uses statistical hypothesis testing to focus on sound pairings that are most interesting and is expected to address some of the restrictions required by the Bradley-Terry model. It will also provide a more amicable framework for an intelligent systems approach
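    As a point of reference for the statistical modelling discussed above, a basic multiple linear regression of jury merit scores on objective metrics can be sketched as below. The metric names and numbers are placeholders for illustration, not data from the dissertation.

```python
# Sketch: multiple linear regression of subjective merit scores on objective sound metrics.
import numpy as np

# Placeholder metrics per recording: loudness (sone), sharpness (acum), roughness (asper)
X = np.array([[24.1, 1.9, 2.3],
              [18.5, 1.4, 1.8],
              [30.2, 2.6, 3.1],
              [21.0, 1.7, 2.0]])
y = np.array([5.2, 7.1, 3.4, 6.0])            # merit scores from a jury study (illustrative)

X1 = np.column_stack([np.ones(len(X)), X])    # add an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None) # ordinary least-squares fit

new_design = np.array([1.0, 22.0, 1.8, 2.1])  # intercept term plus metrics of a new design
print("predicted merit:", new_design @ coef)
```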

  9. How Pleasant Sounds Promote and Annoying Sounds Impede Health: A Cognitive Approach

    Directory of Open Access Journals (Sweden)

    Tjeerd C. Andringa

    2013-04-01

    Full Text Available This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of the perceiver can be understood in terms of core affect and motivation. This conceptual basis allows the formulation of a detailed cognitive model describing how sonic content, related to indicators of safety and danger, either allows full freedom over mind-states or forces the activation of a vigilance function with associated arousal. The model leads to a number of detailed predictions that can be used to provide existing soundscape approaches with a solid cognitive science foundation that may lead to novel approaches to soundscape design. These will take into account that louder sounds typically contribute to distal situational awareness while subtle environmental sounds provide proximal situational awareness. The role of safety indicators, mediated by proximal situational awareness and subtle sounds, should become more important in future soundscape research.

  10. Memory for environmental sounds in sighted, congenitally blind and late blind adults: evidence for cross-modal compensation.

    Science.gov (United States)

    Röder, Brigitte; Rösler, Frank

    2003-10-01

    Several recent reports suggest compensatory performance changes in blind individuals. It has, however, been argued that the lack of visual input leads to impoverished semantic networks resulting in the use of data-driven rather than conceptual encoding strategies on memory tasks. To test this hypothesis, congenitally blind and sighted participants encoded environmental sounds either physically or semantically. In the recognition phase, both conceptually as well as physically distinct and physically distinct but conceptually highly related lures were intermixed with the environmental sounds encountered during study. Participants indicated whether or not they had heard a sound in the study phase. Congenitally blind adults showed elevated memory both after physical and semantic encoding. After physical encoding blind participants had lower false memory rates than sighted participants, whereas the false memory rates of sighted and blind participants did not differ after semantic encoding. In order to address the question if compensatory changes in memory skills are restricted to critical periods during early childhood, late blind adults were tested with the same paradigm. When matched for age, they showed similarly high memory scores as the congenitally blind. These results demonstrate compensatory performance changes in long-term memory functions due to the loss of a sensory system and provide evidence for high adaptive capabilities of the human cognitive system.

  11. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...
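    As a rough illustration of the sonification idea, mapping data values to audible parameters, the toy sketch below converts a numeric series into a sequence of sine tones. It is a generic mapping written for this overview, not the 'LHC sound' project's software.

```python
# Toy sonification: map each data value in [0, 1] to the frequency of a short sine tone.
import numpy as np
from scipy.io import wavfile

fs = 44100
data = np.array([0.1, 0.4, 0.9, 0.3, 0.7])    # any normalized data series (assumed)
f_lo, f_hi, dur = 220.0, 880.0, 0.25          # frequency range and tone duration

tones = []
for v in data:
    f = f_lo + (f_hi - f_lo) * v              # linear value-to-frequency mapping
    t = np.arange(int(fs * dur)) / fs
    tones.append(0.5 * np.sin(2 * np.pi * f * t))

wavfile.write("sonified.wav", fs, (np.concatenate(tones) * 32767).astype(np.int16))
```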

  12. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.

  13. Study on The Effectiveness of Egg Tray and Coir Fibre as A Sound Absorber

    Science.gov (United States)

    Kaamin, Masiri; Farah Atiqah Ahmad, Nor; Ngadiman, Norhayati; Kadir, Aslila Abdul; Razali, Siti Nooraiin Mohd; Mokhtar, Mardiha; Sahat, Suhaila

    2018-03-01

    Sound or noise pollution has become a major issue for the community, especially for those who live in urban areas, and it affects everyday human activity. This excessive noise is mainly caused by machines, traffic and motor vehicles, as well as unwanted sounds coming from outside and even from inside the building, such as loud music. The installation of sound absorption panels is therefore one way to reduce noise pollution inside a building. The selected material must be porous and hollow in order to absorb high-frequency sound. This study was conducted to evaluate the potential of egg trays and coir fibre as a sound absorption panel. Coir fibre has a good absorption coefficient, which makes it suitable as a sound absorption material that can replace traditional synthetic and wooden materials. The pyramid shape of the egg tray provides a large surface for uniform sound reflection. The study used a panel of size 1 m x 1 m with a thickness of 6 mm, consisting of an egg tray layer, a coir fibre layer and a fabric wrapping for aesthetic value. A room reverberation test was carried out to measure the reduction in reverberation time (RT). The results show that the effect is concentrated at low frequencies, from 125 Hz to 1600 Hz; within this range the panel shortens the reverberation time from 5.63 s to 3.60 s. Hence, it can be concluded that the selected materials have potential as a good sound absorption panel. A comparison is made with previous research that used egg trays and kapok as a sound absorption panel.
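    For context, reverberation-time figures such as those above are commonly interpreted through Sabine's relation RT60 ≈ 0.161·V/A, where V is the room volume in m³ and A the equivalent absorption area in m². The sketch below applies this relation with purely illustrative numbers; the RT values reported in the study were measured, not computed this way.

```python
# Sabine estimate of reverberation time (illustrative numbers only).
def rt60_sabine(volume_m3, surfaces):
    """surfaces: iterable of (area_m2, absorption_coefficient) pairs."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

volume = 5.0 * 4.0 * 3.0                      # assumed 60 m^3 test room
bare = [(94.0, 0.05)]                         # hard walls, floor and ceiling
with_panel = [(93.0, 0.05), (1.0, 0.60)]      # 1 m^2 panel with an assumed alpha of 0.6

print(rt60_sabine(volume, bare), rt60_sabine(volume, with_panel))
```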

  14. Study on The Effectiveness of Egg Tray and Coir Fibre as A Sound Absorber

    Directory of Open Access Journals (Sweden)

    Kaamin Masiri

    2018-01-01

    Full Text Available Sound or noise pollution has become a major issue for the community, especially for those who live in urban areas, and it affects everyday human activity. This excessive noise is mainly caused by machines, traffic and motor vehicles, as well as unwanted sounds coming from outside and even from inside the building, such as loud music. The installation of sound absorption panels is therefore one way to reduce noise pollution inside a building. The selected material must be porous and hollow in order to absorb high-frequency sound. This study was conducted to evaluate the potential of egg trays and coir fibre as a sound absorption panel. Coir fibre has a good absorption coefficient, which makes it suitable as a sound absorption material that can replace traditional synthetic and wooden materials. The pyramid shape of the egg tray provides a large surface for uniform sound reflection. The study used a panel of size 1 m x 1 m with a thickness of 6 mm, consisting of an egg tray layer, a coir fibre layer and a fabric wrapping for aesthetic value. A room reverberation test was carried out to measure the reduction in reverberation time (RT). The results show that the effect is concentrated at low frequencies, from 125 Hz to 1600 Hz; within this range the panel shortens the reverberation time from 5.63 s to 3.60 s. Hence, it can be concluded that the selected materials have potential as a good sound absorption panel. A comparison is made with previous research that used egg trays and kapok as a sound absorption panel.

  15. Low-frequency and multiple-bands sound insulation using hollow boxes with membrane-type faces

    Science.gov (United States)

    Yu, Wei-wei; Fan, Li; Ma, Ren-hao; Zhang, Hui; Zhang, Shu-yi

    2018-04-01

    Hollow boxes with their faces made up of elastic membranes are used to block acoustic waves. It is demonstrated that placing a cuboid membrane-type box inside a pipe can effectively insulate acoustic waves even if the box is smaller than the cross-section of the pipe. The sound insulation is achieved within multiple frequency-bands below 500 Hz based on different mechanisms, which originate from the coaction of the cavity, membrane-type faces, and the intervals between the box and pipe walls. Furthermore, by adjusting the structural parameters and establishing an array of boxes, we can achieve better sound insulation at more frequency-bands.

  16. OMNIDIRECTIONAL SOUND SOURCE

    DEFF Research Database (Denmark)

    1996-01-01

    A sound source comprising a loudspeaker (6) and a hollow coupler (4) with an open inlet which communicates with and is closed by the loudspeaker (6) and an open outlet, said coupler (4) comprising rigid walls which cannot respond to the sound pressures produced by the loudspeaker (6). According...

  17. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods to determine the velocity measurements are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  18. Automatic speech recognition (zero crossing method). Automatic recognition of isolated vowels

    International Nuclear Information System (INIS)

    Dupeyrat, Benoit

    1975-01-01

    This note describes a method for recognizing isolated vowels that uses a preprocessing of the vocal signal. The preprocessing extracts the extrema of the vocal signal and the time intervals separating them (the zero-crossing distances of the first derivative of the signal). Vowel recognition uses normalized histograms of these interval values: the program computes a distance between the histogram of the sound to be recognized and model histograms built during a learning phase. The results, processed in real time by a minicomputer, are relatively independent of the speaker, provided the fundamental frequency does not vary too much (i.e. speakers of the same sex). (author) [fr
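    The recognition scheme sketched in this note (time intervals between signal extrema summarized as normalized histograms and compared to stored model histograms by a distance) can be written compactly. The code below is a reconstruction of the general idea for illustration, not the original 1975 program.

```python
# Sketch: vowel features from the intervals between extrema (zero crossings of the derivative).
import numpy as np

def extrema_interval_histogram(signal, fs, bins=32, max_ms=10.0):
    d = np.diff(signal)                                  # discrete first derivative
    extrema = np.where(np.diff(np.sign(d)) != 0)[0]      # indices where the derivative changes sign
    intervals_ms = np.diff(extrema) / fs * 1000.0        # intervals between extrema in milliseconds
    hist, _ = np.histogram(intervals_ms, bins=bins, range=(0.0, max_ms))
    return hist / max(hist.sum(), 1)                     # normalized histogram

def histogram_distance(h1, h2):
    return float(np.abs(h1 - h2).sum())                  # simple L1 distance

# Recognition: choose the vowel model whose histogram is closest to that of the input,
# with the model histograms built during a learning phase (assumed available).
```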

  19. Product sounds : Fundamentals and application

    NARCIS (Netherlands)

    Ozcan-Vieira, E.

    2008-01-01

    Products are ubiquitous, so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about

  20. Gefinex 400S (Sampo) EM-Soundings at Olkiluoto 2006

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2006-08-01

    At the beginning of summer 2006 the Geological Survey of Finland carried out electromagnetic frequency soundings with the Gefinex 400S equipment (also called Sampo) at ONKALO, situated in the Olkiluoto nuclear power plant area. The same sounding sites were first measured and marked in 2004 and re-measured in 2005. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electrical conductivity of the ground at the ONKALO and repository area. The measurements form two 1400 m long broadside profiles, 200 m apart, with 200 m station separation. The profiles have been measured using 200, 500, and 800 m coil separations. The total number of soundings was 48, but at 8 stations the measurement did not succeed because of strong electromagnetic noise. The numerous power lines and cables in the area generate local 3-D effects on the sounding curves, but the repeatability of the results is good. The sites without strong 3-D effects are, however, the most suitable for monitoring purposes. Comparison of the results from 2004-2006 shows small differences at some sounding sites. (orig.)

  1. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    Science.gov (United States)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated from an infinite vibrating plate attenuates exponentially with increasing distance from the surface of the vibrating plate. However, a discontinuity of the sound field exists at the edge of a finite vibrating plate in practice, which broadens the wavenumber spectrum; because of this broadening, a sound wave radiates beyond the evanescent sound field. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on a window function of the kind used in signal analysis for reducing the broadening of the frequency spectrum. The optimization calculation is necessary for the design of a window function suitable for suppressing sound radiation while securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. Therefore, we investigated a suitable method for calculating the sound pressure level in the far field to confirm the variation of the sound pressure level distribution determined by the window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinite far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by the method used to calculate the distribution of the sound pressure level at an infinite far field using the wavenumber spectrum on the vibrating plate. According to the result of comparing the distributions of the sound pressure level in the cases with and without the window function, it was confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was
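    The central idea, that tapering the velocity distribution on a finite plate narrows its wavenumber spectrum so that less energy falls into the radiating region |k| < k0, can be illustrated with a one-dimensional aperture and an FFT. The plate size, carrier frequency and Hann taper below are arbitrary stand-ins; the paper optimizes its own window rather than using a standard one.

```python
# Sketch: wavenumber spectrum of a finite vibrating plate, untapered versus tapered.
# Components with |k| < k0 (acoustic wavenumber) radiate to the far field; the rest stay evanescent.
import numpy as np

c, f = 343.0, 40000.0                          # sound speed and carrier frequency (assumed)
k0 = 2 * np.pi * f / c                         # acoustic wavenumber in air
kp = 2.0 * k0                                  # plate wavenumber chosen in the evanescent range

L, N = 0.2, 4096                               # 0.2 m plate, assumed spatial sampling
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.rfftfreq(N, d=dx)       # wavenumber axis in rad/m

pattern = np.cos(kp * x)                       # ideal particle-velocity distribution
for name, window in (("rectangular", np.ones(N)), ("Hann", np.hanning(N))):
    spectrum = np.abs(np.fft.rfft(pattern * window))
    radiating = spectrum[k < k0].sum() / spectrum.sum()
    print(f"{name}: fraction of spectral magnitude in the radiating region = {radiating:.4f}")
```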

  2. Frequency shifting approach towards textual transcription of heartbeat sounds.

    Science.gov (United States)

    Arvin, Farshad; Doraisamy, Shyamala; Safar Khorasani, Ehsan

    2011-10-04

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format, which requires very little memory in comparison with other audio formats. In addition, text-based data allows indexing and searching techniques to be applied to access critical events. Hence, transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over a long period of time. This paper proposes a frequency shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The results show that the proposed shifting method significantly improves the performance of the transcription.
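    One simple way to realize the frequency shift described above, moving the low-frequency heart-sound band up towards a musical register before transcription, is single-sideband modulation of the analytic signal. The shift amount and file names are assumptions for illustration, not parameters from the paper.

```python
# Sketch: shift a heart-sound recording upward in frequency via its analytic signal.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert

fs, heart = wavfile.read("heartbeat.wav")      # assumed mono recording
heart = heart.astype(np.float64)
heart /= np.max(np.abs(heart))

shift_hz = 200.0                               # assumed upward shift
t = np.arange(len(heart)) / fs
analytic = hilbert(heart)                      # analytic signal (no negative frequencies)
shifted = np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

wavfile.write("heartbeat_shifted.wav", fs, (shifted * 32767).astype(np.int16))
```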

  3. Frequency shifting approach towards textual transcription of heartbeat sounds

    Directory of Open Access Journals (Sweden)

    Safar Khorasani Ehsan

    2011-10-01

    Full Text Available Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format, which requires very little memory in comparison with other audio formats. In addition, text-based data allows indexing and searching techniques to be applied to access critical events. Hence, transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over a long period of time. This paper proposes a frequency shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The results show that the proposed shifting method significantly improves the performance of the transcription.

  4. Sound Attenuation in Elliptic Mufflers Using a Regular Perturbation Method

    OpenAIRE

    Banerjee, Subhabrata; Jacobi, Anthony M.

    2012-01-01

    The study of sound attenuation in an elliptical chamber involves the solution of the Helmholtz equation in elliptic coordinate systems. The eigensolutions for such problems involve the Mathieu and the modified Mathieu functions, whose computation poses a considerable challenge. An alternative method for solving such problems is proposed in this paper. The elliptical cross-section of the muffler is treated as a perturbed circle, enabling the use of a regular perturbation method.

  5. Spatial avoidance to experimental increase of intermittent and continuous sound in two captive harbour porpoises.

    Science.gov (United States)

    Kok, Annebelle C M; Engelberts, J Pamela; Kastelein, Ronald A; Helder-Hoek, Lean; Van de Voorde, Shirley; Visser, Fleur; Slabbekoorn, Hans

    2018-02-01

    The continuing rise in underwater sound levels in the oceans leads to disturbance of marine life. It is thought that one of the main impacts of sound exposure is the alteration of foraging behaviour of marine species, for example by deterring animals from a prey location, or by distracting them while they are trying to catch prey. So far, only limited knowledge is available on both mechanisms in the same species. The harbour porpoise (Phocoena phocoena) is a relatively small marine mammal that could quickly suffer fitness consequences from a reduction of foraging success. To investigate effects of anthropogenic sound on their foraging efficiency, we tested whether experimentally elevated sound levels would deter two captive harbour porpoises from a noisy pool into a quiet pool (Experiment 1) and reduce their prey-search performance, measured as prey-search time in the noisy pool (Experiment 2). Furthermore, we tested the influence of the temporal structure and amplitude of the sound on the avoidance response of both animals. Both individuals avoided the pool with elevated sound levels, but they did not show a change in search time for prey when trying to find a fish hidden in one of three cages. The combination of temporal structure and SPL caused variable patterns. When the sound was intermittent, increased SPL caused increased avoidance times. When the sound was continuous, avoidance was equal for all SPLs above a threshold of 100 dB re 1 μPa. Hence, we found no evidence for an effect of sound exposure on search efficiency, but sounds of different temporal patterns did cause spatial avoidance with distinct dose-response patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Sound recovery via intensity variations of speckle pattern pixels selected with variance-based method

    Science.gov (United States)

    Zhu, Ge; Yao, Xu-Ri; Qiu, Peng; Mahmood, Waqas; Yu, Wen-Kai; Sun, Zhi-Bin; Zhai, Guang-Jie; Zhao, Qing

    2018-02-01

    In general, sound waves can cause the vibration of objects encountered along their travel path. If a laser beam illuminates the rough surface of such an object, it is scattered into a speckle pattern that vibrates with these sound waves. Here, an efficient variance-based method is proposed to recover the sound information from speckle patterns captured by a high-speed camera. The method selects, from a small region of the speckle patterns, the pixels that have large variances of their gray-value variations over time. The gray-value variations of these pixels are summed together according to a simple model to recover the sound with a high signal-to-noise ratio. At the same time, our method significantly simplifies the computation compared with the traditional digital-image-correlation technique. The effectiveness of the proposed method has been verified by applying it to a variety of objects. The experimental results illustrate that the proposed method is robust to the quality of the speckle patterns and requires more than an order of magnitude less time to process the same number of speckle patterns. In our experiment, a sound signal with a duration of 1.876 s is recovered from various objects with a processing time of only 5.38 s.
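    The pixel-selection-and-sum idea described above (keep the pixels whose gray values vary most over time, then sum their variations frame by frame) can be sketched as follows for a stack of high-speed camera frames. The keep-fraction threshold is an assumption made for illustration.

```python
# Sketch: recover a sound-like waveform from speckle frames by summing high-variance pixels.
import numpy as np

def recover_sound(frames, keep_fraction=0.01):
    """frames: array of shape (T, H, W) holding gray values over time."""
    variations = frames - frames.mean(axis=0)            # gray-value variation of each pixel
    variances = variations.var(axis=0)                    # temporal variance per pixel

    threshold = np.quantile(variances, 1.0 - keep_fraction)
    mask = variances >= threshold                         # select the most active pixels

    signal = variations[:, mask].sum(axis=1)              # sum the selected pixels per frame
    return signal / np.max(np.abs(signal))                # normalized recovered waveform
```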

  7. 33 CFR 334.410 - Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Albemarle Sound, Pamlico Sound... AND RESTRICTED AREA REGULATIONS § 334.410 Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations. (a) Target areas—(1) North Landing River (Currituck Sound...

  8. Gefinex 400S (Sampo) EM-Soundings at Olkiluoto 2007

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2007-09-01

    At the beginning of June 2007 the Geological Survey of Finland carried out electromagnetic frequency soundings with the Gefinex 400S equipment (Sampo) at ONKALO, situated in the Olkiluoto nuclear power plant area. The same sounding sites were first measured and marked in 2004 and have been re-measured yearly since. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electrical conductivity of the ground at the ONKALO and repository area. The measurements form two 1400 m long broadside profiles, 200 m apart, with 200 m station separation. The profiles have been measured using 200, 500, and 800 m coil separations. The total number of sounding stations is 48. In 2007 the transmitter and/or receiver sites were changed at 8 sounding stations, and line L11.400 was replaced by line L11.500. Some of these changes helped, but 6 stations still could not be measured because of strong electromagnetic noise. The numerous power lines and cables in the area generate local 3-D effects on the sounding curves, but the repeatability of the results is good. The sites without strong 3-D effects are, however, the most suitable for monitoring purposes. Comparison of the results from 2004-2007 shows small differences at some sounding sites. (orig.)

  9. The German scientific balloon and sounding rocket programme

    International Nuclear Information System (INIS)

    Dahl, A.F.

    1980-01-01

    This report contains information on sounding rocket projects in the scientific fields of astronomy, aeronomy, magnetospheric research, and materials science under microgravity. The scientific balloon projects are carried out with an emphasis on astronomical research. Tables attempt to give as complete a survey as possible of the projects undertaken since the last symposium in Ajaccio, Corsica, and of preparations and plans for the future up to 1983. The scientific balloon and sounding rocket projects form a small but successful part of the German space research programme. (Auth.)

  10. Extraordinary absorption of sound in porous lamella-crystals

    DEFF Research Database (Denmark)

    Christensen, Johan; Romero-García, V.; Picó, R.

    2014-01-01

    We present the design of a structured material supporting complete absorption of sound with a broadband response and functional for any direction of incident radiation. The structure, which is fabricated out of porous lamellas, is arranged into a low-density crystal and backed by a reflecting support. Experimental measurements show that strong all-angle sound absorption with almost zero reflectance takes place for a frequency range exceeding two octaves. We demonstrate that lowering the crystal filling fraction increases the wave interaction time and is responsible for the enhancement of intrinsic material...

  11. Real-ear acoustical characteristics of impulse sound generated by golf drivers and the estimated risk to hearing: a cross-sectional study.

    Science.gov (United States)

    Zhao, Fei; Bardsley, Barry

    2014-01-21

    This study investigated real-ear acoustical characteristics, in terms of the sound pressure levels (SPLs) and frequency responses measured in situ, generated by golf club drivers at impact with a golf ball. The risk of hearing loss caused by hitting a basket of golf balls using various drivers was then estimated. Cross-sectional study. The three driver clubs were chosen to reflect both commonly used and modern club technology. The participants were asked to choose the clubs in a random order and hit six two-piece range golf balls with each club. The experiment was carried out at a golf driving range in South Wales, UK. 19 male amateur golfers volunteered to take part in the study, with an age range of 19-54 years. The frequency responses and peak SPLs in situ of the transient sound generated by the club at impact were recorded bilaterally and simultaneously using the GN Otometric Freefit wireless real-ear measurement system. A swing speed radar system was also used to investigate the relationship between noise level and swing speed. Different clubs generated significantly different real-ear acoustical characteristics in terms of SPL and frequency response; however, these did not differ significantly between the ears. No significant correlation was found between swing speed and noise intensity. On the basis of the SPLs measured in the present study, the percentage of daily noise exposure for hitting a basket of golf balls using the drivers described above was less than 2%, so an immediate danger of noise-induced hearing loss for amateur golfers is quite unlikely. However, hearing may be at risk if the noise level generated by the golf clubs exceeds 116 dBA.
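    The 'percentage of daily noise exposure' quoted in the conclusion is normally computed by comparing the exposure duration at a given A-weighted level with the time allowed under an 8-hour criterion level and a 3-dB exchange rate. The sketch below shows that calculation with illustrative numbers only; it does not reproduce the study's measured exposures.

```python
# Sketch: daily noise dose (%) for a short exposure, 85 dBA / 8 h criterion, 3-dB exchange rate.
def daily_dose_percent(level_dba, duration_s, criterion_dba=85.0, criterion_h=8.0):
    allowed_s = criterion_h * 3600.0 * 2.0 ** ((criterion_dba - level_dba) / 3.0)
    return 100.0 * duration_s / allowed_s

# e.g. assume ~100 impacts of ~5 ms each at 110 dBA for a basket of balls
print(daily_dose_percent(110.0, 100 * 0.005))   # well below 100 %, i.e. a small daily dose
```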

  12. Turbine sound may influence the metamorphosis behaviour of estuarine crab megalopae.

    Science.gov (United States)

    Pine, Matthew K; Jeffs, Andrew G; Radford, Craig A

    2012-01-01

    It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21-31% compared to silent control treatments, 38-47% compared to tidal turbine sound treatments, and 46-60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment.

  13. Turbine sound may influence the metamorphosis behaviour of estuarine crab megalopae.

    Directory of Open Access Journals (Sweden)

    Matthew K Pine

    Full Text Available It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21-31% compared to silent control treatments, 38-47% compared to tidal turbine sound treatments, and 46-60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment.

  14. Low frequency sound field control in rectangular listening rooms using CABS (Controlled Acoustic Bass System) will also reduce sound transmission to neighbor rooms

    DEFF Research Database (Denmark)

    Nielsen, Sofus Birkedal; Celestinos, Adrian

    2011-01-01

    Sound reproduction often takes place in small and medium-sized rectangular rooms. As rectangular rooms have 3 pairs of parallel walls, the reflections, especially at low frequencies, will cause spatial variations of the sound pressure level in the room of up to 30 dB. This takes place not only … at resonance frequencies, but more or less at all frequencies. A time-based room correction system named CABS (Controlled Acoustic Bass System) has been developed and is able to create a homogeneous sound field in the whole room at low frequencies by proper placement of multiple loudspeakers. A normal setup … from the rear wall, thereby leaving only the plane wave in the room. With a room size of (7.8 x 4.1 x 2.8) m it is possible to prevent room modes up to 100 Hz. An investigation has shown that the sound transmitted to a neighbour room will also be reduced if CABS is used. The principle...

  15. Improvement of directionality and sound-localization by internal ear coupling in barn owls

    DEFF Research Database (Denmark)

    Wagner, Hermann; Christensen-Dalsgaard, Jakob; Kettler, Lutz

    Mark Konishi was one of the first to quantify sound-localization capabilities in barn owls. He showed that frequencies between 3 and 10 kHz underlie precise sound localization in these birds, and that they derive spatial information from processing interaural time and interaural level differences....... However, despite intensive research during the last 40 years it is still unclear whether and how internal ear coupling contributes to sound localization in the barn owl. Here we investigated ear directionality in anesthetized birds with the help of laser vibrometry. Care was taken that anesthesia...... time difference in the low-frequency range, barn owls hesitate to approach prey or turn their heads when only low-frequency auditory information is present in a stimulus they receive. Thus, the barn-owl's sound localization system seems to be adapted to work best in frequency ranges where interaural...

  16. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.

  17. Just-in-Time Retail Distribution : A Systems Perspective on Cross-Docking

    NARCIS (Netherlands)

    Buijs, Paul; Danhof, Hans W.; Wortmann, J.(Hans) C.

    2016-01-01

    Cross-docking is a just-in-time strategy for distribution logistics. It is aimed at reducing inventory levels and distribution lead times by creating a seamless flow of products from suppliers to customers. Prior supply chain literature has argued that creating such a seamless product flow requires

  18. Detrended partial cross-correlation analysis of two nonstationary time series influenced by common external forces

    Science.gov (United States)

    Qian, Xi-Yuan; Liu, Ya-Min; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene

    2015-06-01

    When common factors strongly influence two power-law cross-correlated time series recorded in complex natural or social systems, using detrended cross-correlation analysis (DCCA) without considering these common factors will bias the results. We use detrended partial cross-correlation analysis (DPXA) to uncover the intrinsic power-law cross correlations between two simultaneously recorded time series in the presence of nonstationarity after removing the effects of other time series acting as common forces. The DPXA method is a generalization of the detrended cross-correlation analysis that takes into account partial correlation analysis. We demonstrate the method by using bivariate fractional Brownian motions contaminated with a fractional Brownian motion. We find that the DPXA is able to recover the analytical cross Hurst indices, and thus the multiscale DPXA coefficients are a viable alternative to the conventional cross-correlation coefficient. We demonstrate the advantage of the DPXA coefficients over the DCCA coefficients by analyzing contaminated bivariate fractional Brownian motions. We calculate the DPXA coefficients and use them to extract the intrinsic cross correlation between crude oil and gold futures by taking into consideration the impact of the U.S. dollar index. We develop the multifractal DPXA (MF-DPXA) method in order to generalize the DPXA method and investigate multifractal time series. We analyze multifractal binomial measures masked with strong white noises and find that the MF-DPXA method quantifies the hidden multifractal nature while the multifractal DCCA method fails.
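    As a reference point for the DPXA coefficients discussed above, the sketch below computes the plain detrended cross-correlation (DCCA) coefficient ρ_DCCA(s) at one scale s; the partial variant additionally regresses out the external time series before this step. It is a simplified illustration of the standard algorithm, not the authors' implementation.

```python
# Sketch: DCCA cross-correlation coefficient at a single scale s (non-overlapping boxes).
import numpy as np

def dcca_coefficient(x, y, s):
    X = np.cumsum(x - np.mean(x))                       # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(s)
    f_xy = f_xx = f_yy = 0.0
    for b in range(len(X) // s):
        xb, yb = X[b * s:(b + 1) * s], Y[b * s:(b + 1) * s]
        rx = xb - np.polyval(np.polyfit(t, xb, 1), t)   # detrend each box linearly
        ry = yb - np.polyval(np.polyfit(t, yb, 1), t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx * rx)
        f_yy += np.mean(ry * ry)
    return f_xy / np.sqrt(f_xx * f_yy)

rng = np.random.default_rng(0)
a, b = rng.standard_normal(4096), rng.standard_normal(4096)
print(dcca_coefficient(a, 0.5 * a + b, s=64))           # positively cross-correlated example
```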

  19. Commercial border crossing and wait time measurement at the Pharr-Reynosa International Bridge.

    Science.gov (United States)

    2010-11-01

    The objective of the research described in this report is to install and implement radio frequency identification (RFID) technology to measure border crossing time and travel delay for commercial trucks crossing from Mexico into Texas at the Pharr-Reynosa International Bridge.

  20. Sounding out the logo shot

    OpenAIRE

    Nicolai Jørgensgaard Graakjær

    2013-01-01

    This article focuses on how sound in combination with visuals (i.e. ‘branding by’) may possibly affect the signifying potentials (i.e. ‘branding effect’) of products and corporate brands (i.e. ‘branding of’) during logo shots in television commercials (i.e. ‘branding through’). This particular focus adds both to the understanding of sound in television commercials and to the understanding of sound brands. The article firstly presents a typology of sounds. Secondly, this typology is applied...

  1. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement from the fundamental theoretical background to practical applications of the measurement technique.

  2. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement from the fundamental theoretical background to practical applications of the measurement technique.

  3. SoleSound

    DEFF Research Database (Denmark)

    Zanotto, Damiano; Turchet, Luca; Boggs, Emily Marie

    2014-01-01

    This paper introduces the design of SoleSound, a wearable system designed to deliver ecological, audio-tactile, underfoot feedback. The device, which primarily targets clinical applications, uses an audio-tactile footstep synthesis engine informed by the readings of pressure and inertial sensors embedded in the footwear to integrate enhanced feedback modalities into the authors' previously developed instrumented footwear. The synthesis models currently implemented in the SoleSound simulate different ground surface interactions. Unlike similar devices, the system presented here is fully portable...

  4. Sound engineering for diesel engines; Sound Engineering an Dieselmotoren

    Energy Technology Data Exchange (ETDEWEB)

    Enderich, A.; Fischer, R. [MAHLE Filtersysteme GmbH, Stuttgart (Germany)

    2006-07-01

    The strong acceptance of vehicles powered by turbo-charged diesel engines encourages several manufacturers to think about sporty diesel concepts. The approach of suppressing unpleasant noise by applying extensive insulation measures is not adequate to satisfy sporty needs: the acoustics cannot follow the engine's performance. This report documents that it is possible to give diesel-powered vehicles a sporty sound characteristic by using an advanced MAHLE motor-sound-system with a pressure-resistant membrane and an integrated load-controlled flap. With this, the specific acoustic disadvantages of the diesel engine, such as ''diesel knock'' or rough engine running, can be masked. However, a motor-sound-system should not negate the original character of the diesel engine concept, but accentuate its strong torque characteristic in the middle engine speed range. (orig.)

  5. Sonic mediations: body, sound, technology

    NARCIS (Netherlands)

    Birdsall, C.; Enns, A.

    2008-01-01

    Sonic Mediations: Body, Sound, Technology is a collection of original essays that represents an invaluable contribution to the burgeoning field of sound studies. While sound is often posited as having a bridging function, as a passive in-between, this volume invites readers to rethink the concept of

  6. System for actively reducing sound

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2005-01-01

    A system for actively reducing sound from a primary noise source, such as traffic noise, comprising: a loudspeaker connector for connecting to at least one loudspeaker for generating anti-sound for reducing said noisy sound; a microphone connector for connecting to at least a first microphone placed

  7. Physiological and psychological assessment of sound

    Science.gov (United States)

    Yanagihashi, R.; Ohira, Masayoshi; Kimura, Teiji; Fujiwara, Takayuki

    The psycho-physiological effects of several sound stimulations were investigated to evaluate the relationship between a psychological parameter, such as subjective perception, and a physiological parameter, such as heart rate variability (HRV). Eight female students aged 21-22 years were tested. The electrocardiogram (ECG) and the movement of the chest wall, used to estimate respiratory rate, were recorded during three different sound stimulations: (1) music provided by a synthesizer (condition A); (2) bird twitters (condition B); and (3) mechanical sounds (condition C). The percentage power of the low-frequency (LF; 0.05-0.15 Hz) and high-frequency (HF; 0.15-0.40 Hz) components of the HRV (LF%, HF%) was assessed by a frequency analysis of 5 min of time-series data obtained from R-R intervals in the ECG. Quantitative assessment of subjective perception was also obtained with a visual analog scale (VAS). The HF% and the VAS value for comfort in C were significantly lower than in A and B. The respiratory rate and the VAS value for awakening in C were significantly higher than in A and B. There was a significant correlation between the HF% and the VAS value, and between the respiratory rate and the VAS value. These results indicate that mechanical sounds similar to C inhibit the parasympathetic nervous system and promote a feeling that is unpleasant but alert, also suggesting that the HRV reflects subjective perception.
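    The LF% and HF% measures used above are typically obtained by resampling the R-R interval series to an evenly spaced tachogram and integrating its power spectrum over the two bands. The sketch below follows that common recipe with the band edges quoted in the abstract; the resampling rate and normalization are assumptions, since definitions of LF%/HF% vary between studies.

```python
# Sketch: LF% and HF% of heart-rate variability from a series of R-R intervals (in seconds).
import numpy as np

def lf_hf_percent(rr_s, fs_resample=4.0, lf=(0.05, 0.15), hf=(0.15, 0.40)):
    beat_times = np.cumsum(rr_s)                          # time of each beat
    t = np.arange(beat_times[0], beat_times[-1], 1.0 / fs_resample)
    tachogram = np.interp(t, beat_times, rr_s)            # evenly resampled R-R series
    tachogram -= tachogram.mean()

    power = np.abs(np.fft.rfft(tachogram)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(len(tachogram), d=1.0 / fs_resample)

    total = power[(freqs >= lf[0]) & (freqs < hf[1])].sum()
    lf_power = power[(freqs >= lf[0]) & (freqs < lf[1])].sum()
    hf_power = power[(freqs >= hf[0]) & (freqs < hf[1])].sum()
    return 100.0 * lf_power / total, 100.0 * hf_power / total
```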

  8. Effect of the radiofrequency volumetric tissue reduction of inferior turbinate on expiratory nasal sound frequency.

    Science.gov (United States)

    Seren, Erdal

    2009-01-01

    We sought to evaluate the short-term efficacy of radiofrequency volumetric tissue reduction (RFVTR) in treatment of inferior turbinate hypertrophy (TH) as measured by expiratory nasal sound spectra. In our study, we aimed to investigate the Odiosoft-rhino (OR) as a new diagnostic method to evaluate the nasal airflow of patients before and after RFVTR. In this study, we have analyzed and recorded the expiratory nasal sound in patients with inferior TH before and after RFVTR. This analysis includes the time expanded waveform, the spectral analysis with time averaged fast Fourier transform (FFT), and the waveform analysis of nasal sound. We found an increase in sound intensity at high frequency (Hf) in the sound analyses of the patients before RFVTR and a decrease in sound intensity at Hf was found in patients after RFVTR. This study indicates that RFVTR is an effective procedure to improve nasal airflow in the patients with nasal obstruction with inferior TH. We found significant decreases in the sound intensity level at Hf in the sound spectra after RFVTR. The OR results from the 2000- to 4000-Hz frequency (Hf) interval may be more useful in assessing patients with nasal obstruction than other frequency intervals. OR may be used as a noninvasive diagnostic tool to evaluate the nasal airflow.

  9. Estimating time to pregnancy from current durations in a cross-sectional sample

    DEFF Research Database (Denmark)

    Keiding, Niels; Kvist, Kajsa; Hartvig, Helle

    2002-01-01

    A new design for estimating the distribution of time to pregnancy is proposed and investigated. The design is based on recording current durations in a cross-sectional sample of women, leading to statistical problems similar to estimating renewal time distributions from backward recurrence times....

  10. Cross-language and second language speech perception

    DEFF Research Database (Denmark)

    Bohn, Ocke-Schwen

    2017-01-01

    in cross-language and second language speech perception research: The mapping issue (the perceptual relationship of sounds of the native and the nonnative language in the mind of the native listener and the L2 learner), the perceptual and learning difficulty/ease issue (how this relationship may or may...... not cause perceptual and learning difficulty), and the plasticity issue (whether and how experience with the nonnative language affects the perceptual organization of speech sounds in the mind of L2 learners). One important general conclusion from this research is that perceptual learning is possible at all...

  11. Adaptive sound speed correction for abdominal ultrasonography: preliminary results

    Science.gov (United States)

    Jin, Sungmin; Kang, Jeeun; Song, Tai-Kyung; Yoo, Yangmo

    2013-03-01

    Ultrasonography has been playing a critical role in assessing abdominal disorders due to its noninvasive, real-time, low-cost, and deep-penetrating capabilities. However, for imaging obese patients with a thick fat layer, it is challenging to achieve appropriate image quality with a conventional beamforming (CON) method due to phase aberration caused by the difference between sound speeds (e.g., 1580 and 1450 m/s for liver and fat, respectively). Various sound speed correction (SSC) methods that estimate the accumulated sound speed for a region-of-interest (ROI) have previously been proposed. However, with the SSC methods, the improvement in image quality is limited to a specific depth of the ROI. In this paper, we present the adaptive sound speed correction (ASSC) method, which can enhance the image quality at all depths by using sound speeds estimated at two different depths in the lower layer. Since these accumulated sound speeds contain the respective contributions of the layers, an optimal sound speed for each depth can be estimated by solving contribution equations. To evaluate the proposed method, a phantom study was conducted with pre-beamformed radio-frequency (RF) data acquired with a SonixTouch research package (Ultrasonix Corp., Canada) with linear and convex probes from a gel-pad-stacked tissue-mimicking phantom (Parker Lab. Inc., USA and Model539, ATS, USA) whose sound speeds are 1610 and 1450 m/s, respectively. In the study, compared to the CON and SSC methods, the ASSC method showed improved spatial resolution and information entropy contrast (IEC) for the convex and linear array transducers, respectively. These results indicate that the ASSC method can be applied to enhance image quality when imaging obese patients in abdominal ultrasonography.
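    The 'contribution equations' mentioned above can be illustrated with a simple two-layer travel-time model: if the accumulated (average) sound speeds are estimated at two depths below a fat layer of known thickness d, the fat and tissue speeds follow from two linear equations in the layer slownesses. This is a plausible reading of the idea for illustration, with assumed numbers, not the authors' exact formulation.

```python
# Sketch: recover layer sound speeds from accumulated (average) speeds at two depths.
# Assumed travel-time model: t(z) = d/c_fat + (z - d)/c_tissue, with c_avg(z) = z / t(z).
import numpy as np

def layer_speeds(z1, c_avg1, z2, c_avg2, d):
    A = np.array([[d, z1 - d],
                  [d, z2 - d]])                 # coefficients of the slownesses 1/c_fat, 1/c_tissue
    b = np.array([z1 / c_avg1, z2 / c_avg2])    # accumulated travel times t(z1), t(z2)
    u, v = np.linalg.solve(A, b)
    return 1.0 / u, 1.0 / v

# Illustrative case: 2 cm fat layer (1450 m/s) overlying liver-like tissue (1580 m/s).
print(layer_speeds(0.05, 1525.3, 0.09, 1549.1, 0.02))   # recovers roughly (1450, 1580)
```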

  12. Reflector construction by sound path curves - A method of manual reflector evaluation in the field

    International Nuclear Information System (INIS)

    Siciliano, F.; Heumuller, R.

    1985-01-01

    In order to describe the time-of-flight behavior of various reflectors, we have set up models and derived from them analytical and graphical approaches to reflector reconstruction. In the course of this work, the maximum achievable accuracy and possible simplifications were investigated. The aim of the time-of-flight reconstruction method is to determine the points of a reflector on the basis of a sound path function (the sound path as a function of the probe index position). The method can only be used on materials which are isotropic in terms of sound velocity, since it relies on time of flight being converted into sound path. This paper deals only with two-dimensional reconstruction; in other words, all statements relate to the plane of incidence. The method is based on the fact that the geometrical locus of the points equidistant from a certain probe index position is a circle. If circles with radii equal to the associated sound paths are drawn for various search unit positions, the points of intersection of the circles are the desired reflector points.
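    The reconstruction principle described above, that a reflector point lies at the intersection of circles drawn around the probe index positions with radii equal to the measured sound paths, reduces to plain two-circle geometry in the plane of incidence. The sketch below implements that geometry generically, with assumed numbers; it is not the paper's full procedure.

```python
# Sketch: locate a reflector point from two probe positions and their measured sound paths.
import math

def reflector_point(x1, r1, x2, r2):
    """Intersection below the surface of circles centred at (x1, 0) and (x2, 0)."""
    dx = x2 - x1
    a = (r1 ** 2 - r2 ** 2 + dx ** 2) / (2.0 * dx)   # along-surface offset from x1
    h_sq = r1 ** 2 - a ** 2
    if h_sq < 0.0:
        raise ValueError("circles do not intersect: inconsistent sound paths")
    return x1 + a, -math.sqrt(h_sq)                  # negative y = depth into the material

# Probe at 0 mm measures a 50 mm sound path, probe at 20 mm measures 41.2 mm (assumed values).
print(reflector_point(0.0, 50.0, 20.0, 41.2))
```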

  13. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    Sounds in the natural environment form an important class of biologically relevant nonstationary signals. We propose a dynamic spectral measure to characterize the spectral dynamics of such non-stationary sound signals and classify them based on rate of change of spectral dynamics. We categorize sounds with slowly ...

  14. Software development for the analysis of heartbeat sounds with LabVIEW in diagnosis of cardiovascular disease.

    Science.gov (United States)

    Topal, Taner; Polat, Hüseyin; Güler, Inan

    2008-10-01

    In this paper, time-frequency spectral analysis software (Heart Sound Analyzer) for the computer-aided analysis of cardiac sounds has been developed with LabVIEW. The software modules reveal important information about cardiovascular disorders and can also assist general physicians in reaching more accurate and reliable diagnoses at early stages. Heart Sound Analyzer (HSA) software can help compensate for the shortage of expert doctors and support them in rural as well as urban clinics and hospitals. HSA has two main blocks: data acquisition and preprocessing, and time-frequency spectral analysis. The heart sounds are first acquired using a modified stethoscope which has an electret microphone in it. Then, the signals are analysed using time-frequency/scale spectral analysis techniques such as the STFT, the Wigner-Ville distribution and wavelet transforms. HSA modules have been tested with real heart sounds from 35 volunteers and proved to be quite efficient and robust while dealing with a large variety of pathological conditions.
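
    The analyses themselves were implemented in LabVIEW; purely as an illustration of the first of the three techniques named above, a short-time Fourier transform of a synthetic heart-sound-like signal can be computed with off-the-shelf tools (the sampling rate, window settings and signal below are arbitrary stand-ins, not the authors' data):

        import numpy as np
        from scipy import signal

        fs = 4000                                   # assumed sampling rate [Hz]
        t = np.arange(0, 2.0, 1 / fs)
        # Synthetic stand-in for a heart sound: two low-frequency bursts per 0.8 s beat
        heart = (np.exp(-((t % 0.8) / 0.02) ** 2) * np.sin(2 * np.pi * 60 * t)
                 + np.exp(-(((t - 0.3) % 0.8) / 0.015) ** 2) * np.sin(2 * np.pi * 90 * t))

        # Short-time Fourier transform (the STFT named in the abstract)
        f, frames, Sxx = signal.spectrogram(heart, fs=fs, nperseg=256, noverlap=192)
        print(Sxx.shape)                            # (frequency bins, time frames)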

  15. Controlling sound with acoustic metamaterials

    DEFF Research Database (Denmark)

    Cummer, Steven A.; Christensen, Johan; Alù, Andrea

    2016-01-01

    Acoustic metamaterials can manipulate and control sound waves in ways that are not possible in conventional materials. Metamaterials with zero, or even negative, refractive index for sound offer new possibilities for acoustic imaging and for the control of sound at subwavelength scales....... The combination of transformation acoustics theory and highly anisotropic acoustic metamaterials enables precise control over the deformation of sound fields, which can be used, for example, to hide or cloak objects from incident acoustic energy. Active acoustic metamaterials use external control to create......-scale metamaterial structures and converting laboratory experiments into useful devices. In this Review, we outline the designs and properties of materials with unusual acoustic parameters (for example, negative refractive index), discuss examples of extreme manipulation of sound and, finally, provide an overview...

  16. March 1964 Prince William Sound, USA Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Prince William Sound magnitude 9.2 Mw earthquake on March 28, 1964 at 03:36 GMT (March 27 at 5:36 pm local time), was the largest U.S. earthquake ever recorded...

  17. Sound intensity as a function of sound insulation partition

    OpenAIRE

    Cvetkovic , S.; Prascevic , R.

    1994-01-01

    In the modern engineering practice, the sound insulation of the partitions is the synthesis of the theory and of the experience acquired in the procedure of the field and of the laboratory measurement. The science and research public treat the sound insulation in the context of the emission and propagation of the acoustic energy in the media with the different acoustics impedance. In this paper, starting from the essence of physical concept of the intensity as the energy vector, the authors g...

  18. A homology sound-based algorithm for speech signal interference

    Science.gov (United States)

    Jiang, Yi-jiao; Chen, Hou-jin; Li, Ju-peng; Zhang, Zhan-song

    2015-12-01

    Aiming at secure analog speech communication, a homology sound-based algorithm for speech signal interference is proposed in this paper. We first split the speech signal into phonetic fragments by a short-term energy method and establish an interference noise cache library with the phonetic fragments. Then we implement the homology sound interference by mixing randomly selected interferential fragments with the original speech in real time. Computer simulation results indicate that, compared with traditional noise interference methods such as white-noise interference, the interference produced by this algorithm has the advantages of real-time operation, randomness, and high correlation with the original signal. After further study, the proposed algorithm may be readily used in secure speech communication.
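
    The paper gives no implementation details, but the first step, splitting speech into fragments with a short-term energy method, can be sketched as follows (frame length, hop and threshold are arbitrary choices, not the authors' values):

        import numpy as np

        def short_term_energy(x, frame_len=256, hop=128):
            # Frame-wise energy; frames above a threshold are treated as
            # phonetic fragments for the interference cache library.
            frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len, hop)]
            return np.array([np.sum(f.astype(float) ** 2) for f in frames])

        def fragment_boundaries(energy, threshold):
            # Frame indices where the signal crosses the energy threshold.
            active = energy > threshold
            return np.flatnonzero(np.diff(active.astype(int))) + 1

        # Toy signal: silence, a burst, silence
        x = np.concatenate([np.zeros(2000), np.random.randn(4000), np.zeros(2000)])
        e = short_term_energy(x)
        print(fragment_boundaries(e, threshold=0.5 * e.max()))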

  19. 27 CFR 9.151 - Puget Sound.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Puget Sound. 9.151 Section... Sound. (a) Name. The name of the viticultural area described in this section is “Puget Sound.” (b) Approved maps. The appropriate maps for determining the boundary of the Puget Sound viticultural area are...

  20. How Pleasant Sounds Promote and Annoying Sounds Impede Health : A Cognitive Approach

    NARCIS (Netherlands)

    Andringa, Tjeerd C.; Lanser, J. Jolie L.

    2013-01-01

    This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of

  1. Of Sound Mind: Mental Distress and Sound in Twentieth-Century Media Culture

    NARCIS (Netherlands)

    Birdsall, C.; Siewert, S.

    2013-01-01

    This article seeks to specify the representation of mental disturbance in sound media during the twentieth century. It engages perspectives on societal and technological change across the twentieth century as crucial for aesthetic strategies developed in radio and sound film production. The analysis

  2. Turbine Sound May Influence the Metamorphosis Behaviour of Estuarine Crab Megalopae

    Science.gov (United States)

    Pine, Matthew K.; Jeffs, Andrew G.; Radford, Craig A.

    2012-01-01

    It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. In contrast, when either species was subjected to natural habitat sound, observed median TTM decreased by approximately 21–31% compared to silent control treatments, 38–47% compared to tidal turbine sound treatments, and 46–60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant than sound intensity in explaining such responses. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment. PMID:23240063

  3. Reduction of noise in the neonatal intensive care unit using sound-activated noise meters.

    Science.gov (United States)

    Wang, D; Aubertin, C; Barrowman, N; Moreau, K; Dunn, S; Harrold, J

    2014-11-01

    To determine if sound-activated noise meters providing direct audit and visual feedback can reduce sound levels in a level 3 neonatal intensive care unit (NICU). Sound levels (in dB) were compared between a 2-month period with noise meters present but without visual signal fluctuation and a subsequent 2 months with the noise meters providing direct audit and visual feedback. There was a significant increase in the percentage of time the sound level in the NICU was below 50 dB across all patient care areas (9.9%, 8.9% and 7.3%). This improvement was not observed in the desk area where there are no admitted patients. There was no change in the percentage of time the NICU was below 45 or 55 dB. Sound-activated noise meters seem effective in reducing sound levels in patient care areas. Conversations may have moved to non-patient care areas preventing a similar change there. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  4. The effect of sound speed profile on shallow water shipping sound maps

    NARCIS (Netherlands)

    Sertlek, H.Ö.; Binnerts, B.; Ainslie, M.A.

    2016-01-01

    Sound mapping over large areas can be computationally expensive because of the large number of sources and large source-receiver separations involved. In order to facilitate computation, a simplifying assumption sometimes made is to neglect the sound speed gradient in shallow water. The accuracy of

  5. Cardiovascular Sound and the Stethoscope, 1816 to 2016

    Science.gov (United States)

    Segall, Harold N.

    1963-01-01

    Cardiovascular sound escaped attention until Laennec invented and demonstrated the usefulness of the stethoscope. Accuracy of diagnosis using cardiovascular sounds as clues increased with improvement in knowledge of the physiology of circulation. Nearly all currently acceptable clinicopathological correlations were established by physicians who used the simplest of stethoscopes or listened with the bare ear. Certain refinements followed the use of modern methods which afford greater precision in timing cardiovascular sounds. These methods contribute to educating the human ear, so that those advantages may be applied which accrue from auscultation, plus the method of writing quantitative symbols to describe what is heard, by focusing the sense of hearing on each segment of the cardiac cycle in turn. By the year 2016, electronic systems of collecting and analyzing data about the cardiovascular system may render the stethoscope obsolete. PMID:13987676

  6. Spatial filtering of audible sound with acoustic landscapes

    Science.gov (United States)

    Wang, Shuping; Tao, Jiancheng; Qiu, Xiaojun; Cheng, Jianchun

    2017-07-01

    Acoustic metasurfaces manipulate waves with specially designed structures and achieve properties that natural materials cannot offer. Similar surfaces work in the audio frequency range as well and lead to marvelous acoustic phenomena that can be perceived by human ears. Intrigued by the famous Maoshan Bugle phenomenon, we investigate large-scale metasurfaces consisting of periodic steps with sizes comparable to audio wavelengths in both the time and space domains. We propose a theoretical method to calculate the scattered sound field and find that periodic corrugated surfaces work as spatial filters, and that the frequency-selective character can only be observed on the same side as the incident wave. The Maoshan Bugle phenomenon can be well explained with the method. Finally, we demonstrate that the proposed method can be used to design acoustical landscapes, which transform impulsive sound into famous trumpet solos or other melodious sound.
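
    The authors' scattered-field calculation is not reproduced here, but the order of magnitude of the effect can be illustrated with a back-of-the-envelope estimate: successive echoes from periodic steps arrive roughly 2d/c apart at backscatter, so an impulsive source is heard as a tone near c/(2d); the chirp and one-sidedness come from the full geometry treated in the paper. The step width below is an assumed value.

        # Rough estimate of the tone heard when a handclap is scattered by periodic
        # steps: step echoes arrive 2*d/c apart, giving a perceived pitch of ~c/(2*d).
        c = 343.0            # speed of sound in air [m/s]
        step_width = 0.30    # assumed step width [m], comparable to audio wavelengths
        delta_t = 2 * step_width / c
        print(f"echo period {delta_t*1e3:.2f} ms -> perceived pitch ~ {1/delta_t:.0f} Hz")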

  7. Heart sounds analysis via esophageal stethoscope system in beagles.

    Science.gov (United States)

    Park, Sang Hi; Shin, Young Duck; Bae, Jin Ho; Kwon, Eun Jung; Lee, Tae-Soo; Shin, Ji-Yun; Kim, Yeong-Cheol; Min, Gyeong-Deuk; Kim, Myoung hwan

    2013-10-01

    The esophageal stethoscope is less invasive, easy to handle, and provides a great deal of information. The purpose of this study was to investigate the correlation between blood pressure and heart sounds as measured by an esophageal stethoscope. Four male beagles weighing 10 to 12 kg were selected as experimental subjects. After general anesthesia, the esophageal stethoscope was inserted. After connecting the microphone, the heart sounds were visualized and recorded through self-developed equipment and software. The amplitudes of S1 and S2 were monitored in real time to examine changes as the blood pressure increased and decreased. The relationship between the ratio of S1 to S2 (S1/S2) and changes in blood pressure due to ephedrine was evaluated. The same experiment was performed with different concentrations of isoflurane. In the inotropic experiment, S1 showed a high correlation with changes in blood pressure. The relationship between S1/S2 and change in blood pressure showed a positive correlation in each experimental subject. In the volatile anesthetic experiment, the heart sounds decreased as MAC increased. Heart sounds were analyzed successfully with the esophageal stethoscope through the self-developed program and equipment. A proportional change in heart sounds was confirmed when blood pressure was changed using inotropics or volatile anesthetics. The esophageal stethoscope can achieve the closest proximity to the heart to hear sounds in a non-invasive manner.

  8. Sexual dimorphism of sonic apparatus and extreme intersexual variation of sounds in Ophidion rochei (Ophidiidae: first evidence of a tight relationship between morphology and sound characteristics in Ophidiidae

    Directory of Open Access Journals (Sweden)

    Kéver Loïc

    2012-12-01

    Full Text Available Abstract Background Many Ophidiidae are active in dark environments and display complex sonic apparatus morphologies. However, sound recordings are scarce and little is known about acoustic communication in this family. This paper focuses on Ophidion rochei which is known to display an important sexual dimorphism in swimbladder and anterior skeleton. The aims of this study were to compare the sound producing morphology, and the resulting sounds in juveniles, females and males of O. rochei. Results Males, females, and juveniles possessed different morphotypes. Females and juveniles contrasted with males because they possessed dramatic differences in morphology of their sonic muscles, swimbladder, supraoccipital crest, and first vertebrae and associated ribs. Further, they lacked the 'rocker bone' typically found in males. Sounds from each morphotype were highly divergent. Males generally produced non harmonic, multiple-pulsed sounds that lasted for several seconds (3.5 ± 1.3 s) with a pulse period of ca. 100 ms. Juvenile and female sounds were recorded for the first time in ophidiids. Female sounds were harmonic, had shorter pulse period (±3.7 ms), and never exceeded a few dozen milliseconds (18 ± 11 ms). Moreover, unlike male sounds, female sounds did not have alternating long and short pulse periods. Juvenile sounds were weaker but appear to be similar to female sounds. Conclusions Although it is not possible to distinguish externally male from female in O. rochei, they show a sonic apparatus and sounds that are dramatically different. This difference is likely due to their nocturnal habits that may have favored the evolution of internal secondary sexual characters that help to distinguish males from females and that could facilitate mate choice by females. Moreover, the comparison of different morphotypes in this study shows that these morphological differences result from a peramorphosis that takes place during the development of

  9. Sound wave transmission (image)

    Science.gov (United States)

    When sounds waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain where they are interpreted by the brain as sound. The hearing mechanisms within the inner ear, can ...

  10. Feasibility of an electronic stethoscope system for monitoring neonatal bowel sounds.

    Science.gov (United States)

    Dumas, Jasmine; Hill, Krista M; Adrezin, Ronald S; Alba, Jorge; Curry, Raquel; Campagna, Eric; Fernandes, Cecilia; Lamba, Vineet; Eisenfeld, Leonard

    2013-09-01

    Bowel dysfunction remains a major problem in neonates. Traditional auscultation of bowel sounds as a diagnostic aid in neonatal gastrointestinal complications is limited by skill and inability to document and reassess. Consequently, we built a unique prototype to investigate the feasibility of an electronic monitoring system for continuous assessment of bowel sounds. We obtained approval from the Institutional Review Boards for the investigational study to test our system. The system incorporated a prototype stethoscope head with a built-in microphone connected to a digital recorder. Recordings made over extended periods were evaluated for quality. We also considered the acoustic environment of the hospital, where the stethoscope was used. The stethoscope head was attached to the abdomen with a hydrogel patch designed especially for this purpose. We used the system to obtain recordings from eight healthy, full-term babies. A scoring system was used to determine loudness, clarity, and ease of recognition, compared with the traditional stethoscope. The recording duration was initially two hours and was increased to a maximum of eight hours. Median duration of attachment was three hours (3.75, 2.68). Based on the scoring, the bowel sound recording was perceived to be as loud and clear in sound reproduction as a traditional stethoscope. We determined that room noise and other noises were significant forms of interference in the recordings, which at times prevented analysis. However, no sound quality drift was noted in the recordings and no patient discomfort was noted. Minimal erythema was observed over the fixation site which subsided within one hour. We demonstrated the long-term recording of infant bowel sounds. Our contributions included a prototype stethoscope head, which was affixed using a specially designed hydrogel adhesive patch. Such a recording can be reviewed and reassessed, which is new technology and an improvement over current practice. The use of this

  11. Sound & The Society

    DEFF Research Database (Denmark)

    Schulze, Holger

    2014-01-01

    How are those sounds you hear right now socially constructed and evaluated, how are they architecturally conceptualized, and how dependent on urban planning, industrial developments and political decisions are they really? How is your ability to hear intertwined with social interactions and their professional design? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Nina Backmann, Jochen Bonz, Stefan Krebs, Esther Schelander & Holger Schulze.

  12. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    1. Introduction  2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium  3. Predicting the Acoustical Properties of Outdoor Ground Surfaces  4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models  5. Predicting Effects of Source Characteristics on Outdoor Sound  6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects  7. Influence of Source Motion on Ground Effect and Diffraction  8. Predicting Effects of Mixed Impedance Ground  9. Predicting the Performance of Outdoor Noise Barriers  10. Predicting Effects of Vegetation, Trees and Turbulence  11. Analytical Approximations including Ground Effect, Refraction and Turbulence  12. Prediction Schemes  13. Predicting Sound in an Urban Environment.

  13. Cross-Sensory Correspondences: Heaviness is Dark and Low-Pitched.

    Science.gov (United States)

    Walker, Peter; Scallon, Gabrielle; Francis, Brian

    2017-07-01

    Everyday language reveals how stimuli encoded in one sensory feature domain can possess qualities normally associated with a different domain (e.g., higher pitch sounds are bright, light in weight, sharp, and thin). Such cross-sensory associations appear to reflect crosstalk among aligned (corresponding) feature dimensions, including brightness, heaviness, and sharpness. Evidence for heaviness being one such dimension is very limited, with heaviness appearing primarily as a verbal associate of other feature contrasts (e.g., darker objects and lower pitch sounds are heavier than their opposites). Given the presumed bidirectionality of the crosstalk between corresponding dimensions, heaviness should itself induce the cross-sensory associations observed elsewhere, including with brightness and pitch. Taking care to dissociate effects arising from the size and mass of an object, this is confirmed. When hidden objects varying independently in size and mass are lifted, objects that feel heavier are judged to be darker and to make lower pitch sounds than objects feeling less heavy. These judgements track the changes in perceived heaviness induced by the size-weight illusion. The potential involvement of language, natural scene statistics, and Bayesian processes in correspondences, and the effects they induce, is considered.

  14. Border Crossing/Entry Data - Border Crossing/Entry Data Time Series tool

    Data.gov (United States)

    Department of Transportation — The dataset is known as “Border Crossing/Entry Data.” The Bureau of Transportation Statistics (BTS) Border Crossing/Entry Data provides summary statistics to the...

  15. The German scientific balloon and sounding rocket projects

    International Nuclear Information System (INIS)

    Dalh, A.F.

    1978-01-01

    This report contains information on the sounding rocket projects: experiment preparation for Spacelab (astronomy), aeronomy, magnetosphere, and material science. Except for material science, the scientific balloon projects are performed in the same scientific fields, but with a strong emphasis on astronomical research. Tables are used to provide as complete a survey as possible of the projects in the period since the last symposium in Elmau and of the plans for the future until 1981. The scientific balloon and sounding rocket projects form a small, successful part of the German space research programme. (author)

  16. Active acoustic leak detection for LMFBR steam generator. Sound attenuation due to bubbles

    International Nuclear Information System (INIS)

    Kumagai, Hiromichi; Sakuma, Toshio

    1995-01-01

    In the steam generators (SG) of LMFBR, it is necessary to detect the leakage of water from tubes of heat exchangers as soon as it occurs. The active acoustic detection method has drawn general interest owing to its short response time and reduction of the influence of background noise. In this paper, the application of the active acoustic detection method for SG is proposed, and sound attenuation by bubbles is investigated experimentally. Furthermore, using the SG sector model, sound field characteristics and sound attenuation characteristics due to injection of bubbles are studied. It is clarified that the sound attenuation depends upon bubble size as well as void fraction, that the distance attenuation of sound in the SG model containing heat transfer tubes is 6 dB for each two-fold increase of distance, and that emitted sound attenuates immediately upon injection of bubbles. (author)

  17. Advantages and disadvantages : longitudinal vs. repeated cross-section surveys

    Science.gov (United States)

    1996-06-20

    The benefits of a longitudinal analysis over a repeated cross-sectional study include increased statistical power and the capability to estimate a greater range of conditional probabilities. With the Puget Sound Transportation Panel (PSTP), and any s...

  18. Memory for pictures and sounds: independence of auditory and visual codes.

    Science.gov (United States)

    Thompson, V A; Paivio, A

    1994-09-01

    Three experiments examined the mnemonic independence of auditory and visual nonverbal stimuli in free recall. Stimulus lists consisted of (1) pictures, (2) the corresponding environmental sounds, or (3) picture-sound pairs. In Experiment 1, free recall was tested under three learning conditions: standard intentional, intentional with a rehearsal-inhibiting distracter task, or incidental with the distracter task. In all three groups, recall was best for the picture-sound items. In addition, recall for the picture-sound stimuli appeared to be additive relative to pictures or sounds alone when the distracter task was used. Experiment 2 included two additional groups: In one, two copies of the same picture were shown simultaneously; in the other, two different pictures of the same concept were shown. There was no difference in recall among any of the picture groups; in contrast, recall in the picture-sound condition was greater than recall in either single-modality condition. However, doubling the exposure time in a third experiment resulted in additively higher recall for repeated pictures with different exemplars than ones with identical exemplars. The results are discussed in terms of dual coding theory and alternative conceptions of the memory trace.

  19. Observations of volcanic plumes using small balloon soundings

    Science.gov (United States)

    Voemel, H.

    2015-12-01

    Eruptions of volcanoes are very difficult to predict and for practical purposes may occur at any time. Any observing system intending to observe volcanic eruptions has to be ready at any time. Due to transport time scales, emissions of large volcanic eruptions, in particular injections into the stratosphere, may be detected at locations far from the volcano within days to weeks after the eruption. These emissions may be observed using small balloon soundings at dedicated sites. Here we present observations of particles of the Icelandic Grimsvotn eruption at the Meteorological Observatory Lindenberg, Germany in the months following the eruption and observations of opportunity of other volcanic particle events. We also present observations of the emissions of SO2 from the Turrialba volcano at San Jose, Costa Rica. We argue that dedicated sites for routine observations of the clean and perturbed atmosphere using small sounding balloons are an important element in the detection and quantification of emissions from future volcanic eruptions.

  20. Sound velocity of tantalum under shock compression in the 18–142 GPa range

    Energy Technology Data Exchange (ETDEWEB)

    Xi, Feng, E-mail: xifeng@caep.cn; Jin, Ke; Cai, Lingcang, E-mail: cai-lingcang@aliyun.com; Geng, Huayun; Tan, Ye; Li, Jun [National Key Laboratory of Shock Waves and Detonation Physics, Institute of Fluid Physics, CAEP, P.O. Box 919-102 Mianyang, Sichuan 621999 (China)

    2015-05-14

    Dynamic compression experiments on tantalum (Ta) within a shock pressure range of 18–142 GPa were conducted using explosive, two-stage light-gas-gun, and powder-gun drivers, respectively. The time-resolved Ta/LiF (lithium fluoride) interface velocity profiles were recorded with a displacement interferometer system for any reflector. Sound velocities of Ta were obtained from peak-state time duration measurements with the step-sample technique and the direct-reverse impact technique. The uncertainty of the measured sound velocities was analyzed carefully, which suggests that the symmetrical impact method with step samples is more accurate for sound velocity measurement, and that the most important parameter in this type of experiment is an accurate sample/window particle velocity profile, especially an accurate peak-state time duration. From these carefully analyzed sound velocity data, no evidence of a phase transition was found up to the shock melting pressure of Ta.

  1. Empirical Analysis and Modeling of Stop-Line Crossing Time and Speed at Signalized Intersections

    Directory of Open Access Journals (Sweden)

    Keshuang Tang

    2016-12-01

    Full Text Available In China, a flashing green (FG) indication of 3 s followed by a yellow (Y) indication of 3 s is commonly applied to end the green phase at signalized intersections. Stop-line crossing behavior of drivers during such a phase transition period significantly influences safety performance of signalized intersections. The objective of this study is thus to empirically analyze and model drivers' stop-line crossing time and speed in response to the specific phase transition period of FG and Y. High-resolution trajectories for 1465 vehicles were collected at three rural high-speed intersections with a speed limit of 80 km/h and two urban intersections with a speed limit of 50 km/h in Shanghai. With the vehicle trajectory data, statistical analyses were performed to look into the general characteristics of stop-line crossing time and speed at the two types of intersections. A multinomial logit model and a multiple linear regression model were then developed to predict the stop-line crossing patterns and speeds, respectively. It was found that the percentage of stop-line crossings during the Y interval is remarkably higher and the stop-line crossing time is approximately 0.7 s longer at the urban intersections, as compared with the rural intersections. In addition, approaching speed and distance to the stop-line at the onset of FG as well as area type significantly affect the percentages of stop-line crossings during the FG and Y intervals. Vehicle type and stop-line crossing pattern were found to significantly influence the stop-line crossing speed, in addition to the above factors. Red-light running seems to occur more frequently at large intersections with a long cycle length.
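
    For readers unfamiliar with the modelling step, a toy version of the multinomial logit for the crossing pattern might look like the sketch below; the features (approach speed, distance to the stop-line at FG onset, area type) follow the abstract, but the data, labels and fitted model are entirely synthetic and are not the study's results.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 600
        speed = rng.uniform(20, 80, n)            # approaching speed [km/h]
        dist = rng.uniform(5, 120, n)             # distance to stop-line at FG onset [m]
        urban = rng.integers(0, 2, n)             # area type: 1 = urban, 0 = rural

        # Synthetic labels: 0 = crosses during FG, 1 = crosses during Y, 2 = stops
        time_needed = dist / (speed / 3.6)        # seconds to reach the stop-line
        label = np.where(time_needed < 3, 0, np.where(time_needed < 6, 1, 2))

        X = np.column_stack([speed, dist, urban])
        # lbfgs fits a multinomial (softmax) logit for a multiclass target
        model = LogisticRegression(max_iter=5000).fit(X, label)
        print(model.predict([[60.0, 40.0, 0]]))   # predicted crossing pattern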

  2. Jump in the amplitude of a sound wave associated with contraction of a nitrogen discharge

    International Nuclear Information System (INIS)

    Galechyan, G.A.; Mkrtchyan, A.R.; Tavakalyan, L.B.

    1993-01-01

    The use of a sound wave created by an external source and directed along the positive column of a nitrogen discharge in order to make the discharge pass to the contracted state is studied experimentally. A phenomenon involving a jump in the sound wave amplitude, caused by the discharge contraction, is observed and studied. It is established that the amplitude of the sound wave as a function of the discharge current near the jump exhibits hysteresis. It is shown that a high-intensity sound wave, which causes the discharge to expand, eliminates the jump in the sound amplitude. The dependence of the growth time of the jump in sound amplitude on the sound wave intensity is determined. 24 refs., 4 figs., 1 tab

  3. The influence of neonatal intensive care unit design on sound level.

    Science.gov (United States)

    Chen, Hsin-Li; Chen, Chao-Huei; Wu, Chih-Chao; Huang, Hsiu-Jung; Wang, Teh-Ming; Hsu, Chia-Chi

    2009-12-01

    Excessive noise in nurseries has been found to cause adverse effects in infants, especially preterm infants in neonatal intensive care units (NICUs). The NICU design may influence the background sound level. We compared the sound level in two differently designed spaces in one NICU. We hypothesized that the sound level in an enclosed space would be quieter than in an open space. Sound levels were measured continuously 24 hours a day in two separate spaces at the same time, one enclosed and one open. Sound-level meters were placed near beds in each room. Sound levels were expressed as decibels, A-weighted (dBA) and presented as hourly L(eq), L(max), L(10), and L(90). The hourly L(eq) in the open space (50.8-57.2 dB) was greater than that of the enclosed space (45.9-51.7 dB), with a difference of 0.4-10.4 dB, and a mean difference of 4.5 dB (p<0.0001). The hourly L(10), L(90), and L(max) in the open space also exceeded that in the enclosed space (p<0.0001). The sound level measured in the enclosed space was quieter than in the open space. The design of bed space should be taken into consideration when building a new NICU. Besides the design of NICU architecture, continuous monitoring of sound level in the NICU is important to maintain a quiet environment.
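
    For reference, the hourly L(eq) reported above is an energy average of the A-weighted level, and a "percentage of time below 50 dB" is a simple proportion of readings; a minimal sketch with invented readings:

        import numpy as np

        def hourly_leq(levels_db):
            # Equivalent continuous sound level: energy-average of dBA samples.
            return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

        # One hour of 1-minute A-weighted readings from a hypothetical patient-care area
        samples = np.random.default_rng(1).normal(loc=52, scale=3, size=60)
        print(round(hourly_leq(samples), 1), "dBA,",
              round(np.mean(samples <= 50) * 100, 1), "% of readings at or below 50 dBA")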

  4. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as visual phenomena–commercial messages, as for instance banner ads that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents...... an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main...... argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content....

  5. The Aesthetic Experience of Sound

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2005-01-01

    The use of sound in (3D) computer games basically falls in two. Sound is used as an element in the design of the set and as a narrative. As set design sound stages the nature of the environment, it brings it to life. As a narrative it brings us information that we can choose to or perhaps need to react on. In an ecological understanding of hearing our detection of audible information affords us ways of responding to our environment. In my paper I will address both these ways of using sound in relation to computer games. Since a game player is responsible for the unfolding of the game, his exploration of the virtual space laid out before him is pertinent. In this mood of exploration sound is important and heavily contributing to the aesthetic of the experience.

  6. Principles of underwater sound

    National Research Council Canada - National Science Library

    Urick, Robert J

    1983-01-01

    ... the immediately useful help they need for sonar problem solving. Its coverage is broad-ranging from the basic concepts of sound in the sea to making performance predictions in such applications as depth sounding, fish finding, and submarine detection...

  7. Applying the EBU R128 loudness standard in live-streaming sound sculptures

    DEFF Research Database (Denmark)

    Højlund, Marie Koldkjær; Riis, Morten S.; Rothmann, Daniel

    2017-01-01

    This paper describes the development of a loudness-based compressor for live audio streams. The need for this device arose while developing the public sound art project The Overheard, which involves mixing together several live audio streams through a web based mixing interface. In order to preserve a natural sounding dynamic image from the varying sound sources that can be played back under varying conditions, an adaptation of the EBU R128 loudness measurement recommendation, originally developed for levelling non-real-time broadcast material, has been applied. The paper describes the Pure...
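
    The actual implementation is described in the paper (in Pure Data); as a rough, simplified stand-in, the sketch below levels 400 ms blocks toward the R128 target using plain RMS in place of the K-weighted, gated loudness that the recommendation actually specifies, and without the gain smoothing a real compressor would need.

        import numpy as np

        TARGET_LUFS = -23.0      # EBU R128 target for the summed programme
        BLOCK = 0.4              # momentary-loudness window length [s]

        def level_stream(x, fs, target=TARGET_LUFS, max_gain_db=12.0):
            # Very reduced sketch: per-block RMS stands in for K-weighted loudness.
            n = int(BLOCK * fs)
            out = np.copy(x).astype(float)
            for start in range(0, len(x) - n, n):
                block = out[start:start + n]
                rms_db = 20 * np.log10(np.sqrt(np.mean(block ** 2)) + 1e-12)
                gain_db = np.clip(target - rms_db, -max_gain_db, max_gain_db)
                out[start:start + n] = block * 10 ** (gain_db / 20)
            return out

        fs = 48000
        quiet = 0.01 * np.sin(2 * np.pi * 220 * np.arange(0, 2, 1 / fs))
        print(np.max(np.abs(level_stream(quiet, fs))))   # boosted toward the target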

  8. Application of Powell's analogy for the prediction of vortex-pairing sound in a low-Mach number jet based on time-resolved planar and tomographic PIV

    NARCIS (Netherlands)

    Violato, D.; Bryon, K.; Moore, P.; Scarano, F.

    2010-01-01

    This paper describes an experimental investigation by time-resolved planar and tomographic PIV on the sound production mechanism of vortex pairing of a transitional water-jet flow at Re=5000. The shear layer is characterized by axisymmetric vortex rings which undergo pairing with a varicose mode.

  9. Sounding the field: recent works in sound studies.

    Science.gov (United States)

    Boon, Tim

    2015-09-01

    For sound studies, the publication of a 593-page handbook, not to mention the establishment of at least one society - the European Sound Studies Association - might seem to signify the emergence of a new academic discipline. Certainly, the books under consideration here, alongside many others, testify to an intensification of concern with the aural dimensions of culture. Some of this work comes from HPS and STS, some from musicology and cultural studies. But all of it should concern members of our disciplines, as it represents a long-overdue foregrounding of the aural in how we think about the intersections of science, technology and culture.

  10. Curating sound performance as laboratories of envisioning

    DEFF Research Database (Denmark)

    Holmboe, Rasmus

    This paper is based on my dissertation research that investigates how sound performance can be presented and represented - in real time, as well as in and through the archive. This double perspective opens a field of curatorial problems related to the simultaneous movements of both envisioning...

  11. Hydrographic trends in Prince William Sound, Alaska, 1960-2016

    Science.gov (United States)

    Campbell, Robert W.

    2018-01-01

    A five-decade time series of temperature and salinity profiles within Prince William Sound (PWS) and the immediately adjacent shelf was assembled from several archives and ongoing field programs, and augmented with archived SST observations. Observations matched with recent cool (2007-2013) and warm (2013-onward) periods in the region, and also showed an overall regional warming trend (0.1 to 0.2 °C per decade) that matched long-term increases in heat transport to the surface ocean. A cooling and freshening trend (-0.2 °C per decade and 0.02, respectively) occurred in the near surface waters in some portions of PWS, particularly the northwestern margin, which is also the location of most of the ice mass in the region; discharge (estimated from other studies) has increased over time, suggesting that those patterns were due to increased meltwater inputs. Increases in salinity at depth were consistent with enhanced entrainment of deep water by estuarine circulations, and by enhanced deep water renewal caused by reductions in downwelling-favorable winds. As well as local-scale effects, temperature and salinity were positively cross correlated with large scale climate and lunar indexes at long lags (years to months), indicating the longer time scales of atmospheric and transport connections with the Gulf of Alaska. Estimates of mixed layer depths show a shoaling of the seasonal mixed layer over time by several meters, which may have implications for ecosystem productivity in the region.
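
    The lagged cross-correlation mentioned above can be illustrated with a small sketch; the climate-index and temperature series below are synthetic (with a built-in 6-month lag), not the Prince William Sound data.

        import numpy as np

        def lagged_correlation(x, y, max_lag):
            # Pearson correlation of x against y shifted by 0..max_lag samples,
            # e.g. monthly temperature anomalies against a basin-scale climate index.
            out = []
            for lag in range(max_lag + 1):
                a = x[lag:]
                b = y[:len(y) - lag] if lag else y
                out.append(np.corrcoef(a, b)[0, 1])
            return np.array(out)

        rng = np.random.default_rng(2)
        index = rng.standard_normal(600)                       # hypothetical climate index
        temp = np.concatenate([np.zeros(6), index[:-6]]) + 0.5 * rng.standard_normal(600)
        print(np.argmax(lagged_correlation(temp, index, 24)))  # recovers the 6-month lag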

  12. Preparation of steel slag porous sound-absorbing material using coal powder as pore former.

    Science.gov (United States)

    Sun, Peng; Guo, Zhancheng

    2015-10-01

    The aim of the study was to prepare a porous sound-absorbing material using steel slag and fly ash as the main raw material, with coal powder and sodium silicate used as a pore former and binder, respectively. The influence of the experimental conditions such as the ratio of fly ash, sintering temperature, sintering time, and porosity regulation on the performance of the porous sound-absorbing material was investigated. The results showed that the specimens prepared by this method had high sound absorption performance and good mechanical properties, and the noise reduction coefficient and compressive strength could reach 0.50 and 6.5 MPa, respectively. The compressive strength increased when the dosage of fly ash and sintering temperature were raised. The noise reduction coefficient decreased with increasing ratio of fly ash and reducing pore former, and first increased and then decreased with the increase of sintering temperature and time. The optimum preparation conditions for the porous sound-absorbing material were a proportion of fly ash of 50% (wt.%), percentage of coal powder of 30% (wt.%), sintering temperature of 1130°C, and sintering time of 6.0 hr, which were determined by analyzing the properties of the sound-absorbing material. Copyright © 2015. Published by Elsevier B.V.

  13. Low frequency sound field enhancement system for rectangular rooms, using multiple loudspeakers

    DEFF Research Database (Denmark)

    Celestinos, Adrian

    2007-01-01

    The scope of this PhD dissertation is within the performance of loudspeakers in rooms at low frequencies. The research concentrates on the improvement of the sound level distribution in rooms produced by loudspeakers at low frequencies. The work focuses on seeing the problem acoustically...... and solving it in the time domain. Loudspeakers are the last link in the sound reproduction chain, and they are typically placed in small or medium size rooms. When low frequency sound is radiated by a loudspeaker the sound level distribution along the room presents large deviations. This is due...... to the multiple reflection of sound at the rigid walls of the room. This may cause level differences of up to 20 dB in the room. Some of these deviations are associated with the standing waves, resonances or anti resonances of the room. The understanding of the problem is accomplished by analyzing the behavior...
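
    The standing waves responsible for these deviations are the modes of an idealised rigid-walled rectangular room, whose frequencies follow the familiar formula f = (c/2)·sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2); the room dimensions below are only an example, not taken from the dissertation.

        import numpy as np
        from itertools import product

        def room_modes(Lx, Ly, Lz, c=343.0, n_max=4):
            # Axial, tangential and oblique mode frequencies of a rigid rectangular room.
            modes = []
            for nx, ny, nz in product(range(n_max + 1), repeat=3):
                if nx == ny == nz == 0:
                    continue
                f = 0.5 * c * np.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)
                modes.append((f, (nx, ny, nz)))
            return sorted(modes)

        # A 5 m x 4 m x 2.5 m listening room: the lowest standing waves
        for f, mode in room_modes(5.0, 4.0, 2.5)[:5]:
            print(f"{f:6.1f} Hz  mode {mode}")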

  14. Performance evaluation of heart sound cancellation in FPGA hardware implementation for electronic stethoscope.

    Science.gov (United States)

    Chao, Chun-Tang; Maneetien, Nopadon; Wang, Chi-Jo; Chiou, Juing-Shian

    2014-01-01

    This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with an Altera Cyclone II-EP2C70F89 shows that the proposed ALE used 45% of the chip's resources. Experiments with the proposed prototype were made using a DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppression was observed in our experiments from both the frequency-domain and time-domain perspectives.
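
    The hardware design itself lives in FPGA fabric; as a software illustration of the adaptive line enhancer principle (an LMS predictor working on a delayed copy of the input), here is a minimal NumPy sketch with invented signals and parameters:

        import numpy as np

        def ale_lms(x, delay=32, taps=64, mu=1e-3):
            # Adaptive line enhancer: an LMS filter predicts the current sample from a
            # delayed copy of the input.  Components that stay correlated across the
            # delay (the quasi-periodic heart sounds) appear at the filter output; the
            # prediction error keeps the decorrelated breath sound.
            w = np.zeros(taps)
            enhanced = np.zeros_like(x)
            error = np.zeros_like(x)
            for n in range(delay + taps, len(x)):
                u = x[n - delay - taps:n - delay][::-1]   # delayed tap vector
                y = w @ u
                e = x[n] - y
                w += 2 * mu * e * u                        # LMS weight update
                enhanced[n], error[n] = y, e
            return enhanced, error

        fs = 2000
        t = np.arange(0, 4, 1 / fs)
        heart = np.sin(2 * np.pi * 50 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)
        breath = 0.3 * np.random.default_rng(3).standard_normal(len(t))
        _, breath_only = ale_lms(heart + breath)
        print(np.round(np.std(breath_only[2000:]), 2))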

  15. Performance Evaluation of Heart Sound Cancellation in FPGA Hardware Implementation for Electronic Stethoscope

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2014-01-01

    Full Text Available This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with an Altera Cyclone II–EP2C70F89 shows that the proposed ALE used 45% of the chip's resources. Experiments with the proposed prototype were made using a DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppression was observed in our experiments from both the frequency-domain and time-domain perspectives.

  16. Sound card based digital correlation detection of weak photoelectrical signals

    International Nuclear Information System (INIS)

    Tang Guanghui; Wang Jiangcheng

    2005-01-01

    A simple and low-cost digital correlation method is proposed to investigate weak photoelectrical signals, using a high-speed photodiode as detector, which is directly connected to a programmably triggered sound card analogue-to-digital converter and a personal computer. Two testing experiments, autocorrelation detection of weak flickering signals from a computer monitor against a background of noisy outdoor stray light and cross-correlation measurement of the surface velocity of a moving tape, are performed, showing that the results are reliable and the method is easy to implement.
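
    As an illustration of the cross-correlation measurement described above, the time delay between two sensor channels can be taken from the peak of their cross-correlation and converted to a surface speed; the sampling rate, sensor spacing and signals below are invented for the example.

        import numpy as np

        def delay_by_crosscorrelation(a, b, fs):
            # Time delay between two sensor signals from the peak of their
            # cross-correlation; with a known sensor spacing the surface speed
            # of the moving tape follows as v = spacing / delay.
            corr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
            lag = np.argmax(corr) - (len(b) - 1)
            return lag / fs

        fs = 44100                                   # sound-card sampling rate [Hz]
        rng = np.random.default_rng(4)
        upstream = rng.standard_normal(fs)           # texture signal at sensor 1
        shift = 220                                  # ~5 ms transit time between sensors
        downstream = np.roll(upstream, shift) + 0.2 * rng.standard_normal(fs)

        delay = delay_by_crosscorrelation(downstream, upstream, fs)
        spacing = 0.01                               # assumed 10 mm sensor spacing
        print(f"delay {delay*1e3:.2f} ms -> speed {spacing/delay:.2f} m/s")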

  17. Linear theory of sound waves with evaporation and condensation

    International Nuclear Information System (INIS)

    Inaba, Masashi; Watanabe, Masao; Yano, Takeru

    2012-01-01

    An asymptotic analysis of a boundary-value problem of the Boltzmann equation for small Knudsen number is carried out for the case when an unsteady flow of polyatomic vapour induces reciprocal evaporation and condensation at the interface between the vapour and its liquid phase. The polyatomic version of the Boltzmann equation of the ellipsoidal statistical Bhatnagar–Gross–Krook (ES-BGK) model is used and the asymptotic expansions for small Knudsen numbers are applied on the assumptions that the Mach number is sufficiently small compared with the Knudsen number and the characteristic length scale divided by the characteristic time scale is comparable with the speed of sound in a reference state, as in the case of sound waves. In the leading order of approximation, we derive a set of the linearized Euler equations for the entire flow field and a set of the boundary-layer equations near the boundaries (the vapour–liquid interface and simple solid boundary). The boundary conditions for the Euler and boundary-layer equations are obtained at the same time when the solutions of the Knudsen layers on the boundaries are determined. The slip coefficients in the boundary conditions are evaluated for water vapour. A simple example of the standing sound wave in water vapour bounded by a liquid water film and an oscillating piston is demonstrated and the effect of evaporation and condensation on the sound wave is discussed. (paper)

  18. Fractal dimension to classify the heart sound recordings with KNN and fuzzy c-mean clustering methods

    Science.gov (United States)

    Juniati, D.; Khotimah, C.; Wardani, D. E. K.; Budayasa, K.

    2018-01-01

    Heart abnormalities can be detected from heart sounds. A heart sound can be heard directly with a stethoscope or indirectly through a phonocardiograph, a machine that records heart sounds. This paper presents the implementation of fractal dimension theory to classify phonocardiograms into a normal heart sound, a murmur, or an extrasystole. The main algorithm used to calculate the fractal dimension was Higuchi's algorithm. There were two steps in the classification of phonocardiograms: feature extraction and classification. For feature extraction, we used the Discrete Wavelet Transform to decompose the heart sound signal into several sub-bands depending on the selected level. After the decomposition process, the signal was processed using the Fast Fourier Transform (FFT) to determine the spectral frequency. The fractal dimension of the FFT output was calculated using Higuchi's algorithm. The classification of the fractal dimensions of all phonocardiograms was done with KNN and fuzzy c-means clustering methods. Based on the research results, the best accuracy obtained was 86.17%, achieved with feature extraction by DWT decomposition at level 3, kmax = 50, 5-fold cross-validation, and 5 neighbors in the K-NN algorithm. Meanwhile, for fuzzy c-means clustering, the accuracy was 78.56%.
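
    Higuchi's algorithm itself is compact enough to sketch; the implementation below follows the standard formulation (average curve length L(k) over coarse-grained copies of the signal, then the slope of log L(k) against log(1/k)) and is checked on signals with known dimension, but it is an illustration rather than the authors' code.

        import numpy as np

        def higuchi_fd(x, kmax=50):
            # Higuchi fractal dimension of a 1-D signal.
            x = np.asarray(x, dtype=float)
            N = len(x)
            L = []
            for k in range(1, kmax + 1):
                Lk = []
                for m in range(k):
                    idx = np.arange(m, N, k)
                    if len(idx) < 2:
                        continue
                    length = np.sum(np.abs(np.diff(x[idx])))
                    norm = (N - 1) / (len(idx) - 1) / k   # Higuchi normalisation
                    Lk.append(length * norm / k)
                L.append(np.mean(Lk))
            k_vals = np.arange(1, kmax + 1)
            slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(L), 1)
            return slope

        # White noise should give a dimension close to 2, a straight line close to 1
        rng = np.random.default_rng(5)
        print(round(higuchi_fd(rng.standard_normal(2000)), 2),
              round(higuchi_fd(np.linspace(0, 1, 2000)), 2))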

  19. Using therapeutic sound with progressive audiologic tinnitus management.

    Science.gov (United States)

    Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A

    2008-09-01

    Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound (broadly categorized as environmental sound, music, and speech), resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).

  20. Recovery of indium from used LCD panel by a time efficient and environmentally sound method assisted HEBM

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Cheol-Hee; Jeong, Mi-Kyung [Division of Advanced Materials Engineering and Institute for Rare Metals, Kongju National University, Cheonan 331-717 (Korea, Republic of); Fatih Kilicaslan, M. [Department of Physics, Faculty of Art and Science, Kastamonu University, Kastamonu (Turkey); Lee, Jong-Hyeon [Graduate School of Green Energy Technology and Department of Nanomaterials Engineering, Chungnam National University, 79 Daehak-ro, Yuseong-gu, Dajeon 305-764 (Korea, Republic of); Hong, Hyun-Seon [Advanced Materials and Processing Center, Institute for Advanced Engineering (IAE), Yongin 449-863 (Korea, Republic of); Hong, Soon-Jik, E-mail: hongsj@kongju.ac.kr [Division of Advanced Materials Engineering and Institute for Rare Metals, Kongju National University, Cheonan 331-717 (Korea, Republic of)

    2013-03-15

    Highlights: ► In this study, we recovered indium from a waste LCD panel. ► The ITO glass was milled to obtain micron-size particles in a HEBM machine. ► The effect of the particle size of the ITO glass on the amount of dissolved In was investigated. ► In a very short time, a considerable amount of In was recovered. ► The amount of HCl in the acid solution was decreased to 40 vol.%. - Abstract: In this study, a method that is environmentally sound and time- and energy-efficient has been used for the recovery of indium from used liquid crystal display (LCD) panels. In this method, indium tin oxide (ITO) glass was crushed to micron-size particles in seconds via high energy ball milling (HEBM). The parameters affecting the amount of dissolved indium, such as milling time, particle size, exposure time to the acid solution, and amount of HCl in the acid solution, were optimized. The results show that by crushing ITO glass to micron-size particles by HEBM, it is possible to extract a higher amount of indium at room temperature than by conventional methods using only conventional shredding machines. In this study, about 86% of the indium present in the raw materials was recovered in a very short time.

  1. Coupled simulation of meteorological parameters and sound intensity in a narrow valley

    Energy Technology Data Exchange (ETDEWEB)

    Heimann, D. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Wessling (Germany). Inst. fuer Physik der Atmosphaere; Gross, G. [Hannover Univ. (Germany). Inst. fuer Meteorologie und Klimatologie

    1997-07-01

    A meteorological mesoscale model is used to simulate the inhomogeneous distribution of temperature and the appertaining development of thermal wind systems in a narrow two-dimensional valley during the course of a cloud-free day. A simple sound particle model takes up the simulated meteorological fields and calculates the propagation of noise which originates from a line source at one of the slopes of this valley. The coupled modeling system ensures consistency of topography, meteorological parameters and the sound field. The temporal behaviour of the sound intensity level across the valley is examined. It is only governed by the time-dependent meteorology. The results show remarkable variations of the sound intensity during the course of a day depending on the location in the valley. (orig.) 23 refs.

  2. Letter-Sound Knowledge: Exploring Gender Differences in Children When They Start School Regarding Knowledge of Large Letters, Small Letters, Sound Large Letters, and Sound Small Letters

    Directory of Open Access Journals (Sweden)

    Hermundur Sigmundsson

    2017-09-01

    Full Text Available This study explored whether there is a gender difference in letter-sound knowledge when children start school. 485 children aged 5–6 years completed an assessment of letter-sound knowledge, i.e., large letters; sound of large letters; small letters; sound of small letters. The findings indicate a significant difference between girls and boys in all four factors tested in this study, in favor of the girls. There are still no clear explanations for the basis of the presumed gender difference in letter-sound knowledge. A neurobiological origin of the findings cannot be excluded; however, the fact that girls have probably been exposed to more language experience and stimulation than boys lends support to explanations based on environmental factors.

  3. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Elena Geangu

    2015-04-01

    Full Text Available Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds' processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds.

  4. Sound as Popular Culture

    DEFF Research Database (Denmark)

    The wide-ranging texts in this book take as their premise the idea that sound is a subject through which popular culture can be analyzed in an innovative way. From an infant’s gurgles over a baby monitor to the roar of the crowd in a stadium to the sub-bass frequencies produced by sound systems...... in the disco era, sound—not necessarily aestheticized as music—is inextricably part of the many domains of popular culture. Expanding the view taken by many scholars of cultural studies, the contributors consider cultural practices concerning sound not merely as semiotic or signifying processes but as material......, physical, perceptual, and sensory processes that integrate a multitude of cultural traditions and forms of knowledge. The chapters discuss conceptual issues as well as terminologies and research methods; analyze historical and contemporary case studies of listening in various sound cultures; and consider...

  5. Fourth sound in relativistic superfluidity theory

    International Nuclear Information System (INIS)

    Vil'chinskij, S.I.; Fomin, P.I.

    1995-01-01

    The Lorentz-covariant equations describing the propagation of fourth sound in the relativistic theory of superfluidity are derived. Expressions for the velocity of the fourth sound are obtained, and the character of the oscillations in this sound mode is determined

  6. The science of sound recording

    CERN Document Server

    Kadis, Jay

    2012-01-01

    The Science of Sound Recording will provide you with more than just an introduction to sound and recording; it will allow you to dive right into some of the technical areas that often appear overwhelming to anyone without an electrical engineering or physics background. The Science of Sound Recording helps you build a basic foundation of scientific principles, explaining how recording really works. Packed with valuable must-know information, illustrations and examples of worked-through equations, this book introduces the theory behind sound recording practices in a logical and prac

  7. Nuclear sound

    International Nuclear Information System (INIS)

    Wambach, J.

    1991-01-01

    Nuclei, like more familiar mechanical systems, undergo simple vibrational motion. Among these vibrations, sound modes are of particular interest since they reveal important information on the effective interactions among the constituents and, through extrapolation, on the bulk behaviour of nuclear and neutron matter. Sound wave propagation in nuclei shows strong quantum effects familiar from other quantum systems. Microscopic theory suggests that the restoring forces are caused by the complex structure of the many-Fermion wavefunction and, in some cases, have no classical analogue. The damping of the vibrational amplitude is strongly influenced by phase coherence among the particles participating in the motion. (author)

  8. An open access database for the evaluation of heart sound algorithms.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.

  9. Determining the Time of Flight and Speed of Sound on Different types of Edible Oil

    Science.gov (United States)

    Azman, N. A.; Hamid, S. B. Abd

    2017-11-01

    Edible oils are most often plant-based oils extracted from various seeds. There are cases in which oil sold as fully virgin has been found to be fraudulent. Adulteration refers to the intentional addition of extraneous, improper or cheaper ingredients to the oil, or the dilution or removal of some valuable ingredient, in order to increase profits; this undermines the reliability of Malaysian food product quality. In this research, the time of flight was obtained using a Texas Instruments TDC1000-TDC7200 EVM board connected to a 1 MHz ultrasonic transducer. The authors measured the time of flight of five vegetable oils (olive oil, sunflower oil, corn oil, coconut oil, and mustard oil) at controlled temperatures from 20°C to 40°C. The values were compared with other results from the literature. Over this range, the time-of-flight values decreased exponentially while the speed of sound increased. This relationship will be useful in a spectrum unfolding method to investigate adulteration in different types of edible oil, with the aim of assessing oil quality and addressing concerns about the reliability of Malaysian food products.
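    The underlying conversion from a measured time of flight to a speed of sound is straightforward. The sketch below is a minimal illustration, assuming a pulse-echo geometry with a known acoustic path length; the path length, temperatures and time-of-flight readings are placeholders, not data from the study.

```python
# Hedged sketch: convert an ultrasonic time of flight into a speed of sound,
# assuming a pulse-echo geometry with a known acoustic path length.
# Path length, temperatures and time-of-flight readings are illustrative.

def speed_of_sound(time_of_flight_s: float, path_length_m: float, pulse_echo: bool = True) -> float:
    """Speed of sound in m/s; in pulse-echo mode the wave travels the path twice."""
    distance = 2.0 * path_length_m if pulse_echo else path_length_m
    return distance / time_of_flight_s

path_length_m = 0.025                                  # 25 mm measurement cell (assumed)
readings_s = {20: 34.5e-6, 30: 34.2e-6, 40: 33.9e-6}   # deg C -> time of flight (illustrative)

for temp_c, tof in readings_s.items():
    print(f"{temp_c} °C: {speed_of_sound(tof, path_length_m):.0f} m/s")
```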

  10. Sustained Magnetic Responses in Temporal Cortex Reflect Instantaneous Significance of Approaching and Receding Sounds.

    Directory of Open Access Journals (Sweden)

    Dominik R Bach

    Full Text Available Rising sound intensity often signals an approaching sound source and can serve as a powerful warning cue, eliciting phasic attention, perception biases and emotional responses. How the evaluation of approaching sounds unfolds over time remains elusive. Here, we capitalised on the temporal resolution of magnetoencephalography (MEG) to investigate in humans the dynamic encoding of approaching and receding sounds. We compared magnetic responses to intensity envelopes of complex sounds to those of white noise sounds, in which intensity change is not perceived as approaching. Sustained magnetic fields over temporal sensors tracked intensity change in complex sounds in an approximately linear fashion, an effect not seen for intensity change in white noise sounds, or for overall intensity. Hence, these fields are likely to track approach/recession, but not the apparent (instantaneous) distance of the sound source, or its intensity as such. As a likely source of this activity, the bilateral inferior temporal gyrus and right temporo-parietal junction emerged. Our results indicate that discrete temporal cortical areas parametrically encode behavioural significance in moving sound sources, with the signal unfolding in a manner reminiscent of evidence accumulation. This may help an understanding of how acoustic percepts are evaluated as behaviourally relevant; our results highlight a crucial role of these cortical areas.

  11. Cuffless and Continuous Blood Pressure Estimation from the Heart Sound Signals

    Directory of Open Access Journals (Sweden)

    Rong-Chao Peng

    2015-09-01

    Full Text Available Cardiovascular disease, such as hypertension, is a leading cause of death, and its early detection is of great importance. However, traditional medical devices are often bulky and expensive, and unsuitable for home healthcare. In this paper, we proposed an easy and inexpensive technique to estimate continuous blood pressure from the heart sound signals acquired by the microphone of a smartphone. A cold-pressor experiment was performed in 32 healthy subjects, with a smartphone to acquire heart sound signals and with a commercial device to measure continuous blood pressure. The Fourier spectrum of the second heart sound and the blood pressure were regressed using a support vector machine, and the accuracy of the regression was evaluated using 10-fold cross-validation. Statistical analysis showed that the mean correlation coefficients between the predicted values from the regression model and the measured values from the commercial device were 0.707, 0.712, and 0.748 for systolic, diastolic, and mean blood pressure, respectively, and that the mean errors were less than 5 mmHg, with standard deviations less than 8 mmHg. These results suggest that this technique is of potential use for cuffless and continuous blood pressure monitoring and it has promising application in home healthcare services.
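    The regression step described above can be sketched roughly as follows, assuming scikit-learn's SVR with standardized features and 10-fold cross-validation; the spectral features and blood pressure values here are synthetic placeholders, and the study's actual heart sound segmentation and reference measurements are not reproduced.

```python
# Hedged sketch of the regression step: map spectral features of the second
# heart sound (S2) to blood pressure with a support vector machine, scored by
# 10-fold cross-validation. All data below are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict, KFold

rng = np.random.default_rng(0)
n_beats, n_bins = 200, 64
X = rng.normal(size=(n_beats, n_bins))                         # stand-in for S2 Fourier magnitudes
y = 120 + 10 * X[:, 0] + rng.normal(scale=5, size=n_beats)     # stand-in systolic BP (mmHg)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
y_pred = cross_val_predict(model, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))

r = np.corrcoef(y, y_pred)[0, 1]
err = y_pred - y
print(f"correlation: {r:.3f}, mean error: {err.mean():.2f} mmHg, SD: {err.std():.2f} mmHg")
```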

  12. Students' Learning of a Generalized Theory of Sound Transmission from a Teaching-Learning Sequence about Sound, Hearing and Health

    Science.gov (United States)

    West, Eva; Wallin, Anita

    2013-04-01

    Learning abstract concepts such as sound often involves an ontological shift because to conceptualize sound transmission as a process of motion demands abandoning sound transmission as a transfer of matter. Thus, for students to be able to grasp and use a generalized model of sound transmission poses great challenges for them. This study involved 199 students aged 10-14. Their views about sound transmission were investigated before and after teaching by comparing their written answers about sound transfer in different media. The teaching was built on a research-based teaching-learning sequence (TLS), which was developed within a framework of design research. The analysis involved interpreting students' underlying theories of sound transmission, including the different conceptual categories that were found in their answers. The results indicated a shift in students' understandings from the use of a theory of matter before the intervention to embracing a theory of process afterwards. The described pattern was found in all groups of students irrespective of age. Thus, teaching about sound and sound transmission is fruitful already at the ages of 10-11. However, the older the students, the more advanced is their understanding of the process of motion. In conclusion, the use of a TLS about sound, hearing and auditory health promotes students' conceptualization of sound transmission as a process in all grades. The results also imply some crucial points in teaching and learning about the scientific content of sound.

  13. Digital Sound Encryption with Logistic Map and Number Theoretic Transform

    Science.gov (United States)

    Satria, Yudi; Gabe Rizky, P. H.; Suryadi, MT

    2018-03-01

    Digital sound security has limits on encryption in the frequency domain. A Number Theoretic Transform over the field GF(2^521 − 1) improves on and solves that problem. The algorithm for this sound encryption is based on a combination of a chaos function and the Number Theoretic Transform. The chaos function used in this paper is the Logistic Map. The trials and simulations were conducted using 5 different digital sound files in WAV format as test data, each simulated at least 100 times. The resulting key stream is random, as verified by 15 NIST randomness tests. The key space formed is very large, more than 10^469. The processing speed of the encryption algorithm is only slightly affected by the Number Theoretic Transform.
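    A minimal sketch of the chaotic part of such a scheme is shown below: a logistic-map keystream XORed with 16-bit audio samples. This illustrates only the Logistic Map component; the Number Theoretic Transform over GF(2^521 − 1) and the paper's exact key schedule are not reproduced, and the parameters are illustrative.

```python
# Hedged sketch: logistic-map keystream XORed with int16 audio samples.
# Parameters (x0, r) are illustrative; the NTT stage of the paper is omitted.
import numpy as np

def logistic_keystream(n: int, x0: float = 0.7, r: float = 3.99) -> np.ndarray:
    """Generate n pseudo-random 16-bit words from the logistic map x <- r*x*(1-x)."""
    x = x0
    out = np.empty(n, dtype=np.uint16)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 65536) & 0xFFFF
    return out

def xor_encrypt(samples: np.ndarray, x0: float, r: float) -> np.ndarray:
    """Encrypt (or decrypt) int16 audio samples by XOR with the keystream."""
    ks = logistic_keystream(samples.size, x0, r)
    return (samples.view(np.uint16) ^ ks).view(np.int16)

audio = np.array([0, 1000, -1000, 32767, -32768], dtype=np.int16)  # toy samples
cipher = xor_encrypt(audio, 0.7, 3.99)
plain = xor_encrypt(cipher, 0.7, 3.99)   # XOR with the same keystream inverts it
assert np.array_equal(plain, audio)
```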

  14. Low complexity lossless compression of underwater sound recordings.

    Science.gov (United States)

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
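    The abstract does not spell out the algorithm, but the standard recipe for low-complexity lossless audio coding, a fixed linear predictor followed by Rice/Golomb coding of the small residuals, conveys why such compressors are cheap. The toy sketch below, under those generic assumptions (it is not the authors' algorithm), simply reports the compression factor achievable on one synthetic block.

```python
# Illustrative sketch (not the published algorithm): fixed linear prediction
# plus Rice coding of residuals, the usual low-complexity lossless approach.
import numpy as np

def rice_code_length(residuals: np.ndarray, k: int) -> int:
    """Total bits to Rice-code the residuals with parameter k (zigzag-mapped to unsigned)."""
    u = (np.abs(residuals) * 2 - (residuals < 0)).astype(np.int64)  # zigzag map to non-negative
    return int(np.sum((u >> k) + 1 + k))  # unary quotient + stop bit + k remainder bits

def compress_block_bits(samples: np.ndarray) -> int:
    """Bits needed for one block: 2nd-order fixed predictor + best Rice parameter."""
    pred = 2 * samples[1:-1] - samples[:-2]          # predict x[n] from x[n-1] and x[n-2]
    residuals = samples[2:] - pred
    best = min(rice_code_length(residuals, k) for k in range(16))
    return best + 2 * 16                              # plus two verbatim warm-up samples

# Toy signal: a noisy low-frequency tone, roughly like a quiet recording.
t = np.arange(4096)
x = (2000 * np.sin(2 * np.pi * t / 64)
     + np.random.default_rng(1).normal(0, 20, t.size)).astype(np.int64)

raw_bits = 16 * x.size
print(f"compression factor ≈ {raw_bits / compress_block_bits(x):.2f}")
```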

  15. Loss and persistence of implicit memory for sound: evidence from auditory stream segregation context effects.

    Science.gov (United States)

    Snyder, Joel S; Weintraub, David M

    2013-07-01

    An important question is the extent to which declines in memory over time are due to passive loss or active interference from other stimuli. The purpose of the present study was to determine the extent to which implicit memory effects in the perceptual organization of sound sequences are subject to loss and interference. Toward this aim, we took advantage of two recently discovered context effects in the perceptual judgments of sound patterns, one that depends on stimulus features of previous sounds and one that depends on the previous perceptual organization of these sounds. The experiments measured how listeners' perceptual organization of a tone sequence (test) was influenced by the frequency separation, or the perceptual organization, of the two preceding sequences (context1 and context2). The results demonstrated clear evidence for loss of context effects over time but little evidence for interference. However, they also revealed that context effects can be surprisingly persistent. The robust effects of loss, followed by persistence, were similar for the two types of context effects. We discuss whether the same auditory memories might contain information about basic stimulus features of sounds (i.e., frequency separation), as well as the perceptual organization of these sounds.

  16. Mercury in Long Island Sound sediments

    Science.gov (United States)

    Varekamp, J.C.; Buchholtz ten Brink, Marilyn R.; Mecray, E.I.; Kreulen, B.

    2000-01-01

    Mercury (Hg) concentrations were measured in 394 surface and core samples from Long Island Sound (LIS). The surface sediment Hg concentration data show a wide spread, ranging up to more than 600 ppb Hg in westernmost LIS. Part of the observed range is related to variations in the bottom sedimentary environments, with higher Hg concentrations in the muddy depositional areas of central and western LIS. A strong residual trend of higher Hg values to the west remains when the data are normalized to grain size. Relationships between a tracer for sewage effluents (C. perfringens) and Hg concentrations indicate that between 0 and 50% of the Hg is derived from sewage sources for most samples from the western and central basins. A higher percentage of sewage-derived Hg is found in samples from the westernmost section of LIS and in some local spots near urban centers. The remainder of the Hg is carried into the Sound with contaminated sediments from the watersheds and a small fraction enters the Sound as in situ atmospheric deposition. The Hg-depth profiles of several cores have well-defined contamination profiles that extend to pre-industrial background values. These data indicate that the Hg levels in the Sound have increased by a factor of 5-6 over the last few centuries, but Hg levels in LIS sediments have declined in modern times by up to 30%. The concentrations of C. perfringens increased exponentially in the top core sections which had declining Hg concentrations, suggesting a recent decline in Hg fluxes that are unrelated to sewage effluents. The observed spatial and historical trends show Hg fluxes to LIS from sewage effluents, contaminated sediment input from the Connecticut River, point source inputs of strongly contaminated sediment from the Housatonic River, variations in the abundance of Hg carrier phases such as TOC and Fe, and focusing of sediment-bound Hg in association with westward sediment transport within the Sound.

  17. Sleep disturbance caused by meaningful sounds and the effect of background noise

    Science.gov (United States)

    Namba, Seiichiro; Kuwano, Sonoko; Okamoto, Takehisa

    2004-10-01

    To study noise-induced sleep disturbance, a new procedure called the "noise interrupted method" has been developed. The experiment is conducted in the bedroom of each subject's house. The sounds are reproduced with a mini-disk player which has an automatic reverse function. If the sound is disturbing and subjects cannot sleep, they are allowed to switch off the sound 1 h after they start to try to sleep. This switch-off (noise interrupted behavior) is an important index of sleep disturbance. The next morning they fill in a questionnaire asking about quality of sleep, disturbance by the sounds, the time when they switched off the sound, etc. The results showed a good relationship between L and the percentage of subjects who could not sleep within an hour, and between L and the disturbance reported in the questionnaire. This suggests that this method is a useful tool to measure the sleep disturbance caused by noise under well-controlled conditions.

  18. A Loudness Function for Maintaining Spectral Balance at Changing Sound Pressure Levels

    DEFF Research Database (Denmark)

    Nielsen, Sofus Birkedal

    Our perception of loudness is a function of frequency as well as sound pressure level, as described in ISO226:2003: Normal Equal Loudness Level Contours, which specifies the sound pressure level needed for pure tones to be perceived as equally loud. At a music performance, this is taken care...... of by the sound engineer, who listens to the individual sound sources and adjusts and equalizes them to the wanted spectral balance, including the whole chain of audio equipment and surroundings. At a live venue the sound pressure level will normally change during a concert, and typically increase over time......B is doubling of the effect to the loudspeakers). A level-dependent digital loudness function has been made based on ISO226:2003 and will be demonstrated. It can maintain the spectral balance at alternating levels and is based on fractional order digital filters. Tutorial. Abstract T3.3 (30th August 16:00 - 17...

  19. Movement and Perceptual Strategies to Intercept Virtual Sound Sources.

    Directory of Open Access Journals (Sweden)

    Naeem eKomeilipoor

    2015-05-01

    Full Text Available To intercept a moving object, one needs to be in the right place at the right time. In order to do this, it is necessary to pick up and use perceptual information that specifies the time to arrival of an object at an interception point. In the present study, we examined the ability to intercept a laterally moving virtual sound object by controlling the displacement of a sliding handle and tested whether and how the interaural time difference (ITD) could be the main source of perceptual information for successfully intercepting the virtual object. The results revealed that in order to accomplish the task, one might need to vary the duration of the movement, control the hand velocity and time to reach the peak velocity (speed coupling), while the adjustment of movement initiation did not facilitate performance. Furthermore, the overall performance was more successful when subjects employed a time-to-contact (tau) coupling strategy. This result shows that prospective information is available in sound for guiding goal-directed actions.
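    For readers unfamiliar with the terms, tau (time to contact) of a gap is the gap size divided by its rate of closure, and tau coupling means moving so that the hand's tau stays proportional to the target's tau. The sketch below illustrates these quantities on synthetic motion profiles; it is not the experimental analysis from the study.

```python
# Hedged sketch of tau and tau coupling on synthetic motion profiles.
import numpy as np

def tau(gap: np.ndarray, dt: float) -> np.ndarray:
    """First-order time to contact: gap divided by its rate of closure."""
    closure_rate = -np.gradient(gap, dt)       # positive while the gap shrinks
    return gap / closure_rate

dt = 0.001
t = np.arange(0.0, 0.99, dt)
target_gap = 1.0 - 1.0 * t                     # source approaches the interception point at 1 m/s
k = 0.7
hand_gap = 0.4 * (target_gap / 1.0) ** (1.0 / k)   # a movement that tau-couples onto the target

ratio = tau(hand_gap, dt) / tau(target_gap, dt)
print("tau_hand / tau_target ≈", ratio[10:-10].mean())  # ≈ k by construction
```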

  20. Audio Quality Assurance : An Application of Cross Correlation

    DEFF Research Database (Denmark)

    Jurik, Bolette Ammitzbøll; Nielsen, Jesper Asbjørn Sindahl

    2012-01-01

    We describe algorithms for automated quality assurance on the content of audio files in the context of preservation actions and access. The algorithms use cross correlation to compare the sound waves. They are used to do overlap analysis in an access scenario, where preserved radio broadcasts are used in...
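    A minimal sketch of the core operation, assuming NumPy/SciPy rather than the authors' implementation: cross-correlate a snippet against a longer recording to find the lag at which the two waveforms align, for example to check that the tail of one preserved broadcast file overlaps the head of the next. The signals below are synthetic stand-ins.

```python
# Hedged sketch: locate a snippet inside a longer recording by cross-correlation.
import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(0)
fs = 8000
programme = rng.normal(size=5 * fs)                                      # 5 s of stand-in audio
snippet = programme[2 * fs : 3 * fs] + rng.normal(scale=0.05, size=fs)   # noisy 1 s copy

corr = correlate(snippet, programme, mode="full")
lags = correlation_lags(snippet.size, programme.size, mode="full")
offset = -lags[np.argmax(corr)]                    # position of the snippet inside the programme

peak = corr.max() / (np.linalg.norm(snippet) * np.linalg.norm(programme[offset:offset + snippet.size]))
print(f"snippet found at {offset / fs:.2f} s, normalized peak {peak:.2f}")
```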

  1. Sound propagation in cities

    NARCIS (Netherlands)

    Salomons, E.; Polinder, H.; Lohman, W.; Zhou, H.; Borst, H.

    2009-01-01

    A new engineering model for sound propagation in cities is presented. The model is based on numerical and experimental studies of sound propagation between street canyons. Multiple reflections in the source canyon and the receiver canyon are taken into account in an efficient way, while weak

  2. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  3. Exploring Noise: Sound Pollution.

    Science.gov (United States)

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  4. Puget Sound area electric reliability plan. Draft environmental impact statement

    Energy Technology Data Exchange (ETDEWEB)

    1991-09-01

    The Puget Sound Area Electric Reliability Plan Draft Environmental Impact Statement (DEIS) identifies the alternatives for solving a power system problem in the Puget Sound area. This Plan is undertaken by Bonneville Power Administration (BPA), Puget Sound Power & Light, Seattle City Light, Snohomish Public Utility District No. 1 (PUD), and Tacoma Public Utilities. The Plan consists of potential actions in Puget Sound and other areas in the State of Washington. A specific need exists in the Puget Sound area for balance between east-west transmission capacity and the increasing demand to import power generated east of the Cascades. At certain times of the year, there is more demand for power than the electric system can supply in the Puget Sound area. This high demand, called peak demand, occurs during the winter months when unusually cold weather increases electricity use for heating. The existing power system can supply enough power if no emergencies occur. However, during emergencies, the system will not operate properly. As demand grows, the system becomes more strained. To meet demand, the rate of growth of demand must be reduced or the ability to serve the demand must be increased, or both. The plan to balance Puget Sound's power demand and supply has these purposes: The plan should define a set of actions that would accommodate ten years of load growth (1994-2003). Federal and State environmental quality requirements should be met. The plan should be consistent with the plans of the Northwest Power Planning Council. The plan should serve as a consensus guideline for coordinated utility action. The plan should be flexible to accommodate uncertainties and differing utility needs. The plan should balance environmental impacts and economic costs. The plan should provide electric system reliability consistent with customer expectations. 29 figs., 24 tabs.

  5. A neurally inspired musical instrument classification system based upon the sound onset.

    Science.gov (United States)

    Newton, Michael J; Smith, Leslie S

    2012-06-01

    Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
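    The cepstral reference strategy evaluated over the sound onset can be sketched roughly as below, assuming librosa and scikit-learn; this is not the spiking onset/echo state network model, and the file layout and labels are hypothetical placeholders.

```python
# Hedged sketch of the MFCC-over-onset baseline: mean MFCCs from the first
# ~100 ms of each tone, fed to a standard classifier. Paths are hypothetical.
import glob
import os
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def onset_mfcc(path: str, onset_dur: float = 0.1, n_mfcc: int = 13) -> np.ndarray:
    """Return the mean MFCC vector over the first onset_dur seconds of a tone."""
    y, sr = librosa.load(path, sr=None, mono=True)
    y = y[: int(onset_dur * sr)]                     # keep only the sound onset
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical layout: tones/<instrument-family>/<note>.wav
paths = sorted(glob.glob("tones/*/*.wav"))
labels = [os.path.basename(os.path.dirname(p)) for p in paths]

X = np.vstack([onset_mfcc(p) for p in paths])
scores = cross_val_score(SVC(kernel="rbf", C=10.0), X, labels, cv=5)
print("mean accuracy:", scores.mean())
```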

  6. A description of externally recorded womb sounds in human subjects during gestation.

    Science.gov (United States)

    Parga, Joanna J; Daland, Robert; Kesavan, Kalpashri; Macey, Paul M; Zeltzer, Lonnie; Harper, Ronald M

    2018-01-01

    Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants, and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500-5,000 Hz) and mid-frequency (100-500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10-100 Hz) with gestational age. Variation appeared between mothers, suggesting a resonant chamber role for intra
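    The band analysis mentioned above (low, mid and high frequency energy) can be sketched with a Welch long-term average spectrum; the file name is a hypothetical placeholder for one abdominal recording, and the band edges follow those quoted in the abstract.

```python
# Hedged sketch: long-term average spectrum via Welch's method, summed into the
# low (10-100 Hz), mid (100-500 Hz) and high (500-5000 Hz) bands named above.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

fs, x = wavfile.read("womb_recording_RUQ.wav")   # hypothetical stethoscope recording
x = x.astype(float)
if x.ndim > 1:
    x = x.mean(axis=1)                           # mix to mono if the file is stereo

freqs, psd = welch(x, fs=fs, nperseg=8192)
df = freqs[1] - freqs[0]

bands = {"low 10-100 Hz": (10, 100), "mid 100-500 Hz": (100, 500), "high 500-5000 Hz": (500, 5000)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power_db = 10 * np.log10(psd[mask].sum() * df + 1e-20)
    print(f"{name}: {band_power_db:.1f} dB")
```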

  7. Photoacoustic Sounds from Meteors.

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, Richard E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tencer, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sweatt, William C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hogan, Roy E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Spurny, Pavel [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic)

    2015-03-01

    High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer’s ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  8. Urban Sound Interfaces

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2012-01-01

    This paper draws on the theories of Michel de Certeau and Gaston Bachelard to discuss how media architecture, in the form of urban sound interfaces, can help us perceive the complexity of the spaces we inhabit, by exploring the history and the narratives of the places in which we live. In this paper, three sound works are discussed in relation to the iPod, which is considered as a more private way to explore urban environments, and as a way to control the individual perception of urban spaces.

  9. Sound field separation with sound pressure and particle velocity measurements

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-01-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array; thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance...

  10. Electromagnetic sounding of the Earth's interior

    CERN Document Server

    Spichak, Viacheslav V

    2015-01-01

    Electromagnetic Sounding of the Earth's Interior 2nd edition provides a comprehensive, up-to-date collection of contributions, covering methodological, computational and practical aspects of electromagnetic sounding of the Earth by different techniques at global, regional and local scales. Moreover, it contains new developments such as the concept of self-consistent tasks of geophysics and 3-D interpretation of TEM sounding which, so far, have not all been covered by one book. Electromagnetic Sounding of the Earth's Interior 2nd edition consists of three parts: I - EM sounding methods, II - Forward modelling and inversion techniques, and III - Data processing, analysis, modelling and interpretation. The new edition includes brand new chapters on pulse and frequency electromagnetic sounding for hydrocarbon offshore exploration. Additionally all other chapters have been extensively updated to include new developments. Presents recently developed methodological findings of the earth's study, including seism...

  11. Broadband transmission noise reduction of smart panels featuring piezoelectric shunt circuits and sound-absorbing material.

    Science.gov (United States)

    Kim, Jaehwan; Lee, Joong-Kuen

    2002-09-01

    The possibility of a broadband noise reduction of piezoelectric smart panels is experimentally studied. A piezoelectric smart panel is basically a plate structure on which piezoelectric patches with electrical shunt circuits are mounted and sound-absorbing material is bonded on the surface of the structure. Sound-absorbing material can absorb the sound transmitted at the midfrequency region effectively while the use of piezoelectric shunt damping can reduce the transmission at resonance frequencies of the panel structure. To be able to reduce the sound transmission at low panel resonance frequencies, piezoelectric damping using the measured electrical impedance model is adopted. A resonant shunt circuit for piezoelectric shunt damping is composed of a resistor and an inductor in series, and they are determined by maximizing the dissipated energy through the circuit. The transmitted noise-reduction performance of smart panels is tested in an acoustic tunnel. The tunnel is a square cross-sectional tube and a loudspeaker is mounted at one side of the tube as a sound source. Panels are mounted in the middle of the tunnel and the transmitted sound pressure across panels is measured. When an absorbing material is bonded on a single plate, a remarkable transmitted noise reduction in the midfrequency region is observed except for the fundamental resonance frequency of the plate. By enabling the piezoelectric shunt damping, noise reduction is achieved at the resonance frequency as well. Piezoelectric smart panels incorporating passive absorbing material and piezoelectric shunt damping are a promising technology for noise reduction over a broad band of frequencies.
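    As an illustration of how such a shunt is typically sized (a generic resonant-shunt rule of thumb, not the authors' measured-impedance optimization): the inductor is chosen so that the electrical resonance formed with the patch capacitance matches the targeted panel mode, and the resistor sets the damping. The numbers below are illustrative, not from the experiment.

```python
# Hedged sketch: size a series resistor-inductor shunt so its electrical
# resonance with the patch capacitance C_p lands on the targeted mode f_n.
import math

def resonant_shunt(f_n_hz: float, c_p_farad: float, zeta: float = 0.1):
    """Return (L in henry, R in ohm) for a series RL shunt tuned to f_n_hz."""
    w_n = 2 * math.pi * f_n_hz
    L = 1.0 / (w_n ** 2 * c_p_farad)          # electrical resonance at the mode frequency
    R = 2 * zeta * math.sqrt(L / c_p_farad)   # resistance chosen for a target damping ratio
    return L, R

L, R = resonant_shunt(f_n_hz=120.0, c_p_farad=50e-9)   # e.g. a 120 Hz mode, 50 nF patch (assumed)
print(f"L ≈ {L:.1f} H, R ≈ {R / 1000:.1f} kΩ")
```

    The large inductance that falls out of this calculation is why such shunts are usually realized with synthetic (gyrator-based) inductors rather than physical coils.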

  12. 21 CFR 876.4590 - Interlocking urethral sound.

    Science.gov (United States)

    2010-04-01

    21 CFR Part 876 (Medical Devices; Gastroenterology-Urology Devices; Surgical Devices), § 876.4590 Interlocking urethral sound. (a) Identification. An interlocking urethral sound is a device that consists of two metal sounds...

  13. Poetry Pages. Sound Effects.

    Science.gov (United States)

    Fina, Allan de

    1992-01-01

    Explains how elementary teachers can help students understand onomatopoeia, suggesting that they define onomatopoeia, share examples of it, read poems and have students discuss onomatopoeic words, act out common household sounds, write about sound effects, and create choral readings of onomatopoeic poems. Two appropriate poems are included. (SM)

  14. Mobile sound: media art in hybrid spaces

    OpenAIRE

    Behrendt, Frauke

    2010-01-01

    The thesis explores the relationships between sound and mobility through an examination of sound art. The research engages with the intersection of sound, mobility and art through original empirical work and theoretically through a critical engagement with sound studies. In dialogue with the work of De Certeau, Lefebvre, Huhtamo and Habermas in terms of the poetics of walking, rhythms, media archeology and questions of publicness, I understand sound art as an experimental mobil...

  15. Synthesis of walking sounds for alleviating gait disturbances in Parkinson's disease.

    Science.gov (United States)

    Rodger, Matthew W M; Young, William R; Craig, Cathy M

    2014-05-01

    Managing gait disturbances in people with Parkinson's disease is a pressing challenge, as symptoms can contribute to injury and morbidity through an increased risk of falls. While drug-based interventions have limited efficacy in alleviating gait impairments, certain nonpharmacological methods, such as cueing, can also induce transient improvements to gait. The approach adopted here is to use computationally-generated sounds to help guide and improve walking actions. The first method described uses recordings of force data taken from the steps of a healthy adult which in turn were used to synthesize realistic gravel-footstep sounds that represented different spatio-temporal parameters of gait, such as step duration and step length. The second method described involves a novel method of sonifying, in real time, the swing phase of gait using real-time motion-capture data to control a sound synthesis engine. Both approaches explore how simple but rich auditory representations of action based events can be used by people with Parkinson's to guide and improve the quality of their walking, reducing the risk of falls and injury. Studies with Parkinson's disease patients are reported which show positive results for both techniques in reducing step length variability. Potential future directions for how these sound approaches can be used to manage gait disturbances in Parkinson's are also discussed.
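    The second method's core idea, mapping a swing-phase parameter onto a synthesis parameter, can be caricatured offline as below; the real system runs in real time from motion-capture data driving a sound synthesis engine, and the speed profile here is a made-up placeholder.

```python
# Hedged offline sketch of the sonification idea: map foot speed over one
# swing phase to the pitch and loudness of a synthesized tone, then write a WAV.
import numpy as np
from scipy.io import wavfile

fs = 22050
t = np.linspace(0.0, 1.0, fs, endpoint=False)        # one 1-second swing phase
foot_speed = np.sin(np.pi * t) ** 2                  # placeholder speed profile, 0..1

freq = 200.0 + 400.0 * foot_speed                    # faster swing -> higher pitch
phase = 2 * np.pi * np.cumsum(freq) / fs             # integrate frequency to get phase
tone = 0.3 * foot_speed * np.sin(phase)              # louder at mid-swing

wavfile.write("swing_sonification.wav", fs, (tone * 32767).astype(np.int16))
```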

  16. Sound source measurement by using a passive sound insulation and a statistical approach

    Science.gov (United States)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

    This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves, at low frequencies, on the sound insulation provided by the passive system alone. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method and the measurement error related to its application are reported as well.

  17. Sound field simulation and acoustic animation in urban squares

    Science.gov (United States)

    Kang, Jian; Meng, Yan

    2005-04-01

    Urban squares are important components of cities, and the acoustic environment is important for their usability. While models and formulae for predicting the sound field in urban squares are important for their soundscape design and improvement, acoustic animation tools would be of great importance for designers as well as for public participation process, given that below a certain sound level, the soundscape evaluation depends mainly on the type of sounds rather than the loudness. This paper first briefly introduces acoustic simulation models developed for urban squares, as well as empirical formulae derived from a series of simulation. It then presents an acoustic animation tool currently being developed. In urban squares there are multiple dynamic sound sources, so that the computation time becomes a main concern. Nevertheless, the requirements for acoustic animation in urban squares are relatively low compared to auditoria. As a result, it is important to simplify the simulation process and algorithms. Based on a series of subjective tests in a virtual reality environment with various simulation parameters, a fast simulation method with acceptable accuracy has been explored. [Work supported by the European Commission.

  18. Design of virtual three-dimensional instruments for sound control

    Science.gov (United States)

    Mulder, Axel Gezienus Elith

    An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real-time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors, an SGI Onyx and by extending a real-time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co-articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel. More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object

  19. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds produced by Javanese students. Focusing on the consonant sounds, the data were collected by recording students’ pronunciation of English words during the pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds onto L2 sounds happened especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. These patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.
