WorldWideScience

Sample records for sound playback experiment

  1. Behavioral responses by Icelandic White-Beaked Dolphins (Lagenorhynchus albirostris) to playback sounds

    DEFF Research Database (Denmark)

    Rasmussen, Marianne H.; Atem, Ana; Miller, Lee A.

    2016-01-01

    The aim of this study was to investigate how wild white-beaked dolphins (Lagenorhynchus albirostris) respond to the playback of novel, anthropogenic sounds. We used amplitude-modulated tones and synthetic pulse-bursts. (Some authors in the literature use the term “burst pulse” meaning a bu…) … a response and a change in the natural behavior of a marine mammal—in this case, wild white-beaked dolphins. … The estimated received levels for tonal signals were from 110 to 160 dB and for pulse-bursts were 153 to 166 dB re 1 μPa (peak-to-peak). Playback of a file with no signal served as a no-sound control in all experiments. The animals responded to all acoustic signals with nine different behavioral responses: (1) circling the array, (2) turning around and approaching the camera, (3) underwater tail slapping, (4) emitting bubbles, (5) turning their belly towards the set-up, (6) emitting pulse-bursts towards the loudspeaker, (7) an increase in swim speed, (8) a change in swim direction, and (9) jumping. A total of 157…

  2. Evaluation of a mixed-order planar and periphonic Ambisonics playback implementation

    DEFF Research Database (Denmark)

    Käsbach, Johannes; Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2011-01-01

    Planar (2D) and periphonic (3D) higher-order Ambisonics (HOA) playback systems are widely used in multi-channel audio applications. For a given Ambisonics order, 2D systems require far fewer loudspeakers and provide a larger spatial resolution but cannot naturally reproduce elevated sound sources. In order to combine the benefits of 2D and 3D systems, a higher-order 2D playback system can be mixed with a lower-order 3D system. In the present study, a mixed-order Ambisonics playback system was realised by extending the spherical harmonics decomposition of a 3D sound field with additional horizontal components. The performance of the system was analysed by considering a small and a large loudspeaker setup, allowing for different combinations of 2D and 3D Ambisonics orders. An objective evaluation showed that the systems provided a high spatial resolution for horizontal sources while producing a smooth…
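
    A quick way to see what a mixed-order scheme buys is to count the spherical-harmonic components involved. The sketch below assumes the common convention that a full 3D expansion up to order N uses (N+1)^2 components and that each additional horizontal-only order contributes two sectorial (m = ±l) components; the paper's exact channel allocation may differ.

```python
def mixed_order_components(order_2d: int, order_3d: int) -> int:
    """Count spherical-harmonic components in a mixed-order Ambisonics scheme:
    a full 3D expansion up to order_3d plus horizontal-only (sectorial)
    components up to order_2d. Illustrative assumption, not the paper's code."""
    if order_2d < order_3d:
        raise ValueError("the 2D order should not be lower than the 3D order")
    full_3d = (order_3d + 1) ** 2                  # all (l, m) with l <= order_3d
    extra_horizontal = 2 * (order_2d - order_3d)   # m = +/-l terms for l > order_3d
    return full_3d + extra_horizontal

# Example: 5th-order horizontal mixed with 2nd-order periphonic reproduction
print(mixed_order_components(5, 2))   # 15 components, versus 36 for full 5th-order 3D
```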

  3. Female harbor seal (Phoca vitulina) behavioral response to playbacks of underwater male acoustic advertisement displays.

    Science.gov (United States)

    Matthews, Leanna P; Blades, Brittany; Parks, Susan E

    2018-01-01

    During the breeding season, male harbor seals ( Phoca vitulina ) make underwater acoustic displays using vocalizations known as roars. These roars have been shown to function in territory establishment in some breeding areas and have been hypothesized to be important for female choice, but the function of these sounds remains unresolved. This study consisted of a series of playback experiments in which captive female harbor seals were exposed to recordings of male roars to determine if females respond to recordings of male vocalizations and whether or not they respond differently to roars from categories with different acoustic characteristics. The categories included roars with characteristics of dominant males (longest duration, lowest frequency), subordinate males (shortest duration, highest frequency), combinations of call parameters from dominant and subordinate males (long duration, high frequency and short duration, low frequency), and control playbacks of water noise and water noise with tonal signals in the same frequency range as male signals. Results indicate that overall females have a significantly higher level of response to playbacks that imitate male vocalizations when compared to control playbacks of water noise. Specifically, there was a higher level of response to playbacks representing dominant male vocalization when compared to the control playbacks. For most individuals, there was a greater response to playbacks representing dominant male vocalizations compared to playbacks representing subordinate male vocalizations; however, there was no statistical difference between those two playback types. Additionally, there was no difference between the playbacks of call parameter combinations and the controls. Investigating female preference for male harbor seal vocalizations is a critical step in understanding the harbor seal mating system and further studies expanding on this captive study will help shed light on this important issue.

  4. Sounds scary? Lack of habituation following the presentation of novel sounds.

    Directory of Open Access Journals (Sweden)

    Tine A Biedenweg

    Full Text Available BACKGROUND: Animals typically show less habituation to biologically meaningful sounds than to novel signals. We might therefore expect that acoustic deterrents should be based on natural sounds. METHODOLOGY: We investigated responses by western grey kangaroos (Macropus fuliginosus) towards playback of natural sounds (alarm foot stomps and Australian raven (Corvus coronoides) calls) and artificial sounds (faux snake hiss and bull whip crack). We then increased the rate of presentation to examine whether animals would habituate. Finally, we varied the frequency of playback to investigate optimal rates of delivery. PRINCIPAL FINDINGS: Nine behaviors clustered into five Principal Components. PC factors 1 and 2 (animals alert or looking, or hopping and moving out of area) accounted for 36% of variance. PC factor 3 (eating cessation, taking flight, movement out of area) accounted for 13% of variance. Factors 4 and 5 (relaxing, grooming and walking; 12 and 11% of variation, respectively) discontinued upon playback. The whip crack was most evocative; eating was reduced from 75% of time spent prior to playback to 6% following playback (post alarm stomp: 32%, raven call: 49%, hiss: 75%). Additionally, 24% of individuals took flight and moved out of area (50 m radius) in response to the whip crack (foot stomp: 0%, raven call: 8% and 4%, hiss: 6%). Increasing the rate of presentation (12×/min × 2 min) caused 71% of animals to move out of the area. CONCLUSIONS/SIGNIFICANCE: The bull whip crack, an artificial sound, was as effective as the alarm stomp at eliciting aversive behaviors. Kangaroos did not fully habituate despite hearing the signal up to 20×/min. Highest rates of playback did not elicit the greatest responses, suggesting that 'more is not always better'. Ultimately, by utilizing both artificial and biological sounds, predictability may be masked or offset, so that habituation is delayed and more effective deterrents may be produced.

  5. Simulated birdwatchers' playback affects the behavior of two tropical birds.

    Science.gov (United States)

    Harris, J Berton C; Haskell, David G

    2013-01-01

    Although recreational birdwatchers may benefit conservation by generating interest in birds, they may also have negative effects. One such potentially negative impact is the widespread use of recorded vocalizations, or "playback," to attract birds of interest, including range-restricted and threatened species. Although playback has been widely used to test hypotheses about the evolution of behavior, no peer-reviewed study has examined the impacts of playback in a birdwatching context on avian behavior. We studied the effects of simulated birdwatchers' playback on the vocal behavior of Plain-tailed Wrens Thryothorus euophrys and Rufous Antpittas Grallaria rufula in Ecuador. Study species' vocal behavior was monitored for an hour after playing either a single bout of five minutes of song or a control treatment of background noise. We also studied the effects of daily five minute playback on five groups of wrens over 20 days. In single bout experiments, antpittas made more vocalizations of all types, except for trills, after playback compared to controls. Wrens sang more duets after playback, but did not produce more contact calls. In repeated playback experiments, wren responses were strong at first, but hardly detectable by day 12. During the study, one study group built a nest, apparently unperturbed, near a playback site. The playback-induced habituation and changes in vocal behavior we observed suggest that scientists should consider birdwatching activity when selecting research sites so that results are not biased by birdwatchers' playback. Increased vocalizations after playback could be interpreted as a negative effect of playback if birds expend energy, become stressed, or divert time from other activities. In contrast, the habituation we documented suggests that frequent, regular birdwatchers' playback may have minor effects on wren behavior.

  6. A Study on the Development of Playback Control Software for Mark5B VSI System

    Directory of Open Access Journals (Sweden)

    S. J. Oh

    2010-06-01

    Full Text Available We developed the playback control software for a high-speed playback system which is a component of the Korea-Japan Joint VLBI Correlator (KJJVC). The Mark5B system, which is a recorder and playback system used in the Korean VLBI Network (KVN), has two kinds of operation mode: the station unit (SU) mode, which is for the present Mark4 system, and the VSI mode, which is for the new VLBI standard interface (VSI) system. The software for the SU mode has already been developed and is widely used in Mark4-type VLBI systems, but the software for the VSI mode had only been developed for recording. The new VLBI system is designed with a VSI interface for compatibility between different systems; therefore, playback control software for the VSI mode is needed for the KVN. In this work, we developed the playback control software for the Mark5B VSI mode. The developed playback control software consists of an application part for data playback, a data input/output part for the VSI board, a module for the StreamStor RAID board, and a user interface part, including an observation time control part. To verify the performance of the developed playback control software, playback and correlation experiments were performed using real observation data with the Mark5B system and KJJVC. To check the observation time control, a data playback experiment was performed between the Mark5B and Raw VLBI Data Buffer (RVDB) systems. Through these experiments, we confirmed the performance of the developed playback control software in the Mark5B VSI mode.
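
    As a rough illustration of the observation-time-control part described above, the sketch below filters time-tagged data blocks against a requested observation window before handing them to the VSI output path. All class, field, and function names are hypothetical stand-ins, not the actual KVN software interfaces.

```python
from dataclasses import dataclass
import datetime as dt

@dataclass
class DataBlock:
    timestamp: dt.datetime   # time tag assumed to be decoded from the frame header
    payload: bytes

def play_back(blocks, send_to_vsi, start, stop):
    """Forward only blocks whose time tags fall inside [start, stop).
    `send_to_vsi` stands in for the VSI-board output routine."""
    n_sent = 0
    for blk in blocks:
        if blk.timestamp < start:
            continue            # before the requested observation window
        if blk.timestamp >= stop:
            break               # past the window: stop playback
        send_to_vsi(blk.payload)
        n_sent += 1
    return n_sent
```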

  7. Lion, ungulate, and visitor reactions to playbacks of lion roars at Zoo Atlanta.

    Science.gov (United States)

    Kelling, Angela S; Allard, Stephanie M; Kelling, Nicholas J; Sandhaus, Estelle A; Maple, Terry L

    2012-01-01

    Felids in captivity are often inactive and elusive in zoos, leading to a frustrating visitor experience. Eight roars were recorded from an adult male lion and played back over speakers as auditory enrichment to benefit the lions while simultaneously enhancing the zoo visitor experience. In addition, ungulates in an adjacent exhibit were observed to ensure that the novel location and increased frequency of roars did not lead to a stress or fear response. The male lion in this study roared more in the playback phase than in the baseline phases while not increasing any behaviors that would indicate compromised welfare. In addition, zoo visitors remained at the lion exhibit longer during playback. The nearby ungulates never exhibited any reactions stronger than orienting to playbacks, identical to their reactions to live roars. Therefore, naturalistic playbacks of lion roars are a potential form of auditory enrichment that leads to more instances of live lion roars and enhances the visitor experience without increasing the stress levels of nearby ungulates or of the lions themselves, which might interpret the roar as that of an intruder.

  8. The roles of vocal and visual interactions in social learning zebra finches: A video playback experiment.

    Science.gov (United States)

    Guillette, Lauren M; Healy, Susan D

    2017-06-01

    The transmission of information from an experienced demonstrator to a naïve observer often depends on characteristics of the demonstrator, such as familiarity, success or dominance status. Whether or not the demonstrator pays attention to and/or interacts with the observer may also affect social information acquisition or use by the observer. Here we used a video-demonstrator paradigm first to test whether video demonstrators have the same effect as live demonstrators in zebra finches, and second, to test the importance of visual and vocal interactions between the demonstrator and observer for social information use by the observer. We found that female zebra finches copied novel food choices of male demonstrators they saw via live-streaming video, while they did not consistently copy from the demonstrators when they were seen in playbacks of the same videos. Although naïve observers copied in the absence of vocalizations by the demonstrator, as they copied from playback of videos with the sound off, females did not copy when there was a mismatch between the visual information provided by the video and vocal information from a live male that was out of sight. Taken together these results suggest that video demonstration is a useful methodology for testing social information transfer, at least in a foraging context, but more importantly, that social information use varies according to the vocal interactions, or lack thereof, between the observer and the demonstrator. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  9. You talkin' to me? Interactive playback is a powerful yet underused tool in animal communication research.

    Science.gov (United States)

    King, Stephanie L

    2015-07-01

    Over the years, playback experiments have helped further our understanding of the wonderful world of animal communication. They have provided fundamental insights into animal behaviour and the function of communicative signals in numerous taxa. As important as these experiments are, however, there is strong evidence to suggest that the information conveyed in a signal may only have value when presented interactively. By their very nature, signalling exchanges are interactive and therefore, an interactive playback design is a powerful tool for examining the function of such exchanges. While researchers working on frog and songbird vocal interactions have long championed interactive playback, it remains surprisingly underused across other taxa. The interactive playback approach is not limited to studies of acoustic signalling, but can be applied to other sensory modalities, including visual, chemical and electrical communication. Here, I discuss interactive playback as a potent yet underused technique in the field of animal behaviour. I present a concise review of studies that have used interactive playback thus far, describe how it can be applied, and discuss its limitations and challenges. My hope is that this review will result in more scientists applying this innovative technique to their own study subjects, as a means of furthering our understanding of the function of signalling interactions in animal communication systems. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  10. Feasibility of video codec algorithms for software-only playback

    Science.gov (United States)

    Rodriguez, Arturo A.; Morse, Ken

    1994-05-01

    Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
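
    Frame differencing, mentioned above as the usual trick for cheap software-only playback, can be illustrated with a few lines of array code. This is only a conceptual sketch (grayscale frames, fixed 8×8 blocks, no entropy coding), not the codec described in the paper.

```python
import numpy as np

def diff_blocks(prev, curr, block=8, thresh=4.0):
    """Return (row, col, pixels) for blocks whose mean absolute change versus
    the previous frame exceeds `thresh`; unchanged blocks are skipped so the
    decoder simply keeps the previous frame's pixels there."""
    updates = []
    h, w = curr.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            pb = prev[r:r + block, c:c + block].astype(np.float32)
            cb = curr[r:r + block, c:c + block].astype(np.float32)
            if np.abs(cb - pb).mean() > thresh:
                updates.append((r, c, curr[r:r + block, c:c + block].copy()))
    return updates

def apply_updates(reference, updates):
    """Decoder side: paste the transmitted blocks onto the previous frame."""
    out = reference.copy()
    for r, c, blk in updates:
        out[r:r + blk.shape[0], c:c + blk.shape[1]] = blk
    return out

# Example: a static scene with one changed 8x8 patch sends a single block update.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[64:72, 128:136] = 255
updates = diff_blocks(prev, curr)
print(len(updates))                                   # 1
assert np.array_equal(apply_updates(prev, updates), curr)
```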

  11. Anthropogenic noise playback impairs embryonic development and increases mortality in a marine invertebrate

    Science.gov (United States)

    Nedelec, Sophie L.; Radford, Andrew N.; Simpson, Stephen D.; Nedelec, Brendan; Lecchini, David; Mills, Suzanne C.

    2014-07-01

    Human activities can create noise pollution and there is increasing international concern about how this may impact wildlife. There is evidence that anthropogenic noise may have detrimental effects on behaviour and physiology in many species but there are few examples of experiments showing how fitness may be directly affected. Here we use a split-brood, counterbalanced, field experiment to investigate the effect of repeated boat-noise playback during early life on the development and survival of a marine invertebrate, the sea hare Stylocheilus striatus at Moorea Island (French Polynesia). We found that exposure to boat-noise playback, compared to ambient-noise playback, reduced successful development of embryos by 21% and additionally increased mortality of recently hatched larvae by 22%. Our work, on an understudied but ecologically and socio-economically important taxon, demonstrates that anthropogenic noise can affect individual fitness. Fitness costs early in life have a fundamental influence on population dynamics and resilience, with potential implications for community structure and function.

  12. The influence of motion quality on responses towards video playback stimuli

    Directory of Open Access Journals (Sweden)

    Emma Ware

    2015-07-01

    Full Text Available Visual motion, a critical cue in communication, can be manipulated and studied using video playback methods. A primary concern for the video playback researcher is the degree to which objects presented on video appear natural to the non-human subject. Here we argue that the quality of motion cues on video, as determined by the video's image presentation rate (IPR), is of particular importance in determining a subject's social response behaviour. We present an experiment testing the effect of variations in IPR on pigeon (Columba livia) response behaviour towards video images of courting opposite-sex partners. Male and female pigeons were presented with three video playback stimuli, each containing a different social partner. Each stimulus was then modified to appear at one of three IPRs: 15, 30 or 60 progressive (p) frames per second. The results showed that courtship behaviour became significantly longer in duration as IPR increased. This finding implies that the IPR significantly affects the perceived quality of motion cues impacting social behaviour. In males we found that the duration of courtship also depended on the social partner viewed and that this effect interacted with the effects of IPR on behaviour. Specifically, the effect of social partner reached statistical significance only when the stimuli were displayed at 60 p, demonstrating the potential for erroneous results when insufficient IPRs are used. In addition to demonstrating the importance of IPR in video playback experiments, these findings help to highlight and describe the role of visual motion processing in communication behaviour.

  13. Sound field control for a low-frequency test facility

    DEFF Research Database (Denmark)

    Pedersen, Christian Sejer; Møller, Henrik

    2013-01-01

    The two largest problems in controlling the reproduction of low-frequency sound for psychoacoustic experiments are the effect of the room due to standing waves and the relatively large sound pressure levels needed. Anechoic rooms are limited downward in frequency and distortion may be a problem even at moderate levels, while pressure-field playback can give higher sound pressures but is limited upwards in frequency. A new solution that addresses both problems has been implemented in the laboratory of Acoustics, Aalborg University. The solution uses one wall with 20 loudspeakers to generate a plane wave that is actively absorbed when it reaches the 20 loudspeakers on the opposing wall. This gives a homogeneous sound field in the majority of the room with a flat frequency response in the frequency range 2-300 Hz. The lowest frequencies are limited to sound pressure levels in the order of 95 dB. If larger levels…
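
    To first order, the active-absorption idea can be summarized as the opposing loudspeaker wall replaying the incident signal delayed by the propagation time across the room, with the polarity chosen so that the reflection is cancelled. The sketch below only computes that nominal delay for an assumed room length; the actual facility derives per-loudspeaker control filters, so this is a crude illustration, not the implemented method.

```python
def absorption_delay_and_gain(room_length_m, speed_of_sound=343.0):
    """Crude sketch: nominal delay and polarity for the absorbing wall.
    A real active-absorption system uses measured, per-loudspeaker filters."""
    delay_s = room_length_m / speed_of_sound   # time for the plane wave to cross the room
    gain = -1.0                                # inverted polarity so the reflection cancels
    return delay_s, gain

print(absorption_delay_and_gain(4.0))   # ~0.0117 s for a hypothetical 4 m room
```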

  14. Parallels between playbacks and Pleistocene tar seeps suggest sociality in an extinct sabretooth cat, Smilodon

    OpenAIRE

    Carbone, Chris; Maddox, Tom; Funston, Paul J.; Mills, Michael G.L.; Grether, Gregory F.; Van Valkenburgh, Blaire

    2008-01-01

    Inferences concerning the lives of extinct animals are difficult to obtain from the fossil record. Here we present a novel approach to the study of extinct carnivores, using a comparison between fossil records (n=3324) found in Late Pleistocene tar seeps at Rancho La Brea in North America and counts (n=4491) from playback experiments used to estimate carnivore abundance in Africa. Playbacks and tar seep deposits represent competitive, potentially dangerous encounters where multiple predators ...

  15. Stridulatory sound-production and its function in females of the cicada Subpsaltria yangi.

    Directory of Open Access Journals (Sweden)

    Changqing Luo

    Full Text Available Acoustic behavior plays a crucial role in many aspects of cicada biology, such as reproduction and intrasexual competition. Although female sound production has been reported in some cicada species, the acoustic behavior of female cicadas has received little attention. In the cicada Subpsaltria yangi, the females possess a pair of unusually well-developed stridulatory organs. Here, sound production and its function in females of this remarkable cicada species were investigated. We revealed that the females could produce sounds by a stridulatory mechanism during pair formation, and the sounds were able to elicit both acoustic and phonotactic responses from males. In addition, the forewings would strike the body while performing stridulatory sound-producing movements, which generated impact sounds. Acoustic playback experiments indicated that the impact sounds played no role in the behavioral context of pair formation. This study provides the first experimental evidence that females of a cicada species can generate sounds by a stridulatory mechanism. We anticipate that our results will promote acoustic studies on females of other cicada species which also possess a stridulatory system.

  16. Digital signal processor for silicon audio playback devices; Silicon audio saisei kikiyo digital signal processor

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    The digital audio signal processor (DSP) TC9446F series has been developed for silicon audio playback devices that use a memory medium such as flash memory, for DVD players, and for AV devices such as TV sets. It supports AAC (advanced audio coding, 2ch) and MP3 (MPEG1 Layer 3), the audio compression techniques used for transmitting music over the Internet. It also supports compression formats such as Dolby Digital, DTS (digital theater system) and MPEG2 audio, which are adopted for DVDs. It can carry built-in audio signal processing programs, e.g., Dolby ProLogic, equalizer, sound field control, and 3D sound. The TC9446XB has been newly added to the line-up; it adopts an FBGA (fine pitch ball grid array) package for portable audio devices. (translated by NEDO)

  17. An extended framework for adaptive playback-based video summarization

    Science.gov (United States)

    Peker, Kadir A.; Divakaran, Ajay

    2003-11-01

    In our previous work, we described an adaptive fast playback framework for video summarization where we changed the playback rate using the motion activity feature so as to maintain a constant "pace." This method provides an effective way of skimming through video, especially when the motion is not too complex and the background is mostly still, such as in surveillance video. In this paper, we present an extended summarization framework that, in addition to motion activity, uses semantic cues such as face or skin color appearance, speech and music detection, or other domain dependent semantically significant events to control the playback rate. The semantic features we use are computationally inexpensive and can be computed in compressed domain, yet are robust, reliable, and have a wide range of applicability across different content types. The presented framework also allows for adaptive summaries based on preference, for example, to include more dramatic vs. action elements, or vice versa. The user can switch at any time between the skimming and the normal playback modes. The continuity of the video is preserved, and complete omission of segments that may be important to the user is avoided by using adaptive fast playback instead of skipping over long segments. The rule-set and the input parameters can be further modified to fit a certain domain or application. Our framework can be used by itself, or as a subsequent presentation stage for a summary produced by any other summarization technique that relies on generating a sub-set of the content.
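
    The constant-"pace" idea above can be caricatured as an inverse relation between playback speed-up and measured motion activity, overridden when a semantic cue such as a face or speech is detected. The rule, parameter names, and thresholds below are assumptions for illustration, not the authors' actual rule-set.

```python
def playback_speed(motion_activity, target_pace, face=False, speech=False,
                   min_speed=1.0, max_speed=8.0):
    """Adaptive fast-playback sketch: speed up inversely with motion activity
    to keep a roughly constant pace, but fall back to normal speed whenever a
    semantic cue (face or speech) is present."""
    if face or speech:
        return min_speed                       # keep semantically important segments watchable
    speed = target_pace / max(motion_activity, 1e-6)
    return max(min_speed, min(max_speed, speed))

print(playback_speed(motion_activity=0.2, target_pace=1.0))               # low activity -> 5x skim
print(playback_speed(motion_activity=0.2, target_pace=1.0, speech=True))  # semantic cue -> 1x
```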

  18. Playback system designed for X-Band SAR

    International Nuclear Information System (INIS)

    Yuquan, Liu; Changyong, Dou

    2014-01-01

    SAR (Synthetic Aperture Radar) has extensive applications because it is daylight- and weather-independent. In particular, the X-Band SAR strip-map mode, designed by the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, provides high ground-resolution images while offering a large spatial coverage and a short acquisition time, so it is promising for multiple applications. When a sudden disaster occurs, emergency response requires radar signal data and imagery as soon as possible in order to take action to reduce losses and save lives. This paper summarizes an X-Band SAR playback processing system designed for disaster response and scientific needs. It describes the SAR data workflow, including the payload data transmission and reception process. The playback processing system performs signal analysis on the original data, providing SAR level 0 products and quick-look images. A gigabit network provides efficient radar signal transmission from the recorder to the computation unit, and multi-thread parallel computing together with ping-pong operation ensures computation speed. Through the gigabit network, multi-thread parallel computing and ping-pong operation, high-speed data transmission and processing meet the real-time requirements of SAR radar data playback.
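
    The ping-pong operation mentioned above overlaps input transfer with processing by alternating between two buffers, so the recorder side never waits for the computation side. Below is a minimal threaded sketch with generic stand-in functions, not the actual system code.

```python
import threading, queue

def pingpong_pipeline(read_block, process_block, n_blocks):
    """Two-buffer (ping-pong) sketch: while one buffer is being filled from the
    source, the previously filled buffer is processed in parallel."""
    buffers = queue.Queue(maxsize=2)            # at most two blocks in flight

    def reader():
        for i in range(n_blocks):
            buffers.put(read_block(i))
        buffers.put(None)                       # end-of-data marker

    t = threading.Thread(target=reader)
    t.start()
    while (blk := buffers.get()) is not None:
        process_block(blk)
    t.join()

# Usage sketch with stand-in I/O and processing functions
pingpong_pipeline(lambda i: bytes(1024), lambda b: len(b), n_blocks=4)
```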

  19. Playback system designed for X-Band SAR

    Science.gov (United States)

    Yuquan, Liu; Changyong, Dou

    2014-03-01

    SAR (Synthetic Aperture Radar) has extensive applications because it is daylight- and weather-independent. In particular, the X-Band SAR strip-map mode, designed by the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, provides high ground-resolution images while offering a large spatial coverage and a short acquisition time, so it is promising for multiple applications. When a sudden disaster occurs, emergency response requires radar signal data and imagery as soon as possible in order to take action to reduce losses and save lives. This paper summarizes an X-Band SAR playback processing system designed for disaster response and scientific needs. It describes the SAR data workflow, including the payload data transmission and reception process. The playback processing system performs signal analysis on the original data, providing SAR level 0 products and quick-look images. A gigabit network provides efficient radar signal transmission from the recorder to the computation unit, and multi-thread parallel computing together with ping-pong operation ensures computation speed. Through the gigabit network, multi-thread parallel computing and ping-pong operation, high-speed data transmission and processing meet the real-time requirements of SAR radar data playback.

  20. Notes on some experiments on the application of subtractive compensation to USGS seismic magnetic tape recording and playback systems

    Science.gov (United States)

    Eaton, Jerry P.

    1975-01-01

    The purpose of these experiments is to lay the groundwork for the implementation of subtractive compensation of the USGS seismic network tape playbacks utilizing the Develco model 6203 discriminators at a x1 playback speed. Although the Develco discriminators were designed for this application and a matching Develco compensation discriminator was purchased, effective use of this system for subtractive compensation has been blocked by the inadequate (frequency dependent) matching of the phase of the compensation signal to that of the data signal at the point compensation is carried out in the data discriminators. John Van Schaack has ameliorated the phase mismatch problem by an empirical alteration of the compensation discriminator input bandpass filter. We have selected a set (of eight) Develco discriminators and adjusted their compensation signal input levels to minimize spurious signals (noise) originating from tape speed irregularities. The sensitivity of the data discriminators was adjusted so that deviations of +125 Hz and -125 Hz produced output signals of +2.00 volts and -2.00 volts, respectively. The eight data discriminators are driven by a multiplex signal on a single tape track (subcarriers 680, 1020, 1360, 1700, 2040, 2380, 2720, and 3060 Hz). The Develco-supplied compensation discriminator requires an unmodulated 3125 Hz signal on a separate tape track.
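
    The calibration quoted above implies a simple data-discriminator sensitivity, worked out below under the assumption that the output is linear over the ±125 Hz deviation range.

```python
# Implied data-discriminator sensitivity (assuming a linear response).
full_scale_output_v = 2.00   # volts at full deviation
full_scale_dev_hz = 125.0    # Hz of subcarrier deviation producing that output
sensitivity_mv_per_hz = 1000.0 * full_scale_output_v / full_scale_dev_hz
print(sensitivity_mv_per_hz)  # 16.0 mV per Hz of deviation
```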

  1. Effects of Sound on the Behavior of Wild, Unrestrained Fish Schools.

    Science.gov (United States)

    Roberts, Louise; Cheesman, Samuel; Hawkins, Anthony D

    2016-01-01

    To assess and manage the impact of man-made sounds on fish, we need information on how behavior is affected. Here, wild unrestrained pelagic fish schools were observed under quiet conditions using sonar. Fish were exposed to synthetic piling sounds at different levels using custom-built sound projectors, and behavioral changes were examined. In some cases, the depth of schools changed after noise playback; full dispersal of schools was also evident. The methods we developed for examining the behavior of unrestrained fish to sound exposure have proved successful and may allow further testing of the relationship between responsiveness and sound level.

  2. Sounds of Modified Flight Feathers Reliably Signal Danger in a Pigeon.

    Science.gov (United States)

    Murray, Trevor G; Zeil, Jochen; Magrath, Robert D

    2017-11-20

    In his book on sexual selection, Darwin [1] devoted equal space to non-vocal and vocal communication in birds. Since then, vocal communication has become a model for studies of neurobiology, learning, communication, evolution, and conservation [2, 3]. In contrast, non-vocal "instrumental music," as Darwin called it, has only recently become subject to sustained inquiry [4, 5]. In particular, outstanding work reveals how feathers, often highly modified, produce distinctive sounds [6-9], and suggests that these sounds have evolved at least 70 times, in many orders [10]. It remains to be shown, however, that such sounds are signals used in communication. Here we show that crested pigeons (Ocyphaps lophotes) signal alarm with specially modified wing feathers. We used video and feather-removal experiments to demonstrate that the highly modified 8th primary wing feather (P8) produces a distinct note during each downstroke. The sound changes with wingbeat frequency, so that birds fleeing danger produce wing sounds with a higher tempo. Critically, a playback experiment revealed that only if P8 is present does the sound of escape flight signal danger. Our results therefore indicate, nearly 150 years after Darwin's book, that modified feathers can be used for non-vocal communication, and they reveal an intrinsically reliable alarm signal. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Design of Efficient Sound Systems for Low Voltage Battery Driven Applications

    DEFF Research Database (Denmark)

    Iversen, Niels Elkjær; Oortgiesen, Rien; Knott, Arnold

    2016-01-01

    The efficiency of portable battery-driven sound systems is crucial as it relates to both the playback time and the cost of the system. This paper presents design considerations for such systems, including loudspeaker and amplifier design. Using a low-resistance voice coil realized…

  4. Automated interactive video playback for studies of animal communication.

    Science.gov (United States)

    Butkowski, Trisha; Yan, Wei; Gray, Aaron M; Cui, Rongfeng; Verzijden, Machteld N; Rosenthal, Gil G

    2011-02-09

    Video playback is a widely-used technique for the controlled manipulation and presentation of visual signals in animal communication. In particular, parameter-based computer animation offers the opportunity to independently manipulate any number of behavioral, morphological, or spectral characteristics in the context of realistic, moving images of animals on screen. A major limitation of conventional playback, however, is that the visual stimulus lacks the ability to interact with the live animal. Borrowing from video-game technology, we have created an automated, interactive system for video playback that controls animations in response to real-time signals from a video tracking system. We demonstrated this method by conducting mate-choice trials on female swordtail fish, Xiphophorus birchmanni. Females were given a simultaneous choice between a courting male conspecific and a courting male heterospecific (X. malinche) on opposite sides of an aquarium. The virtual male stimulus was programmed to track the horizontal position of the female, as courting males do in the wild. Mate-choice trials on wild-caught X. birchmanni females were used to validate the prototype's ability to effectively generate a realistic visual stimulus.

  5. Oyster larvae settle in response to habitat-associated underwater sounds.

    Science.gov (United States)

    Lillis, Ashlee; Eggleston, David B; Bohnenstiehl, DelWayne R

    2013-01-01

    Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica). Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a role in driving

  6. Oyster larvae settle in response to habitat-associated underwater sounds.

    Directory of Open Access Journals (Sweden)

    Ashlee Lillis

    Full Text Available Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica. Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a

  7. The Aesthetic Experience of Sound

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2005-01-01

    The use of sound in (3D) computer games basically falls in two. Sound is used as an element in the design of the set and as a narrative. As set design, sound stages the nature of the environment and brings it to life. As a narrative it brings us information that we can choose to, or perhaps need to, react on. In an ecological understanding of hearing, our detection of audible information affords us ways of responding to our environment. In my paper I will address both these ways of using sound in relation to computer games. Since a game player is responsible for the unfolding of the game, his exploration of the virtual space laid out before him is pertinent. In this mood of exploration, sound is important and contributes heavily to the aesthetic of the experience.

  8. Augmenting the Sound Experience at Music Festivals using Mobile Phones

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Stopczynski, Arkadiusz; Larsen, Jan

    2011-01-01

    In this paper we describe experiments carried out at the Nibe music festival in Denmark involving the use of mobile phones to augment the participants' sound experience at the concerts. The experiments involved N=19 test participants who used a mobile phone with a headset playing back sound … “in-the-wild” experiments augmenting the sound experience at two concerts at this music festival.

  9. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust, but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically…
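
    The auralization step described above amounts to convolving the anechoic signal with the left- and right-ear BRIRs. A minimal sketch, assuming a single-channel input and ignoring head tracking, level calibration and block-wise processing:

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_auralization(anechoic, brir_left, brir_right):
    """Headphone rendering sketch: convolve a mono anechoic signal with the
    left- and right-ear BRIRs and stack the results into a stereo pair."""
    left = fftconvolve(anechoic, brir_left)
    right = fftconvolve(anechoic, brir_right)
    return np.stack([left, right])

# Toy example with random placeholder impulse responses (not measured BRIRs)
rng = np.random.default_rng(0)
signal = rng.standard_normal(48000)          # 1 s of anechoic input at 48 kHz
out = binaural_auralization(signal, rng.standard_normal(2048), rng.standard_normal(2048))
print(out.shape)                             # (2, 50047)
```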

  10. Language Experience Affects Grouping of Musical Instrument Sounds

    Science.gov (United States)

    Bhatara, Anjali; Boll-Avetisyan, Natalie; Agus, Trevor; Höhle, Barbara; Nazzi, Thierry

    2016-01-01

    Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of…

  11. Long-time storage of song types in birds: evidence from interactive playbacks.

    Science.gov (United States)

    Geberzahn, Nicole; Hultsch, Henrike

    2003-05-22

    In studies of birdsong learning, imitation-based assays of stimulus memorization do not take into account that tutored song types may have been stored, but were not retrieved from memory. Such a 'silent' reservoir of song material could be used later in the bird's life, e.g. during vocal interactions. We examined this possibility in hand-reared nightingales during their second year. The males had been exposed to songs, both as fledglings and later, during their first full song period in an interactive playback design. Our design allowed us to compare the performance of imitations from the following categories: (i) songs only experienced during the early tutoring; (ii) songs experienced both during early tutoring and interactive playbacks; and (iii) novel songs experienced only during the simulated interactions. In their second year, birds imitated song types from each category, including those from categories (i) and (ii) which they had failed to imitate before. In addition, the performance of these song types was different (category (ii) > category (i)) and more pronounced than for category (iii) songs. Our results demonstrate 'silent' song storage in nightingales and point to a graded influence of the time and the social context of experience on subsequent vocal imitation.

  12. Motorboat noise impacts parental behaviour and offspring survival in a reef fish.

    Science.gov (United States)

    Nedelec, Sophie L; Radford, Andrew N; Pearl, Leanne; Nedelec, Brendan; McCormick, Mark I; Meekan, Mark G; Simpson, Stephen D

    2017-06-14

    Anthropogenic noise is a pollutant of international concern, with mounting evidence of disturbance and impacts on animal behaviour and physiology. However, empirical studies measuring survival consequences are rare. We use a field experiment to investigate how repeated motorboat-noise playback affects parental behaviour and offspring survival in the spiny chromis ( Acanthochromis polyacanthus ), a brooding coral reef fish. Repeated observations were made for 12 days at 38 natural nests with broods of young. Exposure to motorboat-noise playback compared to ambient-sound playback increased defensive acts, and reduced both feeding and offspring interactions by brood-guarding males. Anthropogenic noise did not affect the growth of developing offspring, but reduced the likelihood of offspring survival; while offspring survived at all 19 nests exposed to ambient-sound playback, six of the 19 nests exposed to motorboat-noise playback suffered complete brood mortality. Our study, providing field-based experimental evidence of the consequences of anthropogenic noise, suggests potential fitness consequences of this global pollutant. © 2017 The Authors.

  13. Reproduction of nearby sound sources using higher-order ambisonics with practical loudspeaker arrays

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2012-01-01

    In order to reproduce nearby sound sources with distant loudspeakers to a single listener, the near-field compensated (NFC) method for higher-order Ambisonics (HOA) has been previously proposed. In practical realizations, this method requires the use of regularization functions. This study analyzes the impact of two existing and a newly proposed regularization function on the reproduced sound fields and on the main auditory cue for nearby sound sources outside the median plane, i.e., low-frequency interaural level differences (ILDs). The proposed regularization function led to a better reproduction of point-source sound fields compared to existing regularization functions for NFC-HOA. Measurements in realistic playback environments showed that, for very close sources, significant ILDs for frequencies above about 250 Hz can be reproduced with NFC-HOA and the proposed regularization function whereas…

  14. A Dynamic Programming Solution for Energy-Optimal Video Playback on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Minseok Song

    2016-01-01

    Full Text Available Due to the development of mobile technology and the wide availability of smartphones, the Internet of Things (IoT) has started to handle high volumes of video data to facilitate multimedia-based services, which requires energy-efficient video playback. In video playback, frames have to be decoded and rendered at a high playback rate, increasing the computation cost on the CPU. To save CPU power, dynamic voltage and frequency scaling (DVFS) dynamically adjusts the operating voltage of the processor along with its frequency, so that appropriate selection of the frequency can achieve a balance between performance and power. We present a decoding model that allows buffering frames to let the CPU run at low frequency, and then propose an algorithm that determines the CPU frequency needed to decode each frame in a video, with the aim of minimizing power consumption while meeting buffer-size and deadline constraints, using a dynamic programming technique. We finally extend this algorithm to optimize CPU frequencies over a short sequence of frames, producing a practical method of reducing the energy required for video decoding. Experimental results show a system-wide reduction in energy of 27%, compared with a processor running at full speed.
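
    The flavour of the dynamic-programming formulation can be conveyed with a toy model: each frame needs a known number of CPU cycles, must be decoded before its playback deadline, may be decoded at most a buffer's worth of frames early, and costs energy proportional to cycles × f² at frequency f (ideal DVFS). The model, the time quantization, and the energy law below are simplifying assumptions for illustration and are much cruder than the paper's algorithm.

```python
import math

def optimal_frequencies(cycles, freqs, period, buffer_frames, dt=0.001):
    """Toy DP: pick a CPU frequency per frame to minimize total energy
    (cycles * f**2) subject to per-frame playback deadlines and a limit on
    how far ahead of playback decoding may run. Time is quantized to dt."""
    best = {0: (0.0, [])}          # finish-time step -> (energy so far, frequency plan)
    for i, c in enumerate(cycles):
        deadline = (i + 1) * period                            # frame must be ready by now
        earliest = max(0.0, (i + 1 - buffer_frames) * period)  # buffer-fullness limit
        nxt = {}
        for t, (energy, plan) in best.items():
            for f in freqs:
                finish = t * dt + c / f
                if finish > deadline + 1e-9:
                    continue                            # misses the deadline
                finish = max(finish, earliest)          # idle if the buffer is full
                step = int(math.ceil(finish / dt))
                e = energy + c * f * f
                if step not in nxt or e < nxt[step][0]:
                    nxt[step] = (e, plan + [f])
        if not nxt:
            raise ValueError("infeasible even at the highest frequency")
        best = nxt
    t_best = min(best, key=lambda t: best[t][0])
    return best[t_best]

# Example: four frames at 30 fps, three available frequencies (Hz)
energy, plan = optimal_frequencies(
    cycles=[2e7, 5e7, 1e7, 4e7], freqs=[0.4e9, 0.8e9, 1.2e9],
    period=1 / 30, buffer_frames=2)
print(plan)
```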

  15. Design of digital voice storage and playback system

    Science.gov (United States)

    Tang, Chao

    2018-03-01

    Based on the STC89C52 chip, this paper presents a minimal single-chip microcomputer system used to realize the logic control of a digital speech storage and playback system. Compared with traditional tape-based voice recording systems, the system has the advantages of small size and low power consumption, offering an effective alternative where traditional voice recording is limited in electronics and information processing applications.

  16. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals.

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-Jia; Li, Jian-Jun; Ting, Kin-Hung; Lu, Zhong-Lin; Whitfield-Gabrieli, Susan; Wang, Jun; Lee, Tatia M C

    2017-05-01

    Cross-modal learning requires the use of information from different sensory modalities. This study investigated how the prior visual experience of late blind individuals could modulate neural processes associated with learning of sound localization. Learning was realized by standardized training on sound localization processing, and experience was investigated by comparing brain activations elicited from a sound localization task in individuals with (late blind, LB) and without (early blind, EB) prior visual experience. After the training, EB showed decreased activation in the precuneus, which was functionally connected to a limbic-multisensory network. In contrast, LB showed the increased activation of the precuneus. A subgroup of LB participants who demonstrated higher visuospatial working memory capabilities (LB-HVM) exhibited an enhanced precuneus-lingual gyrus network. This differential connectivity suggests that visuospatial working memory due to the prior visual experience gained via LB-HVM enhanced learning of sound localization. Active visuospatial navigation processes could have occurred in LB-HVM compared to the retrieval of previously bound information from long-term memory for EB. The precuneus appears to play a crucial role in learning of sound localization, disregarding prior visual experience. Prior visual experience, however, could enhance cross-modal learning by extending binding to the integration of unprocessed information, mediated by the cognitive functions that these experiences develop.

  17. Acceleration recorder and playback module

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1994-11-01

    The present invention is directed to methods and apparatus relating to an accelerometer electrical signal recorder and playback module. The recorder module may be manufactured in lightweight configuration and includes analog memory components to store data. Signal conditioning circuitry is incorporated into the module so that signals may be connected directly from the accelerometer to the recorder module. A battery pack may be included for powering both the module and the accelerometer. Timing circuitry is included to control the time duration within which data is recorded or played back so as to avoid overloading the analog memory components. Multiple accelerometer signal recordings may be taken simultaneously without analog to digital circuits, multiplexing circuitry or software to compensate for the effects of multiplexing the signals.

  18. Flights of fear: a mechanical wing whistle sounds the alarm in a flocking bird.

    Science.gov (United States)

    Hingee, Mae; Magrath, Robert D

    2009-12-07

    Animals often form groups to increase collective vigilance and allow early detection of predators, but this benefit of sociality relies on rapid transfer of information. Among birds, alarm calls are not present in all species, while other proposed mechanisms of information transfer are inefficient. We tested whether wing sounds can encode reliable information on danger. Individuals taking off in alarm fly more quickly or ascend more steeply, so may produce different sounds in alarmed than in routine flight, which then act as reliable cues of alarm, or honest 'index' signals in which a signal's meaning is associated with its method of production. We show that crested pigeons, Ocyphaps lophotes, which have modified flight feathers, produce distinct wing 'whistles' in alarmed flight, and that individuals take off in alarm only after playback of alarmed whistles. Furthermore, amplitude-manipulated playbacks showed that response depends on whistle structure, such as tempo, not simply amplitude. We believe this is the first demonstration that flight noise can send information about alarm, and suggest that take-off noise could provide a cue of alarm in many flocking species, with feather modification evolving specifically to signal alarm in some. Similar reliable cues or index signals could occur in other animals.

  19. Capture and playback synchronization in video conferencing

    Science.gov (United States)

    Shae, Zon-Yin; Chang, Pao-Chi; Chen, Mon-Song

    1995-03-01

    Packet-switching based video conferencing has emerged as one of the most important multimedia applications. Lip synchronization can be disrupted in the packet network as a result of network properties: packet delay jitter at the capture end, network delay jitter, packet loss, packets arriving out of sequence, local clock mismatch, and video playback overlay with the graphics system. The synchronization problem becomes more demanding with the real-time and multiparty requirements of video conferencing applications. Some of the above-mentioned problems can be solved in more advanced network architectures, as ATM has promised. This paper presents solutions to these problems that can be useful at end-station terminals in the massively deployed packet-switching networks of today. The playback scheme in the end station consists of two units: a compression-domain buffer management unit and a pixel-domain buffer management unit. The pixel-domain buffer management unit is responsible for removing the annoying frame-shearing effect in the display. The compression-domain buffer management unit is responsible for parsing the incoming packets to identify the complete data blocks in the compressed data stream which can be decoded independently. The compression-domain buffer management unit is also responsible for concealing the effects of clock mismatch, loss of lip synchronization, packet loss, out-of-sequence arrival, and network jitter. This scheme can also be applied to the multiparty teleconferencing environment. Some of the schemes presented in this paper have been implemented in the Multiparty Multimedia Teleconferencing (MMT) system prototype at the IBM Watson Research Center.
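
    One common end-station remedy for the jitter problems listed above is a fixed playout delay: each frame is presented at its capture timestamp plus a constant offset large enough to absorb network delay variation, and frames arriving later than their slot are concealed or dropped. The sketch below illustrates only this scheduling step, with assumed example numbers; it is not the buffer-management design described in the paper.

```python
def playout_times(arrivals, timestamps, playout_delay):
    """Fixed-delay playout scheduling sketch: each frame is presented at its
    capture timestamp plus a constant playout delay; frames arriving after
    their slot are counted as late (a real system would conceal or drop them).
    All times are in seconds, relative to the first captured frame."""
    base = timestamps[0]
    schedule, late = [], 0
    for arrive, ts in zip(arrivals, timestamps):
        target = (ts - base) + playout_delay
        if arrive > target:
            late += 1
        schedule.append(target)
    return schedule, late

# Example: 30 fps capture, jittery arrivals, 120 ms playout delay
ts = [i / 30 for i in range(5)]
arr = [0.05, 0.09, 0.20, 0.16, 0.21]
print(playout_times(arr, ts, playout_delay=0.12))   # one frame misses its slot
```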

  20. Efficiency of playback for assessing the occurrence of five bird species in Brazilian Atlantic Forest fragments

    Directory of Open Access Journals (Sweden)

    Danilo Boscolo

    2006-12-01

    Full Text Available Playback of bird songs is a useful technique for species detection; however, this method is usually not standardized. We tested playback efficiency for five Atlantic Forest birds (White-browed Warbler Basileuterus leucoblepharus, Giant Antshrike Batara cinerea, Swallow-tailed Manakin Chiroxiphia caudata, White-shouldered Fire-eye Pyriglena leucoptera and Surucua Trogon Trogon surrucura) for different times of the day, seasons of the year and species abundances at the Morro Grande Forest Reserve (South-eastern Brazil) and at thirteen forest fragments in a nearby landscape. Vocalizations were broadcast monthly at sunrise, noon and sunset, during one year. For B. leucoblepharus, C. caudata and T. surrucura, sunrise and noon were more efficient than sunset. Batara cinerea presented higher efficiency from July to October. Playback expanded the favourable period for avifaunal surveys in tropical forest, usually restricted to early morning in the breeding season. The playback was efficient in detecting the presence of all species when the abundance was not too low, but only B. leucoblepharus and T. surrucura showed abundance values significantly related to this efficiency. The present study provides a precise indication of the best daily and seasonal periods and a confidence interval to maximize the efficiency of playback in detecting the occurrence of these forest species.

  1. Evidence for habituation of the irrelevant-sound effect on serial recall.

    Science.gov (United States)

    Röer, Jan P; Bell, Raoul; Buchner, Axel

    2014-05-01

    Working memory theories make opposing predictions as to whether the disruptive effect of task-irrelevant sound on serial recall should be attenuated after repeated exposure to the auditory distractors. Although evidence of habituation has emerged after a passive listening phase, previous attempts to observe habituation to to-be-ignored distractors on a trial-by-trial basis have proven to be fruitless. With the present study, we suggest that habituation to auditory distractors occurs, but has often been overlooked because past attempts to measure habituation in the irrelevant-sound paradigm were not sensitive enough. In a series of four experiments, the disruptive effects of to-be-ignored speech and music relative to a quiet control condition were markedly reduced after eight repetitions, regardless of whether trials were presented in blocks (Exp. 1) or in a random order (Exp. 2). The auditory distractor's playback direction (forward, backward) had no effect (Exp. 3). The same results were obtained when the auditory distractors were only presented in a retention interval after the presentation of the to-be-remembered items (Exp. 4). This pattern is only consistent with theoretical accounts that allow for attentional processes to interfere with the maintenance of information in working memory.

  2. Mourning dove (Zenaida macroura) wing-whistles may contain threat-related information for con- and hetero-specifics

    Science.gov (United States)

    Coleman, Seth W.

    2008-10-01

    Distinct acoustic whistles are associated with the wing-beats of many doves, and are especially noticeable when doves ascend from the ground when startled. I thus hypothesized that these sounds may be used by flock-mates as cues of potential danger. To test this hypothesis, I compared the responses of mourning doves (Zenaida macroura), northern cardinals (Cardinalis cardinalis), and house sparrows (Passer domesticus) to audio playbacks of dove 'startle wing-whistles', cardinal alarm calls, dove 'nonstartle wing-whistles', and sparrow 'social chatter'. Following playbacks of startle wing-whistles and alarm calls, conspecifics and heterospecifics startled and increased vigilance more than after playbacks of other sounds. Also, the latency to return to feeding was greater following playbacks of startle wing-whistles and alarm calls than following playbacks of other sounds. These results suggest that both conspecifics and heterospecifics may attend to dove wing-whistles in decisions related to antipredator behaviors. Whether the sounds of dove wing-whistles are intentionally produced signals warrants further testing.

  3. SCORE - Sounding-rocket Coronagraphic Experiment

    Science.gov (United States)

    Fineschi, Silvano; Moses, Dan; Romoli, Marco

    The Sounding-rocket Coronagraphic Experiment - SCORE - is a coronagraph for multi-wavelength imaging of the coronal Lyman-alpha lines, HeII 30.4 nm and HI 121.6 nm, and for the broad-band visible-light emission of the polarized K-corona. SCORE flew successfully in 2009, acquiring the first images of the HeII line-emission from the extended corona. The simultaneous observation of the coronal Lyman-alpha HI 121.6 nm line allowed the first determination of the absolute helium abundance in the extended corona. This presentation will describe the lessons learned from the first flight and will illustrate the preparations and the science perspectives for the re-flight approved by NASA and scheduled for 2016. The SCORE optical design is flexible enough to accommodate different experimental configurations with minor modifications. This presentation will describe one such configuration, which could include a polarimeter for the observation of the expected Hanle effect in the coronal Lyman-alpha HI line. The linear polarization by resonance scattering of coronal permitted line-emission in the ultraviolet (UV) can be modified by magnetic fields through the Hanle effect. Thus, space-based UV spectro-polarimetry would provide an additional new tool for the diagnostics of coronal magnetism.

  4. PandaEPL: a library for programming spatial navigation experiments.

    Science.gov (United States)

    Solway, Alec; Miller, Jonathan F; Kahana, Michael J

    2013-12-01

    Recent advances in neuroimaging and neural recording techniques have enabled researchers to make significant progress in understanding the neural mechanisms underlying human spatial navigation. Because these techniques generally require participants to remain stationary, computer-generated virtual environments are used. We introduce PandaEPL, a programming library for the Python language designed to simplify the creation of computer-controlled spatial-navigation experiments. PandaEPL is built on top of Panda3D, a modern open-source game engine. It allows users to construct three-dimensional environments that participants can navigate from a first-person perspective. Sound playback, sound recording, and joystick support are provided through additional optional libraries. PandaEPL also handles many tasks common to all cognitive experiments, including managing configuration files, logging all internal and participant-generated events, and keeping track of the experiment state. We describe how PandaEPL compares with other software for building spatial-navigation experiments and walk the reader through the process of creating a fully functional experiment.
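
    The configuration handling and event logging mentioned above can be pictured with a short sketch. This is a generic Python illustration of the pattern, not PandaEPL's actual API; the class name, fields, and file names are assumptions made for illustration.

    import json, time

    class ExperimentLog:
        """Generic sketch of timestamped event logging for a navigation
        experiment (not PandaEPL's real API)."""
        def __init__(self, path, config):
            self.path = path
            self.events = [{"t": 0.0, "event": "CONFIG", "data": config}]
            self.t0 = time.time()

        def log(self, event, **data):
            self.events.append({"t": time.time() - self.t0,
                                "event": event, "data": data})

        def save(self):
            with open(self.path, "w") as f:
                json.dump(self.events, f, indent=2)

    # Usage sketch with hypothetical event names.
    config = {"environment": "maze_01", "input": "joystick", "fov_deg": 60}
    log = ExperimentLog("session_001.json", config)
    log.log("TRIAL_START", trial=1)
    log.log("POSITION", x=1.5, y=0.0, heading_deg=90)   # participant movement
    log.log("SOUND_PLAYBACK", file="beep.wav")          # optional audio event
    log.save()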

  5. Analysis of the HVAC system's sound quality using the design of experiments

    International Nuclear Information System (INIS)

    Park, Sang Gil; Sim, Hyun Jin; Yoon, Ji Hyun; Jeong, Jae Eun; Choi, Byoung Jae; Oh, Jae Eung

    2009-01-01

    Human hearing is very sensitive to sound, so a subjective index of sound quality is required. Each sound-evaluation situation is characterised by Sound Quality (SQ) metrics. When the level of a single frequency band is changed, the effect of that change on the whole frequency band cannot easily be seen during SQ evaluation. In this study, the Design of Experiments (DOE) is used to analyze noise from an automotive Heating, Ventilating, and Air Conditioning (HVAC) system. The frequency domain is divided into 12 equal bands, and each band is assigned an increased or decreased level, with the 'loud' and 'sharp' aspects of SQ analyzed for each setting. By using DOE, the number of tests is effectively reduced, and the main result is an estimate of the effect at each band: whether an increase or decrease in sound pressure in a band, or no change, has the greatest effect on the identifiable characteristics of SQ. This enables the objective selection of frequency bands. From the results obtained, the sensitivity of SQ to physical level changes in an arbitrary frequency band can be determined.
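
    To make the band-wise analysis concrete, the sketch below estimates per-band main effects from a two-level design in which each of the 12 frequency bands is either boosted (+1) or cut (-1) and a sound-quality rating is recorded for each run. A randomly generated design and simulated ratings stand in for the orthogonal array and jury evaluations a real DOE would use; this is only an illustration of the main-effect calculation, not the authors' design or data.

    import numpy as np

    rng = np.random.default_rng(0)
    n_bands, n_runs = 12, 32

    # Two-level design matrix: +1 = band level raised, -1 = band level lowered.
    X = rng.choice([-1.0, 1.0], size=(n_runs, n_bands))

    # Simulated sound-quality ("loud"/"sharp") ratings for each run;
    # in the real experiment these would come from jury evaluations.
    true_effects = np.zeros(n_bands)
    true_effects[3], true_effects[8] = 1.2, -0.8      # pretend bands 4 and 9 matter
    y = X @ true_effects + rng.normal(0.0, 0.3, n_runs)

    # Main effect of each band = least-squares coefficient of its +1/-1 column.
    effects, *_ = np.linalg.lstsq(X, y, rcond=None)
    for band, e in enumerate(effects, start=1):
        print(f"band {band:2d}: main effect {e:+.2f}")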

  6. Sperm whales reduce foraging effort during exposure to 1-2 kHz sonar and killer whale sounds.

    Science.gov (United States)

    Isojunno, Saana; Cure, Charlotte; Kvadsheim, Petter Helgevold; Lam, Frans-Peter Alexander; Tyack, Peter Lloyd; Wensveen, Paul Jacobus; Miller, Patrick James O'Malley

    2016-01-01

    The time and energetic costs of behavioral responses to incidental and experimental sonar exposures, as well as control stimuli, were quantified using hidden state analysis of time series of acoustic and movement data recorded by tags (DTAG) attached to 12 sperm whales (Physeter macrocephalus) using suction cups. Behavioral state transition modeling showed that tagged whales switched to a non-foraging, non-resting state during both experimental transmissions of low-frequency active sonar from an approaching vessel (LFAS; 1-2 kHz, source level 214 dB re 1 µPa m, four tag records) and playbacks of potential predator (killer whale, Orcinus orca) sounds broadcast at naturally occurring sound levels as a positive control from a drifting boat (five tag records). Time spent in foraging states and the probability of prey capture attempts were reduced during these two types of exposures with little change in overall locomotion activity, suggesting an effect on energy intake with no immediate compensation. Whales switched to the active non-foraging state over received sound pressure levels of 131-165 dB re 1 µPa during LFAS exposure. In contrast, no changes in foraging behavior were detected in response to experimental negative controls (no-sonar ship approach or noise control playback) or to experimental medium-frequency active sonar exposures (MFAS; 6-7 kHz, source level 199 dB re 1 µPa m, received sound pressure level [SPL] = 73-158 dB re 1 µPa). Similarly, there was no reduction in foraging effort for three whales exposed to incidental, unidentified 4.7-5.1 kHz sonar signals received at lower levels (SPL = 89-133 dB re 1 µPa). These results demonstrate that, similar to predation risk, exposure to sonar can affect functional behaviors, and indicate that increased perception of risk with higher source level or lower frequency may modulate how sperm whales respond to anthropogenic sound.
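
    The behavioral state transition modelling referred to above can be illustrated with a toy hidden-state decoder. The sketch below runs a Viterbi decode over a short sequence of "buzz" observations to label each time step as foraging or non-foraging. The two-state model, its probabilities, and the observation sequence are invented for illustration and are not the study's fitted model.

    import numpy as np

    states = ["foraging", "non-foraging"]
    # Hypothetical transition matrix and per-state probability of observing
    # a prey-capture buzz in a time bin (obs = 1 means a buzz occurred).
    A = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    p_buzz = np.array([0.6, 0.05])
    pi = np.array([0.5, 0.5])
    obs = np.array([1, 1, 0, 1, 0, 0, 0, 0, 1, 1])   # toy buzz sequence

    def emission(o):
        return np.where(o == 1, p_buzz, 1.0 - p_buzz)

    # Viterbi decoding in log space.
    logd = np.log(pi) + np.log(emission(obs[0]))
    back = []
    for o in obs[1:]:
        trans = logd[:, None] + np.log(A)            # shape (from, to)
        back.append(trans.argmax(axis=0))
        logd = trans.max(axis=0) + np.log(emission(o))

    path = [int(logd.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    path.reverse()
    print([states[s] for s in path])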

  7. Experiments on the attenuation of third sound in saturated superfluid helium films

    International Nuclear Information System (INIS)

    Telschow, K.L.; Galkiewicz, R.K.; Hallock, R.B.

    1976-01-01

    Upper limits of the attenuation of third sound in saturated superfluid 4He films have been measured in three separate experiments. Observations at frequencies from 0.1 to 200 Hz indicate that the attenuation in these thick films is substantially lower than would be inferred from the only previous experiment done on saturated films. The third-sound velocity is observed to have the temperature dependence predicted by Bergman.

  8. The Harley effect: Internal and external factors that facilitate positive experiences with product sounds

    NARCIS (Netherlands)

    Ozcan Vieira, E.

    2014-01-01

    Everyday activities are laden with emotional experiences involving sound. Our interactions with products (shavers, hairdryers, electric drills) often cause sounds that are typically unpleasant to the ear. Yet, we may get excited with the sound of an accelerating Harley Davidson because the rumbling

  9. Experiences with sound insulating open windows in traffic noise exposed housing

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2015-01-01

    Sound insulating windows are widely used in traffic noise exposed residential areas to reduce indoor noise levels to acceptable levels. However, such windows are typically only designed to provide sound insulation in closed position, and many people prefer open windows parts of the time for ventilation purposes, including during night, or simply because it is a good feeling to have windows open to be in contact with the surroundings. High noise exposure can lead to adverse effects on comfort and health, and thus there is a need for sound insulating open windows to reduce noise exposure in homes when windows are open, not least to reduce sleep disturbance. Unfortunately, such window solutions are complicated and expensive and practical experience limited. Nevertheless, they have been included in some Danish projects. To support further development and use, experience from seven field cases...

  10. Message Brokering Evaluation for Live Spacecraft Telemetry Monitoring, Recorded Playback, and Analysis

    Science.gov (United States)

    Lee, Daren; Pomerantz, Marc

    2015-01-01

    Live monitoring and post-flight analysis of telemetry data play a vital role in the development, diagnosis, and deployment of components of a space flight mission. Requirements for such a system include low end-to-end latency between data producers and visualizers, preserved ordering of messages, data stream archiving with random access playback, and real-time creation of derived data streams. We evaluate the RabbitMQ and Kafka message brokering systems on how well they can enable a real-time, scalable, and robust telemetry framework that delivers telemetry data to multiple clients across heterogeneous platforms and flight projects. In our experiments using an actively developed robotic arm testbed, Kafka yielded a much higher message throughput rate and a consistent publishing rate across the number of topics and consumers. Consumer message rates were consistent across the number of topics but could exhibit bursty behavior as contention for a single topic partition grew with an increasing number of consumers.
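
    A minimal publish/subscribe round trip of the kind evaluated in the paper might look like the sketch below, assuming the kafka-python client and a broker reachable at localhost:9092. The topic name and the JSON message format are invented for illustration and are not the authors' setup.

    import json
    from kafka import KafkaProducer, KafkaConsumer

    TOPIC = "robot_arm.telemetry"          # hypothetical topic name

    # Producer: publish one telemetry sample as JSON.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda m: json.dumps(m).encode("utf-8"))
    producer.send(TOPIC, {"t": 12.34, "joint_angles_deg": [10.0, 45.2, -5.1]})
    producer.flush()

    # Consumer: read samples back; ordering is preserved per partition.
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
        consumer_timeout_ms=5000)
    for record in consumer:
        print(record.offset, record.value)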

  11. Perception of male caller identity in Koalas (Phascolarctos cinereus): acoustic analysis and playback experiments.

    Science.gov (United States)

    Charlton, Benjamin D; Ellis, William A H; McKinnon, Allan J; Brumm, Jacqui; Nilsson, Karen; Fitch, W Tecumseh

    2011-01-01

    The ability to signal individual identity using vocal signals and distinguish between conspecifics based on vocal cues is important in several mammal species. Furthermore, it can be important for receivers to differentiate between callers in reproductive contexts. In this study, we used acoustic analyses to determine whether male koala bellows are individually distinctive and to investigate the relative importance of different acoustic features for coding individuality. We then used a habituation-discrimination paradigm to investigate whether koalas discriminate between the bellow vocalisations of different male callers. Our results show that male koala bellows are highly individualized, and indicate that cues related to vocal tract filtering contribute the most to vocal identity. In addition, we found that male and female koalas habituated to the bellows of a specific male showed a significant dishabituation when they were presented with bellows from a novel male. The significant reduction in behavioural response to a final rehabituation playback shows this was not a chance rebound in response levels. Our findings indicate that male koala bellows are highly individually distinctive and that the identity of male callers is functionally relevant to male and female koalas during the breeding season. We go on to discuss the biological relevance of signalling identity in this species' sexual communication and the potential practical implications of our findings for acoustic monitoring of male population levels.

  12. Perception of male caller identity in Koalas (Phascolarctos cinereus): acoustic analysis and playback experiments.

    Directory of Open Access Journals (Sweden)

    Benjamin D Charlton

    Full Text Available The ability to signal individual identity using vocal signals and distinguish between conspecifics based on vocal cues is important in several mammal species. Furthermore, it can be important for receivers to differentiate between callers in reproductive contexts. In this study, we used acoustic analyses to determine whether male koala bellows are individually distinctive and to investigate the relative importance of different acoustic features for coding individuality. We then used a habituation-discrimination paradigm to investigate whether koalas discriminate between the bellow vocalisations of different male callers. Our results show that male koala bellows are highly individualized, and indicate that cues related to vocal tract filtering contribute the most to vocal identity. In addition, we found that male and female koalas habituated to the bellows of a specific male showed a significant dishabituation when they were presented with bellows from a novel male. The significant reduction in behavioural response to a final rehabituation playback shows this was not a chance rebound in response levels. Our findings indicate that male koala bellows are highly individually distinctive and that the identity of male callers is functionally relevant to male and female koalas during the breeding season. We go on to discuss the biological relevance of signalling identity in this species' sexual communication and the potential practical implications of our findings for acoustic monitoring of male population levels.

  13. Scientific Experiences Using Argentinean Sounding Rockets in Antarctica

    Science.gov (United States)

    Sánchez-Peña, Miguel

    2000-07-01

    In the sixties and seventies, Argentina had experience in developing and using sounding rockets and payloads to perform scientific space experiments. It also had several bases in Antarctica with adequate premises and installations, as well as suitably equipped aircraft and crews trained to fly to the white continent. In February 1965, scientists and technical staff from the "Instituto de Investigacion Aeronáutica y Espacial" (I.I.A.E.), with the cooperation of the Air Force and Tucuman University, conducted the "Matienzo Operation" to measure X radiation and temperature in the upper atmosphere, using the Gamma Centauro rocket and also large balloons. The people involved in the experiment, the launcher, and other material and equipment flew from the southern tip of Argentina to the Matienzo base in Antarctica in a C-47 aircraft equipped with skis and an additional Marbore 2-C jet engine. Another experiment was performed in 1975 at the "Marambio" Antarctic Base, using the two-stage solid-propellant sounding rocket Castor, developed in Argentina. The payload was developed in cooperation with the Max Planck Institute of Germany. It consisted of a special mixture including a shaped charge to form an ionized cloud producing a jet of electrons travelling from Marambio base to the conjugate point in the Northern hemisphere. The cloud was observed by several ground stations in Argentina and also by a NASA aircraft with TV cameras flying east of New York. The objective of this experiment was to study the electric and magnetic fields at altitude, the neutral points, the temperature and the electron profile. The objectives of both experiments were accomplished satisfactorily.

  14. Duet function in the yellow-naped amazon, Amazona auropalliata: evidence from playbacks of duets and solos.

    Science.gov (United States)

    Dahlin, Christine R; Wright, Timothy F

    2012-01-01

    The question of why animals participate in duets is an intriguing one, as many such displays appear to be more costly to produce than individual signals. Mated pairs of yellow-naped amazons, Amazona auropalliata, give duets on their nesting territories. We investigated the function of those duets with a playback experiment. We tested two hypotheses for the function of those duets: the joint territory defense hypothesis and the mate-guarding hypothesis, by presenting territorial pairs with three types of playback treatments: duets, male solos, and female solos. The joint territory defense hypothesis suggests that individuals engage in duets because they appear more threatening than solos and are thus more effective for the establishment, maintenance and/or defense of territories. It predicts that pairs will be coordinated in their response (pair members approach speakers and vocalize together) and will either respond more strongly (more calls and/or more movement) to duet treatments than to solo treatments, or respond equally to all treatments. Alternatively, the mate-guarding hypothesis suggests that individuals participate in duets because they allow them to acoustically guard their mate, and predicts uncoordinated responses by pairs, with weak responses to duet treatments and stronger responses by individuals to solos produced by the same sex. Yellow-naped amazon pairs responded to all treatments in an equivalently aggressive and coordinated manner by rapidly approaching speakers and vocalizing more. These responses generally support the joint territory defense hypothesis and further suggest that all intruders are viewed as a threat by resident pairs.

  15. The importance of recording and playback technique for assessment of annoyance

    DEFF Research Database (Denmark)

    Celik, Emine; Persson Waye, Kerstin; Møller, Henrik

    2005-01-01

    The aim was to evaluate whether there is a difference in perception related to annoyance, loudness and unpleasantness between monophonic recordings played back through a loudspeaker and binaural recordings played back via headphones, and to evaluate whether a possible difference depends on temporal and frequency characteristics as well as spatial characteristics. The monophonic recordings were presented through a loudspeaker and the binaural recordings were presented through both closed (circum-aural) and completely open (free of the ear) headphones. The results show that for all judgments (annoyance, loudness and unpleasantness), there was no significant main effect of recording and playback techniques; however...

  16. Play-back theatre, theatre laboratory, and role-playing: new tools in investigating the patient-physician relationship in the context of continuing medical education courses.

    Science.gov (United States)

    Piccoli, G; Rossetti, M; Dell'Olio, R; Perrotta, L; Mezza, E; Burdese, M; Maddalena, E; Bonetto, A; Jeantet, A; Segoloni, G P

    2005-06-01

    The aim of this study was to report on the validation of a role-playing approach, using play-back theatre and theatre laboratory in the context of a continuing medical education (CME) course on predialysis and transplantation, to discuss the patient-physician relationship. The course was developed with the help of a theatre director. The two-day role-playing course was designed to be highly interactive for a small group (15-20 participants), based on a core of case reports (dialysis, transplantation, and return to dialysis after graft failure). Two stages were included: play-back theatre, in which experiences told by the participants were mimed by a group of actors, and theatre laboratory, in which different aspects of voice and touch were explored. Opinions were gathered by an anonymous semistructured questionnaire completed by all participants. The course obtained a high score from the Ministry of Health (14 credits, 1 per teaching hour). The opinions of the 18 participants were highly positive; all liked the course. Sixteen of 18 asked to repeat the experience. The strong emotional involvement was an advantage for 15 of 18, sharing emotional aspects of the profession for 10 of 18, and usefulness in clarifying opinions on "dark sides" of our profession for 10 of 18. The positive opinions recorded during this experience, the first experiment with a "psycho-theatrical approach" developed in a CME course in our country, suggest the benefit of implementing nonconventional educational approaches in a multidisciplinary discussion of the patient-physician relationship in transplantation medicine.

  17. Space fireworks for upper atmospheric wind measurements by sounding rocket experiments

    Science.gov (United States)

    Yamamoto, M.

    2016-01-01

    Artificial meteor trains generated by chemical releases from sounding rockets flown in the upper atmosphere were successfully observed from multiple ground sites and from an aircraft. We started the rocket experiment campaign in 2007 and call it "Space fireworks", because the released gas produces resonance-scattered light under sunlit or moonlit conditions. Using this method, we have developed a new technique to derive upper-atmospheric wind profiles at twilight, on moonlit nights, and even in daytime. Magnificent artificial meteor-train images, together with the physics and dynamics of the upper-atmospheric region where meteors usually appear, will be introduced using the fruitful results of the "Space firework" sounding rocket experiments in this decade.

  18. Using Sound-Taste Correspondences to Enhance the Subjective Value of Tasting Experiences

    Directory of Open Access Journals (Sweden)

    Felipe eReinoso Carvalho

    2015-09-01

    Full Text Available The soundscapes of those places where we eat and drink can influence our perception of taste. Here, we investigated whether contextual sound would enhance the subjective value of a tasting experience. The customers in a chocolate shop were invited to take part in an experiment in which they had to evaluate a chocolate's taste while listening to an auditory stimulus. Four different conditions were presented to four different groups in a between-participants design. Envisioning a more ecological approach, we used a pre-recorded piece of popular music and the shop's own soundscape as the sonic stimuli. The results revealed that not only did the customers report having a significantly better tasting experience when the sounds were presented as part of the food's identity, but they were also willing to pay significantly more for the experience. The method outlined here paves the way for a new approach to the design of multisensory tasting experiences and gastronomic situations.

  19. Neuroplasticity beyond Sounds: Neural Adaptations Following Long-Term Musical Aesthetic Experiences

    Directory of Open Access Journals (Sweden)

    Mark Reybrouck

    2015-03-01

    Full Text Available Capitalizing on neuroscience knowledge about how individuals are affected by the sound environment, we propose to adopt a cybernetic and ecological point of view on the musical aesthetic experience, which includes subprocesses such as feature extraction and integration, early affective reactions and motor actions, style mastering and conceptualization, emotion and proprioception, evaluation and preference. In this perspective, the role of the listener/composer/performer is seen as that of an active "agent" coping in highly individual ways with the sounds. The findings concerning the neural adaptations in musicians, following long-term exposure to music, are then reviewed by keeping in mind the distinct subprocesses of a musical aesthetic experience. We conclude that these neural adaptations can be conceived of as the result of immediate and lifelong interactions with multisensorial stimuli (having a predominant auditory component), which result in lasting changes of the internal state of the "agent". In a continuous loop, these changes affect, in turn, the subprocesses involved in a musical aesthetic experience, towards the final goal of achieving better perceptual, motor and proprioceptive responses to the immediate demands of the sounding environment. The resulting neural adaptations in musicians closely depend on the duration of the interactions, the starting age, the involvement of attention, the amount of motor practice and the musical genre played.

  20. Flight trajectory recreation and playback system of aerial mission based on ossimplanet

    OpenAIRE

    Wu, Wu; Hu, Jiulin; Huang, Xiaofang; Chen, Huijie; Sun, Bo

    2014-01-01

    Recreation of flight trajectories is an important research area. The design of a flight trajectory recreation and playback system is presented in this paper. Rather than converting the flight data into diagrams, graphs and tables, the flight data are visualized on the 3D globe of ossimPlanet. ossimPlanet is an open-source 3D global geo-spatial viewer, and the system is realized based on an analysis of it. Users can choose the flight of the aerial mission they are interested in. The aerial ...

  1. Sound Descriptions of Haptic Experiences of Art Work by Deafblind Cochlear Implant Users

    Directory of Open Access Journals (Sweden)

    Riitta Lahtinen

    2018-05-01

    Full Text Available Deafblind persons' perception and experiences are based on their residual auditive and visual senses, and touch. Their haptic exploration, through movements and orientation towards objects, gives blind persons direct, independent experience. Few studies explore the aesthetic experiences and appreciation of artefacts of deafblind people using cochlear implant (CI) technology, and how they interpret and express their perceived aesthetic experience through another sensory modality. While speech recognition is studied extensively in this area, the auditive descriptions made by CI users are a less-studied domain. The present research intervention describes and analyses five deafblind people sharing their interpretations of five statues vocally, using sounds and written descriptions based on their haptic explorations. The participants found new and multimodal ways of expressing their experiences, as well as re-experiencing them through technological aids. We also found that the CI users modify technology to better suit their personal needs. We conclude that CI technology in combination with self-made sound descriptions enhances memorization of haptic art experiences, which can be recalled by playing back the recorded sound descriptions. This research expands the idea of auditive descriptions, and encourages user-produced descriptions as artistic supports to traditional linguistic audio descriptions. These can be used to create personal auditive–haptic memory collections, similar to how sighted people create photo albums.

  2. Real-time dual-band haptic music player for mobile devices.

    Science.gov (United States)

    Hwang, Inwook; Lee, Hyeseon; Choi, Seungmoon

    2013-01-01

    We introduce a novel dual-band haptic music player for real-time simultaneous vibrotactile playback with music in mobile devices. Our haptic music player features a new miniature dual-mode actuator that can produce vibrations consisting of two principal frequencies and a real-time vibration generation algorithm that can extract vibration commands from a music file for dual-band playback (bass and treble). The algorithm uses a "haptic equalizer" and provides plausible sound-to-touch modality conversion based on human perceptual data. In addition, we present a user study carried out to evaluate the subjective performance (precision, harmony, fun, and preference) of the haptic music player, in comparison with the current practice of bass-band-only vibrotactile playback via a single-frequency voice-coil actuator. The evaluation results indicated that the new dual-band playback outperforms the bass-only rendering, also providing several insights for further improvements. The developed system and experimental findings have implications for improving the multimedia experience with mobile devices.
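
    A rough idea of the dual-band conversion can be given by the sketch below, which splits a music signal into bass and treble bands and turns each band's amplitude envelope into a low-rate stream of vibration commands. The crossover frequencies and command rate are assumptions chosen for illustration, not the parameters of the authors' algorithm or its perceptual "haptic equalizer".

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def band_envelope(x, sr, low_hz, high_hz, cmd_rate_hz=100):
        """Bandpass the signal and return its amplitude envelope, resampled
        to a low-rate stream of vibration amplitude commands."""
        sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(band)
        hop = int(sr / cmd_rate_hz)
        n = (len(env) // hop) * hop
        return env[:n].reshape(-1, hop).mean(axis=1)   # one command per hop

    sr = 44100
    t = np.arange(sr) / sr
    music = 0.6 * np.sin(2 * np.pi * 80 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)

    bass_cmds = band_envelope(music, sr, 40, 200)       # drives low-frequency mode
    treble_cmds = band_envelope(music, sr, 1000, 4000)  # drives high-frequency mode
    print(bass_cmds[:5], treble_cmds[:5])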

  3. Reconstruction of mechanically recorded sound from an Edison cylinder using three-dimensional non-contact optical surface metrology

    Energy Technology Data Exchange (ETDEWEB)

    Fadeyev, V.; Haber, C.; Maul, C.; McBride, J.W.; Golden, M.

    2004-04-20

    Audio information stored in the undulations of grooves in a medium such as a phonograph disc record or cylinder may be reconstructed, without contact, by measuring the groove shape using precision optical metrology methods and digital image processing. The viability of this approach was recently demonstrated on a 78 rpm shellac disc using two dimensional image acquisition and analysis methods. The present work reports the first three dimensional reconstruction of mechanically recorded sound. The source material, a celluloid cylinder, was scanned using color coded confocal microscopy techniques and resulted in a faithful playback of the recorded information.
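
    In outline, once the groove surface has been measured, turning the profile into audio amounts to sampling the groove displacement along the track and converting it into a signal, here crudely approximated by the stylus velocity (the derivative of the displacement). The sketch below illustrates only that last step on a synthetic depth profile; the sample rate and scaling are assumptions, and this is not the authors' reconstruction pipeline.

    import numpy as np
    from scipy.io import wavfile

    sr = 8000                                   # output sample rate (assumption)
    t = np.arange(2 * sr) / sr

    # Synthetic stand-in for the measured groove depth along the track
    # (a 440 Hz undulation); real data would come from the confocal scan.
    depth_um = 5.0 * np.sin(2 * np.pi * 440 * t)

    # Crude playback model: take the audio signal proportional to the
    # stylus velocity, i.e. the derivative of groove depth along the track.
    audio = np.gradient(depth_um, 1.0 / sr)
    audio /= np.max(np.abs(audio))              # normalise to full scale

    wavfile.write("cylinder_playback.wav", sr, (audio * 32767).astype(np.int16))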

  4. Sound Performance – Experience and Event

    DEFF Research Database (Denmark)

    Holmboe, Rasmus

    The present paper draws on examples from my ongoing PhD project, which is connected to the Museum of Contemporary Art in Roskilde, Denmark, where I curate a sub-programme at ACTS 2014 – a festival for performative arts. The aim is to investigate how sound performance can be presented and represented - in real... In itself – and as an artistic material – sound is always already process. It involves the listener in a situation that is both filled with elusive presence and one that evokes rooted memory. At the same time sound is bodily, social and historical. It propagates between individuals and objects, it creates...

  5. Assessment of annoyance, loudness and unpleasantness with different recording and playback techniques

    DEFF Research Database (Denmark)

    Celik, Emine; Persson Waye, Kerstin; Møller, Henrik

    2005-01-01

    The aim was to evaluate if there is a difference in perception related to annoyance, loudness and unpleasantness between monophonic recordings played back through a loudspeaker and binaural recordings played back via headphones, and to evaluate whether a possible difference depends on temporal and frequency characteristics as well as spatial characteristics. The monophonic recordings were presented through a loudspeaker and the binaural recordings were presented through both closed (circum-aural) and completely open (free of the ear) headphones. The results show that for all judgments (annoyance, loudness and unpleasantness), there was no significant main effect of recording and playback techniques...

  6. Engineering aspect of the microwave ionosphere nonlinear interaction experiment (MINIX) with a sounding rocket

    Science.gov (United States)

    Nagatomo, Makoto; Kaya, Nobuyuki; Matsumoto, Hiroshi

    The Microwave Ionosphere Nonlinear Interaction Experiment (MINIX) is a sounding rocket experiment to study the possible effects on the Earth's atmosphere of strong microwave fields of the kind that would be used for energy transmission from a Solar Power Satellite (SPS). Its secondary objective is to develop high-power microwave technology for space use. Two rocket-borne magnetrons were used to emit 2.45 GHz microwaves in order to simulate the conditions of power transmission from an SPS to a ground station. Sounding of the environment irradiated by the microwaves was conducted by the diagnostic package onboard the daughter unit, which was separated slowly from the mother unit. The main design drivers of this experiment were to fit such high-power equipment into a standard type of sounding rocket, to keep the cost within budget, and to perform a series of experiments without complete loss of the mission. The key technologies for this experiment are the rocket-borne magnetron and the high-voltage converter. Locating the daughter unit relative to the mother unit was a difficult requirement for a spin-stabilized rocket. These problems were solved by applying such low-cost commercial products as a microwave-oven magnetron and a video tape recorder and camera.

  7. Effects of Playback Theatre on cognitive function and quality of life in older adults in Singapore: A preliminary study.

    Science.gov (United States)

    Chung, Krystal Shu Yi; Lee, Eleena Shi Lynn; Tan, Jia Qi; Teo, Dylan Jin Hao; Lee, Chris Ban Loong; Ee, Sharifah Rose; Sim, Sam Kim Yang; Chee, Chew Sim

    2018-03-01

    This study investigated the effects of Playback Theatre on older adults' cognitive function and well-being, specifically in the Singapore context. Eighteen healthy older adults, older than 50 years of age, participated in the study. Due to practical limitations, a single-group pre-post study design was adopted. Participants completed the outcome measures before and after the training program. There were six weekly sessions in total (about 1.5 hours, once weekly). Participants experienced a significant improvement in their emotional well-being after training. However, there were no significant changes in participants' cognitive function or health-related quality of life. Our results suggest that Playback Theatre as a community program has potential to improve the mental and emotional well-being of older people. © 2018 AJA Inc.

  8. Tinnitus and sound intolerance: evidence and experience of a Brazilian group.

    Science.gov (United States)

    Onishi, Ektor Tsuneo; Coelho, Cláudia Couto de Barros; Oiticica, Jeanne; Figueiredo, Ricardo Rodrigues; Guimarães, Rita de Cassia Cassou; Sanchez, Tanit Ganz; Gürtler, Adriana Lima; Venosa, Alessandra Ramos; Sampaio, André Luiz Lopes; Azevedo, Andreia Aparecida; Pires, Anna Paula Batista de Ávila; Barros, Bruno Borges de Carvalho; Oliveira, Carlos Augusto Costa Pires de; Saba, Clarice; Yonamine, Fernando Kaoru; Medeiros, Ítalo Roberto Torres de; Rosito, Letícia Petersen Schmidt; Rates, Marcelo José Abras; Kii, Márcia Akemi; Fávero, Mariana Lopes; Santos, Mônica Alcantara de Oliveira; Person, Osmar Clayton; Ciminelli, Patrícia; Marcondes, Renata de Almeida; Moreira, Ronaldo Kennedy de Paula; Torres, Sandro de Menezes Santos

    Tinnitus and sound intolerance are frequent and subjective complaints that may have an impact on a patient's quality of life. To present a review of the salient points including concepts, pathophysiology, diagnosis and approach of the patient with tinnitus and sensitivity to sounds. Literature review with bibliographic survey in LILACS, SciELO, Pubmed and MEDLINE database. Articles and book chapters on tinnitus and sound sensitivity were selected. The several topics were discussed by a group of Brazilian professionals and the conclusions were described. The prevalence of tinnitus has increased over the years, often associated with hearing loss, metabolic factors and inadequate diet. Medical evaluation should be performed carefully to guide the request of subsidiary exams. Currently available treatments range from medications to the use of sounds with specific characteristics and meditation techniques, with variable results. A review on tinnitus and auditory sensitivity was presented, allowing the reader a broad view of the approach to these patients, based on scientific evidence and national experience. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  9. From Sound Morphing to the Synthesis of Starlight. Musical experiences with the Phase Vocoder over 25 years

    Directory of Open Access Journals (Sweden)

    Trevor Wishart

    2013-08-01

    Full Text Available The article reports the author's experiences with the phase vocoder, from the first attempts during the years 1973-77 – in connection with a speculative project to morph the sounds of a speaking voice into sounds from the natural world, a project subsequently developed at Ircam in Paris between 1979 and 1986 – up to the most recent experiences in 2011-12 associated with the realization of Supernova, an 8-channel sound-surround piece, in which the phase vocoder data format is used as a synthesis tool.
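
    For readers unfamiliar with the tool, the core of a phase vocoder is an STFT whose magnitudes are re-timed while the phases are advanced coherently from frame to frame. The sketch below is a bare-bones time-stretching routine written from the textbook description, not from the author's own software; frame size, hop, and test signal are arbitrary illustrations.

    import numpy as np

    def stretch(x, rate, n_fft=2048, hop=512):
        """Bare-bones phase-vocoder time stretch: rate < 1 slows down,
        rate > 1 speeds up (pitch is preserved)."""
        win = np.hanning(n_fft)
        frames = [np.fft.rfft(win * x[i:i + n_fft])
                  for i in range(0, len(x) - n_fft, hop)]
        S = np.array(frames).T                          # bins x frames
        omega = 2 * np.pi * hop * np.arange(S.shape[0]) / n_fft
        phase = np.angle(S[:, 0])
        steps = np.arange(0, S.shape[1] - 1, rate)
        out = np.zeros(len(steps) * hop + n_fft)
        for k, t in enumerate(steps):
            i = int(t)
            frac = t - i
            mag = (1 - frac) * np.abs(S[:, i]) + frac * np.abs(S[:, i + 1])
            out[k * hop:k * hop + n_fft] += win * np.fft.irfft(mag * np.exp(1j * phase))
            dphi = np.angle(S[:, i + 1]) - np.angle(S[:, i]) - omega
            dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))  # wrap to [-pi, pi]
            phase += omega + dphi                             # coherent phase advance
        return out

    sr = 22050
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 440 * t)
    slowed = stretch(tone, rate=0.5)        # roughly twice as long, same pitch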

  10. Impacts of distinct observations during the 2009 Prince William Sound field experiment: A data assimilation study

    Science.gov (United States)

    Li, Z.; Chao, Y.; Farrara, J.; McWilliams, J. C.

    2012-12-01

    A set of data assimilation experiments, known as Observing System Experiments (OSEs), is performed to assess the relative impacts of different types of observations acquired during the 2009 Prince William Sound Field Experiment. The observations assimilated consist primarily of three types: High Frequency (HF) radar surface velocities, vertical profiles of temperature/salinity (T/S) measured by ships, moorings, Autonomous Underwater Vehicles and gliders, and satellite sea surface temperatures (SSTs). The impacts of all the observations together, of the HF radar surface velocities, and of the T/S profiles are assessed. Without data assimilation, a frequently occurring cyclonic eddy in the central Sound is overly persistent and intense. The assimilation of the HF radar velocities effectively reduces these biases and improves the representation of the velocities as well as the T/S fields in the Sound. The assimilation of the T/S profiles improves the large scale representation of the temperature/salinity and also the velocity field in the central Sound. The combination of the HF radar surface velocities and sparse T/S profiles results in an observing system capable of representing the circulation in the Sound reliably and thus producing analyses and forecasts with useful skill. It is suggested that a potentially promising observing network could be based on satellite SSHs and SSTs along with sparse T/S profiles, and future satellite SSHs with wide swath coverage and higher resolution may offer excellent data that will be of great use for predicting the circulation in the Sound.
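
    The analysis step at the heart of such assimilation experiments can be illustrated with a one-line optimal-interpolation update, x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b), applied here to a toy one-dimensional state. The covariances, observation locations, and "truth" are invented for illustration; the operational system described in the paper is far more elaborate.

    import numpy as np

    # Toy optimal-interpolation analysis on a 1-D "temperature section".
    n = 50
    x_truth = 10 + 2 * np.sin(np.linspace(0, 2 * np.pi, n))
    x_b = np.full(n, 10.0)                      # background (first guess)

    obs_idx = np.array([5, 20, 35])             # sparse "profile" locations
    H = np.zeros((obs_idx.size, n))
    H[np.arange(obs_idx.size), obs_idx] = 1.0
    y = x_truth[obs_idx] + np.random.default_rng(0).normal(0, 0.1, obs_idx.size)

    # Background error covariance with a Gaussian spatial correlation,
    # observation errors uncorrelated (illustrative values).
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    B = 1.0 * np.exp(-((i - j) ** 2) / (2 * 5.0 ** 2))
    R = 0.1 ** 2 * np.eye(obs_idx.size)

    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    x_a = x_b + K @ (y - H @ x_b)                  # analysis

    print("background RMSE:", np.sqrt(np.mean((x_b - x_truth) ** 2)).round(3))
    print("analysis   RMSE:", np.sqrt(np.mean((x_a - x_truth) ** 2)).round(3))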

  11. Sounding-rocket experiments for detailed studies of magnetospheric substorm phenomena

    International Nuclear Information System (INIS)

    Stuedemann, W.; Wilhelm, K.

    1975-01-01

    Many of the substorm effects occur at or near the auroral oval in the upper atmosphere and can thus be studied by sounding-rocket experiments. As emphasis should be laid on understanding the physical processes, close co-ordination with other study programmes is of great importance. This co-ordination can best be accomplished within the framework of the ''International Magnetospheric Study 1976-1978''

  12. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    Science.gov (United States)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.
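
    A highly simplified version of that control chain is sketched below: an amplitude envelope (a stand-in for the estimated GRF) is extracted from a recorded footstep and used to modulate band-limited noise, a crude placeholder for the physically based material models in the actual engine. All parameter values and the "recorded" signal are illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, sosfilt

    sr = 16000
    t = np.arange(int(0.3 * sr)) / sr

    # Stand-in for a footstep picked up by the floor microphones:
    # a short decaying noise burst.
    rng = np.random.default_rng(0)
    recorded = rng.normal(0, 1, t.size) * np.exp(-t / 0.05)

    # 1) Estimate a GRF-like control envelope: rectify and low-pass (< 50 Hz).
    sos_env = butter(2, 50, btype="low", fs=sr, output="sos")
    grf_envelope = np.clip(sosfiltfilt(sos_env, np.abs(recorded)), 0, None)

    # 2) Drive the synthesis: the envelope modulates band-limited noise whose
    #    band loosely mimics a ground material (here ~500-3000 Hz, "gravel"-ish).
    sos_mat = butter(4, [500, 3000], btype="bandpass", fs=sr, output="sos")
    material_noise = sosfilt(sos_mat, rng.normal(0, 1, t.size))
    footstep = grf_envelope * material_noise
    footstep /= np.max(np.abs(footstep))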

  13. Sound propagation in dry granular materials: discrete element simulations, theory, and experiments

    NARCIS (Netherlands)

    Mouraille, O.J.P.

    2009-01-01

    In this study sound wave propagation through different types of dry confined granular systems is studied. With three-dimensional discrete element simulations, theory and experiments, the influence of several micro-scale properties: friction, dissipation, particle rotation, and contact disorder, on
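
    The kind of discrete element picture used in such studies can be conveyed with a toy model: a one-dimensional chain of grains coupled by linear contact springs, through which a small displacement pulse propagates at a speed set by the stiffness and grain mass. The parameter values below are arbitrary illustrations, not taken from the thesis.

    import numpy as np

    # Toy 1D "granular chain": N grains of mass m coupled by linear contact
    # springs of stiffness k. A displacement pulse applied to grain 0 travels
    # down the chain at roughly sqrt(k/m) grains per unit time.
    N, m, k = 200, 1.0, 1.0e4
    dt, n_steps = 1.0e-3, 1500

    u = np.zeros(N)            # displacements
    v = np.zeros(N)            # velocities
    u[0] = 1.0e-3              # initial pulse at the left end

    for _ in range(n_steps):
        f = np.zeros(N)
        # contact forces from neighbouring grains
        f[:-1] += k * (u[1:] - u[:-1])
        f[1:]  += k * (u[:-1] - u[1:])
        v += (f / m) * dt       # semi-implicit Euler time stepping
        u += v * dt

    front = np.nonzero(np.abs(u) > 1.0e-6)[0].max()
    print(f"wave front near grain {front} after {n_steps * dt:.1f} time units")
    print(f"expected sound speed ~ {np.sqrt(k / m):.0f} grains per time unit")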

  14. Anthropogenic noise pollution from pile-driving disrupts the structure and dynamics of fish shoals.

    Science.gov (United States)

    Herbert-Read, James E; Kremer, Louise; Bruintjes, Rick; Radford, Andrew N; Ioannou, Christos C

    2017-09-27

    Noise produced from a variety of human activities can affect the physiology and behaviour of individual animals, but whether noise disrupts the social behaviour of animals is largely unknown. Animal groups such as flocks of birds or shoals of fish use simple interaction rules to coordinate their movements with near neighbours. In turn, this coordination allows individuals to gain the benefits of group living such as reduced predation risk and social information exchange. Noise could change how individuals interact in groups if noise is perceived as a threat, or if it masked, distracted or stressed individuals, and this could have impacts on the benefits of grouping. Here, we recorded trajectories of individual juvenile seabass ( Dicentrarchus labrax ) in groups under controlled laboratory conditions. Groups were exposed to playbacks of either ambient background sound recorded in their natural habitat, or playbacks of pile-driving, commonly used in marine construction. The pile-driving playback affected the structure and dynamics of the fish shoals significantly more than the ambient-sound playback. Compared to the ambient-sound playback, groups experiencing the pile-driving playback became less cohesive, less directionally ordered, and were less correlated in speed and directional changes. In effect, the additional-noise treatment disrupted the abilities of individuals to coordinate their movements with one another. Our work highlights the potential for noise pollution from pile-driving to disrupt the collective dynamics of fish shoals, which could have implications for the functional benefits of a group's collective behaviour. © 2017 The Authors.

  15. Driving the SID chip: Assembly language, composition, and sound design for the C64

    Directory of Open Access Journals (Sweden)

    James Newman

    2017-12-01

    Full Text Available The MOS6581, more commonly known as the Sound Interface Device, or SID chip, was the sonic heart of the Commodore 64 home computer. By considering the chip’s development, specification, uses and creative abuses by composers and programmers, alongside its continuing legacy, this paper argues that, more than any other device, the SID chip is responsible for shaping the sound of videogame music. Compared with the brutal atonality of chips such as Atari’s TIA, the SID chip offers a complex 3-channel synthesizer with dynamic waveform selection, per-channel ADSR envelopes, multi-mode filter, ring and cross modulation. However, while the specification is sophisticated, the exploitation of the vagaries and imperfections of the chip are just as significant to its sonic character. As such, the compositional, sound design and programming techniques developed by 1980s composer-coders like Rob Hubbard and Martin Galway are central in defining the distinctive sound of C64 gameplay. Exploring the affordances of the chip and the distinctive ways they were harnessed, the argument of this paper centers on the inexorable link between the technological and the musical. Crucially, composers like Hubbard et al. developed their own bespoke low-level drivers to interface with the SID chip to create pseudo-polyphony through rapid arpeggiation and channel sharing, drum synthesis through waveform manipulation, portamento, and even sample playback. This paper analyses the indivisibility of sound design, synthesis and composition in the birth of these musical forms and aesthetics, and assesses their impact on what would go on to be defined as chiptunes.
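
    The pseudo-polyphony technique the paper discusses relies on rewriting a voice's 16-bit frequency register every frame so that a single channel cycles through the notes of a chord. The sketch below computes such register values from note frequencies using the commonly quoted SID relation f_out = register * clock / 2^24; the PAL clock constant, the chord, and the 50 Hz frame rate are assumptions for illustration, not a reconstruction of any particular driver.

    # Convert note frequencies to SID 16-bit frequency register values and
    # build a per-frame arpeggio table for one voice (pseudo-polyphony).
    PAL_CLOCK_HZ = 985248          # commonly quoted C64 PAL system clock (assumption)

    def sid_register(freq_hz, clock_hz=PAL_CLOCK_HZ):
        # f_out = register * clock / 2**24  =>  register = f_out * 2**24 / clock
        return round(freq_hz * (1 << 24) / clock_hz)

    # A C-major triad, cycled once per 50 Hz frame to fake a chord on one voice.
    chord_hz = {"C4": 261.63, "E4": 329.63, "G4": 392.00}
    arpeggio = [sid_register(f) for f in chord_hz.values()]

    for (name, f), reg in zip(chord_hz.items(), arpeggio):
        print(f"{name}: {f:7.2f} Hz -> register {reg} "
              f"(lo=${reg & 0xFF:02X}, hi=${reg >> 8:02X})")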

  16. Playback Theatre as a tool to enhance communication in medical education

    Directory of Open Access Journals (Sweden)

    Ramiro Salas

    2013-12-01

    Full Text Available Playback Theatre (PT) is an improvisational form of theatre in which a group of actors "play back" real-life stories told by audience members. In PT, a conductor elicits moments, feelings and stories from audience members, and conducts mini-interviews with those who volunteer a moment of their lives to be re-enacted or "played" for the audience. A musician plays music according to the theme of each story, and 4-5 actors listen to the interview and perform the story that has just been told. PT has been used in a large number of settings as a tool to share stories in an artistic manner. Despite its similarities to psychodrama, PT does not claim to be a form of therapy. We offered two PT performances to first-year medical students at Baylor College of Medicine in Houston, Texas, to bring the students a safe and fun environment, conducive to sharing feelings and moments related to being a medical student. Through the moments and stories shared by students, we conclude that there is an enormous need in this population for opportunities to communicate the many emotions associated with medical school and with healthcare-related personal experiences, such as anxiety, pride, or anger. PT proved a powerful tool to help students communicate.

  17. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  18. Transition from hydrodynamic to fast sound in a He-Ne mixture: a neutron Brillouin scattering experiment

    CERN Document Server

    Bafile, U; Barocchi, F; Sampoli, M

    2002-01-01

    The presence of a fast-sound mode in the microscopic dynamics of the rare-gas mixture He-Ne, predicted by theoretical studies and molecular-dynamics simulations, was demonstrated by an inelastic neutron scattering experiment. In order to study the transition between the fast and the normal acoustic modes in the hydrodynamic regime, k values lower by about one order of magnitude than in the usual experiments have to be probed. We describe here the results of the first neutron Brillouin scattering experiment performed with this purpose on the same system already investigated at larger k. The results of both experiments, together with those of a new molecular-dynamics simulation, provide a complete and consistent description, still missing so far, of the onset of fast-sound propagation in a binary mixture. (orig.)

  19. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  20. Sound: a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  1. The effects of unrelated offspring whistle calls on capybaras (Hydrochoerus hydrochaeris)

    Directory of Open Access Journals (Sweden)

    E Dos Santos

    Full Text Available Parent-offspring vocal communication, such as the isolation call, is one of the essential adaptations in mammals that adjust parental responsiveness. Thus, our aim was to test the hypothesis that the function of the capybara infants' whistle is to attract conspecifics. We designed a playback experiment to investigate the reaction of 20 adult capybaras (seven males and 13 females) to pups' whistle calls – recorded from unrelated offspring – or to bird song, as control. The adult capybaras promptly responded to playback of unrelated pup whistles, while ignoring the bird vocalisation. The adult capybaras took, on average, 2.6 ± 2.5 seconds (s) to show a response to the whistles, with no differences between males and females. However, females looked longer (17.0 ± 12.9 s) than males (3.0 ± 7.2 s) toward the sound source during playback of the pups' whistles. The females also tended to approach the playback source, while males showed just a momentary interruption of ongoing behaviour (feeding). Our results suggest that capybara pups' whistles function as the isolation call in this species, but gender influences the intensity of the response.

  2. 76 FR 78242 - Marine Mammals; File No. 14241

    Science.gov (United States)

    2011-12-16

    ... the acoustic stimuli an animal hears and measures vocalization, behavior, and physiological parameters. Research also involves conducting sound playbacks in a carefully controlled manner and measuring animals... permit holder to conduct research on cetacean behavior, sound production, and responses to sound. The...

  3. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    Science.gov (United States)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential property of new-generation military imaging systems. Playback of the stored video on the same device is also desirable as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and an Altera NIOS II soft processor. Many subblocks that are used in video encoding are also used during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are done on the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480, 25 fps thermal camera on a Cyclone V FPGA, Altera's lowest-power FPGA family, and consumes less than 40% of Cyclone V 5CEFA7 FPGA resources on average.

  4. Investigating Empathy-Like Responding to Conspecifics' Distress in Pet Dogs.

    Directory of Open Access Journals (Sweden)

    Mylene Quervel-Chaumette

    Full Text Available Empathy covers a wide range of phenomena varying according to the degree of cognitive complexity involved, ranging from emotional contagion, defined as the sharing of others' emotional states, to sympathetic concern requiring animals to have an appraisal of the others' situation and showing concern-like behaviors. While most studies have investigated how animals reacted in response to conspecifics' distress, dogs so far have mainly been targeted to examine cross-species empathic responses. To investigate whether dogs would respond with empathy-like behavior also to conspecifics, we adopted a playback method using conspecifics' vocalizations (whines) recorded during a distressful event as well as control sounds. Our subjects were first exposed to a playback phase where they were subjected either to a control sound, a familiar whine (from their familiar partner) or a stranger whine stimulus (from a stranger dog), and then a reunion phase where the familiar partner entered the room. When exposed to whines, dogs showed a higher behavioral alertness and exhibited more stress-related behaviors compared to when exposed to acoustically similar control sounds. Moreover, they demonstrated more comfort-offering behaviors toward their familiar partners following whine playbacks than after control stimuli. Furthermore, when looking at the first session, this comfort offering was biased towards the familiar partner when subjects were previously exposed to the familiar compared to the stranger whines. Finally, familiar whine stimuli tended to maintain higher cortisol levels while stranger whines did not. To our knowledge, these results are the first to suggest that dogs can experience and demonstrate "empathic-like" responses to conspecifics' distress-calls.

  5. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  6. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2009-09-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  7. Quality-Based Backlight Optimization for Video Playback on Handheld Devices

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    2007-01-01

    For a typical handheld device, the backlight accounts for a significant percentage of the total energy consumption (e.g., around 30% for a Compaq iPAQ 3650). Substantial energy savings can be achieved by dynamically adapting backlight intensity levels on such low-power portable devices. In this paper, we analyze the characteristics of video streaming services and propose a cross-layer optimization scheme called quality adapted backlight scaling (QABS) to achieve backlight energy savings for video playback applications on handheld devices. Specifically, we present a fast algorithm to optimize backlight dimming while keeping the degradation in image quality to a minimum so that the overall service quality is close to a specified threshold. Additionally, we propose two effective techniques to prevent frequent backlight switching, which negatively affects user perception of video. Our initial experimental results indicate that the energy used for backlight is significantly reduced, while the desired quality is satisfied. The proposed algorithms can be realized in real time.
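
    The QABS algorithm itself is not given in this record; the sketch below only illustrates the underlying idea — dim the backlight and boost pixel values to compensate, accepting at most a bounded quality loss — using a simple clipping-based distortion proxy and hypothetical function names, not the authors' scheme.

```python
import numpy as np

def quality_loss(frame, backlight):
    """Estimate distortion caused by clipping when pixel luminance is boosted
    to compensate for a dimmed backlight (fraction of luminance lost)."""
    boosted = frame / backlight           # luminance compensation
    clipped = np.clip(boosted, 0.0, 1.0)
    return float(np.mean(np.abs(boosted - clipped)))

def choose_backlight(frame, max_loss=0.01, levels=np.linspace(0.2, 1.0, 17)):
    """Pick the dimmest backlight level whose estimated quality loss stays
    below the threshold. Illustrative stand-in, not the actual QABS scheme."""
    for level in levels:                  # search from dimmest to brightest
        if quality_loss(frame, level) <= max_loss:
            return float(level)
    return 1.0

# Example: a synthetic frame with normalized luminance values in [0, 1]
frame = np.random.rand(480, 640) * 0.7
print(choose_backlight(frame))            # dimmest level meeting the quality bound
```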

  8. Sound experiences: the vision of experimental musician on the folkloric music in modern society

    Directory of Open Access Journals (Sweden)

    Rieko Tanaka

    2016-11-01

    This work begins by narrating how folk music has always been an influence on classical composers. It makes special mention of the Hungarian musicians Béla Bartók and Zoltán Kodály, who are considered here the most immediate ancestors of the North American experimental musicians, since both were driven by their passion for folk music. As the principal exponents of American experimental music we select John Cage, Lou Harrison and Carl Ruggles. Their works are considered and analyzed in this text as sounds and as experiences: composers who treat sound as experience, as feeling, as emotion, as time and as origin, tracing related traits in folk music and experimental music. The final considerations address the relationship between the musician, creation, society and art.

  9. Product sounds : Fundamentals and application

    NARCIS (Netherlands)

    Ozcan-Vieira, E.

    2008-01-01

    Products are ubiquitous, so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about

  10. Basic semantics of product sounds

    NARCIS (Netherlands)

    Özcan Vieira, E.; Van Egmond, R.

    2012-01-01

    Product experience is a result of sensory and semantic experiences with product properties. In this paper, we focus on the semantic attributes of product sounds and explore the basic components for product sound related semantics using a semantic differential paradigm and factor analysis. With two

  11. Remembering that big things sound big: Sound symbolism and associative memory.

    Science.gov (United States)

    Preziosi, Melissa A; Coane, Jennifer H

    2017-01-01

    According to sound symbolism theory, individual sounds or clusters of sounds can convey meaning. To examine the role of sound symbolic effects on processing and memory for nonwords, we developed a novel set of 100 nonwords to convey largeness (nonwords containing plosive consonants and back vowels) and smallness (nonwords containing fricative consonants and front vowels). In Experiments 1A and 1B, participants rated the size of the 100 nonwords and provided definitions to them as if they were products. Nonwords composed of fricative/front vowels were rated as smaller than those composed of plosive/back vowels. In Experiment 2, participants studied sound symbolic congruent and incongruent nonword and participant-generated definition pairings. Definitions paired with nonwords that matched the size and participant-generated meanings were recalled better than those that did not match. When the participant-generated definitions were re-paired with other nonwords, this mnemonic advantage was reduced, although still reliable. In a final free association study, the possibility that plosive/back vowel and fricative/front vowel nonwords elicit sound symbolic size effects due to mediation from word neighbors was ruled out. Together, these results suggest that definitions that are sound symbolically congruent with a nonword are more memorable than incongruent definition-nonword pairings. This work has implications for the creation of brand names and how to create brand names that not only convey desired product characteristics, but also are memorable for consumers.

  12. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with sound? This paper addresses the problem of how to access the complexity of sound and how to make textile material reveal the form...... geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form....

  13. A Series of Case Studies of Tinnitus Suppression With Mixed Background Stimuli in a Cochlear Implant

    Science.gov (United States)

    Keiner, A. J.; Walker, Kurt; Deshpande, Aniruddha K.; Witt, Shelley; Killian, Matthijs; Ji, Helena; Patrick, Jim; Dillier, Norbert; van Dijk, Pim; Lai, Wai Kong; Hansen, Marlan R.; Gantz, Bruce

    2015-01-01

    Purpose: Background sounds provided by a wearable sound playback device were mixed with the acoustical input picked up by a cochlear implant speech processor in an attempt to suppress tinnitus. Method: First, patients were allowed to listen to several sounds and to select up to 4 sounds that they thought might be effective. These stimuli were programmed to loop continuously in the wearable playback device. Second, subjects were instructed to use 1 background sound each day on the wearable device, and they sequenced the selected background sounds during a 28-day trial. Patients were instructed to go to a website at the end of each day and rate the loudness and annoyance of the tinnitus as well as the acceptability of the background sound. Patients completed the Tinnitus Primary Function Questionnaire (Tyler, Stocking, Secor, & Slattery, 2014) at the beginning of the trial. Results: Results indicated that background sounds were very effective at suppressing tinnitus. There was considerable variability in sounds preferred by the subjects. Conclusion: The study shows that a background sound mixed with the microphone input can be effective for suppressing tinnitus during daily use of the sound processor in selected cochlear implant users. PMID:26001407
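
    The mixing step described above (a looping background stimulus added to the microphone input of the speech processor) can be sketched as follows; the gain value, looping strategy, and function names are illustrative assumptions, not the actual device firmware.

```python
import numpy as np

def mix_background(mic, background, background_gain=0.3):
    """Mix a looping background stimulus into the microphone signal
    (illustrative stand-in for the wearable playback device described above)."""
    looped = np.resize(background, mic.shape)    # repeat background to match length
    mixed = mic + background_gain * looped
    return np.clip(mixed, -1.0, 1.0)             # keep within full-scale range

# Example with synthetic signals sampled at 16 kHz
fs = 16000
mic = 0.1 * np.random.randn(fs * 5)                          # 5 s of microphone input
background = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)    # 1 s tone, looped
out = mix_background(mic, background)
```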

  14. Responses to playback of different subspecies songs in the Reed Bunting (Emberiza s. schoeniclus)

    DEFF Research Database (Denmark)

    Matessi, Giuliano; Dabelsteen, Torben; Pilastro, A.

    2000-01-01

    Populations of Reed Buntings Emberiza schoeniclus in the western Palearctic are classified in two major subspecies groups according to morphology: northern migratory schoeniclus and Mediterranean resident intermedia. Songs of the two groups differ mainly in complexity and syllable structure......, with intermedia songs being more complex. We explored the possibilities of song as a subspecies isolating mechanism by testing if male schoeniclus Reed Buntings reacted differently to field playbacks of songs from their own subspecies group, from the foreign subspecies group and from a control species...

  15. A 'special effort' to provide improved sounding and cloud-motion wind data for FGGE. [First GARP Global Experiment

    Science.gov (United States)

    Greaves, J. R.; Dimego, G.; Smith, W. L.; Suomi, V. E.

    1979-01-01

    Enhancement and editing of high-density cloud motion wind assessments and research satellite soundings have been necessary to improve the quality of data used in The Global Weather Experiment. Editing operations are conducted by a man-computer interactive data access system. Editing will focus on such inputs as non-US satellite data, NOAA operational sounding and wind data sets, wind data from the Indian Ocean satellite, dropwindsonde data, and tropical mesoscale wind data. Improved techniques for deriving cloud heights and higher resolution sounding in meteorologically active areas are principal parts of the data enhancement program.

  16. Three integrated photovoltaic/sound barrier power plants. Construction and operational experience

    International Nuclear Information System (INIS)

    Nordmann, T.; Froelich, A.; Clavadetscher, L.

    2002-01-01

    After an international ideas competition by TNC Switzerland and Germany in 1996, six companies were given the opportunity to construct a prototype of their newly developed integrated PV sound-barrier concepts. The main goal was to develop highly integrated concepts, allowing the reduction of PV sound-barrier system costs, as well as the demonstration of specific concepts for different noise situations. This project is closely linked with a German project. Three of the concepts from the competition are demonstrated along a highway near Munich, constructed in 1997. The three Swiss installations had to be constructed at different locations, reflecting three typical situations for sound barriers. The first Swiss installation was the world's first bi-facial PV sound barrier. It was built on a highway bridge at Wallisellen-Aubrugg in 1997. The operational experience of the installation is positive, but due to the different efficiencies of the two cell sides, its specific yield lies somewhat behind that of a conventional PV installation. The second Swiss plant was finished in autumn 1998. The 'zig-zag' construction is situated along the railway line at Wallisellen in a densely inhabited area with some local shadowing. Its performance and its specific yield are comparatively low due to a combination of several reasons (geometry of the concept, inverter, high module temperature, local shadows). The third installation was constructed along the motorway A1 at Bruettisellen in 1999. Its vertical panels are equipped with amorphous modules. The report shows that the performance of the system is reasonable, but the mechanical construction has to be improved. A small trial field with cells directly laminated onto the steel panel, also installed at Bruettisellen, could be the key development for this concept. This final report includes the evaluation and comparison of the monitored data from the past 24 months of operation. (author)

  17. Evaluation of a Hear-through device

    DEFF Research Database (Denmark)

    Møller, Anders Kalsgaard; Hoffmann, Pablo F.; Christensen, Flemming

    2014-01-01

    In the transportable communication platforms available today, such as mobile phones, audio guides etc., earphones are frequently used. Earphones will to some degree block the ear-canals and dim the sounds from the listener's surroundings. By mounting microphones on the outside of the earphones, and simultaneously recording and playing back the sound, the natural sound reception of the open ear can be recovered. If the sound pressure at both eardrums is correctly reproduced, then the complete auditory experience is preserved. A device capable of reproducing the sound has been developed and is referred to as the Hear-through device. Due to practical limitations, such as the size of the earphones and microphones, it is not possible to record the sound in the ideal position – typically the ear canal entrance. This misplacement of the microphone will introduce small deviations in the reproduced sound compared to the natural...

  18. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related...... to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy...... of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences....

  19. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus).

    Directory of Open Access Journals (Sweden)

    Mary Flaherty

    The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable-initial stop consonants, and whether this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with "d" or "t" and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal.

  20. Game Sound from Behind the Sofa

    DEFF Research Database (Denmark)

    Garner, Tom Alexander

    2013-01-01

    The central concern of this thesis is the processes by which human beings perceive sound and experience emotions within a computer video gameplay context. The potential of quantitative sound parameters to evoke and modulate emotional experience is explored, working towards the development...... that provide additional support of the hypothetical frameworks: an ecological process of fear, a fear-related model of virtual and real acoustic ecologies, and an embodied virtual acoustic ecology framework. It is intended that this thesis will clearly support more effective and efficient sound design...... practices and also improve awareness of the capacity of sound to generate significant emotional experiences during computer video gameplay. It is further hoped that this thesis will elucidate the potential of biometrics/psychophysiology to allow game designers to better understand the player and to move...

  1. Experiments on second-sound shock waves in superfluid helium

    International Nuclear Information System (INIS)

    Cummings, J.C.; Schmidt, D.W.; Wagner, W.J.

    1978-01-01

    The waveform and velocity of second-sound waves in superfluid helium have been studied experimentally using superconducting, thin-film probes. The second-sound waves were generated with electrical pulses through a resistive film. Variations in pulse power, pulse duration, and bath temperature were examined. As predicted theoretically, the formation of a shock was observed at the leading or trailing edge of the waves depending on bath temperature. Breakdown of the theoretical model was observed for large pulse powers. Accurate data for the acoustic second-sound speed were derived from the measurements of shock-wave velocities and are compared with previous results

  2. Ormia ochracea as a Model Organism in Sound Localization Experiments and in Inventing Hearing Aids.

    Directory of Open Access Journals (Sweden)

    - -

    1998-09-01

    Hearing aid prescription for patients with hearing loss has always been one of the main concerns of audiologists. Thanks to technology, hearing aids have been provided with digital and computerized systems, which has improved the quality of the sound they deliver. Yet we can also learn from nature when inventing such instruments, as in the current article, which is devoted to a particular kind of fly. Ormia ochracea is a small yellow nocturnal fly, a parasitoid of crickets. It is notable for its exceptionally acute directional hearing. In the current article we will discuss how it has become a model organism in sound localization experiments and in inventing hearing aids.

  3. Sound For Animation And Virtual Reality

    Science.gov (United States)

    Hahn, James K.; Docter, Pete; Foster, Scott H.; Mangini, Mark; Myers, Tom; Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)

    1995-01-01

    Sound is an integral part of the experience in computer animation and virtual reality. In this course, we will present some of the important technical issues in sound modeling, rendering, and synchronization as well as the "art" and business of sound that are being applied in animations, feature films, and virtual reality. The central theme is to bring leading researchers and practitioners from various disciplines to share their experiences in this interdisciplinary field. The course will give the participants an understanding of the problems and techniques involved in producing and synchronizing sounds, sound effects, dialogue, and music. The problem spans a number of domains including computer animation and virtual reality. Since sound has been an integral part of animations and films much longer than for computer-related domains, we have much to learn from traditional animation and film production. By bringing leading researchers and practitioners from a wide variety of disciplines, the course seeks to give the audience a rich mixture of experiences. It is expected that the audience will be able to apply what they have learned from this course in their research or production.

  4. An experiment towards characterizing seahorse sound in a laboratory controlled environment

    Digital Repository Service at National Institute of Oceanography (India)

    Saran, A.K.; Sreepada, R.A.; Chakraborty, B.; Fernandes, W.A.; Srivastava, R.; Kuncolienker, D.S.; Gawde, G.

    There are many reports of sounds produced by seahorses (Hippocampus); however, little is known about the mechanism of sound production. Here, we investigate sounds produced by the seahorse during feeding. We attempt to understand and analyze...

  5. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    During the recording of lung sound (LS) signals from the chest wall of a subject, there is always a heart sound (HS) signal interfering with it. This obscures the features of the lung sound signals and creates confusion about the pathological state, if any, of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference from the desired lung sound signals. In this method, the mixed signal is split into several components. Some of these components contain larger proportions of interfering signals like heart sound, environmental noise etc. and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart sound and lung sound. The proposed method is found to be superior in terms of time domain, frequency domain, and time-frequency domain representations, and also in a listening test performed by a pulmonologist.
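
    As a rough sketch of the approach, the following assumes the third-party PyEMD package and a simple dominant-frequency rule for deciding which intrinsic mode functions (IMFs) carry heart-sound interference; the paper's actual selection criterion may differ.

```python
import numpy as np
from PyEMD import EMD   # third-party package, assumed to be installed

def reduce_heart_sound(mixed, fs, hs_band=(20.0, 150.0)):
    """Decompose the mixed recording into IMFs and discard those whose dominant
    frequency falls in a typical heart-sound band; sum the rest as the lung sound.
    Only a sketch of the idea, not the selection rule used in the paper."""
    imfs = EMD()(mixed)
    kept = []
    for imf in imfs:
        spectrum = np.abs(np.fft.rfft(imf))
        freqs = np.fft.rfftfreq(len(imf), d=1.0 / fs)
        dominant = freqs[np.argmax(spectrum)]
        if not (hs_band[0] <= dominant <= hs_band[1]):
            kept.append(imf)             # treated as a lung-sound component
    return np.sum(kept, axis=0) if kept else np.zeros_like(mixed)
```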

  6. In situ mortality experiments with juvenile sea bass (Dicentrarchus labrax) in relation to impulsive sound levels caused by pile driving of windmill foundations.

    Directory of Open Access Journals (Sweden)

    Elisabeth Debusschere

    Impact assessments of offshore wind farm installations and operations on the marine fauna are performed in many countries. Yet, only limited quantitative data are available on the physiological impact of impulsive sounds on (juvenile) fishes during pile driving of offshore wind farm foundations. Our current knowledge on fish injury and mortality due to pile driving is mainly based on laboratory experiments, in which high-intensity pile driving sounds are generated inside acoustic chambers. To validate these lab results, an in situ field experiment was carried out on board a pile-driving vessel. Juvenile European sea bass (Dicentrarchus labrax) of 68 and 115 days post hatching were exposed to pile-driving sounds as close as 45 m from the actual pile driving activity. Fish were exposed to strikes with a sound exposure level between 181 and 188 dB re 1 µPa².s. The number of strikes ranged from 1739 to 3067, resulting in a cumulative sound exposure level between 215 and 222 dB re 1 µPa².s. Control treatments consisted of fish not exposed to pile driving sounds. No differences in immediate mortality were found between exposed and control fish groups. Also, no differences were noted in the delayed mortality up to 14 days after exposure between both groups. Our in situ experiments largely confirm the mortality results of the lab experiments found in other studies.
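
    The cumulative sound exposure level reported above follows from summing the per-strike exposures on an energy (linear) scale, SEL_cum = 10·log10(Σ 10^(SEL_i/10)). A small worked example with hypothetical numbers inside the reported ranges:

```python
import numpy as np

def cumulative_sel(per_strike_sels):
    """Cumulative SEL (dB re 1 uPa^2.s) from per-strike SELs by energy summation."""
    sels = np.asarray(per_strike_sels, dtype=float)
    return 10.0 * np.log10(np.sum(10.0 ** (sels / 10.0)))

# Hypothetical illustration: 2500 identical strikes of 185 dB re 1 uPa^2.s
print(cumulative_sel([185.0] * 2500))   # ~219 dB, within the range reported above
```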

  7. Brain-based decoding of mentally imagined film clips and sounds reveals experience-based information patterns in film professionals.

    Science.gov (United States)

    de Borst, Aline W; Valente, Giancarlo; Jääskeläinen, Iiro P; Tikka, Pia

    2016-04-01

    In the perceptual domain, it has been shown that the human brain is strongly shaped through experience, leading to expertise in highly-skilled professionals. What has remained unclear is whether specialization also shapes brain networks underlying mental imagery. In our fMRI study, we aimed to uncover modality-specific mental imagery specialization of film experts. Using multi-voxel pattern analysis we decoded from brain activity of professional cinematographers and sound designers whether they were imagining sounds or images of particular film clips. In each expert group distinct multi-voxel patterns, specific for the modality of their expertise, were found during classification of imagery modality. These patterns were mainly localized in the occipito-temporal and parietal cortex for cinematographers and in the auditory cortex for sound designers. We also found generalized patterns across perception and imagery that were distinct for the two expert groups: they involved frontal cortex for the cinematographers and temporal cortex for the sound designers. Notably, the mental representations of film clips and sounds of cinematographers contained information that went beyond modality-specificity. We were able to successfully decode the implicit presence of film genre from brain activity during mental imagery in cinematographers. The results extend existing neuroimaging literature on expertise into the domain of mental imagery and show that experience in visual versus auditory imagery can alter the representation of information in modality-specific association cortices. Copyright © 2016 Elsevier Inc. All rights reserved.
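
    The record does not detail the multi-voxel pattern analysis; purely for orientation, a generic cross-validated decoding of imagery modality from voxel patterns could be sketched with scikit-learn as below (synthetic data and a linear SVM, not the authors' pipeline).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for trial-wise activation patterns: trials x voxels,
# labels 0 = imagined sound, 1 = imagined image (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 500))
y = np.repeat([0, 1], 40)
X[y == 1, :10] += 0.8                      # inject a weak "modality" signal

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated decoding accuracy
print(scores.mean())
```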

  8. Design and evaluation of a higher-order spherical microphone/ambisonic sound reproduction system for the acoustical assessment of concert halls

    Science.gov (United States)

    Clapp, Samuel W.

    Previous studies of the perception of concert hall acoustics have generally employed two methods for soliciting listeners' judgments. One method is to have listeners rate the sound in a hall while physically present in that hall. The other method is to make recordings of different halls and seat positions, and then recreate the environment for listeners in a laboratory setting via loudspeakers or headphones. In situ evaluations offer a completely faithful rendering of all aspects of the concert hall experience. However, many variables cannot be controlled and the short duration of auditory memory precludes an objective comparison of different spaces. Simulation studies allow for more control over various aspects of the evaluations, as well as A/B comparisons of different halls and seat positions. The drawback is that all simulation methods suffer from limitations in the accuracy of reproduction. If the accuracy of the simulation system is improved, then the advantages of the simulation method can be retained, while mitigating its disadvantages. Spherical microphone array technology has received growing interest in the acoustics community in recent years for many applications including beamforming, source localization, and other forms of three-dimensional sound field analysis. These arrays can decompose a measured sound field into its spherical harmonic components, the spherical harmonics being a set of spatial basis functions on the sphere that are derived from solving the wave equation in spherical coordinates. Ambisonics is a system for two- and three-dimensional spatialized sound that is based on recreating a sound field from its spherical harmonic components. Because of these shared mathematical underpinnings, ambisonics provides a natural way to present fully spatialized renderings of recordings made with a spherical microphone array. Many of the previously studied applications of spherical microphone arrays have used a narrow frequency range where the array
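
    As background to the spherical-harmonic decomposition discussed above, a first-order (B-format) ambisonic encoding of a mono source can be sketched as follows; the traditional W-channel scaling of 1/sqrt(2) is assumed here, and other normalization conventions exist.

```python
import numpy as np

def encode_first_order(signal, azimuth, elevation):
    """Encode a mono signal into first-order B-format (W, X, Y, Z) for a plane
    wave arriving from (azimuth, elevation), angles in radians."""
    w = signal / np.sqrt(2.0)
    x = signal * np.cos(azimuth) * np.cos(elevation)
    y = signal * np.sin(azimuth) * np.cos(elevation)
    z = signal * np.sin(elevation)
    return np.stack([w, x, y, z])

# Example: 1 kHz tone arriving from 45 degrees to the left, at ear level
fs = 48000
t = np.arange(fs) / fs
b_format = encode_first_order(np.sin(2 * np.pi * 1000 * t), np.deg2rad(45), 0.0)
```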

  9. Sound intensity as a function of sound insulation partition

    OpenAIRE

    Cvetkovic , S.; Prascevic , R.

    1994-01-01

    In modern engineering practice, the sound insulation of partitions is a synthesis of theory and of experience acquired through field and laboratory measurements. The scientific and research community treats sound insulation in the context of the emission and propagation of acoustic energy in media with different acoustic impedances. In this paper, starting from the essence of the physical concept of intensity as an energy vector, the authors g...

  10. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound......-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound...... from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  11. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals) and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter...... is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many...... of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...

  12. Audio-visual interactions in product sound design

    NARCIS (Netherlands)

    Özcan, E.; Van Egmond, R.

    2010-01-01

    Consistent product experience requires congruity between product properties such as visual appearance and sound. Therefore, for designing appropriate product sounds by manipulating their spectral-temporal structure, product sounds should preferably not be considered in isolation but as an integral

  13. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which 'acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor γ, with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
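
    For reference, the Lorentz factor invoked here takes its usual special-relativistic form, with the speed of light replaced by the speed of sound c of the medium:

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad c \equiv \text{speed of sound in the medium}
```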

  14. Specially Designed Sound-Boxes Used by Students to Perform School-Lab Sensor–Based Experiments, to Understand Sound Phenomena

    Directory of Open Access Journals (Sweden)

    Stefanos Parskeuopoulos

    2011-02-01

    The research presented herein investigates and records students' perceptions of sound phenomena and their improvement during a specialised laboratory practice utilizing ICT and a simple experimental apparatus especially designed for teaching. This school-lab apparatus and its operation are also described herein. A total of 71 first- and second-grade vocational-school students, aged 16 to 20, participated in the research. These were divided into groups of 4-5 students, each of which worked for 6 hours in order to complete all assigned activities. Data collection was carried out through personal interviews as well as questionnaires which were distributed before and after the instructive intervention. The results show that students' active involvement with the simple teaching apparatus, through which the effects of sound waves are visible, helps them comprehend sound phenomena. It also altered considerably their initial misconceptions about sound propagation. The results are presented diagrammatically herein, while some important observations are made relating to the teaching and learning of scientific concepts concerning sound.

  15. Parental Beliefs and Experiences Regarding Involvement in Intervention for Their Child with Speech Sound Disorder

    Science.gov (United States)

    Watts Pappas, Nicole; McAllister, Lindy; McLeod, Sharynne

    2016-01-01

    Parental beliefs and experiences regarding involvement in speech intervention for their child with mild to moderate speech sound disorder (SSD) were explored using multiple, sequential interviews conducted during a course of treatment. Twenty-one interviews were conducted with seven parents of six children with SSD: (1) after their child's initial…

  16. Evaluative conditioning induces changes in sound valence

    Directory of Open Access Journals (Sweden)

    Anna C. Bolders

    2012-04-01

    Evaluative Conditioning (EC) has hardly been tested in the auditory domain, but it is a potentially valuable research tool. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruence effects on an affective priming task (APT) for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US or whether extinction occurs. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results warrant the use of EC to study processing of short environmental sounds with acquired valence, even if this requires repeated stimulus presentations. This paves the way for studying processing of affective environmental sounds while effectively controlling low-level stimulus properties.

  17. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

    The question whether there is a natural connection between sound and meaning or if they are related only by convention has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming more and more clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441 For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.

  18. Musical Sounds, Motor Resonance, and Detectable Agency

    Directory of Open Access Journals (Sweden)

    Jacques Launay

    2015-09-01

    This paper discusses the paradox that while human music making evolved and spread in an environment where it could only occur in groups, it is now often apparently an enjoyable asocial phenomenon. Here I argue that music is, by definition, sound that we believe has been in some way organized by a human agent, meaning that listening to any musical sounds can be a social experience. There are a number of distinct mechanisms by which we might associate musical sound with agency. While some of these mechanisms involve learning motor associations with that sound, it is also possible to have a more direct relationship from musical sound to agency, and the relative importance of these potentially independent mechanisms should be further explored. Overall, I conclude that the apparent paradox of solipsistic musical engagement is in fact unproblematic, because the way that we perceive and experience musical sounds is inherently social.

  19. Active Noise Control Experiments using Sound Energy Flux

    Science.gov (United States)

    Krause, Uli

    2015-03-01

    This paper reports on the latest results concerning the active noise control approach using net flow of acoustic energy. The test set-up consists of two loudspeakers simulating the engine noise and two smaller loudspeakers which belong to the active noise system. The system is completed by two acceleration sensors and one microphone per loudspeaker. The microphones are located in the near sound field of the loudspeakers. The control algorithm including the update equation of the feed-forward controller is introduced. Numerical simulations are performed with a comparison to a state of the art method minimising the radiated sound power. The proposed approach is experimentally validated.
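
    The paper's update equation minimizes the net flow of acoustic energy and is not reproduced in this record; purely for orientation, the generic structure of an adaptive feed-forward update can be sketched as a plain LMS loop (identity secondary path assumed), which is not the authors' method.

```python
import numpy as np

def lms_feedforward(reference, disturbance, n_taps=64, mu=1e-3):
    """Plain LMS feed-forward controller minimizing the squared residual.
    Illustrative only; the paper uses a sound-energy-flow cost instead."""
    w = np.zeros(n_taps)                   # controller coefficients
    x_buf = np.zeros(n_taps)               # reference-signal history
    errors = np.zeros(len(reference))
    for n in range(len(reference)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = reference[n]
        y = w @ x_buf                      # anti-noise output sample
        e = disturbance[n] + y             # residual at the error microphone
        w -= mu * e * x_buf                # gradient step on the squared error
        errors[n] = e
    return w, errors

# Example: cancel a tonal disturbance correlated with the reference signal
fs, f0 = 8000, 120
t = np.arange(2 * fs) / fs
w, err = lms_feedforward(np.sin(2 * np.pi * f0 * t),
                         0.8 * np.sin(2 * np.pi * f0 * t + 0.7))
```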

  20. A taste for words and sounds: a case of lexical-gustatory and sound-gustatory synesthesia

    NARCIS (Netherlands)

    Colizoli, O.; Murre, J.M.J.; Rouw, R.

    2013-01-01

    Gustatory forms of synesthesia involve the automatic and consistent experience of tastes that are triggered by non-taste related inducers. We present a case of lexical-gustatory and sound-gustatory synesthesia within one individual, SC. Most words and a subset of non-linguistic sounds induce the

  1. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is experimentally studied by using the time-difference method. It is found that the sound velocities increase with increasing bubble diameter, and asymptotically approach the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to the scattering of the bubble walls is equivalently described as the effect of an additional length. This simplification reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that an increase in frequency markedly slows down the sound velocity, whereas the latter does not display a strong dependence on the solution concentration.
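
    The time-difference method mentioned above amounts to dividing a known propagation distance by the measured arrival-time difference between two sensors. A minimal sketch, assuming the delay is taken from the cross-correlation peak of two digitized signals:

```python
import numpy as np

def sound_speed_time_difference(sig_a, sig_b, spacing_m, fs):
    """Estimate sound speed from the arrival-time difference between two sensors
    a known distance apart, using the cross-correlation peak."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)   # samples by which sig_b lags sig_a
    return spacing_m / (lag / fs)

# Example: synthetic pulse delayed by 50 samples (fs = 100 kHz, sensors 5 cm apart)
fs = 100_000
pulse = np.exp(-np.linspace(-3, 3, 101) ** 2)
sig_a = np.concatenate([pulse, np.zeros(500)])
sig_b = np.concatenate([np.zeros(50), pulse, np.zeros(450)])
print(sound_speed_time_difference(sig_a, sig_b, 0.05, fs))   # ~100 m/s
```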

  2. Heat Transfer by Thermo-capillary Convection -Sounding Rocket COMPERE Experiment SOURCE

    Science.gov (United States)

    Dreyer, Michael; Fuhrmann, Eckart

    The sounding rocket COMPERE experiment SOURCE was successfully flown on MASER 11, launched in Kiruna (ESRANGE) on May 15th, 2008. SOURCE was intended to partly fulfill the scientific objectives of the European Space Agency (ESA) Microgravity Applications Program (MAP) project AO-2004-111 (Convective boiling and condensation). Three parties of principal investigators were involved in designing the experiment set-up: ZARM for thermo-capillary flows, IMFT (Toulouse, France) for boiling studies, and EADS Astrium (Bremen, Germany) for depressurization. The topic of this paper is to study the effect of wall heat flux on the contact line of the free liquid surface and to obtain a correlation for a convective heat transfer coefficient. The experiment was conducted along a predefined timeline. A preheating sequence on the ground was the first operation, to achieve a well-defined temperature evolution within the test cell and its environment inside the rocket. Nearly one minute after launch, the pressurized test cell was filled with the test liquid HFE-7000 until a certain fill level was reached. Then the free surface could be observed for 120 s without distortion. Afterwards, the first depressurization was started to induce subcooled boiling, and the second one to start saturated boiling. The data from the flight consist of video images and temperature measurements in the liquid, the solid, and the gaseous phase. Data analysis provides the surface shape versus time and the corresponding apparent contact angle. Computational analysis provides information for the determination of the heat transfer coefficient in a compensated-gravity environment, where a flow is caused by the temperature difference between the hot wall and the cold liquid. The paper will deliver correlations for the effective contact angle and the heat transfer coefficient as a function of the relevant dimensionless parameters, as well as physical explanations for the observed behavior. The data will be used

  3. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    Science.gov (United States)

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-11-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.

  4. Controlling sound with acoustic metamaterials

    DEFF Research Database (Denmark)

    Cummer, Steven A. ; Christensen, Johan; Alù, Andrea

    2016-01-01

    Acoustic metamaterials can manipulate and control sound waves in ways that are not possible in conventional materials. Metamaterials with zero, or even negative, refractive index for sound offer new possibilities for acoustic imaging and for the control of sound at subwavelength scales....... The combination of transformation acoustics theory and highly anisotropic acoustic metamaterials enables precise control over the deformation of sound fields, which can be used, for example, to hide or cloak objects from incident acoustic energy. Active acoustic metamaterials use external control to create......-scale metamaterial structures and converting laboratory experiments into useful devices. In this Review, we outline the designs and properties of materials with unusual acoustic parameters (for example, negative refractive index), discuss examples of extreme manipulation of sound and, finally, provide an overview...

  5. Dynamic compression and sound quality of music

    NARCIS (Netherlands)

    Lieshout, van R.A.J.M.; Wagenaars, W.M.; Houtsma, A.J.M.; Stikvoort, E.F.

    1984-01-01

    Amplitude compression is often used to match the dynamic range of music to a particular playback situation in order to ensure, e.g., continuous audibility in a noisy environment or unobtrusiveness if the music is intended as a quiet background. Since amplitude compression is a nonlinear process,
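
    As a minimal illustration of the amplitude compression discussed above (not the processing used in the study), a static downward compressor can be sketched as follows; real dynamics processors add attack and release smoothing.

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0):
    """Static downward compressor: levels above the threshold are reduced by
    the given ratio (sample-by-sample, no attack/release time constants)."""
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(signal) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return signal * 10.0 ** (gain_db / 20.0)
```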

  6. The Early Years: Becoming Attuned to Sound

    Science.gov (United States)

    Ashbrook, Peggy

    2014-01-01

    Exploration of making and changing sounds is part of the first-grade performance expectation 1-PS4-1, "Plan and conduct investigations to provide evidence that vibrating materials can make sound and that sound can make materials vibrate" (NGSS Lead States 2013, p. 10; see Internet Resource). Early learning experiences build toward…

  7. Responses of male sperm whales (Physeter macrocephalus) to killer whale sounds: implications for anti-predator strategies.

    Science.gov (United States)

    Curé, Charlotte; Antunes, Ricardo; Alves, Ana Catarina; Visser, Fleur; Kvadsheim, Petter H; Miller, Patrick J O

    2013-01-01

    Interactions between individuals of different cetacean species are often observed in the wild. Killer whales (Orcinus orca) can be potential predators of many other cetaceans, and the interception of their vocalizations by unintended cetacean receivers may trigger anti-predator behavior that could mediate predator-prey interactions. We explored the anti-predator behaviour of five typically-solitary male sperm whales (Physeter macrocephalus) in the Norwegian Sea by playing sounds of mammal-feeding killer whales and monitoring behavioural responses using multi-sensor tags. Our results suggest that, rather than taking advantage of their large aerobic capacities to dive away from the perceived predator, sperm whales responded to killer whale playbacks by interrupting their foraging or resting dives and returning to the surface, changing their vocal production, and initiating a surprising degree of social behaviour in these mostly solitary animals. Thus, the interception of predator vocalizations by male sperm whales disrupted functional behaviours and mediated previously unrecognized anti-predator responses.

  8. Neuroplasticity beyond sounds

    DEFF Research Database (Denmark)

    Reybrouck, Mark; Brattico, Elvira

    2015-01-01

    Capitalizing from neuroscience knowledge on how individuals are affected by the sound environment, we propose to adopt a cybernetic and ecological point of view on the musical aesthetic experience, which includes subprocesses, such as feature extraction and integration, early affective reactions...... and motor actions, style mastering and conceptualization, emotion and proprioception, evaluation and preference. In this perspective, the role of the listener/composer/performer is seen as that of an active "agent" coping in highly individual ways with the sounds. The findings concerning the neural...

  9. Cross-Modal Associations between Sounds and Drink Tastes/Textures: A Study with Spontaneous Production of Sound-Symbolic Words.

    Science.gov (United States)

    Sakamoto, Maki; Watanabe, Junji

    2016-03-01

    Many languages have a word class whose speech sounds are linked to sensory experiences. Several recent studies have demonstrated cross-modal associations (or correspondences) between sounds and gustatory sensations by asking participants to match predefined sound-symbolic words (e.g., "maluma/takete") with the taste/texture of foods. Here, we further explore cross-modal associations using the spontaneous production of words and semantic ratings of sensations. In the experiment, after drinking liquids, participants were asked to express their taste/texture using Japanese sound-symbolic words, and at the same time, to evaluate it in terms of criteria expressed by adjectives. Because the Japanese language has a large vocabulary of sound-symbolic words, and Japanese people frequently use them to describe taste/texture, analyzing a variety of Japanese sound-symbolic words spontaneously produced to express taste/textures might enable us to explore the mechanism of taste/texture categorization. A hierarchical cluster analysis based on the relationship between linguistic sounds and taste/texture evaluations revealed the structure of sensation categories. The results indicate that an emotional evaluation like pleasant/unpleasant is the primary cluster in gustation. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Letter-Sound Knowledge: Exploring Gender Differences in Children When They Start School Regarding Knowledge of Large Letters, Small Letters, Sound Large Letters, and Sound Small Letters

    Directory of Open Access Journals (Sweden)

    Hermundur Sigmundsson

    2017-09-01

    This study explored whether there is a gender difference in letter-sound knowledge when children start school. 485 children aged 5–6 years completed an assessment of letter-sound knowledge, i.e., large letters; sound of large letters; small letters; sound of small letters. The findings indicate a significant difference between girls and boys in all four factors tested in this study, in favor of the girls. There are still no clear explanations for the basis of a presumed gender difference in letter-sound knowledge. That the findings have their origin in neuro-biological factors cannot be excluded; however, the fact that girls have probably been exposed to more language experience/stimulation compared to boys lends support to explanations derived from environmental aspects.

  11. Do Père David's deer lose memories of their ancestral predators?

    Directory of Open Access Journals (Sweden)

    Chunwang Li

    Whether prey retains antipredator behavior after a long period of predator relaxation is an important question in predator-prey evolution. Père David's deer have been raised in enclosures for more than 1200 years, and this isolation provides an opportunity to study whether Père David's deer still respond to the cues of their ancestral predators or to novel predators. We played back the sounds of crows (familiar sound) and domestic dogs (familiar non-predators), of tigers and wolves (ancestral predators), and of lions (potential naïve predator) to Père David's deer in paddocks, and blank sounds to the control group, and videoed the behavior of the deer during the experiment. We also showed life-size photo models of dog, leopard, bear, tiger, wolf, and lion to the deer and videotaped their responses after seeing these models. Père David's deer stared at and approached the hidden loudspeaker when they heard the roars of tiger or lion. The deer listened to tiger roars longer, approached the loudspeaker more in response to tiger roars, and spent more time staring at the tiger model. The stags were also found to forage less in the trials with tiger roars than in trials with other sound playbacks. Additionally, it took the deer longer to resume their normal behavior after hearing tiger roars than after the other sound playbacks. Moreover, the deer were only found to walk away after hearing the sounds of tiger and wolf. Therefore, the tiger was probably the main predator of Père David's deer in ancient times. Our study implies that Père David's deer still retain memories of the acoustic and visual cues of their ancestral predators in spite of the long-term isolation from their natural habitat.

  12. Engagement and EMG in serious gaming : Experimenting with sound and dynamics in the levee patroller training game

    NARCIS (Netherlands)

    Schuurink, E.L.; Houtkamp, J.; Toet, A.

    2008-01-01

    We measured the effects of sound and visual dynamic elements on user experience of a serious game, with special interest in engagement and arousal. Engagement was measured through questionnaires and arousal through the SAM and electromyography (EMG). We adopted the EMG of the corrugator (frown

  13. Bubbles that Change the Speed of Sound

    Science.gov (United States)

    Planinšič, Gorazd; Etkina, Eugenia

    2012-11-01

    The influence of bubbles on sound has long attracted the attention of physicists. In his 1920 book Sir William Bragg described sound absorption caused by foam in a glass of beer tapped by a spoon. Frank S. Crawford described and analyzed the change in the pitch of sound in a similar experiment and named the phenomenon the "hot chocolate effect."2 In this paper we describe a simple and robust experiment that allows an easy audio and visual demonstration of the same effect (unfortunately without the chocolate) and offers several possibilities for student investigations. In addition to the demonstration of the above effect, the experiments described below provide an excellent opportunity for students to devise and test explanations with simple equipment.

  14. Review of sound card photogates

    International Nuclear Information System (INIS)

    Gingl, Zoltan; Mingesz, Robert; Mellar, Janos; Makra, Peter

    2011-01-01

    Photogates are probably the most commonly used electronic instruments to aid experiments in the field of mechanics. Although they are offered by many manufacturers, they can be too expensive to be widely used in all classrooms, in multiple experiments or even for home experimentation. Today all computers have a sound card - an interface for analogue signals. It is possible to make very simple yet highly accurate photogates for cents, while much more sophisticated solutions are also available at a still very low cost. In our paper we show several experimentally tested ways of implementing sound card photogates in detail, and we also provide full-featured, free, open-source photogate software as a much more efficient experimentation tool than the sound recording programs usually used. Further information is provided on a dedicated web page, www.noise.physx.u-szeged.hu/edudev.
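
    The full-featured photogate software referred to above is not reproduced here; as a minimal sketch of the underlying idea, the gate signal recorded on a sound-card input can be thresholded to obtain event times, assuming the recording is already available as a NumPy array.

```python
import numpy as np

def gate_times(signal, fs, threshold=0.5):
    """Times (s) at which the recorded photogate signal crosses the threshold
    upward, i.e., when the light beam is interrupted."""
    above = signal > threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return crossings / fs

# Example with a synthetic 1 s recording at 44.1 kHz containing two interruptions
fs = 44100
sig = np.zeros(fs)
sig[10000:12000] = 1.0
sig[30000:31000] = 1.0
t = gate_times(sig, fs)
print(t, np.diff(t))     # event times and the interval between them
```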

  15. An Undergraduate Experiment for the Measurement of the Speed of Sound in Air: Phenomena and Discussion

    Science.gov (United States)

    Yang, Hujiang; Zhao, Xiaohong; Wang, Xin; Xiao, Jinghua

    2012-01-01

    In this paper, we present and discuss some phenomena in an undergraduate experiment for the measurement of the speed of sound in air. A square wave becomes distorted when it drives a piezoelectric transducer. Moreover, the amplitude of the received signal varies with the driving frequency. By comparison with the Gibbs phenomenon, these phenomena can be…
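
    For reference, the textbook Fourier series of an ideal square wave of frequency \(f\) (standard theory, not a result from the paper) shows why a band-limited, resonant transducer reshapes the waveform, and why a truncated harmonic sum exhibits the Gibbs overshoot near the edges:

        \[ x(t) \;=\; \frac{4}{\pi}\sum_{k=0}^{\infty}\frac{\sin\!\big((2k+1)\,2\pi f t\big)}{2k+1} \]

    A transducer driven near resonance passes some of these odd harmonics far more strongly than others, which is consistent with the received amplitude varying with the driving frequency.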

  16. Bubbles That Change the Speed of Sound

    Science.gov (United States)

    Planinsic, Gorazd; Etkina, Eugenia

    2012-01-01

    The influence of bubbles on sound has long attracted the attention of physicists. In his 1920 book Sir William Bragg described sound absorption caused by foam in a glass of beer tapped by a spoon. Frank S. Crawford described and analyzed the change in the pitch of sound in a similar experiment and named the phenomenon the "hot chocolate effect."…

  17. A taste for words and sounds: a case of lexical-gustatory and sound-gustatory synesthesia

    Directory of Open Access Journals (Sweden)

    Olympia eColizoli

    2013-10-01

    Full Text Available Gustatory forms of synesthesia involve the automatic and consistent experience of tastes that are triggered by non-taste related inducers. We present a case of lexical-gustatory and sound-gustatory synesthesia within one individual, SC. Most words and a subset of nonlinguistic sounds induce the experience of taste, smell and physical sensations for SC. SC’s lexical-gustatory associations were significantly more consistent than those of a group of controls. We tested for effects of presentation modality (visual vs. auditory), taste-related congruency, and synesthetic inducer-concurrent direction using a priming task. SC’s performance did not differ significantly from that of a trained control group. We used functional magnetic resonance imaging to investigate the neural correlates of SC’s synesthetic experiences by comparing her brain activation to the literature on brain networks related to language, music and sound processing, in addition to synesthesia. Words that induced a strong taste were contrasted to words that induced weak-to-no tastes (tasty vs. tasteless words). Brain activation was also measured during passive listening to music and environmental sounds. Brain activation patterns showed evidence that two regions are implicated in SC’s synesthetic experience of taste and smell: the left anterior insula and left superior parietal lobe. Anterior insula activation may reflect the synesthetic taste experience. The superior parietal lobe is proposed to be involved in binding sensory information across sub-types of synesthetes. We conclude that SC’s synesthesia is genuine and reflected in her brain activation. The type of inducer (visual-lexical, auditory-lexical, and non-lexical auditory stimuli) could be differentiated based on patterns of brain activity.

  18. STEGANOGRAPHY USAGE TO CONTROL MULTIMEDIA STREAM

    Directory of Open Access Journals (Sweden)

    Grzegorz Koziel

    2014-03-01

    Full Text Available In this paper, a proposal for a new application of steganography is presented. It is possible to use steganographic techniques to control multimedia stream playback. Special control markers can be embedded in the sound signal, and the player can detect these markers and modify the playback parameters according to the hidden instructions. This solution makes it possible to store user preferences within the audio track, as well as to prepare various versions of the same content at the production level.
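
    One simple way to realize such in-band markers is least-significant-bit (LSB) embedding; the sketch below is a generic illustration under assumed parameters, not the marker scheme used by the author:

        # Generic LSB sketch: hide an 8-bit control code in the LSBs of a few
        # 16-bit PCM samples; the player scans a known offset and reacts to it.
        # Marker value, offset and signal are hypothetical.
        import numpy as np

        MARKER = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.int16)

        def embed_marker(samples, offset, bits):
            out = samples.copy()
            sl = slice(offset, offset + bits.size)
            out[sl] = (out[sl] & ~1) | bits     # overwrite the LSBs with the code
            return out

        def read_marker(samples, offset, n_bits):
            return samples[offset:offset + n_bits] & 1

        track = (np.random.randn(44100) * 1000).astype(np.int16)  # stand-in audio
        stego = embed_marker(track, offset=1000, bits=MARKER)
        assert np.array_equal(read_marker(stego, 1000, MARKER.size), MARKER)

    In a real player the marker would of course have to survive compression and resampling, which is where proper audio steganography (rather than plain LSB coding) comes in.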

  19. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: the modulation was shorter-lived, and its form varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  20. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Throughout several audio and audio-video tests we have compared both Foley and real sounds originated by an identical action. The main purpose was to evaluate if sound effects...

  1. The Perception of Sounds in Phonographic Space

    DEFF Research Database (Denmark)

    Walther-Hansen, Mads

    . The third chapter examines how listeners understand and make sense of phonographic space. In the form of a critique of Pierre Schaeffer and Roger Scruton’s notion of the acousmatic situation, I argue that our experience of recorded music has a twofold focus: the sound-in-itself and the sound’s causality...... the use of metaphors and image schemas in the experience and conceptualisation of phonographic space. With reference to descriptions of recordings by sound engineers, I argue that metaphors are central to our understanding of recorded music. This work is grounded in the tradition of cognitive linguistics......This thesis is about the perception of space in recorded music, with particular reference to stereo recordings of popular music. It explores how sound engineers create imaginary musical environments in which sounds appear to listeners in different ways. It also investigates some of the conditions...

  2. GammaLog Playback 1.0 - mobile gamma ray spectrometry software

    International Nuclear Information System (INIS)

    Watson, R.J.; Smethurst, M.A.

    2011-01-01

    The Geological Survey of Norway (NGU) operates a mobile gamma ray spectrometer system which can be used in nuclear emergency situations to determine the location and type of orphan sources, or the extent and type of fallout contamination. The system consists of a 20 litre (16 litre downward and 4 litre upward looking) RSX-5 NaI detector and spectrometer, and can be mounted in fixed wing aircraft, helicopters, or vans/cars as appropriate. NGU has developed its own data acquisition and analysis software for this system. GammaLog (Smethurst 2005) controls the acquisition, display, and storage of data from the spectrometer, and performs real-time data analysis including estimation of dose rates and fallout concentrations, and separation of geological and anthropogenic components of the signal. The latter is particularly important where the geological radioisotope signal varies strongly from one place to another, and makes it easier to locate and identify anthropogenic sources which might otherwise be difficult to separate from the geological background signal. A modified version of GammaLog has been developed, GammaLog Playback, which allows the replay of previously acquired GammaLog datasets, while performing similar processing and display as the GammaLog acquisition software. This allows datasets to be reviewed and compared in the field or during post-survey analysis to help plan subsequent measurement strategies.(Au)

  3. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  4. School is out on noisy reefs: the effect of boat noise on predator learning and survival of juvenile coral reef fishes.

    Science.gov (United States)

    Ferrari, Maud C O; McCormick, Mark I; Meekan, Mark G; Simpson, Stephen D; Nedelec, Sophie L; Chivers, Douglas P

    2018-01-31

    Noise produced by anthropogenic activities is increasing in many marine ecosystems. We investigated the effect of playback of boat noise on fish cognition. We focused on noise from small motorboats, since its occurrence can dominate soundscapes in coastal communities, the number of noise-producing vessels is increasing rapidly and their proximity to marine life has the potential to cause deleterious effects. Cognition-or the ability of individuals to learn and remember information-is crucial, given that most species rely on learning to achieve fitness-promoting tasks, such as finding food, choosing mates and recognizing predators. The caveat with cognition is its latent effect: the individual that fails to learn an important piece of information will live normally until the moment where it needs the information to make a fitness-related decision. Such latent effects can easily be overlooked by traditional risk assessment methods. Here, we conducted three experiments to assess the effect of boat noise playbacks on the ability of fish to learn to recognize predation threats, using a common, conserved learning paradigm. We found that fish that were trained to recognize a novel predator while being exposed to 'reef + boat noise' playbacks failed to subsequently respond to the predator, while their 'reef noise' counterparts responded appropriately. We repeated the training, giving the fish three opportunities to learn three common reef predators, and released the fish in the wild. Those trained in the presence of 'reef + boat noise' playbacks survived 40% less than the 'reef noise' controls over our 72 h monitoring period, a performance equal to that of predator-naive fish. Our last experiment indicated that these results were likely due to failed learning, as opposed to stress effects from the sound exposure. Neither playbacks nor real boat noise affected survival in the absence of predator training. Our results indicate that boat noise has the potential to cause

  5. Computer-aided auscultation learning system for nursing technique instruction.

    Science.gov (United States)

    Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih

    2008-01-01

    Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a mannequin equipped with a sound simulator is used to teach auscultation techniques to groups via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies facilitates simulating this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. This system provides teachers with signal recording and processing of lung sounds and immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated to verify the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that the auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.

  6. Eliciting Sound Memories.

    Science.gov (United States)

    Harris, Anna

    2015-11-01

    Sensory experiences are often considered triggers of memory, most famously a little French cake dipped in lime blossom tea. Sense memory can also be evoked in public history research through techniques of elicitation. In this article I reflect on different social science methods for eliciting sound memories such as the use of sonic prompts, emplaced interviewing, and sound walks. I include examples from my research on medical listening. The article considers the relevance of this work for the conduct of oral histories, arguing that such methods "break the frame," allowing room for collaborative research connections and insights into the otherwise unarticulatable.

  7. Common sole larvae survive high levels of pile-driving sound in controlled exposure experiments.

    Directory of Open Access Journals (Sweden)

    Loes J Bolle

    Full Text Available In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at which (sub-lethal) effects occur is limited for juvenile and adult fish, and virtually non-existent for fish eggs and larvae. A device was developed in which fish larvae can be exposed to underwater sound. It consists of a rigid-walled cylindrical chamber driven by an electro-dynamical sound projector. Samples of up to 100 larvae can be exposed simultaneously to a homogeneously distributed sound pressure and particle velocity field. Recorded pile-driving sounds could be reproduced accurately in the frequency range between 50 and 1000 Hz, at zero-to-peak pressure levels up to 210 dB re 1 µPa² (zero-to-peak pressures up to 32 kPa) and single-pulse sound exposure levels up to 186 dB re 1 µPa²s. The device was used to examine lethal effects of sound exposure in common sole (Solea solea) larvae. Different developmental stages were exposed to various levels and durations of pile-driving sound. The highest cumulative sound exposure level applied was 206 dB re 1 µPa²s, which corresponds to 100 strikes at a distance of 100 m from a typical North Sea pile-driving site. The results showed no statistically significant differences in mortality between exposure and control groups at sound exposure levels which were well above the US interim criteria for non-auditory tissue damage in fish. Although our findings cannot be extrapolated to fish larvae in general, as interspecific differences in vulnerability to sound exposure may occur, they do indicate that previous assumptions and criteria may need to be revised.
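
    The cumulative figure quoted above is consistent with the usual energy-summation rule for N identical pulses, under which the cumulative sound exposure level exceeds the single-pulse level by 10·log10(N); with the single-pulse level of 186 dB re 1 µPa²s reported here:

        \[ \mathrm{SEL}_{\mathrm{cum}} \;=\; \mathrm{SEL}_{\mathrm{single}} + 10\log_{10}N \;=\; 186 + 10\log_{10}(100) \;=\; 206\ \mathrm{dB\ re\ 1\,\mu Pa^{2}\,s} \]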

  8. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are determined to be the same or different. Rhesus macaques seem to have relatively poor short-term memory with auditory stimuli, and we examine whether particular sound types are more favorable for memory performance. Experiment 1 suggests that memory performance with vocalization sound types (particularly monkey vocalizations) is significantly better than with non-vocalization sound types, and that male monkeys outperform female monkeys overall. Experiment 2, controlling for number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, to species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting that the biological and/or ethological meaning of these sounds is more effective for auditory memory. Copyright 2009 Elsevier B.V.

  9. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.

  10. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics.

    Science.gov (United States)

    Sun, Xiuwen; Li, Xiaoling; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun

    2018-01-01

    Based on the existing research on sound symbolism and crossmodal correspondence, this study proposed an extended research on cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Result revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times of sound-hue were a little longer than the reaction times of sound-lightness. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants' cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Result revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. It was also found that there was no significant difference in reaction times and error rates between sound-hue and sound-lightness. The results of Experiment 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiment 1 and 2 is probably owing to the difference in experimental protocol, which indicates that the complexity

  11. On the use of binaural recordings for dynamic binaural reproduction

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Christensen, Flemming

    2011-01-01

    Binaural recordings are considered applicable only for static binaural reproduction. That is, playback of binaural recordings can only reproduce the sound field captured for the fixed position and orientation of the recording head. However, given some conditions it is possible to use binaural...... recordings for the reproduction of binaural signals that change according to the listener actions, i.e. dynamic binaural reproduction. Here we examine the conditions that allow for such dynamic recording/playback configuration and discuss advantages and disadvantages. Analysis and discussion focus on two...

  12. A Fast Algorithm of Cartographic Sounding Selection

    Institute of Scientific and Technical Information of China (English)

    SUI Haigang; HUA Li; ZHAO Haitao; ZHANG Yongli

    2005-01-01

    An effective strategy and framework that adequately integrate automated and manual processes for fast cartographic sounding selection are presented. Important submarine topographic features are extracted for the selection of important soundings, and an improved "influence circle" algorithm is introduced for sounding selection. For automatic configuration of the sounding distribution pattern, a special algorithm considering multiple factors is employed. A semi-automatic method for resolving ambiguous conflicts is described. On the basis of these algorithms and strategies, a system named HGIS for fast cartographic sounding selection was developed and applied at the Chinese Marine Safety Administration Bureau (CMSAB). The application experiments show that the system is effective and reliable. Finally, some conclusions and directions for future work are given.
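
    To make the idea concrete, the sketch below shows a generic "influence circle" style selection (a simplified stand-in, not the improved algorithm of the paper): shallow soundings are kept first, and any remaining sounding falling inside the influence radius of one already selected is dropped. The radius and data are hypothetical.

        # Generic influence-circle selection sketch with toy data.
        import numpy as np

        def select_soundings(xy, depth, radius):
            """Keep shallow (safety-critical) soundings first; suppress any
            sounding closer than `radius` to one already selected."""
            selected = []
            for i in np.argsort(depth):                 # shallowest first
                if not selected:
                    selected.append(int(i))
                    continue
                d = np.hypot(xy[selected, 0] - xy[i, 0], xy[selected, 1] - xy[i, 1])
                if d.min() > radius:
                    selected.append(int(i))
            return selected

        xy = np.random.rand(500, 2) * 1000.0            # positions in metres
        depth = np.random.rand(500) * 50.0              # depths in metres
        kept = select_soundings(xy, depth, radius=75.0)
        print(f"kept {len(kept)} of {depth.size} soundings")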

  13. Measuring the speed of sound in air using smartphone applications

    Science.gov (United States)

    Yavuz, A.

    2015-05-01

    This study presents a revised version of an old experiment available in many textbooks for measuring the speed of sound in air. A signal-generator application in a smartphone is used to produce the desired sound frequency. Nodes of sound waves in a glass pipe, of which one end is immersed in water, are more easily detected, so results can be obtained more quickly than from traditional acoustic experiments using tuning forks.
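
    For orientation, in this quarter-wave resonance arrangement the first resonance occurs when the air column above the water is about a quarter of a wavelength long (ignoring the small end correction), so the speed of sound follows directly from the driving frequency and the first resonant length; the numbers below are purely illustrative, not taken from the paper:

        \[ v \;=\; f\lambda \;\approx\; 4 f L_1, \qquad \text{e.g. } f = 430\ \mathrm{Hz},\; L_1 = 0.20\ \mathrm{m} \;\Rightarrow\; v \approx 344\ \mathrm{m\,s^{-1}} \]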

  14. Using video playbacks to study visual communication in a marine fish, Salaria pavo.

    Science.gov (United States)

    Gonçalves; Oliveira; Körner; Poschadel; Schlupp

    2000-09-01

    Video playbacks have been successfully applied to the study of visual communication in several groups of animals. However, this technique is controversial as video monitors are designed with the human visual system in mind. Differences between the visual capabilities of humans and other animals will lead to perceptually different interpretations of video images. We simultaneously presented males and females of the peacock blenny, Salaria pavo, with a live conspecific male and an online video image of the same individual. Video images failed to elicit appropriate responses. Males were aggressive towards the live male but not towards video images of the same male. Similarly, females courted only the live male and spent more time near this stimulus. In contrast, females of the gynogenetic poeciliid Poecilia formosa showed an equal preference for a live male and a video image of a P. mexicana male, suggesting a response to live animals as strong as to video images. We discuss differences between the species that may explain their opposite reaction to video images. Copyright 2000 The Association for the Study of Animal Behaviour.

  15. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
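
    As a rough illustration of the processing pipeline described above (a sketch only; file names, the HRIR set and the 30° direction are assumptions, and both files are taken to be mono recordings at the same sample rate), convolving a recorded hit with a left/right HRTF pair can be done as follows:

        # Hypothetical sketch: render a mono recording at one direction by
        # convolving it with a pair of measured head-related impulse responses.
        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import fftconvolve

        fs, mono = wavfile.read("bongo_hit.wav")        # hypothetical source file
        _, hrir_l = wavfile.read("hrir_az30_L.wav")     # hypothetical HRIR pair
        _, hrir_r = wavfile.read("hrir_az30_R.wav")

        mono = mono.astype(np.float64)
        left = fftconvolve(mono, hrir_l.astype(np.float64))
        right = fftconvolve(mono, hrir_r.astype(np.float64))

        n = min(left.size, right.size)
        binaural = np.stack([left[:n], right[:n]], axis=1)
        binaural /= np.max(np.abs(binaural))            # normalise to avoid clipping
        wavfile.write("bongo_az30.wav", fs, (binaural * 32767).astype(np.int16))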

  16. Memory for pictures and sounds: independence of auditory and visual codes.

    Science.gov (United States)

    Thompson, V A; Paivio, A

    1994-09-01

    Three experiments examined the mnemonic independence of auditory and visual nonverbal stimuli in free recall. Stimulus lists consisted of (1) pictures, (2) the corresponding environmental sounds, or (3) picture-sound pairs. In Experiment 1, free recall was tested under three learning conditions: standard intentional, intentional with a rehearsal-inhibiting distracter task, or incidental with the distracter task. In all three groups, recall was best for the picture-sound items. In addition, recall for the picture-sound stimuli appeared to be additive relative to pictures or sounds alone when the distracter task was used. Experiment 2 included two additional groups: In one, two copies of the same picture were shown simultaneously; in the other, two different pictures of the same concept were shown. There was no difference in recall among any of the picture groups; in contrast, recall in the picture-sound condition was greater than recall in either single-modality condition. However, doubling the exposure time in a third experiment resulted in additively higher recall for repeated pictures with different exemplars than ones with identical exemplars. The results are discussed in terms of dual coding theory and alternative conceptions of the memory trace.

  17. Microstructure representations for sound absorbing fibrous media: 3D and 2D multiscale modelling and experiments

    Science.gov (United States)

    Zieliński, Tomasz G.

    2017-11-01

    The paper proposes and investigates computationally-efficient microstructure representations for sound absorbing fibrous media. Three-dimensional volume elements involving non-trivial periodic arrangements of straight fibres are examined as well as simple two-dimensional cells. It has been found that a simple 2D quasi-representative cell can provide similar predictions as a volume element which is in general much more geometrically accurate for typical fibrous materials. The multiscale modelling allowed to determine the effective speeds and damping of acoustic waves propagating in such media, which brings up a discussion on the correlation between the speed, penetration range and attenuation of sound waves. Original experiments on manufactured copper-wire samples are presented and the microstructure-based calculations of acoustic absorption are compared with the corresponding experimental results. In fact, the comparison suggested the microstructure modifications leading to representations with non-uniformly distributed fibres.

  18. Similarity and pleasantness assessments of water-fountain sounds recorded in urban public spaces.

    Science.gov (United States)

    Ekman, Maria Rådsten; Lundén, Peter; Nilsson, Mats E

    2015-11-01

    Water fountains are potential tools for soundscape improvement, but little is known about their perceptual properties. To explore this, sounds were recorded from 32 fountains installed in urban parks. The sounds were recorded with a sound-field microphone and were reproduced using an ambisonic loudspeaker setup. Fifty-seven listeners assessed the sounds with regard to similarity and pleasantness. Multidimensional scaling of similarity data revealed distinct groups of soft variable and loud steady-state sounds. Acoustically, the soft variable sounds were characterized by low overall levels and high temporal variability, whereas the opposite pattern characterized the loud steady-state sounds. The perceived pleasantness of the sounds was negatively related to their overall level and positively related to their temporal variability, whereas spectral centroid was weakly correlated to pleasantness. However, the results of an additional experiment, using the same sounds set equal in overall level, found a negative relationship between pleasantness and spectral centroid, suggesting that spectral factors may influence pleasantness scores in experiments where overall level does not dominate pleasantness assessments. The equal-level experiment also showed that several loud steady-state sounds remained unpleasant, suggesting an inherently unpleasant sound character. From a soundscape design perspective, it may be advisable to avoid fountains generating such sounds.

  19. Perceptual sensitivity to spectral properties of earlier sounds during speech categorization.

    Science.gov (United States)

    Stilp, Christian E; Assgari, Ashley A

    2018-02-28

    Speech perception is heavily influenced by surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, this can produce spectral contrast effects (SCEs) that bias perception of later sounds. For example, when context sounds have more energy in low-F1 frequency regions, listeners report more high-F1 responses to a target vowel, and vice versa. SCEs have been reported using various approaches for a wide range of stimuli, but most often, large spectral peaks were added to the context to bias speech categorization. This obscures the lower limit of perceptual sensitivity to spectral properties of earlier sounds, i.e., when SCEs begin to bias speech categorization. Listeners categorized vowels (/ɪ/-/ɛ/, Experiment 1) or consonants (/d/-/g/, Experiment 2) following a context sentence with little spectral amplification (+1 to +4 dB) in frequency regions known to produce SCEs. In both experiments, +3 and +4 dB amplification in key frequency regions of the context produced SCEs, but lesser amplification was insufficient to bias performance. This establishes a lower limit of perceptual sensitivity where spectral differences across sounds can bias subsequent speech categorization. These results are consistent with proposed adaptation-based mechanisms that potentially underlie SCEs in auditory perception. Recent sounds can change what speech sounds we hear later. This can occur when the average frequency composition of earlier sounds differs from that of later sounds, biasing how they are perceived. These "spectral contrast effects" are widely observed when sounds' frequency compositions differ substantially. We reveal the lower limit of these effects, as +3 dB amplification of key frequency regions in earlier sounds was enough to bias categorization of the following vowel or consonant sound. Speech categorization being biased by very small spectral differences across sounds suggests that spectral contrast effects occur

  20. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  1. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics

    Science.gov (United States)

    Sun, Xiuwen; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun

    2018-01-01

    Based on the existing research on sound symbolism and crossmodal correspondence, this study proposed an extended research on cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Result revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times of sound-hue were a little longer than the reaction times of sound-lightness. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants’ cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Result revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. It was also found that there was no significant difference in reaction times and error rates between sound-hue and sound-lightness. The results of Experiment 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiment 1 and 2 is probably owing to the difference in experimental protocol, which indicates that the

  2. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics

    Directory of Open Access Journals (Sweden)

    Xiuwen Sun

    2018-03-01

    Full Text Available Based on the existing research on sound symbolism and crossmodal correspondence, this study proposed an extended research on cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Result revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times of sound-hue were a little longer than the reaction times of sound-lightness. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants’ cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Result revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. It was also found that there was no significant difference in reaction times and error rates between sound-hue and sound-lightness. The results of Experiment 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiment 1 and 2 is probably owing to the difference in experimental protocol, which indicates

  3. Exploring the relationship between nature sounds, connectedness to nature, mood and willingness to buy sustainable food: A retail field experiment.

    Science.gov (United States)

    Spendrup, Sara; Hunter, Erik; Isgren, Ellinor

    2016-05-01

    Nature sounds are increasingly used by some food retailers to enhance in-store ambiance and potentially even influence sustainable food choices. An in-store, 2 × 3 between-subject full factorial experiment conducted on 627 customers over 12 days tested whether nature sound directly and indirectly influenced willingness to buy (WTB) sustainable foods. The results show that nature sounds positively and directly influence WTB organic foods in groups of customers (men) that have relatively low initial intentions to buy. Indirectly, we did not find support for the effect of nature sound on influencing mood or connectedness to nature (CtN). However, we show that information on the product's sustainability characteristics moderates the relationship between CtN and WTB in certain groups. Namely, when CtN is high, sustainability information positively moderated WTB both organic and climate friendly foods in men. Conversely, when CtN was low, men expressed lower WTB organic and climate friendly foods than identical, albeit conventionally labelled products. Consequently, our study concludes that nature sounds might be an effective, yet subtle in-store tool to use on groups of consumers who might otherwise respond negatively to more overt forms of sustainable food information. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    DEFF Research Database (Denmark)

    Gil Carvajal, Juan Camilo; Cubick, Jens; Santurette, Sébastien

    2016-01-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical...... the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli....

  5. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! Series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  6. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine eSignoret

    2011-07-01

    Full Text Available While it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudo words and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from sub-threshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudo words) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudo words was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  7. Sound Synthesis Affected by Physical Gestures in Real-Time

    DEFF Research Database (Denmark)

    Graugaard, Lars

    2006-01-01

    Motivation and strategies for affecting electronic music through physical gestures are presented and discussed. Two implementations are presented and experience with their use in performance is reported. A concept of sound shaping and sound colouring that connects an instrumental performer's playing and gestures to sound synthesis is used. The results and future possibilities are discussed....

  8. Intense Ultrasonic clicks from echolocating toothed whales do not elicit anti-predator responses or debilitate the squid Loligo pealeii

    DEFF Research Database (Denmark)

    Wilson, Maria; Hanlon, Roger; Tyack, Peter

    2007-01-01

    an evolutionary selection pressure on cephalopods to develop a mechanism for detecting and evading sound-emitting toothed whale predators. Ultrasonic detection has evolved in some insects to avoid echolocating bats, and it can be hypothesized that cephalopods might have evolved similar ultrasound detection...... as an anti-predation measure. We test this hypothesis in the squid Loligo pealeii in a playback experiment using intense echolocation clicks from two squid-eating toothed whale species. Twelve squid were exposed to clicks at two repetition rates (16 and 125 clicks per second) with received sound pressure...... levels of 199-226 dB re 1 μPa (pp) mimicking the sound exposure from an echolocating toothed whale as it approaches and captures prey. We demonstrate that intense ultrasonic clicks do not elicit any detectable anti-predator behaviour in L. pealeii and that clicks with received levels up to 226 dB re 1 μPa (pp...

  9. Environmental Sound Training in Cochlear Implant Users

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Kuvadia, Sejal; Gygi, Brian

    2015-01-01

    Purpose: The study investigated the effect of a short computer-based environmental sound training regimen on the perception of environmental sounds and speech in experienced cochlear implant (CI) patients. Method: Fourteen CI patients with the average of 5 years of CI experience participated. The protocol consisted of 2 pretests, 1 week apart,…

  10. Kinetic-sound propagation in dilute gas mixtures

    International Nuclear Information System (INIS)

    Campa, A.; Cohen, E.G.D.

    1989-01-01

    Kinetic sound is predicted in dilute disparate-mass binary gas mixtures, propagating exclusively in the light component and much faster than ordinary sound. It should be detectable by light-scattering experiments as an extended shoulder in the scattering cross section at large frequencies. As an example, H2-Ar mixtures are discussed.

  11. Similarities between the irrelevant sound effect and the suffix effect.

    Science.gov (United States)

    Hanley, J Richard; Bourgaize, Jake

    2018-03-29

    Although articulatory suppression abolishes the effect of irrelevant sound (ISE) on serial recall when sequences are presented visually, the effect persists with auditory presentation of list items. Two experiments were designed to test the claim that, when articulation is suppressed, the effect of irrelevant sound on the retention of auditory lists resembles a suffix effect. A suffix is a spoken word that immediately follows the final item in a list. Even though participants are told to ignore it, the suffix impairs serial recall of auditory lists. In Experiment 1, the irrelevant sound consisted of instrumental music. The music generated a significant ISE that was abolished by articulatory suppression. It therefore appears that, when articulation is suppressed, irrelevant sound must contain speech for it to have any effect on recall. This is consistent with what is known about the suffix effect. In Experiment 2, the effect of irrelevant sound under articulatory suppression was greater when the irrelevant sound was spoken by the same voice that presented the list items. This outcome is again consistent with the known characteristics of the suffix effect. It therefore appears that, when rehearsal is suppressed, irrelevant sound disrupts the acoustic-perceptual encoding of auditorily presented list items. There is no evidence that the persistence of the ISE under suppression is a result of interference to the representation of list items in a postcategorical phonological store.

  12. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    Science.gov (United States)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.
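
    By way of illustration only (the dissertation's patented method is not detailed in this abstract; the parameters, file name and perturbation below are assumptions), a generic wavelet analysis/resynthesis loop of the kind described looks like this:

        # Generic sketch: decompose a mono recording with a discrete wavelet
        # transform, rescale the bands, lightly randomise the finest (stochastic)
        # band, and reconstruct a perceptually similar variant of the sound.
        import numpy as np
        import pywt                         # PyWavelets
        from scipy.io import wavfile

        fs, x = wavfile.read("impact.wav")  # hypothetical mono source recording
        x = x.astype(np.float64)

        coeffs = pywt.wavedec(x, "db4", level=6)        # analysis
        gains = [1.0, 0.8, 0.9, 1.1, 1.0, 1.2, 1.0]     # one gain per band (7 arrays)
        coeffs = [c * g for c, g in zip(coeffs, gains)] # parameterised resynthesis
        coeffs[-1] = np.random.permutation(coeffs[-1])  # crude stochastic variation

        y = pywt.waverec(coeffs, "db4")
        y /= np.max(np.abs(y))
        wavfile.write("impact_variant.wav", fs, (y * 32767).astype(np.int16))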

  13. Common sole larvae survive high levels of pile-driving sound in controlled exposure experiments

    NARCIS (Netherlands)

    Bolle, L.J.; Jong, C.A.F. de; Bierman, S.M.; Beek, P.J.G. van; Keeken, O.A. van; Wessels, P.W.; Damme, C.J.G. van; Winter, H.V.; Haan, D. de; Dekeling, R.P.A.

    2012-01-01

    In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at

  14. Phonaesthemes and sound symbolism in Swedish brand names

    Directory of Open Access Journals (Sweden)

    Åsa Abelin

    2015-01-01

    Full Text Available This study examines the prevalence of sound symbolism in Swedish brand names. A general principle of brand name design is that effective names should be distinctive, recognizable, easy to pronounce and meaningful. Much money is invested in designing powerful brand names, where the emotional impact of the names on consumers is also relevant and it is important to avoid negative connotations. Customers prefer brand names that say something about the product, as this reduces product uncertainty (Klink, 2001). Therefore, consumers might prefer sound symbolic names. It has been shown that people associate the sounds of the nonsense words maluma and takete with round and angular shapes, respectively. By extension, more complex shapes and textures might activate words containing certain sounds. This study focuses on semantic dimensions expected to be relevant to product names, such as mobility, consistency, texture and shape. These dimensions are related to the senses of sight, hearing and touch and are also interesting from a cognitive linguistic perspective. Cross-modal assessment and priming experiments with pictures and written words were performed and the results analysed in relation to brand name databases and to sound symbolic sound combinations in Swedish (Abelin, 1999). The results show that brand names virtually never contain pejorative, i.e. depreciatory, consonant clusters, and that certain sounds and sound combinations are overrepresented in certain content categories. Assessment tests show correlations between pictured objects and phoneme combinations in newly created words (non-words). The priming experiment shows that object images prime newly created words as expected, based on the presence of compatible consonant clusters.

  15. Is 9 louder than 1? Audiovisual cross-modal interactions between number magnitude and judged sound loudness.

    Science.gov (United States)

    Alards-Tomalin, Doug; Walker, Alexander C; Shaw, Joshua D M; Leboe-McGowan, Launa C

    2015-09-01

    The cross-modal impact of number magnitude (i.e. Arabic digits) on perceived sound loudness was examined. Participants compared a target sound's intensity level against a previously heard reference sound (which they judged as quieter or louder). Paired with each target sound was a task irrelevant Arabic digit that varied in magnitude, being either small (1, 2, 3) or large (7, 8, 9). The degree to which the sound and the digit were synchronized was manipulated, with the digit and sound occurring simultaneously in Experiment 1, and the digit preceding the sound in Experiment 2. Firstly, when target sounds and digits occurred simultaneously, sounds paired with large digits were categorized as loud more frequently than sounds paired with small digits. Secondly, when the events were separated, number magnitude ceased to bias sound intensity judgments. In Experiment 3, the events were still separated, however the participants held the number in short-term memory. In this instance the bias returned. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Effect of Sound Waves on Decarburization Rate of Fe-C Melt

    Science.gov (United States)

    Komarov, Sergey V.; Sano, Masamichi

    2018-02-01

    Sound waves have the ability to propagate through a gas phase and, thus, to supply the acoustic energy from a sound generator to materials being processed. This offers an attractive tool, for example, for controlling the rates of interfacial reactions in steelmaking processes. This study investigates the kinetics of decarburization in molten Fe-C alloys, the surface of which was exposed to sound waves and Ar-O2 gas blown onto the melt surface. The main emphasis is placed on clarifying effects of sound frequency, sound pressure, and gas flow rate. A series of water model experiments and numerical simulations are also performed to explain the results of high-temperature experiments and to elucidate the mechanism of sound wave application. This is explained by two phenomena that occur simultaneously: (1) turbulization of Ar-O2 gas flow by sound wave above the melt surface and (2) motion and agitation of the melt surface when exposed to sound wave. It is found that sound waves can both accelerate and inhibit the decarburization rate depending on the Ar-O2 gas flow rate and the presence of oxide film on the melt surface. The effect of sound waves is clearly observed only at higher sound pressures on resonance frequencies, which are defined by geometrical features of the experimental setup. The resonance phenomenon makes it difficult to separate the effect of sound frequency from that of sound pressure under the present experimental conditions.

  17. Multimedia consultation session recording and playback using Java-based browser in global PACS

    Science.gov (United States)

    Martinez, Ralph; Shah, Pinkesh J.; Yu, Yuan-Pin

    1998-07-01

    The current version of the Global PACS software system uses a Java-based implementation of the Remote Consultation and Diagnosis (RCD) system. The Java RCD includes a multimedia consultation session between physicians that includes text, static image, image annotation, and audio data. The Java RCD allows 2-4 physicians to collaborate on a patient case. It allows physicians to join the session via Java-enabled WWW browsers or a stand-alone RCD application. The RCD system includes a distributed database archive system for archiving and retrieving patient and session data. The RCD system can be used for store-and-forward scenarios, case reviews, and interactive RCD multimedia sessions. The RCD system operates over the Internet, telephone lines, or in a private Intranet. A multimedia consultation session can be recorded, and then played back at a later time for review, comments, and education. A session can be played back using Java-enabled WWW browsers on any operating system platform. The Java RCD system shows that a case diagnosis can be captured digitally and played back with the original real-time temporal relationships between data streams. In this paper, we describe the design and implementation of the RCD session playback.
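    The record above gives no implementation details, but the core idea of replaying a recorded session while preserving the original temporal relationships between data streams can be illustrated with a short sketch. The event list, stream names, and timestamps below are hypothetical and are not taken from the Java RCD system.

        import time

        # Hypothetical recorded session: (seconds since session start, stream, payload).
        recorded_events = [
            (0.0, "text", "Radiologist joins session"),
            (1.2, "annotation", "Arrow added at (312, 148)"),
            (2.5, "audio", "audio_chunk_0001"),
            (4.0, "text", "Suspected lesion noted in image 12"),
        ]

        def play_back(events, speed=1.0):
            """Replay events, preserving the original inter-event intervals."""
            start = time.monotonic()
            for timestamp, stream, payload in events:
                # Wait until this event's (scaled) offset from the session start.
                delay = timestamp / speed - (time.monotonic() - start)
                if delay > 0:
                    time.sleep(delay)
                print(f"[{timestamp:6.2f}s] {stream:>10}: {payload}")

        if __name__ == "__main__":
            play_back(recorded_events, speed=2.0)  # review the session at double speed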

  18. Heart sounds analysis via esophageal stethoscope system in beagles.

    Science.gov (United States)

    Park, Sang Hi; Shin, Young Duck; Bae, Jin Ho; Kwon, Eun Jung; Lee, Tae-Soo; Shin, Ji-Yun; Kim, Yeong-Cheol; Min, Gyeong-Deuk; Kim, Myoung hwan

    2013-10-01

    The esophageal stethoscope is less invasive, easy to handle, and provides a great deal of information. The purpose of this study was to investigate the correlation between blood pressure and heart sounds as measured by an esophageal stethoscope. Four male beagles weighing 10 to 12 kg were selected as experimental subjects. After general anesthesia, the esophageal stethoscope was inserted. After connecting the microphone, the heart sounds were visualized and recorded through self-developed equipment and software. The amplitudes of S1 and S2 were monitored in real time to examine changes as the blood pressure increased and decreased. The relationship between the ratio of S1 to S2 (S1/S2) and changes in blood pressure due to ephedrine was evaluated. The same experiment was performed with different concentrations of isoflurane. In the inotropic experiment, the amplitude of S1 showed a high correlation with changes in blood pressure. The relationship between S1/S2 and change in blood pressure showed a positive correlation in each experimental subject. In the volatile anesthetic experiment, the heart sounds decreased as MAC increased. Heart sounds were analyzed successfully with the esophageal stethoscope through the self-developed program and equipment. A proportional change in heart sounds was confirmed when blood pressure was changed using inotropics or volatile anesthetics. The esophageal stethoscope can achieve the closest proximity to the heart for hearing sounds in a non-invasive manner.

  19. Research on fiber Bragg grating heart sound sensing and wavelength demodulation method

    Science.gov (United States)

    Zhang, Cheng; Miao, Chang-Yun; Gao, Hua; Gan, Jing-Meng; Li, Hong-Qiang

    2010-11-01

    Heart sounds carry a great deal of physiological and pathological information about the heart and blood vessels. Heart sound detection is an important method for assessing cardiac status and is of great significance for the early diagnosis of heart disease. In order to improve sensitivity and reduce noise, a heart sound measurement method based on a fiber Bragg grating was investigated. Based on the vibration principle of a plane circular diaphragm, a fiber Bragg grating heart sound sensor structure was designed and a heart sound sensing mathematical model was established. A formula for heart sound sensitivity was deduced, and the theoretical sensitivity of the designed sensor is 957.11 pm/kPa. An experimental system based on the matched grating method was built, with which the shift of the reflected wavelength of the sensing grating was detected and the heart sound information was obtained. Experiments show that the designed sensor can detect heart sounds and that the reflected wavelength varies over a range of about 70 pm. At a sampling frequency of 1 kHz, the heart sound waveform extracted using the db4 wavelet has the same characteristics as that from a standard heart sound sensor.
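    The abstract mentions that the heart sound waveform was extracted with the db4 wavelet at a 1 kHz sampling rate. The sketch below shows the general flavour of such db4 wavelet denoising using the PyWavelets package on a synthetic signal; it is only an illustration of the technique, not the authors' processing chain.

        import numpy as np
        import pywt

        fs = 1000  # Hz, matching the sampling frequency mentioned in the abstract
        t = np.arange(0, 2.0, 1 / fs)

        # Synthetic stand-in for a demodulated sensor signal: two "heart sound"
        # bursts per second plus measurement noise.
        clean = np.sin(2 * np.pi * 40 * t) * np.sin(2 * np.pi * 1.2 * t) ** 20
        noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)

        # Decompose with the db4 wavelet, soft-threshold the detail coefficients,
        # and reconstruct the denoised waveform.
        coeffs = pywt.wavedec(noisy, "db4", level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(noisy.size))        # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

        print("residual RMS:", np.sqrt(np.mean((denoised - clean) ** 2)))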

  20. Perception of acoustic scale and size in musical instrument sounds.

    Science.gov (United States)

    van Dinther, Ralph; Patterson, Roy D

    2006-10-01

    There is size information in natural sounds. For example, as humans grow in height, their vocal tracts increase in length, producing a predictable decrease in the formant frequencies of speech sounds. Recent studies have shown that listeners can make fine discriminations about which of two speakers has the longer vocal tract, supporting the view that the auditory system discriminates changes on the acoustic-scale dimension. Listeners can also recognize vowels scaled well beyond the range of vocal tracts normally experienced, indicating that perception is robust to changes in acoustic scale. This paper reports two perceptual experiments designed to extend research on acoustic scale and size perception to the domain of musical sounds: The first study shows that listeners can discriminate the scale of musical instrument sounds reliably, although not quite as well as for voices. The second experiment shows that listeners can recognize the family of an instrument sound which has been modified in pitch and scale beyond the range of normal experience. We conclude that processing of acoustic scale in music perception is very similar to processing of acoustic scale in speech perception.

  1. Social and emotional values of sounds influence human (Homo sapiens and non-human primate (Cercopithecus campbelli auditory laterality.

    Directory of Open Access Journals (Sweden)

    Muriel Basile

    The last decades have provided evidence of auditory laterality in vertebrates, offering important new insights for understanding the origin of human language. Factors such as the social (e.g. specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head turn paradigm, a widely used non-invasive method, to 8-9-year-old schoolgirls and to adult female Campbell's monkeys, by focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls, humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours; human girls: from the same vs a different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We evidenced a crossed-categorical effect of social and emotional values in both species, since only "negative" voices from same class/group members elicited a significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls: T = 4.5, p = 0.03). Moreover, we found differences between species, as a left and a right hemisphere preference was found respectively in humans and monkeys. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes also just moved their eyes. This study supports theories defending differential roles played by the two hemispheres in primates' auditory laterality and shows that more systematic species comparisons are needed before proposing evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies should be the focus of careful attention.
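    The laterality results above are reported as Wilcoxon tests (e.g. T = 0, p = 0.03 for the monkeys). For readers unfamiliar with that statistic, the snippet below runs a Wilcoxon signed-rank test on invented per-individual laterality indices; the numbers are purely illustrative and do not reproduce the study's data.

        from scipy.stats import wilcoxon

        # Hypothetical laterality indices (right-side minus left-side orientations)
        # for individual subjects responding to "negative" in-group voices.
        laterality_index = [2, 1, 3, 2, 1, 2, 4, 1]

        # Test whether the median index differs from zero (i.e. no side bias).
        statistic, p_value = wilcoxon(laterality_index)
        print(f"Wilcoxon T = {statistic}, p = {p_value:.3f}")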

  2. Response of round gobies, Neogobius melanostomus, to conspecific sounds

    Science.gov (United States)

    Isabella-Valenzi, Lisa

    A useful model group for examining reproductive plasticity in acoustic responsiveness is the family Gobiidae. Male round gobies Neogobius melanostomus emit calls and females respond to these calls with high specificity. The current study investigates differential attraction to conspecific calls among the reproductive morphologies of the goby and explores the use of calls to develop a bioacoustic trap. Behavioural responsiveness to conspecific calls was tested using playback experiments in the lab and field. Females showed a strong attraction to the grunt call in both the lab and field, while nonreproductive and sneaker males preferred the drum call in the lab but favoured the grunt call in the field. By determining the relationship between reproductive state and auditory responsiveness to conspecific calls, this work further elucidates the function of acoustic communication in the round goby and may be essential for creating control strategies to prevent the spread of this invasive species.

  3. Disturbance-specific social responses in long-finned pilot whales, Globicephala melas

    NARCIS (Netherlands)

    Visser, F.; Curé, C.; Kvadsheim, P.H.; Lam, F.P.A.; Tyack, P.L.; Miller, P.J.O.

    2016-01-01

    Social interactions among animals can influence their response to disturbance. We investigated responses of long-finned pilot whales to killer whale sound playbacks and two anthropogenic sources of disturbance: tagging effort and naval sonar exposure. The acoustic scene and diving behaviour of

  4. Improving auscultatory proficiency using computer simulated heart sounds

    Directory of Open Access Journals (Sweden)

    Hanan Salah EL-Deen Mohamed EL-Halawany

    2016-09-01

    This study aimed to examine the effects of 'Heart Sounds', a web-based program, on improving fifth-year medical students' auscultation skills in a medical school in Egypt. The program was designed for medical students to master cardiac auscultation skills in addition to their usual clinical medical courses. Pre- and post-tests were performed to assess students' auscultation skill improvement. Upon completing the training, students were required to complete a questionnaire to reflect on the learning experience they developed through the 'Heart Sounds' program. Results from pre- and post-tests revealed a significant improvement in students' auscultation skills. In examining male and female students' pre- and post-test results, we found that both male and female students had achieved a remarkable improvement in their auscultation skills. Students also stated clearly that the learning experience they had with the 'Heart Sounds' program was different from any traditional way of teaching. They stressed that the program had significantly improved their auscultation skills and enhanced their self-confidence in their ability to practice those skills. It is also recommended that the 'Heart Sounds' learning experience be extended by assessing students' practical improvement in real-life situations.

  5. Two Shared Rapid Turn Taking Sound Interfaces for Novices

    DEFF Research Database (Denmark)

    Hansen, Anne-Marie; Andersen, Hans Jørgen; Raudaskoski, Pirkko Liisa

    2012-01-01

    This paper presents the results of user interaction with two explorative music environments (sound systems A and B) that were inspired by the Banda Linda music tradition in two different ways. The sound systems adapted to how a team of two players improvised and made a melody together in an interleaved fashion: Systems A and B used a fuzzy logic algorithm and pattern recognition to respond with modifications of a background rhythm. In an experiment with a pen tablet interface as the music instrument, users aged 10-13 were to tap tones and continue each other's melody. The sound systems rewarded users sonically if they managed to add tones to their mutual melody in a rapid turn-taking manner with rhythmical patterns. Videos of experiment sessions show that user teams contributed to a melody in ways that resemble conversation. Interaction data show that each sound system made player teams play...

  6. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  7. Concepts for evaluation of sound insulation of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Rindel, Jens Holger

    2005-01-01

    Legal sound insulation requirements have existed for more than 50 years in some countries, and single-number quantities for evaluation of sound insulation have existed for nearly as long. However, the concepts have changed considerably over time from simple arithmetic averaging of frequency bands ... requirements and classification schemes revealed significant differences of concepts. The paper summarizes the history of concepts, the disadvantages of the present chaos and the benefits of consensus concerning concepts for airborne and impact sound insulation between dwellings and airborne sound insulation of facades ... with a trend towards light-weight constructions are contradictory and challenging. This calls for exchange of data and experience, implying a need for harmonized concepts, including the use of spectrum adaptation terms. The paper will provide input for future discussions in EAA TC-RBA WG4: "Sound insulation...

  8. Interactive physically-based sound simulation

    Science.gov (United States)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation
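    The synthesis half of this work concerns sounds produced by elastic surface vibrations after impacts. A textbook way to illustrate that idea is modal synthesis, in which an impact excites a bank of exponentially decaying sinusoids; the modal frequencies, dampings, and amplitudes below are made up, and the sketch is not the author's real-time system.

        import numpy as np

        fs = 44100
        t = np.arange(int(fs * 1.0)) / fs  # one second of audio

        # Hypothetical modal parameters for a small struck object:
        # (frequency in Hz, damping in 1/s, initial amplitude).
        modes = [(440.0, 8.0, 1.0), (1230.0, 15.0, 0.5), (2770.0, 30.0, 0.25)]

        def impact_sound(modes, t, strike_gain=1.0):
            """Sum of damped sinusoids excited by a single impact."""
            signal = np.zeros_like(t)
            for freq, damping, amp in modes:
                signal += strike_gain * amp * np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)
            return signal / np.max(np.abs(signal))

        samples = impact_sound(modes, t)
        print("synthesized", samples.size, "samples, peak", samples.max())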

  9. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  10. Future Roles of Film Sound Design in VR-applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf

    2005-01-01

    Virtual Environments offer new affordances for multimodal experiences. However, most research has been done on the visual modality and the technologies supporting this sense. In this paper, new approaches to sound design for VE are proposed. Utilizing principles of film sound can prove beneficial in c...

  11. The NASA Sounding Rocket Program and space sciences

    Science.gov (United States)

    Gurkin, L. W.

    1992-01-01

    High altitude suborbital rockets (sounding rockets) have been extensively used for space science research in the post-World War II period; the NASA Sounding Rocket Program has been on-going since the inception of the Agency and supports all space science disciplines. In recent years, sounding rockets have been utilized to provide a low gravity environment for materials processing research, particularly in the commercial sector. Sounding rockets offer unique features as a low gravity flight platform. Quick response and low cost combine to provide more frequent spaceflight opportunities. Suborbital spacecraft design practice has achieved a high level of sophistication which optimizes the limited available flight times. High data-rate telemetry, real-time ground up-link command and down-link video data are routinely used in sounding rocket payloads. Standard, off-the-shelf, active control systems are available which limit payload body rates such that the gravitational environment remains less than 10^-4 g during the control period. Operational launch vehicles are available which can provide up to 7 minutes of experiment time for experiment weights up to 270 kg. Standard payload recovery systems allow soft impact retrieval of payloads. When launched from White Sands Missile Range, New Mexico, payloads can be retrieved and returned to the launch site within hours.

  12. Evaluation of 3D Positioned Sound in Multimodal Scenarios

    DEFF Research Database (Denmark)

    Møller, Anders Kalsgaard

    This Ph.D. study has dealt with different binaural methods for implementing 3D sound in selected multimodal applications, with the purpose of evaluating the feasibility of using 3D sound in these applications. The thesis dealt with a teleconference application in which one person is not physically present but interacts with the other meeting members using different virtual reality technologies. The thesis also dealt with a 3D sound system in trucks. It was investigated if 3D sound could be used to give the truck driver an audible and lifelike experience of the cyclists' position, in relation...

  13. Speech versus singing: Infants choose happier sounds

    Directory of Open Access Journals (Sweden)

    Marieve Corbeil

    2013-06-01

    Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants' attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech versus hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children's song spoken versus sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children's song versus a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.

  14. Detecting change in stochastic sound sequences.

    Directory of Open Access Journals (Sweden)

    Benjamin Skerritt-Davis

    2018-05-01

    Our ability to parse our acoustic environment relies on the brain's capacity to extract statistical regularities from surrounding sounds. Previous work on regularity extraction has predominantly focused on the brain's sensitivity to predictable patterns in sound sequences. However, natural sound environments are rarely completely predictable, often containing some level of randomness, yet the brain is able to effectively interpret its surroundings by extracting useful information from stochastic sounds. It has previously been shown that the brain is sensitive to the marginal lower-order statistics of sound sequences (i.e., mean and variance). In this work, we investigate the brain's sensitivity to higher-order statistics describing temporal dependencies between sound events through a series of change detection experiments, where listeners are asked to detect changes in the randomness of the pitch of tone sequences. Behavioral data indicate that listeners collect statistical estimates to process incoming sounds, and a perceptual model based on Bayesian inference shows a capacity in the brain to track higher-order statistics. Further analysis of individual subjects' behavior indicates an important role of perceptual constraints in listeners' ability to track these sensory statistics with high fidelity. In addition, the inference model facilitates analysis of neural electroencephalography (EEG) responses, anchoring the analysis relative to the statistics of each stochastic stimulus. This reveals both a deviance response and a change-related disruption in the phase of the stimulus-locked response that follow the higher-order statistics. These results shed light on the brain's ability to process stochastic sound sequences.
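    The perceptual model described above is a Bayesian inference model that tracks sequence statistics; that model is not reproduced here, but the underlying idea of estimating the statistics of recent tones and flagging a change in randomness can be shown with a deliberately simple running-estimate sketch. The sequence, window length, and threshold below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)

        # Tone pitches drawn from one distribution, then from a more variable one.
        sequence = np.concatenate([rng.normal(60, 1.0, 50), rng.normal(60, 4.0, 50)])

        window = 15      # number of recent tones used for the running estimate
        factor = 2.5     # flag a change when the variance jumps by this factor

        baseline_var = None
        for i in range(window, sequence.size):
            recent_var = sequence[i - window:i].var()
            if baseline_var is None:
                baseline_var = recent_var
            elif recent_var > factor * baseline_var:
                print(f"change in randomness suspected around tone {i}")
                break
            else:
                # Slowly refresh the baseline with the newest estimate.
                baseline_var = 0.95 * baseline_var + 0.05 * recent_var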

  15. PICTURE: a sounding rocket experiment for direct imaging of an extrasolar planetary environment

    Science.gov (United States)

    Mendillo, Christopher B.; Hicks, Brian A.; Cook, Timothy A.; Bifano, Thomas G.; Content, David A.; Lane, Benjamin F.; Levine, B. Martin; Rabin, Douglas; Rao, Shanti R.; Samuele, Rocco; Schmidtlin, Edouard; Shao, Michael; Wallace, J. Kent; Chakrabarti, Supriya

    2012-09-01

    The Planetary Imaging Concept Testbed Using a Rocket Experiment (PICTURE 36.225 UG) was designed to directly image the exozodiacal dust disk of ε Eridani (K2V, 3.22 pc) down to an inner radius of 1.5 AU. PICTURE carried four key enabling technologies on board a NASA sounding rocket at 4:25 MDT on October 8th, 2011: a 0.5 m light-weight primary mirror (4.5 kg), a visible nulling coronagraph (VNC) (600-750 nm), a 32x32 element MEMS deformable mirror and a milliarcsecond-class fine pointing system. Unfortunately, due to a telemetry failure, the PICTURE mission did not achieve scientific success. Nonetheless, this flight validated the flight-worthiness of the lightweight primary and the VNC. The fine pointing system, a key requirement for future planet-imaging missions, demonstrated 5.1 mas RMS in-flight pointing stability. We describe the experiment, its subsystems and flight results. We outline the challenges we faced in developing this complex payload and our technical approaches.

  16. Exploring Sound with Insects

    Science.gov (United States)

    Robertson, Laura; Meyer, John R.

    2010-01-01

    Differences in insect morphology and movement during singing provide a fascinating opportunity for students to investigate insects while learning about the characteristics of sound. In the activities described here, students use a free online computer software program to explore the songs of the major singing insects and experiment with making…

  17. Augmented video viewing: transforming video consumption into an active experience

    OpenAIRE

    WIJNANTS, Maarten; Leën, Jeroen; QUAX, Peter; LAMOTTE, Wim

    2014-01-01

    Traditional video productions fail to cater to the interactivity standards that the current generation of digitally native customers have become accustomed to. This paper therefore advertises the "activation" of the video consumption process. In particular, it proposes to enhance HTML5 video playback with interactive features in order to transform video viewing into a dynamic pastime. The objective is to enable the authoring of more captivating and rewarding video experiences for end-users. T...

  18. Replacing the Orchestra? - The Discernibility of Sample Library and Live Orchestra Sounds.

    Directory of Open Access Journals (Sweden)

    Reinhard Kopiez

    Recently, musical sounds from pre-recorded orchestra sample libraries (OSL) have become indispensable in music production for the stage or popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. It could be shown that the entire sample of listeners (N = 602) on average identified the correct sound source at 72.5%. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing, simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resembled the majority of music consumers, only achieved 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons.

  19. Detecting interferences with iOS applications to measure speed of sound

    Science.gov (United States)

    Yavuz, Ahmet; Kağan Temiz, Burak

    2016-01-01

    Traditional experiments measuring the speed of sound consist of studying harmonics by changing the length of a glass tube closed at one end. In these experiments, the sound source and observer are outside the tube. In this paper, we propose a modification of this old experiment by studying destructive interference in a pipe using a headset, an iPhone and an iPad. The iPhone is used as the emitter, running a signal generator application, and the iPad is used as the receiver, running a spectrogram application. Two configurations are used for the measurements: the emitter inside the tube with the receiver outside, and vice versa. We conclude that it is even possible to adequately and easily measure the speed of sound using a cup or a can of coke with the method described in this paper.
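    The measurement rests on the usual relation v = f·λ: in a tube, successive interference minima (or resonances) are spaced in frequency by c/2L, so the speed of sound follows from the tube length and the spacing read off the spectrogram. The tube length and frequency readings below are invented, purely to show the arithmetic; they are not data from the paper.

        import numpy as np

        tube_length = 0.40  # m, hypothetical length of the pipe

        # Hypothetical frequencies (Hz) at which the spectrogram app showed
        # successive destructive-interference minima.
        minima_hz = np.array([215.0, 645.0, 1075.0, 1505.0])

        # Successive minima are spaced by c / (2 * L), so invert for c.
        spacing = np.diff(minima_hz).mean()
        speed_of_sound = 2 * tube_length * spacing
        print(f"estimated speed of sound: {speed_of_sound:.0f} m/s")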

  20. Making Sense of Sound

    Science.gov (United States)

    Menon, Deepika; Lankford, Deanna

    2016-01-01

    From the earliest days of their lives, children are exposed to all kinds of sound, from soft, comforting voices to the frightening rumble of thunder. Consequently, children develop their own naïve explanations largely based upon their experiences with phenomena encountered every day. When new information does not support existing conceptions,…

  1. The relationship between target quality and interference in sound zones

    DEFF Research Database (Denmark)

    Baykaner, Khan; Coleman, Phillip; Mason, Russell

    2015-01-01

    Sound zone systems aim to control sound fields in such a way that multiple listeners can enjoy different audio programs within the same room with minimal acoustic interference. Often, there is a trade-off between the acoustic contrast achieved between the zones and the fidelity of the reproduced audio program in the target zone. A listening test was conducted to obtain subjective measures of distraction, target quality, and overall quality of listening experience for ecologically valid programs within a sound zoning system. Sound zones were reproduced using acoustic contrast control, planarity...

  2. Subjective evaluation of dynamic compression in music

    NARCIS (Netherlands)

    Wagenaars, W.M.; Houtsma, A.J.M.; Lieshout, van R.A.J.M.

    1986-01-01

    Amplitude compression is often used to match the dynamic range of music to a particular playback situation so as to ensure continuous audibility in a noisy environment. Since amplitude compression is a nonlinear process, it is potentially very damaging to sound quality. Three physical parameters of

  3. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time domain solution of the KZK equation.

  4. PREFACE: Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated in writing the papers by the jet-noise emitted by the newly commercialized jet-engined airplanes at that time. The technology of aerodynamic sound is ultimately aimed at environmental problems. Therefore, the theory should always be applied to newly emerged public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University organized the symposium. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue the publication of ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays and is expected to reform the theoretical model of generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research. They apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the

  5. Improving Robustness against Environmental Sounds for Directing Attention of Social Robots

    DEFF Research Database (Denmark)

    Thomsen, Nicolai Bæk; Tan, Zheng-Hua; Lindberg, Børge

    2015-01-01

    This paper presents a multi-modal system for finding out where to direct the attention of a social robot in a dialog scenario, which is robust against environmental sounds (door slamming, phone ringing etc.) and short speech segments. The method is based on combining voice activity detection (VAD) and sound source localization (SSL), and furthermore applies post-processing to SSL to filter out short sounds. The system is tested against a baseline system in four different real-world experiments, where different sounds are used as interfering sounds. The results are promising and show a clear improvement.
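    The paper combines voice activity detection with sound source localization and post-processes the localization output to discard short, non-speech sounds. A toy sketch of that filtering step is given below; the segment format and duration threshold are assumptions for illustration, not the paper's implementation.

        # Each tuple: (start time s, end time s, estimated direction in degrees),
        # as produced by hypothetical VAD + SSL front ends.
        segments = [
            (0.2, 0.35, 40.0),   # door slam: too short to be speech
            (1.0, 2.80, -15.0),  # utterance from a speaker on the left
            (3.1, 3.20, 90.0),   # phone ring onset
            (4.0, 5.50, -17.0),  # same speaker again
        ]

        MIN_DURATION = 0.5  # seconds; shorter events are treated as interference

        def attention_directions(segments, min_duration=MIN_DURATION):
            """Keep only directions from segments long enough to be speech."""
            return [direction for start, end, direction in segments
                    if end - start >= min_duration]

        print(attention_directions(segments))  # -> [-15.0, -17.0]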

  6. How do "mute" cicadas produce their calling songs?

    Directory of Open Access Journals (Sweden)

    Changqing Luo

    Insects have evolved a variety of structures and mechanisms to produce sounds, which are used for communication both within and between species. Among acoustic insects, cicada males are particularly known for their loud and diverse sounds, which play an important role in communication. The main method of sound production in cicadas is the tymbal mechanism, and a relatively small number of cicada species possess both tymbal and stridulatory organs. However, cicadas of the genus Karenia do not have any specialized sound-producing structures, so they are referred to as "mute". This denomination is quite misleading, as they do indeed produce sounds. Here, we investigate the sound-producing mechanism and acoustic communication of the "mute" cicada, Karenia caelatata, and discover a new sound-production mechanism for cicadas: K. caelatata produces impact sounds by banging the forewing costa against the operculum. The temporal, frequency and amplitude characteristics of the impact sounds are described. Morphological studies and reflectance-based analyses reveal that the structures involved in sound production of K. caelatata (i.e., forewing, operculum, cruciform elevation, and wing-holding groove on the scutellum) are all morphologically modified. Acoustic playback experiments and behavioral observations suggest that the impact sounds of K. caelatata are used in intraspecific communication and function as calling songs. The new sound-production mechanism expands our knowledge of the diversity of acoustic signaling behavior in cicadas and further underscores the need for more bioacoustic studies on cicadas that lack the tymbal mechanism.

  7. How do "mute" cicadas produce their calling songs?

    Science.gov (United States)

    Luo, Changqing; Wei, Cong; Nansen, Christian

    2015-01-01

    Insects have evolved a variety of structures and mechanisms to produce sounds, which are used for communication both within and between species. Among acoustic insects, cicada males are particularly known for their loud and diverse sounds which function importantly in communication. The main method of sound production in cicadas is the tymbal mechanism, and a relative small number of cicada species possess both tymbal and stridulatory organs. However, cicadas of the genus Karenia do not have any specialized sound-producing structures, so they are referred to as "mute". This denomination is quite misleading, as they indeed produce sounds. Here, we investigate the sound-producing mechanism and acoustic communication of the "mute" cicada, Karenia caelatata, and discover a new sound-production mechanism for cicadas: i.e., K. caelatata produces impact sounds by banging the forewing costa against the operculum. The temporal, frequency and amplitude characteristics of the impact sounds are described. Morphological studies and reflectance-based analyses reveal that the structures involved in sound production of K. caelatata (i.e., forewing, operculum, cruciform elevation, and wing-holding groove on scutellum) are all morphologically modified. Acoustic playback experiments and behavioral observations suggest that the impact sounds of K. caelatata are used in intraspecific communication and function as calling songs. The new sound-production mechanism expands our knowledge on the diversity of acoustic signaling behavior in cicadas and further underscores the need for more bioacoustic studies on cicadas which lack tymbal mechanism.

  8. How Do “Mute” Cicadas Produce Their Calling Songs?

    Science.gov (United States)

    Luo, Changqing; Wei, Cong; Nansen, Christian

    2015-01-01

    Insects have evolved a variety of structures and mechanisms to produce sounds, which are used for communication both within and between species. Among acoustic insects, cicada males are particularly known for their loud and diverse sounds which function importantly in communication. The main method of sound production in cicadas is the tymbal mechanism, and a relative small number of cicada species possess both tymbal and stridulatory organs. However, cicadas of the genus Karenia do not have any specialized sound-producing structures, so they are referred to as “mute”. This denomination is quite misleading, as they indeed produce sounds. Here, we investigate the sound-producing mechanism and acoustic communication of the “mute” cicada, Karenia caelatata, and discover a new sound-production mechanism for cicadas: i.e., K. caelatata produces impact sounds by banging the forewing costa against the operculum. The temporal, frequency and amplitude characteristics of the impact sounds are described. Morphological studies and reflectance-based analyses reveal that the structures involved in sound production of K. caelatata (i.e., forewing, operculum, cruciform elevation, and wing-holding groove on scutellum) are all morphologically modified. Acoustic playback experiments and behavioral observations suggest that the impact sounds of K. caelatata are used in intraspecific communication and function as calling songs. The new sound-production mechanism expands our knowledge on the diversity of acoustic signaling behavior in cicadas and further underscores the need for more bioacoustic studies on cicadas which lack tymbal mechanism. PMID:25714608

  9. Is 1/f sound more effective than simple resting in reducing stress response?

    Science.gov (United States)

    Oh, Eun-Joo; Cho, Il-Young; Park, Soon-Kwon

    2014-01-01

    It has been previously demonstrated that listening to 1/f sound effectively reduces stress. However, these findings have been inconsistent, and further study on the relationship between 1/f sound and the stress response is consequently necessary. The present study examined whether sound with 1/f properties (1/f sound) affects stress-induced electroencephalogram (EEG) changes. Twenty-six subjects who voluntarily participated in the study were randomly assigned to the experimental or control group. Data from four participants were excluded because of EEG artifacts. A mental arithmetic task was used as a stressor. Participants in the experimental group listened to 1/f sound for 5 minutes and 33 seconds, while participants in the control group sat quietly for the same duration. EEG recordings were obtained at various points throughout the experiment. After the experiment, participants completed a questionnaire on the affective impact of the 1/f sound. The results indicated that the mental arithmetic task effectively induced a stress response measurable by EEG. Relative theta power at all electrode sites was significantly lower than baseline in both the control and experimental groups. Relative alpha power was significantly lower, and relative beta power was significantly higher, in the T3 and T4 areas. Secondly, 1/f sound and simple resting affected task-associated EEG changes in a similar manner. Finally, participants reported in the questionnaire that they experienced a positive feeling in response to the 1/f sound. Our results suggest that a commercialized 1/f sound product is not more effective than simple resting in alleviating the physiological stress response.
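    The record does not say how the commercial 1/f sound was generated. For readers who want to create comparable test stimuli, one common way to synthesize noise with an approximately 1/f power spectrum is to shape white noise in the frequency domain, as sketched below; the duration and sampling rate are arbitrary choices.

        import numpy as np

        fs = 44100
        n = fs * 10  # ten seconds of audio
        rng = np.random.default_rng(42)

        # Shape white Gaussian noise so that its power spectrum falls off as 1/f.
        spectrum = np.fft.rfft(rng.standard_normal(n))
        freqs = np.fft.rfftfreq(n, 1 / fs)
        scale = np.ones_like(freqs)
        scale[1:] = 1.0 / np.sqrt(freqs[1:])   # amplitude ~ 1/sqrt(f), i.e. power ~ 1/f
        pink = np.fft.irfft(spectrum * scale, n)
        pink /= np.max(np.abs(pink))

        print("generated", pink.size / fs, "seconds of approximately 1/f noise")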

  10. Propagation of Finite Amplitude Sound in Multiple Waveguide Modes.

    Science.gov (United States)

    van Doren, Thomas Walter

    1993-01-01

    This dissertation describes a theoretical and experimental investigation of the propagation of finite amplitude sound in multiple waveguide modes. Quasilinear analytical solutions of the full second order nonlinear wave equation, the Westervelt equation, and the KZK parabolic wave equation are obtained for the fundamental and second harmonic sound fields in a rectangular rigid-wall waveguide. It is shown that the Westervelt equation is an acceptable approximation of the full nonlinear wave equation for describing guided sound waves of finite amplitude. A system of first order equations based on both a modal and harmonic expansion of the Westervelt equation is developed for waveguides with locally reactive wall impedances. Fully nonlinear numerical solutions of the system of coupled equations are presented for waveguides formed by two parallel planes which are either both rigid, or one rigid and one pressure release. These numerical solutions are compared to finite -difference solutions of the KZK equation, and it is shown that solutions of the KZK equation are valid only at frequencies which are high compared to the cutoff frequencies of the most important modes of propagation (i.e., for which sound propagates at small grazing angles). Numerical solutions of both the Westervelt and KZK equations are compared to experiments performed in an air-filled, rigid-wall, rectangular waveguide. Solutions of the Westervelt equation are in good agreement with experiment for low source frequencies, at which sound propagates at large grazing angles, whereas solutions of the KZK equation are not valid for these cases. At higher frequencies, at which sound propagates at small grazing angles, agreement between numerical solutions of the Westervelt and KZK equations and experiment is only fair, because of problems in specifying the experimental source condition with sufficient accuracy.

  11. Cross-Modal Correspondence between Brightness and Chinese Speech Sound with Aspiration

    Directory of Open Access Journals (Sweden)

    Sachiko Hirata

    2011-10-01

    Phonetic symbolism is the phenomenon of speech sounds evoking images based on sensory experiences; it is often discussed together with cross-modal correspondence. Using Garner's task, Hirata, Kita, and Ukita (2009) showed a cross-modal congruence between brightness and voiced/voiceless consonants in Japanese speech sounds, which is known as phonetic symbolism. In the present study, we examined the effect of the meaning of mimetics (lexical words whose sound reflects their meaning, like "ding-dong") in the Japanese language on this cross-modal correspondence. We conducted an experiment on Chinese speech sounds with or without aspiration, with Chinese participants. Chinese vocabulary also contains mimetics, but the existence of aspiration does not relate to the meaning of Chinese mimetics. As a result, Chinese speech sounds with aspiration, which resemble voiceless consonants, were matched with white, whereas those without aspiration were matched with black. This result is identical to the pattern found with Japanese participants and consequently suggests that the cross-modal correspondence occurs without the effect of the meaning of mimetics. Whether these cross-modal correspondences are based purely on physical properties of the speech sound or are affected by phonetic properties remains a question for further study.

  12. Developmental change in children's sensitivity to sound symbolism.

    Science.gov (United States)

    Tzeng, Christina Y; Nygaard, Lynne C; Namy, Laura L

    2017-08-01

    The current study examined developmental change in children's sensitivity to sound symbolism. Three-, five-, and seven-year-old children heard sound symbolic novel words and foreign words meaning round and pointy and chose which of two pictures (one round and one pointy) best corresponded to each word they heard. Task performance varied as a function of both word type and age group such that accuracy was greater for novel words than for foreign words, and task performance increased with age for both word types. For novel words, children in all age groups reliably chose the correct corresponding picture. For foreign words, 3-year-olds showed chance performance, whereas 5- and 7-year-olds showed reliably above-chance performance. Results suggest increased sensitivity to sound symbolic cues with development and imply that although sensitivity to sound symbolism may be available early and facilitate children's word-referent mappings, sensitivity to subtler sound symbolic cues requires greater language experience.

  13. On Sound: Reconstructing a Zhuangzian Perspective of Music

    Directory of Open Access Journals (Sweden)

    So Jeong Park

    2015-12-01

    The devotion to music in Chinese classical texts is worth noticing. Early Chinese thinkers saw music as a significant part of human experience and a core practice for philosophy. While the Confucian endorsement of ritual and music has been discussed in the field, the Daoist understanding of music has hardly been explored. This paper makes a careful reading of the Xiánchí 咸池 music story in the Zhuangzi, one of the most interesting but least noticed texts, and reconstructs a Zhuangzian perspective from it. While sounds had been regarded as mere building blocks of music and thus depreciated in the hierarchical understanding of music in the mainstream discourse of early China, sound is the alpha and omega of music in the Zhuangzian perspective. All kinds of sounds, both human and natural, are invited into musical discourse. Sound is regarded as the real source of our being moved by music, and therefore musical consummation is depicted as embodiment through sound.

  14. Second sound scattering in superfluid helium

    International Nuclear Information System (INIS)

    Rosgen, T.

    1985-01-01

    Focusing cavities are used to study the scattering of second sound in liquid helium II. The special geometries reduce wall interference effects and allow measurements in very small test volumes. In a first experiment, a double elliptical cavity is used to focus a second sound wave onto a small wire target. A thin film bolometer measures the side-scattered wave component. The agreement with a theoretical estimate is reasonable, although some problems arise from the small measurement volume and associated alignment requirements. A second cavity is based on confocal parabolas, thus enabling the use of large planar sensors. A cylindrical heater again produces a focused second sound wave. Three sensors monitor the transmitted wave component as well as the side scatter in two different directions. The side-looking sensors have very high sensitivities due to their large size and resistance. Specially developed cryogenic amplifiers are used to match them to the signal cables. In one case, a second auxiliary heater is used to set up a strong counterflow in the focal region. The second sound wave then scatters from the induced fluid disturbances.

  15. Drama as a pedagogical tool for practicing death notification-experiences from Swedish medical students

    Directory of Open Access Journals (Sweden)

    Fjellman-Wiklund Anncristine

    2011-09-01

    Background: One of the toughest tasks in any profession is the deliverance of death notification. Marathon Death is an exercise conducted during the fourth year of medical school in northern Sweden to prepare students for this responsibility. The exercise is designed to enable students to gain insight into the emotional and formal procedure of delivering death notifications. The exercise is inspired by Augusto Boal's work around Forum Theatre and is analyzed using video playback. The aim of the study was to explore reflections, attitudes and ideas toward training in delivering death notifications among medical students who participate in the Marathon Death exercise based on forum play. Methods: After participation in the Marathon Death exercise, students completed semi-structured interviews. The transcribed interviews were analyzed using the principles of qualitative content analysis, including a deductive content analysis approach with a structured matrix based on Bloom's taxonomy domains. Results: The Marathon Death exercise was perceived as emotionally loaded, realistic and valuable for the future professional role as a physician. The deliverance of a death notification to the next of kin that a loved one has died was perceived as difficult. The exercise conjured emotions such as positive expectations and sheer anxiety. Students perceived participation in the exercise as an important learning experience, discovering that they had the capacity to manage such a difficult situation. The feedback from the video playback of the exercise and the feedback from fellow students and teachers enhanced the learning experience. Conclusions: The exercise, Marathon Death, based on forum play with video playback, is a useful pedagogical tool that enables students to practice delivering death notification. The ability to practice under realistic conditions helps prepare students for their future professional role.

  17. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can
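    The segmenter described above is a dynamic Bayesian network over acoustic features; that model is not reproduced here. The sketch below only illustrates the surrounding idea of frame-wise features whose abrupt changes mark event boundaries, using short-time energy on a synthetic recording and an arbitrary threshold.

        import numpy as np

        fs = 16000
        rng = np.random.default_rng(1)

        # Synthetic recording: quiet background noise with a louder tonal event
        # in the middle second.
        audio = 0.01 * rng.standard_normal(fs * 3)
        audio[fs:2 * fs] += 0.2 * np.sin(2 * np.pi * 800 * np.arange(fs) / fs)

        frame = int(0.05 * fs)  # 50 ms frames, as in the feature-extraction example
        energies = np.array([np.mean(audio[i:i + frame] ** 2)
                             for i in range(0, audio.size - frame, frame)])

        # Mark a boundary wherever the log-energy jumps by more than a threshold.
        log_e = np.log(energies + 1e-12)
        boundaries = np.where(np.abs(np.diff(log_e)) > 2.0)[0] + 1
        print("candidate event boundaries at frames:", boundaries)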

  18. A Sounding Rocket Experiment for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP)

    Science.gov (United States)

    Kubo, M.; Kano, R.; Kobayashi, K.; Bando, T.; Narukage, N.; Ishikawa, R.; Tsuneta, S.; Katsukawa, Y.; Ishikawa, S.; Suematsu, Y.; Hara, H.; Shimizu, T.; Sakao, T.; Ichimoto, K.; Goto, M.; Holloway, T.; Winebarger, A.; Cirtain, J.; De Pontieu, B.; Casini, R.; Auchère, F.; Trujillo Bueno, J.; Manso Sainz, R.; Belluzzi, L.; Asensio Ramos, A.; Štěpán, J.; Carlsson, M.

    2014-10-01

    A sounding-rocket experiment called the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is presently under development to measure the linear polarization profiles of the hydrogen Lyman-alpha (Lyα) line at 121.567 nm. CLASP is a vacuum-UV (VUV) spectropolarimeter aiming at the first detection of the linear polarization caused by scattering processes and the Hanle effect in the Lyα line with high accuracy (0.1%). This is a first step in the exploration of magnetic fields in the upper chromosphere and transition region of the Sun. Accurate measurements of the linear polarization signals caused by scattering processes and the Hanle effect in strong UV lines like Lyα are essential for exploring, with future solar telescopes, the strength and structure of the magnetic field in the upper chromosphere and transition region of the Sun. The CLASP proposal was accepted by NASA in 2012, and the flight is planned for 2015.

  19. The sounds of safety: stress and danger in music perception.

    Science.gov (United States)

    Schäfer, Thomas; Huron, David; Shanahan, Daniel; Sedlmeier, Peter

    2015-01-01

    As with any sensory input, music might be expected to incorporate the processing of information about the safety of the environment. Little research has been done on how such processing has evolved and how different kinds of sounds may affect the experience of certain environments. In this article, we investigate whether music, as a form of auditory information, can trigger the experience of safety. We hypothesized that (1) there should be an optimal, subjectively preferred degree of information density of musical sounds, at which safety-related information can be processed optimally; (2) any deviation from the optimum, that is, both higher and lower levels of information density, should elicit experiences of higher stress and danger; and (3) in general, sonic scenarios with music should reduce experiences of stress and danger more than other scenarios. In Experiment 1, the information density of short music-like rhythmic stimuli was manipulated via their tempo. In an initial session, listeners adjusted the tempo of the stimuli to what they deemed an appropriate tempo. In an ensuing session, the same listeners judged their experienced stress and danger in response to the same stimuli, as well as stimuli exhibiting tempo variants. Results are consistent with the existence of an optimum information density for a given rhythm; the preferred tempo decreased for increasingly complex rhythms. The hypothesis that any deviation from the optimum would lead to experiences of higher stress and danger was only partly supported by the data. In Experiment 2, listeners were asked to indicate their experience of stress and danger in response to different sonic scenarios: music, natural sounds, and silence. As expected, the music scenarios were associated with the lowest stress and danger, whereas both natural sounds and silence resulted in higher stress and danger. Overall, the results largely fit the hypothesis that music seemingly carries safety-related information about the environment.

  20. The Sound Quality of Cochlear Implants: Studies With Single-sided Deaf Patients.

    Science.gov (United States)

    Dorman, Michael F; Natale, Sarah Cook; Butts, Austin M; Zeitler, Daniel M; Carlson, Matthew L

    2017-09-01

    The goal of the present study was to assess the sound quality of a cochlear implant for single-sided deaf (SSD) patients fit with a cochlear implant (CI). One of the fundamental, unanswered questions in CI research is "what does an implant sound like?" Conventional CI patients must use the memory of a clean signal, often decades old, to judge the sound quality of their CIs. In contrast, SSD-CI patients can rate the similarity of a clean signal presented to the CI ear and candidate, CI-like signals presented to the ear with normal hearing. For Experiment 1, four types of stimuli were created for presentation to the normal-hearing ear: noise-vocoded signals; sine-vocoded signals; frequency-shifted, sine-vocoded signals; and band-pass filtered, natural speech signals. Listeners rated the similarity of these signals to unmodified signals sent to the CI on a scale of 0 to 10, with 10 being a complete match to the CI signal. For Experiment 2, multitrack signal mixing was used to create natural speech signals that varied along multiple dimensions. In Experiment 1, for eight adult SSD-CI listeners, the best median similarity rating to the sound of the CI for noise-vocoded signals was 1.9; for sine-vocoded signals, 2.9; for frequency-upshifted signals, 1.9; and for band-pass filtered signals, 5.5. In Experiment 2, for three young listeners, combinations of band-pass filtering and spectral smearing led to ratings of 10. The sound quality of noise and sine vocoders does not generally correspond to the sound quality of cochlear implants fit to SSD patients. Our preliminary conclusion is that natural speech signals that have been muffled to one degree or another by band-pass filtering and/or spectral smearing provide a close, but incomplete, match to CI sound quality for some patients.
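
    The noise- and sine-vocoded comparison signals described for Experiment 1 can be illustrated with a minimal channel vocoder. The sketch below (Python, NumPy/SciPy) is only a generic implementation; the channel count, filter order, band edges, and carrier choices are assumptions and not the parameters used in the study.

    ```python
    # Minimal channel-vocoder sketch (assumes fs well above 2 * f_hi).
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, carrier="noise"):
        """Split speech into log-spaced bands, extract each band's envelope,
        and re-impose it on a noise or sine carrier."""
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)
        out = np.zeros_like(x, dtype=float)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, x)
            env = np.abs(hilbert(band))                      # band envelope
            if carrier == "noise":
                carr = sosfiltfilt(sos, np.random.randn(len(x)))  # band-limited noise
            else:                                            # sine carrier at band centre
                fc = np.sqrt(lo * hi)
                carr = np.sin(2 * np.pi * fc * np.arange(len(x)) / fs)
            out += env * carr
        return out / np.max(np.abs(out))
    ```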

  1. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound...... that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology....

  2. Bending sound in graphene: Origin and manifestation

    Energy Technology Data Exchange (ETDEWEB)

    Adamyan, V.M., E-mail: vadamyan@onu.edu.ua [Department of Theoretical Physics, Odessa I.I. Mechnikov National University, 2 Dvoryanska St., Odessa 65026 (Ukraine); Bondarev, V.N., E-mail: bondvic@onu.edu.ua [Department of Theoretical Physics, Odessa I.I. Mechnikov National University, 2 Dvoryanska St., Odessa 65026 (Ukraine); Zavalniuk, V.V., E-mail: vzavalnyuk@onu.edu.ua [Department of Theoretical Physics, Odessa I.I. Mechnikov National University, 2 Dvoryanska St., Odessa 65026 (Ukraine); Department of Fundamental Sciences, Odessa Military Academy, 10 Fontanska Road, Odessa 65009 (Ukraine)

    2016-11-11

    Highlights: • The origin of the sound-like dispersion of the graphene bending mode is disclosed. • The speed of graphene bending sound is determined. • The renormalized graphene bending rigidity is derived. • The intrinsic corrugations of graphene are estimated. - Abstract: It is proved that the acoustic-type dispersion of the bending mode in graphene is generated by the fluctuation interaction between in-plane and out-of-plane terms in the free energy, which arises when non-linear components of the graphene strain tensor are taken into account. In doing so we use an original adiabatic approximation based on the assumed (and confirmed a posteriori) significant difference between the sound speeds of the in-plane and bending modes. An explicit expression for the bending sound speed, depending only on the graphene mass density, the in-plane elastic constants and temperature, is deduced, as well as the characteristics of the microscopic corrugations of graphene. The obtained results are in good quantitative agreement with data from real experiments and computer simulations.

  3. Bending sound in graphene: Origin and manifestation

    International Nuclear Information System (INIS)

    Adamyan, V.M.; Bondarev, V.N.; Zavalniuk, V.V.

    2016-01-01

    Highlights: • The origin of the sound-like dispersion of the graphene bending mode is disclosed. • The speed of graphene bending sound is determined. • The renormalized graphene bending rigidity is derived. • The intrinsic corrugations of graphene are estimated. - Abstract: It is proved that the acoustic-type dispersion of the bending mode in graphene is generated by the fluctuation interaction between in-plane and out-of-plane terms in the free energy, which arises when non-linear components of the graphene strain tensor are taken into account. In doing so we use an original adiabatic approximation based on the assumed (and confirmed a posteriori) significant difference between the sound speeds of the in-plane and bending modes. An explicit expression for the bending sound speed, depending only on the graphene mass density, the in-plane elastic constants and temperature, is deduced, as well as the characteristics of the microscopic corrugations of graphene. The obtained results are in good quantitative agreement with data from real experiments and computer simulations.

  4. Generation of sound zones in 2.5 dimensions

    DEFF Research Database (Denmark)

    Jacobsen, Finn; Olsen, Martin; Møller, Martin

    2011-01-01

    A method for generating sound zones with different acoustic properties in a room is presented. The method is an extension of the two-dimensional multi-zone sound field synthesis technique recently developed by Wu and Abhayapala; the goal is, for example, to generate a plane wave that propagates in a certain direction within a certain region of a room and at the same time suppress sound in another region. The method is examined through simulations and experiments. For comparison, a simpler method based on the idea of maximising the ratio of the potential acoustic energy in an ensonified zone to the potential acoustic energy in a quiet zone is also examined.
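
    The comparison method mentioned above, maximising the ratio of potential acoustic energy in the ensonified (bright) zone to that in the quiet (dark) zone, is commonly known as acoustic contrast control and reduces to a generalised eigenvalue problem. The sketch below is a generic illustration under that formulation; the transfer-function matrices and regularisation value are placeholders, not data from the paper.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def contrast_control_weights(G_bright, G_dark, reg=1e-6):
        """Loudspeaker weights maximising bright-zone / dark-zone energy ratio.
        G_bright, G_dark: complex transfer matrices (control mics x speakers)
        at a single frequency, measured or modelled."""
        A = G_bright.conj().T @ G_bright      # bright-zone energy matrix
        B = G_dark.conj().T @ G_dark          # dark-zone energy matrix
        B = B + reg * np.eye(B.shape[0])      # regularisation keeps B invertible
        # Generalised eigenproblem A q = lambda B q; the eigenvector of the
        # largest eigenvalue gives the highest energy contrast.
        w, V = eigh(A, B)
        return V[:, -1]
    ```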

  5. Sound spectrum of a pulsating optical discharge

    Energy Technology Data Exchange (ETDEWEB)

    Grachev, G N; Smirnov, A L; Tishchenko, V N [Institute of Laser Physics, Siberian Branch, Russian Academy of Sciences, Novosibirsk (Russian Federation); Dmitriev, A K; Miroshnichenko, I B [Novosibirsk State Technical University (Russian Federation)

    2016-02-28

    A spectrum of the sound of an optical discharge generated by repetitively pulsed (RP) laser radiation has been investigated. The parameters of the laser radiation are determined at which the spectrum of the sound may contain either many lines, or the main line at the pulse repetition rate and several weaker overtones, or a single line. The spectrum of the sound produced by trains of RP radiation comprises the line (and overtones) at the repetition rate of the train sequences and the line at the repetition rate of pulses within trains. A CO₂ laser with a pulse repetition rate of f ≈ 3 – 180 kHz and an average power of up to 2 W was used in the experiments. (optical discharges)

  6. Plaatsafhankelijkheid van timbre bij nagalm (Place dependence of timbre in reverberant sound fields)

    NARCIS (Netherlands)

    Plomp, R.; Steeneken, H.J.M.

    1973-01-01

    The sound-pressure level of a simple tone in a diffuse sound field varies from point to point with a theoretical standard deviation of 5.57 dB. This variability affects the timbre of complex tones in reverberant sound fields. Experiments have shown that the timbre dissimilarity at any two positions
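
    One way to arrive at the quoted 5.57 dB figure is the textbook diffuse-field model: the squared pressure of a pure tone is exponentially distributed over positions, so the standard deviation of its level follows directly from the variance of the log of an exponential variable. A short numerical check under that assumption:

    ```python
    import numpy as np

    # Analytical value: Var(ln X) = pi^2 / 6 for an exponential X, so
    # sigma_L = (10 / ln 10) * pi / sqrt(6)  ~ 5.57 dB
    sigma_analytic = (10 / np.log(10)) * np.pi / np.sqrt(6)

    # Monte Carlo check: draw |p|^2 values and compute the level spread
    rng = np.random.default_rng(0)
    p2 = rng.exponential(size=1_000_000)
    sigma_mc = np.std(10 * np.log10(p2))

    print(f"analytic: {sigma_analytic:.2f} dB, simulated: {sigma_mc:.2f} dB")
    ```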

  7. Referential calls coordinate multi-species mobbing in a forest bird community.

    Science.gov (United States)

    Suzuki, Toshitaka N

    2016-01-01

    Japanese great tits ( Parus minor ) use a sophisticated system of anti-predator communication when defending their offspring: they produce different mobbing calls for different nest predators (snake versus non-snake predators) and thereby convey this information to conspecifics (i.e. functionally referential call system). The present playback experiments revealed that these calls also serve to coordinate multi-species mobbing at nests; snake-specific mobbing calls attracted heterospecific individuals close to the sound source and elicited snake-searching behaviour, whereas non-snake mobbing calls attracted these birds at a distance. This study demonstrates for the first time that referential mobbing calls trigger different formations of multi-species mobbing parties.

  8. Air conducted and body conducted sound produced by own voice

    DEFF Research Database (Denmark)

    Hansen, Mie Østergaard

    1998-01-01

    When we speak, sound reaches our ears both through the air, from the mouth to the ear, and through our body, as vibrations. The ratio between the airborne and body-conducted sound has been studied in a pilot experiment where the airborne sound was eliminated by isolating the ear with a large...... attenuation box. The ratio was found to lie between -15 dB and -7 dB below 1 kHz, comparable with theoretical estimates. This work is part of a broader study of the occlusion effect, and the results provide important input data for modelling the sound pressure change between an open and an occluded ear canal....

  9. Sound Stories for General Music

    Science.gov (United States)

    Cardany, Audrey Berger

    2013-01-01

    Language and music literacy share a similar process of understanding that progresses from sensory experience to symbolic representation. The author identifies Bruner’s modes of understanding as they relate to using narrative in the music classroom to enhance music reading at iconic and symbolic levels. Two sound stories are included for…

  10. Sound for Health

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    From astronomy to biomedical sciences: music and sound as tools for scientific investigation. Music and science are probably two of the most intrinsically linked disciplines in the spectrum of human knowledge. Science and technology have revolutionised the way artists work, interact, and create. The impact of innovative materials, new communication media, more powerful computers, and faster networks on the creative process is evident: we can all become artists in the digital era. What is less known is that the arts, and music in particular, are having a profound impact on the way scientists operate and think. From the early experiments by Kepler to modern data sonification applications in medicine, sound and music are playing an increasingly crucial role in supporting science and driving innovation. In this talk, Dr. Domenico Vicinanza will highlight the complementarity and the natural synergy between music and science, with specific reference to biomedical sciences. Dr. Vicinanza will take t...

  11. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

    The method presented in this thesis combines ultrasound techniques with magnetic-resonance tomography (MRT). In absorbing media, an ultrasonic wave generates a static force in the direction of sound propagation. At sound intensities of a few W/cm² and sound frequencies in the lower MHz range, this force leads to a tissue displacement in the micrometer range. The displacement depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence from Siemens Healthcare AG was modified so that it (indirectly) measures the tissue displacement, encodes it as grey values, and presents it as a 2D image. From the grey values, the course of the sound beam in the tissue can be visualized, so that sound obstacles (changes of the sound impedance) can additionally be detected. From the acquired MRT images, spatial changes of the tissue parameters sound absorption and elasticity can be detected. In this thesis, measurements are presented which show the feasibility and future potential of this method, especially for breast cancer diagnostics. [de

  12. Effect of pile-driving sounds on the survival of larval fish

    NARCIS (Netherlands)

    Bolle, L.J.; Jong, C.A.F. de; Bierman, S.M.; Beek, P.J.G. van van; Wessels, P.W.; Blom, E.; Damme, C.J.G. van; Winter, H.V.; Dekeling, R.P.A.

    2016-01-01

    Concern exists about the potential effects of pile-driving sounds on fish, but evidence is limited, especially for fish larvae. A device was developed to expose larvae to accurately reproduced pile-driving sounds. Controlled exposure experiments were carried out to examine the lethal effects in

  13. The Contribution of Sound Intensity in Vocal Emotion Perception: Behavioral and Electrophysiological Evidence

    Science.gov (United States)

    Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang

    2012-01-01

    Although its role in the acoustic profile of vocal emotion is frequently stressed, sound intensity is often treated merely as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgments in Experiment 2. It was found that sound intensity modification had a significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and smaller theta band synchronization. These findings suggest that although it cannot categorically affect the emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and that its unique role needs to be specified in vocal emotion studies. PMID:22291928

  14. The contribution of sound intensity in vocal emotion perception: behavioral and electrophysiological evidence.

    Directory of Open Access Journals (Sweden)

    Xuhai Chen

    Full Text Available Although its role in the acoustic profile of vocal emotion is frequently stressed, sound intensity is often treated merely as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgments in Experiment 2. It was found that sound intensity modification had a significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and smaller theta band synchronization. These findings suggest that although it cannot categorically affect the emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and that its unique role needs to be specified in vocal emotion studies.

  15. Assessment and improvement of sound quality in cochlear implant users.

    Science.gov (United States)

    Caldwell, Meredith T; Jiam, Nicole T; Limb, Charles J

    2017-06-01

    Cochlear implants (CIs) have successfully provided speech perception to individuals with sensorineural hearing loss. Recent research has focused on more challenging acoustic stimuli such as music and voice emotion. The purpose of this review is to evaluate and describe sound quality in CI users, summarizing novel findings and crucial information about how CI users experience complex sounds. Here we review the existing literature on PubMed and Scopus to present what is known about perceptual sound quality in CI users, discuss existing measures of sound quality, explore how sound quality may be effectively studied, and examine potential strategies for improving sound quality in the CI population. Sound quality, defined here as the perceived richness of an auditory stimulus, is an attribute of implant-mediated listening that remains poorly studied. Sound quality is distinct from appraisal, which is generally defined as the subjective likability or pleasantness of a sound. Existing studies suggest that sound quality perception in the CI population is limited by a range of factors, most notably pitch distortion and dynamic range compression. Although there are currently very few objective measures of sound quality, the CI-MUSHRA has been used as a means of evaluating sound quality. There exist a number of promising strategies to improve sound quality perception in the CI population, including apical cochlear stimulation, pitch tuning, and noise reduction processing strategies. In the published literature, sound quality perception is severely limited among CI users. Future research should focus on developing systematic, objective, and quantitative sound quality metrics and designing therapies to mitigate poor sound quality perception in CI users.

  16. Framing sound: Using expectations to reduce environmental noise annoyance.

    Science.gov (United States)

    Crichton, Fiona; Dodd, George; Schmid, Gian; Petrie, Keith J

    2015-10-01

    Annoyance reactions to environmental noise, such as wind turbine sound, have public health implications given associations between annoyance and symptoms related to psychological distress. In the case of wind farms, factors contributing to noise annoyance have been theorised to include wind turbine sound characteristics, the noise sensitivity of residents, and contextual aspects, such as receiving information creating negative expectations about sound exposure. The experimental aim was to assess whether receiving positive or negative expectations about wind farm sound would differentially influence annoyance reactions during exposure to wind farm sound, and also influence associations between perceived noise sensitivity and noise annoyance. Sixty volunteers were randomly assigned to receive either negative or positive expectations about wind farm sound. Participants in the negative expectation group viewed a presentation which incorporated internet material indicating that exposure to wind turbine sound, particularly infrasound, might present a health risk. Positive expectation participants viewed a DVD which framed wind farm sound positively and included internet information about the health benefits of infrasound exposure. Participants were then simultaneously exposed to sub-audible infrasound and audible wind farm sound during two 7 min exposure sessions, during which they assessed their experience of annoyance. Positive expectation participants were significantly less annoyed than negative expectation participants, while noise sensitivity only predicted annoyance in the negative group. Findings suggest accessing negative information about sound is likely to trigger annoyance, particularly in noise sensitive people and, importantly, portraying sound positively may reduce annoyance reactions, even in noise sensitive individuals. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Heart Sound Biometric System Based on Marginal Spectrum Analysis

    Science.gov (United States)

    Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin

    2013-01-01

    This work presents a heart sound biometric system based on marginal spectrum analysis, which is a new feature extraction technique for identification purposes. The heart sound identification system comprises signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of optimal values for the system parameters were conducted. The results indicate that the new spectrum coefficients yield a recognition rate of 94.40%, a significant increase over the traditional Fourier spectrum (84.32%), based on a database of 280 heart sounds from 40 participants. PMID:23429515
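
    The marginal spectrum used as the feature here comes from the Hilbert-Huang framework: the signal is decomposed into intrinsic mode functions (IMFs), each IMF is Hilbert-transformed to obtain instantaneous amplitude and frequency, and the Hilbert spectrum is integrated over time. A rough sketch of that second step, assuming an empirical mode decomposition (EMD) implementation is available from an external package; the bin count and normalisation are illustrative, not the paper's settings.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    # The IMFs are assumed to come from an external EMD implementation
    # (e.g. a package such as PyEMD); they are passed in as a list of arrays.

    def marginal_spectrum(imfs, fs, n_bins=128):
        """Accumulate instantaneous amplitude over time into frequency bins:
        h(f) = sum over t of the Hilbert spectrum H(f, t)."""
        edges = np.linspace(0, fs / 2, n_bins + 1)
        spectrum = np.zeros(n_bins)
        for imf in imfs:
            analytic = hilbert(imf)
            amp = np.abs(analytic)
            phase = np.unwrap(np.angle(analytic))
            inst_freq = np.diff(phase) * fs / (2 * np.pi)       # Hz, length N-1
            idx = np.clip(np.digitize(inst_freq, edges) - 1, 0, n_bins - 1)
            np.add.at(spectrum, idx, amp[:-1])                  # amplitude per bin
        return spectrum / spectrum.sum()                        # normalised feature vector
    ```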

  18. Serial recall of rhythms and verbal sequences: Impacts of concurrent tasks and irrelevant sound.

    Science.gov (United States)

    Hall, Debbora; Gathercole, Susan E

    2011-08-01

    Rhythmic grouping enhances verbal serial recall, yet very little is known about memory for rhythmic patterns. The aim of this study was to compare the cognitive processes supporting memory for rhythmic and verbal sequences using a range of concurrent tasks and irrelevant sounds. In Experiment 1, both concurrent articulation and paced finger tapping during presentation and during a retention interval impaired rhythm recall, while letter recall was only impaired by concurrent articulation. In Experiments 2 and 3, irrelevant sound consisted of irrelevant speech or tones, changing-state or steady-state sound, and syncopated or paced sound during presentation and during a retention interval. Irrelevant speech was more damaging to rhythm and letter recall than was irrelevant tone sound, but there was no effect of changing state on rhythm recall, while letter recall accuracy was disrupted by changing-state sound. Pacing of sound did not consistently affect either rhythm or letter recall. There are similarities in the way speech and rhythms are processed that appear to extend beyond reliance on temporal coding mechanisms involved in serial-order recall.

  19. Environmental Sound Recognition Using Time-Frequency Intersection Patterns

    Directory of Open Access Journals (Sweden)

    Xuan Guo

    2012-01-01

    Full Text Available Environmental sound recognition is an important function of robots and intelligent computer systems. In this research, we use a multistage perceptron neural network system for environmental sound recognition. The input data are a combination of the time-variance pattern of the instantaneous power and the frequency-variance pattern given by the instantaneous spectrum at the power peak, referred to as a time-frequency intersection pattern. The spectra of many environmental sounds change more slowly than those of speech or voice, so the intersectional time-frequency pattern preserves the major features of environmental sounds with drastically reduced data requirements. Two experiments were conducted using an original database and an open database created by the RWCP project. The recognition rate for 20 kinds of environmental sounds was 92%. The recognition rate of the new method was about 12% higher than that of methods using only an instantaneous spectrum. The results are also comparable with HMM-based methods, although those methods need to treat the time variance of an input vector series with more complicated computations.
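
    A sketch of how such a time-frequency intersection feature might be assembled: a coarse time profile of the instantaneous power plus the spectrum of the frame at the power peak, concatenated into one input vector for the perceptron stage. The frame size and vector lengths below are illustrative assumptions, not the values used in the paper.

    ```python
    import numpy as np
    from scipy.signal import stft

    def tf_intersection_pattern(x, fs, n_time=32, n_freq=64):
        """Concatenate a downsampled instantaneous-power profile with the
        magnitude spectrum of the frame where the power peaks."""
        f, t, Z = stft(x, fs=fs, nperseg=512)
        power = np.sum(np.abs(Z) ** 2, axis=0)                  # power per frame
        # time-variance pattern: resample the power profile to n_time points
        time_pattern = np.interp(np.linspace(0, len(power) - 1, n_time),
                                 np.arange(len(power)), power)
        # frequency-variance pattern: spectrum at the power peak, resampled
        peak_spec = np.abs(Z[:, np.argmax(power)])
        freq_pattern = np.interp(np.linspace(0, len(peak_spec) - 1, n_freq),
                                 np.arange(len(peak_spec)), peak_spec)
        feat = np.concatenate([time_pattern / time_pattern.max(),
                               freq_pattern / freq_pattern.max()])
        return feat   # input vector for the multistage perceptron classifier
    ```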

  20. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly estimate the sound absorption and insulation properties of laminated structures and handy ...

  1. Avatar Weight Estimates based on Footstep Sounds in Three Presentation Formats

    DEFF Research Database (Denmark)

    Sikström, Erik; Götzen, Amalia De; Serafin, Stefania

    2015-01-01

    When evaluating a sound design for a virtual environment, the context in which it is to be implemented may influence how the design is perceived. In this paper we perform an experiment comparing three presentation formats (audio only, video with audio and an interactive immersive VR format......) and their influence on a sound design evaluation task concerning footstep sounds. The evaluation involved estimating the perceived weight of a virtual avatar seen from a first person perspective, as well as the suitability of the sound effect relative to the context. The results show significant differences for three...

  2. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    ... sound sources associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment.

  3. Analysis of Respiratory Sounds: State of the Art

    Directory of Open Access Journals (Sweden)

    Sandra Reichert

    2008-01-01

    Full Text Available Objective This paper describes the state of the art, scientific publications and ongoing research related to methods of analysis of respiratory sounds. Methods and material Review of the current medical and technological literature using Pubmed, and personal experience. Results The study includes a description of the various techniques that are being used to collect auscultation sounds and a physical description of known pathologic sounds for which automatic detection tools were developed. Modern tools are based on artificial intelligence and on techniques such as artificial neural networks, fuzzy systems, and genetic algorithms… Conclusion The next step will consist of finding new markers so as to increase the efficiency of decision aid algorithms and tools.

  4. Cultivating a Troubled Consciousness: Compulsory sound-mindedness and complicity in oppression

    Directory of Open Access Journals (Sweden)

    C. Chapman

    2013-11-01

    Full Text Available Implicating oneself in oppression provokes uncertainty, shame and anxiety, and identity destabilizations. Yet anti-oppressive texts often denigrate these experiences, participating in forces I call “compulsory sound-mindedness.” Narratives of three women confronting their complicity illustrate the workings of compulsory sound-mindedness: a white Canadian recognizing the racism in her development work and both a white woman and a racialized Muslim reflecting on their complicity in ongoing Canadian colonization. The three narratives devalue affect, uncertainty, and destabilized identity. They also reveal these denigrated experiences as fundamental to personal-is-political ethical transformation. Compulsory sound-mindedness cannot consistently prevent people from journeying with pain, uncertainty, and coming undone. But when people undertake such journeys, compulsory sound-mindedness frames pain, identity destabilization, and uncertainty as regrettable and without value. I advocate that people cultivate a “troubled consciousness” by journeying with internalized accountability narratives, uncertainties, painful feelings, and destabilizations of a straightforwardly moral self.

  5. Pervasive Sound Sensing: A Weakly Supervised Training Approach.

    Science.gov (United States)

    Kelly, Daniel; Caulfield, Brian

    2016-01-01

    Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and time consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework, to identify a target sound, and a bag trimming algorithm, which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one and results show that classifiers, trained using the automatically segmented training sets, were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving an average F-measure of 0.969 and 0.87 for two weakly supervised datasets.
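
    A compact sketch of the diverse-density idea underlying this framework: each weakly labeled clip is treated as a bag of frame-level feature vectors, and the target-sound concept is the point in feature space close to at least one instance in every positive bag and far from all instances in negative bags. The noisy-or formulation and the instance-based search below follow the standard diverse-density recipe; the feature extraction and scale parameter are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def diverse_density(point, pos_bags, neg_bags, scale=1.0):
        """Noisy-or diverse density of a candidate concept point."""
        def hit(bag):
            d2 = np.sum((bag - point) ** 2, axis=1) / scale ** 2
            return 1.0 - np.prod(1.0 - np.exp(-d2))   # P(concept | bag is positive)
        dd = 1.0
        for bag in pos_bags:
            dd *= hit(bag)
        for bag in neg_bags:
            dd *= 1.0 - hit(bag)
        return dd

    def find_target_concept(pos_bags, neg_bags, scale=1.0):
        """Common shortcut: evaluate DD at every instance of every positive bag
        and keep the best one as the target-sound prototype."""
        candidates = np.vstack(pos_bags)
        scores = [diverse_density(c, pos_bags, neg_bags, scale) for c in candidates]
        return candidates[int(np.argmax(scores))]
    ```

    The prototype found this way could then be used to trim each weakly labeled clip down to the frames nearest the concept point before training a conventional classifier.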

  6. Consort 1 sounding rocket flight

    Science.gov (United States)

    Wessling, Francis C.; Maybee, George W.

    1989-01-01

    This paper describes a payload of six experiments developed for a 7-min microgravity flight aboard a sounding rocket Consort 1, in order to investigate the effects of low gravity on certain material processes. The experiments in question were designed to test the effect of microgravity on the demixing of aqueous polymer two-phase systems, the electrodeposition process, the production of elastomer-modified epoxy resins, the foam formation process and the characteristics of foam, the material dispersion, and metal sintering. The apparatuses designed for these experiments are examined, and the rocket-payload integration and operations are discussed.

  7. Multichannel sound reinforcement systems at work in a learning environment

    Science.gov (United States)

    Malek, John; Campbell, Colin

    2003-04-01

    Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.

  8. Spatial resolution limits for the localization of noise sources using direct sound mapping

    DEFF Research Database (Denmark)

    Comesana, D. Fernandez; Holland, K. R.; Fernandez Grande, Efren

    2016-01-01

    One of the main challenges arising from noise and vibration problems is how to identify the areas of a device, machine or structure that produce significant acoustic excitation, i.e. the localization of the main noise sources. The direct visualization of sound, in particular sound intensity, has extensively been used for many years to locate sound sources. However, it is not yet well defined when two sources should be regarded as resolved by means of direct sound mapping. This paper derives the limits of the direct representation of sound pressure, particle velocity and sound intensity by exploring the relationship between spatial resolution, noise level and geometry. The proposed expressions are validated via simulations and experiments. It is shown that particle velocity mapping yields better results for identifying closely spaced sound sources than sound pressure or sound intensity, especially...

  9. Cortical representations of communication sounds.

    Science.gov (United States)

    Heiser, Marc A; Cheung, Steven W

    2008-10-01

    This review summarizes recent research into cortical processing of vocalizations in animals and humans. There has been a resurgent interest in this topic accompanied by an increased number of studies using animal models with complex vocalizations and new methods in human brain imaging. Recent results from such studies are discussed. Experiments have begun to reveal the bilateral cortical fields involved in communication sound processing and the transformations of neural representations that occur among those fields. Advances have also been made in understanding the neuronal basis of interaction between developmental exposures and behavioral experiences with vocalization perception. Exposure to sounds during the developmental period produces large effects on brain responses, as do a variety of specific trained tasks in adults. Studies have also uncovered a neural link between the motor production of vocalizations and the representation of vocalizations in cortex. Parallel experiments in humans and animals are answering important questions about vocalization processing in the central nervous system. This dual approach promises to reveal microscopic, mesoscopic, and macroscopic principles of large-scale dynamic interactions between brain regions that underlie the complex phenomenon of vocalization perception. Such advances will yield a greater understanding of the causes, consequences, and treatment of disorders related to speech processing.

  10. Research on the application of active sound barriers for the transformer noise abatement

    Directory of Open Access Journals (Sweden)

    Hu Sheng

    2016-01-01

    Full Text Available Sound barriers are the type of measure most commonly used in the noise abatement of transformers. In substation noise abatement projects, the design of sound barriers is constrained by the portal frames used to hold up outgoing lines from the main transformers, which limits the noise reduction effect. If active sound barriers are utilized in these places, the noise diffraction of sound barriers can be effectively reduced. At a 110 kV substation, an experiment using a 15-channel active sound barrier has been carried out. The result of the experiment shows that the mean noise reduction value (MNRV) at the noise measuring points at the substation boundary is 1.5 dB(A). The effect of the active noise control system is influenced by the layout of the system, the acoustic environment on site and the spectral characteristics of the target area.
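
    The abstract does not say which control algorithm the 15-channel system uses; a common choice for active noise control in such setups is the filtered-x LMS (FxLMS) adaptive algorithm. A minimal single-channel sketch under that assumption follows; the secondary-path model, filter length, and step size are placeholders.

    ```python
    import numpy as np

    def fxlms(x, d, s_hat, n_taps=128, mu=1e-3):
        """Single-channel FxLMS sketch.
        x: reference signal (correlated with the transformer noise),
        d: disturbance at the error microphone,
        s_hat: FIR model of the secondary path (loudspeaker -> error mic)."""
        w = np.zeros(n_taps)                  # adaptive control filter
        x_buf = np.zeros(n_taps)              # reference history for the controller
        fx_buf = np.zeros(n_taps)             # filtered-reference history for the update
        y_buf = np.zeros(len(s_hat))          # anti-noise history through the secondary path
        xs_buf = np.zeros(len(s_hat))         # reference history filtered by s_hat
        e = np.zeros(len(x))
        for n in range(len(x)):
            x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
            y = w @ x_buf                                   # anti-noise output sample
            y_buf = np.roll(y_buf, 1); y_buf[0] = y
            e[n] = d[n] + s_hat @ y_buf                     # residual at the error mic
            xs_buf = np.roll(xs_buf, 1); xs_buf[0] = x[n]
            fx_buf = np.roll(fx_buf, 1); fx_buf[0] = s_hat @ xs_buf
            w -= mu * e[n] * fx_buf                         # filtered-x LMS update
        return e
    ```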

  11. Influence of visual stimuli on the sound quality evaluation of loudspeaker systems

    DEFF Research Database (Denmark)

    Karandreas, Theodoros-Alexandros; Christensen, Flemming

    Product sound quality evaluation aims to identify relevant attributes and assess their influence on the overall auditory impression. Extending this sound specific rationale, the present study evaluates overall impression in relation to hearing and vision, specifically for loudspeakers. In order...... to quantify the bias that the image of a loudspeaker has on the sound quality evaluation of a naive listening panel, loudspeaker sounds of varied degradation are coupled with positively or negatively biasing visual input of actual loudspeakers, and in a separate experiment by pictures of the same loudspeakers....

  12. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption indicates the transformation of sound energy into heat. It is, for instance, employed to design the acoustics in rooms. The noise emitted by machinery and plants shall be reduced before arriving at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components at the walls with well-defined absorption characteristics, which are adjusted for corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise intensive environments into the neighbourhood.

  13. Design and performance of an experiment for the investigation of open capillary channel flows. Sounding rocket experiment TEXUS-41

    Energy Technology Data Exchange (ETDEWEB)

    Rosendahl, Uwe; Dreyer, Michael E. [University of Bremen, Center of Applied Space Technology and Microgravity (ZARM), Bremen (Germany)]

    2007-05-15

    In this paper we report on the set-up and the performance of an experiment for the investigation of flow-rate limitations in open capillary channels under low-gravity conditions (microgravity). The channels consist of two parallel plates bounded by free liquid surfaces along the open sides. In the case of steady flow the capillary pressure of the free surface balances the differential pressure between the liquid and the surrounding constant-pressure gas phase. A maximum flow rate is achieved when the adjusted volumetric flow rate exceeds a certain limit leading to a collapse of the free surfaces. The flow is convective (inertia) dominated, since the viscous forces are negligibly small compared to the convective forces. In order to investigate this type of flow an experiment aboard the sounding rocket TEXUS-41 was performed. The aim of the investigation was to achieve the profiles of the free liquid surfaces and to determine the maximum flow rate of the steady flow. For this purpose a new approach to the critical flow condition by enlarging the channel length was applied. The paper is focussed on the technical details of the experiment and gives a review of the set-up, the preparation of the flight procedures and the performance. Additionally the typical appearance of the flow indicated by the surface profiles is presented as a basis for a separate continuative discussion of the experimental results. (orig.)

  14. Validating a perceptual distraction model in a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict a user's perceived distraction caused by audio-on-audio interference, e.g., two competing audio sources within the same listening space. Originally, the distraction model was trained with music-on-music stimuli...... using a simple loudspeaker setup, consisting of only two loudspeakers, one for the target sound source and the other for the interfering sound source. Recently, the model was successfully validated in a complex personal sound-zone system with speech-on-music stimuli. A second round of validations was...... conducted by physically altering the sound-zone system and running a set of new listening experiments utilizing two sound zones within the sound-zone system, thus validating the model with a different sound-zone system using both speech-on-music and music-on-speech stimuli sets. Preliminary results show...

  15. Applying cybernetic technology to diagnose human pulmonary sounds.

    Science.gov (United States)

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) are greater than 120 Hz and the human ear is not sensitive to low frequencies, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and a data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, and the PS signals were decomposed into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vectors of a neural network. We proposed a two-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy by using a haploid neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To expand traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds; various PS waveforms, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
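
    A sketch of the feature-extraction stage described above: a discrete wavelet decomposition into frequency subbands followed by simple per-subband statistics. The PyWavelets package is assumed; the wavelet family, decomposition depth, and the particular 17 features of the paper are not reproduced, so the statistics below are illustrative only.

    ```python
    import numpy as np
    import pywt

    def wavelet_features(x, wavelet="db4", level=5):
        """Decompose a lung-sound segment into subbands and summarise each
        subband with a few statistics, yielding an input vector for the
        BP / LVQ classifier stages."""
        coeffs = pywt.wavedec(x, wavelet, level=level)   # [cA_n, cD_n, ..., cD_1]
        energies = [np.sum(c ** 2) for c in coeffs]
        total = sum(energies)
        feats = []
        for c, e in zip(coeffs, energies):
            feats += [np.mean(np.abs(c)),    # average magnitude of the subband
                      np.std(c),             # spread of the subband
                      e / total]             # relative subband energy
        return np.asarray(feats)
    ```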

  16. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  17. Mobbing call experiment suggests the enhancement of forest bird movement by tree cover in urban landscapes across seasons

    Directory of Open Access Journals (Sweden)

    Atsushi Shimazaki

    2017-06-01

    Full Text Available Local scale movement behavior is an important basis to predict large-scale bird movements in heterogeneous landscapes. Here we conducted playback experiments using mobbing calls to estimate the probability that forest birds would cross a 50-m urban area during three seasons (breeding, dispersal, and wintering seasons with varying amounts of tree cover, building area, and electric wire density. We examined the responses of four forest resident species: Marsh Tit (Poecile palustris, Varied Tit (Sittiparus varius, Japanese Tit (P. minor, and Eurasian Nuthatch (Sitta europaea in central Hokkaido, northern Japan. We carried out and analyzed 250 playback experiments that attracted 618 individuals. Our results showed that tree cover increased the crossing probability of three species other than Varied Tit. Building area and electric wire density had no detectable effect on crossing probability for four species. Seasonal difference in the crossing probability was found only for Varied Tit, and the probability was the highest in the breeding season. These results suggest that the positive effect of tree cover on the crossing probability would be consistent across seasons. We therefore conclude that planting trees would be an effective way to promote forest bird movement within an urban landscape.

  18. A Neural Network Model for Prediction of Sound Quality

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An artificial neural network structure has been specified, implemented and optimized for the purpose of predicting the perceived sound quality for normal-hearing and hearing-impaired subjects. The network was implemented by means of commercially available software and optimized to predict results...... obtained in subjective sound quality rating experiments based on input data from an auditory model. Various types of input data and data representations from the auditory model were used as input data for the chosen network structure, which was a three-layer perceptron. This network was trained by means...... the physical signal parameters and the subjectively perceived sound quality. No simple objective-subjective relationship was evident from this analysis....
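
    A minimal sketch of the kind of network described above: a three-layer perceptron mapping auditory-model output features to subjective sound-quality ratings. scikit-learn is used here for brevity, and the data arrays are random placeholders; the feature set, hidden-layer size, and training procedure of the original work are not reproduced.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # X: auditory-model features per stimulus (placeholder data),
    # y: mean subjective quality ratings from listening experiments (placeholder).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    y = rng.uniform(0, 10, size=200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(10,),   # one hidden layer -> "three-layer" net
                       max_iter=5000, random_state=0)
    net.fit(X_tr, y_tr)
    print("R^2 on held-out stimuli:", net.score(X_te, y_te))
    ```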

  19. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of the pumps are sampled from the detected acoustic signals, and the presence or absence of an abnormality is judged from the magnitude of these synchronized components. The synchronized-component sampling means can remove resonance sounds and other acoustic sounds that are not generated precisely synchronously with the rotation, based on the knowledge that the acoustic components generated in the normal state are a kind of resonance sound and are not precisely synchronized with the rotation speed. Abnormal sounds of a rotating body, on the other hand, are often driven by forces that accompany the rotation, so they can be detected by extracting only the rotation-synchronized components. Since the normal acoustic components currently being generated are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing is avoided and, as a result, the abnormal sound detection sensitivity can be improved. Further, since the occurrence of an abnormal sound is judged from the actually detected sounds, other frequency components that are predicted but not actually generated are not removed, which is further effective for improving detection sensitivity. (N.H.)
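
    One way to realise such rotation-synchronized sampling is time-synchronous averaging: the microphone signal is cut into segments of exactly one shaft revolution (from a tacho pulse or the known rotation rate) and averaged, so that everything not phase-locked to the rotation averages towards zero. The sketch below is only an illustration under that assumption; the rotation rate, segment count, and thresholding are placeholders.

    ```python
    import numpy as np

    def synchronous_average(x, fs, rot_hz, n_revs=200):
        """Average the signal over whole revolutions so that only components
        synchronized with the pump rotation (the fundamental and its harmonics)
        remain; non-synchronous resonance sounds average towards zero."""
        samples_per_rev = int(round(fs / rot_hz))
        n_revs = min(n_revs, len(x) // samples_per_rev)
        segs = x[:n_revs * samples_per_rev].reshape(n_revs, samples_per_rev)
        sync = segs.mean(axis=0)                  # rotation-synchronized waveform
        level = np.sqrt(np.mean(sync ** 2))       # magnitude used for the abnormality judgement
        return sync, level
    ```

    An abnormality flag could then be raised whenever `level` exceeds a baseline established during normal operation.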

  20. Three integrated photovoltaic/sound barrier power plants. Construction and operational experience; Drei integrierte PV-Schallschutz Versuchsfelder. Bau und Erprobung

    Energy Technology Data Exchange (ETDEWEB)

    Nordmann, T.; Froelich, A.; Clavadetscher, L.

    2002-07-01

    After an international ideas competition run by TNC Switzerland and Germany in 1996, six companies were given the opportunity to construct a prototype of their newly developed integrated PV sound-barrier concepts. The main goal was to develop highly integrated concepts allowing a reduction of PV sound-barrier system costs, as well as the demonstration of specific concepts for different noise situations. This project is closely linked with a German project; three of the concepts from the competition were demonstrated along a highway near Munich, constructed in 1997. The three Swiss installations had to be constructed at different locations, reflecting three typical situations for sound barriers. The first Swiss installation was the world's first bi-facial PV sound barrier. It was built on a highway bridge at Wallisellen-Aubrugg in 1997. The operational experience with the installation is positive, but due to the different efficiencies of the two cell sides, its specific yield lies somewhat behind that of a conventional PV installation. The second Swiss plant was finished in autumn 1998. The 'zig-zag' construction is situated along the railway line at Wallisellen in a densely inhabited area with some local shadowing. Its performance and specific yield are comparatively low due to a combination of several reasons (geometry of the concept, inverter, high module temperature, local shadows). The third installation was constructed along the motorway A1 at Bruettisellen in 1999. Its vertical panels are equipped with amorphous modules. The report shows that the performance of the system is reasonable, but the mechanical construction has to be improved. A small trial field with cells directly laminated onto the steel panel, also installed at Bruettisellen, could be the key development for this concept. This final report includes the evaluation and comparison of the monitored data from the past 24 months of operation. (author)

  1. Conditioning Influences Audio-Visual Integration by Increasing Sound Saliency

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    2011-10-01

    Full Text Available We investigated the effect of prior conditioning of an auditory stimulus on audiovisual integration in a series of four psychophysical experiments. The experiments factorially manipulated the conditioning procedure (picture vs monetary conditioning) and multisensory paradigm (2AFC visual detection vs redundant target paradigm). In the conditioning sessions, subjects were presented with three pure tones (= conditioned stimulus, CS) that were paired with neutral, positive, or negative unconditioned stimuli (US; monetary: +50 euro cents, –50 cents, 0 cents; pictures: highly pleasant, unpleasant, and neutral IAPS). In a 2AFC visual selective attention paradigm, detection of near-threshold Gabors was improved by concurrent sounds that had previously been paired with a positive (monetary) or negative (picture) outcome relative to neutral sounds. In the redundant target paradigm, sounds previously paired with positive (monetary) or negative (picture) outcomes increased response speed to both auditory and audiovisual targets similarly. Importantly, prior conditioning did not increase the multisensory response facilitation (i.e., (A + V)/2 – AV) or the race model violation. Collectively, our results suggest that prior conditioning primarily increases the saliency of the auditory stimulus per se rather than influencing audiovisual integration directly. In turn, conditioned sounds are rendered more potent for increasing response accuracy or speed in detection of visual targets.

  2. Interactive Sound Propagation using Precomputation and Statistical Approximations

    Science.gov (United States)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance, and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at different ends of a space of interactive sound propagation techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.

  3. 10 Hz Amplitude Modulated Sounds Induce Short-Term Tinnitus Suppression

    Directory of Open Access Journals (Sweden)

    Patrick Neff

    2017-05-01

    Full Text Available Objectives: Acoustic stimulation or sound therapy is proposed as a main treatment option for chronic subjective tinnitus. To further probe the field of acoustic stimulation for tinnitus therapy, this exploratory study compared 10 Hz amplitude modulated (AM) sounds (two pure tones, noise, music), frequency modulated (FM) sounds, and unmodulated sounds (pure tone, noise) regarding their temporary suppression of tinnitus loudness. First, it was hypothesized that modulated sounds elicit larger temporary loudness suppression (residual inhibition) than unmodulated sounds. Second, with manipulation of the loudness and duration of the modulated sounds, weaker or stronger effects of loudness suppression were expected, respectively. Methods: We recruited 29 participants with chronic tonal tinnitus from the multidisciplinary Tinnitus Clinic of the University of Regensburg. Participants underwent audiometric, psychometric and tinnitus pitch matching assessments followed by an acoustic stimulation experiment with a tinnitus loudness growth paradigm. In a first block participants were stimulated with all of the sounds for 3 min each and rated their subjective tinnitus loudness relative to the pre-stimulus loudness every 30 s after stimulus offset. The same procedure was deployed in the second block with the pure tone AM stimuli matched to the tinnitus frequency, manipulated in length (6 min) and loudness (reduced by 30 dB, with linear fade out). Repeated measures mixed model analyses of variance (ANOVA) were calculated to assess differences in loudness growth between the stimuli for each block separately. Results: First, we found that all sounds elicit a short-term suppression of tinnitus loudness (seconds to minutes) with the strongest suppression right after stimulus offset [F(6, 1331) = 3.74, p < 0.01]. Second, similar to previous findings, we found that AM sounds near the tinnitus frequency produce significantly stronger tinnitus loudness suppression than noise [vs. Pink
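
    Generating such a 10 Hz amplitude modulated stimulus is straightforward; the sketch below builds a pure-tone carrier at a matched tinnitus frequency with full-depth 10 Hz sinusoidal modulation and a linear fade-out. The carrier frequency, duration, fade length, and modulation depth are illustrative placeholders, not the study's exact parameters.

    ```python
    import numpy as np

    def am_stimulus(fs=44100, dur_s=180.0, carrier_hz=6000.0,
                    mod_hz=10.0, mod_depth=1.0, fade_s=5.0):
        """10 Hz amplitude-modulated pure tone with a linear fade-out."""
        t = np.arange(int(fs * dur_s)) / fs
        carrier = np.sin(2 * np.pi * carrier_hz * t)
        modulator = 1.0 + mod_depth * np.sin(2 * np.pi * mod_hz * t)
        x = carrier * modulator / (1.0 + mod_depth)      # keep peak amplitude <= 1
        n_fade = int(fs * fade_s)
        x[-n_fade:] *= np.linspace(1.0, 0.0, n_fade)     # linear fade-out
        return x
    ```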

  4. The sound of arousal in music is context-dependent.

    Science.gov (United States)

    Blumstein, Daniel T; Bryant, Gregory A; Kaye, Peter

    2012-10-23

    Humans, and many non-human animals, produce and respond to harsh, unpredictable, nonlinear sounds when alarmed, possibly because these are produced when acoustic production systems (vocal cords and syrinxes) are overblown in stressful, dangerous situations. Humans can simulate nonlinearities in music and soundtracks through the use of technological manipulations. Recent work found that film soundtracks from different genres differentially contain such sounds. We designed two experiments to determine specifically how simulated nonlinearities in soundtracks influence perceptions of arousal and valence. Subjects were presented with emotionally neutral musical exemplars that had neither noise nor abrupt frequency transitions, or versions of these musical exemplars that had noise or abrupt frequency upshifts or downshifts experimentally added. In a second experiment, these acoustic exemplars were paired with benign videos. Judgements of both arousal and valence were altered by the addition of these simulated nonlinearities in the first, music-only, experiment. In the second, multi-modal, experiment, valence (but not arousal) decreased with the addition of noise or frequency downshifts. Thus, the presence of a video image suppressed the ability of simulated nonlinearities to modify arousal. This is the first study examining how nonlinear simulations in music affect emotional judgements. These results demonstrate that the perception of potentially fearful or arousing sounds is influenced by the perceptual context and that the addition of a visual modality can antagonistically suppress the response to an acoustic stimulus.

  5. Design and Calibration Tests of an Active Sound Intensity Probe

    Directory of Open Access Journals (Sweden)

    Thomas Kletschkowski

    2008-01-01

    Full Text Available The paper presents an active sound intensity probe that can be used for sound source localization in standing wave fields. The probe consists of a sound hard tube that is terminated by a loudspeaker and an integrated pair of microphones. The microphones are used to decompose the standing wave field inside the tube into its incident and reflected part. The latter is cancelled by an adaptive controller that calculates proper driving signals for the loudspeaker. If the open end of the actively controlled tube is placed close to a vibrating surface, the radiated sound intensity can be determined by measuring the cross spectral density between the two microphones. A one-dimensional free field can be realized effectively, as first experiments performed on a simplified test bed have shown. Further tests proved that a prototype of the novel sound intensity probe can be calibrated.
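
    The intensity measurement mentioned at the end relies on the classical two-microphone (p-p) principle: for a microphone pair spaced dx apart, the active sound intensity can be estimated from the imaginary part of the cross-spectral density between the two pressure signals, I(f) ~ Im{G12(f)} / (rho0 * 2*pi*f * dx), up to a sign set by the cross-spectrum convention and microphone ordering. A short SciPy sketch of that estimator; the sampling rate, spacing, and averaging parameters are illustrative.

    ```python
    import numpy as np
    from scipy.signal import csd

    def pp_intensity(p1, p2, fs, dx=0.012, rho0=1.204):
        """Active sound intensity spectrum from a two-microphone probe.
        Sign follows one common convention; swap microphones to flip it."""
        f, G12 = csd(p1, p2, fs=fs, nperseg=4096)
        f, G12 = f[1:], G12[1:]                      # drop DC to avoid divide-by-zero
        intensity = -np.imag(G12) / (rho0 * 2 * np.pi * f * dx)
        return f, intensity
    ```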

  6. Classification of Real and Imagined Sounds in Early Visual Cortex

    Directory of Open Access Journals (Sweden)

    Petra Vetter

    2011-10-01

    Full Text Available Early visual cortex has been thought to be mainly involved in the detection of low-level visual features. Here we show that complex natural sounds can be decoded from early visual cortex activity, in the absence of visual stimulation, both when sounds are actually presented and when they are merely imagined. Blindfolded subjects listened to three complex natural sounds (bird singing, people talking, traffic noise; Exp. 1) or received word cues (“forest”, “people”, “traffic”; Exp. 2) to imagine the associated scene. fMRI BOLD activation patterns from retinotopically defined early visual areas were fed into a multivariate pattern classification algorithm (a linear support vector machine). Actual sounds were discriminated above chance in V2 and V3, and imagined sounds were decoded in V1. Cross-classification, i.e., training the classifier on real sounds and testing it on imagined sounds and vice versa, was also successful. Two further experiments showed that an orthogonal working memory task does not interfere with sound classification in early visual cortex (Exp. 3); however, an orthogonal visuo-spatial imagery task does (Exp. 4). These results demonstrate that early visual cortex activity contains content-specific information from hearing and from imagery, challenging the view of a strict modality-specific function of early visual cortex.
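
    A minimal sketch of the cross-classification step, assuming scikit-learn and placeholder arrays (X_real, y_real from the real-sound runs; X_imag, y_imag from the imagery runs); these names and the classifier settings are not taken from the paper.

    from sklearn.svm import LinearSVC

    def cross_classify(X_real, y_real, X_imag, y_imag):
        """Train on real-sound voxel patterns, test on imagined ones, and vice versa."""
        clf = LinearSVC(C=1.0, max_iter=10000)
        real_to_imag = clf.fit(X_real, y_real).score(X_imag, y_imag)
        imag_to_real = clf.fit(X_imag, y_imag).score(X_real, y_real)
        return real_to_imag, imag_to_real  # accuracies; chance level is 1/3 for three sounds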

  7. Sound transmission in porcine thorax through airway insonification.

    Science.gov (United States)

    Peng, Ying; Dai, Zoujun; Mansy, Hansen A; Henry, Brian M; Sandler, Richard H; Balk, Robert A; Royston, Thomas J

    2016-04-01

    Many pulmonary injuries and pathologies may lead to structural and functional changes in the lungs resulting in measurable sound transmission changes on the chest surface. Additionally, noninvasive imaging of externally driven mechanical wave motion in the chest (e.g., using magnetic resonance elastography) can provide information about lung structural property changes and, hence, may be of diagnostic value. In the present study, a comprehensive computational simulation (in silico) model was developed to simulate sound wave propagation in the airways, lung, and chest wall under normal and pneumothorax conditions. Experiments were carried out to validate the model. Here, sound waves with frequency content from 50 to 700 Hz were introduced into airways of five porcine subjects via an endotracheal tube, and transmitted waves were measured by scanning laser Doppler vibrometry at the chest wall surface. The computational model predictions of decreased sound transmission with pneumothorax were consistent with experimental measurements. The in silico model can also be used to visualize wave propagation inside and on the chest wall surface for other pulmonary pathologies, which may help in developing and interpreting diagnostic procedures that utilize sound and vibration.

  8. Sound transmission in porcine thorax through airway insonification

    Science.gov (United States)

    Dai, Zoujun; Mansy, Hansen A.; Henry, Brian M.; Sandler, Richard H.; Balk, Robert A.; Royston, Thomas J.

    2015-01-01

    Many pulmonary injuries and pathologies may lead to structural and functional changes in the lungs resulting in measurable sound transmission changes on the chest surface. Additionally, noninvasive imaging of externally driven mechanical wave motion in the chest (e.g., using magnetic resonance elastography) can provide information about lung structural property changes and, hence, may be of diagnostic value. In the present study, a comprehensive computational simulation (in silico) model was developed to simulate sound wave propagation in the airways, lung, and chest wall under normal and pneumothorax conditions. Experiments were carried out to validate the model. Here, sound waves with frequency content from 50 to 700 Hz were introduced into airways of five porcine subjects via an endotracheal tube, and transmitted waves were measured by scanning laser Doppler vibrometry at the chest wall surface. The computational model predictions of decreased sound transmission with pneumothorax were consistent with experimental measurements. The in silico model can also be used to visualize wave propagation inside and on the chest wall surface for other pulmonary pathologies, which may help in developing and interpreting diagnostic procedures that utilize sound and vibration. PMID:26280512

  9. Temporal Organization of Sound Information in Auditory Memory

    Directory of Open Access Journals (Sweden)

    Kun Song

    2017-06-01

    Full Text Available Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporal reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation.

  10. Temporal Organization of Sound Information in Auditory Memory.

    Science.gov (United States)

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporal reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation.
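
    The local time-reversal manipulation described above can be sketched in a few lines of Python; the chunking scheme (non-overlapping rectangular windows) is an assumption, as the exact windowing used in the study may differ.

    import numpy as np

    def locally_reverse(x, fs, scale_ms):
        """Reverse the waveform inside consecutive chunks of length scale_ms,
        keeping the order of the chunks themselves intact."""
        x = np.asarray(x, dtype=float)
        n = max(1, int(round(fs * scale_ms / 1000.0)))
        y = x.copy()
        for start in range(0, len(x), n):
            y[start:start + n] = x[start:start + n][::-1]
        return y

    fs = 16000
    noise = np.random.randn(fs)               # 1 s of white noise
    probe = locally_reverse(noise, fs, 200)   # reversal at the 200 ms scale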

  11. A review of research progress in air-to-water sound transmission

    International Nuclear Information System (INIS)

    Peng Zhao-Hui; Zhang Ling-Shan

    2016-01-01

    International and domestic research progress in theory and experiment and applications of the air-to-water sound transmission are presented in this paper. Four classical numerical methods of calculating the underwater sound field generated by an airborne source, i.e., the ray theory, the wave solution, the normal-mode theory and the wavenumber integration approach, are introduced. Effects of two special conditions, i.e., the moving airborne source or medium and the rough air-water interface, on the air-to-water sound transmission are reviewed. In experimental studies, the depth and range distributions of the underwater sound field created by different kinds of airborne sources in near-field and far-field, the longitudinal horizontal correlation of underwater sound field and application methods for inverse problems are reviewed. (special topic)

  12. Surface Deformation by Thermo-capillary Convection -Sounding Rocket COMPERE Experiment SOURCE

    Science.gov (United States)

    Fuhrmann, Eckart; Dreyer, Michael E.

    The sounding rocket COMPERE experiment SOURCE was successfully flown on MASER 11, launched in Kiruna (ESRANGE), May 15th, 2008. SOURCE has been intended to partly fulfill the scientific objectives of the European Space Agency (ESA) Microgravity Applications Program (MAP) project AO-2004-111 (Convective boiling and condensation). Three parties of principal investigators have been involved to design the experiment set-up: ZARM for thermo-capillary flows, IMFT (Toulouse, France) for boiling studies, EADS Astrium (Bremen, Germany) for depressurization. The scientific aims are to study the effect of wall heat flux on the contact line of the free liquid surface and to obtain a correlation for a convective heat transfer coefficient. The experiment has been conducted along a predefined time line. A preheating sequence at ground was the first operation to achieve a well defined temperature evolution within the test cell and its environment inside the rocket. Nearly one minute after launch, the pressurized test cell was filled with the test liquid HFE-7000 until a certain fill level was reached. Then the free surface could be observed for 120 s without distortion. Afterwards, the first depressurization was started to induce subcooled boiling, the second one to start saturated boiling. The data from the flight consist of video images and temperature measurements in the liquid, the solid, and the gaseous phase. Data analysis provides the surface shape versus time and the corresponding apparent contact angle. Computational analysis provides information for the determination of the heat transfer coefficient in a compensated gravity environment where a flow is caused by the temperature difference between the hot wall and the cold liquid. Correlations for the effective contact angle and the heat transfer coefficient shall be delivered as a function of the relevant dimensionless parameters. The data will be used for benchmarking of commercial CFD codes and the tank design

  13. Heat Transfer by Thermo-Capillary Convection. Sounding Rocket COMPERE Experiment SOURCE

    Science.gov (United States)

    Fuhrmann, Eckart; Dreyer, Michael

    2009-08-01

    This paper describes the results of a sounding rocket experiment which was partly dedicated to study the heat transfer from a hot wall to a cold liquid with a free surface. Natural or buoyancy-driven convection does not occur in the compensated gravity environment of a ballistic phase. Thermo-capillary convection driven by a temperature gradient along the free surface always occurs if a non-condensable gas is present. This convection increases the heat transfer compared to a pure conductive case. Heat transfer correlations are needed to predict temperature distributions in the tanks of cryogenic upper stages. Future upper stages of the European Ariane V rocket have mission scenarios with multiple ballistic phases. The aims of this paper and of the COMPERE group (French-German research group on propellant behavior in rocket tanks) in general are to provide basic knowledge, correlations and computer models to predict the thermo-fluid behavior of cryogenic propellants for future mission scenarios. Temperature and surface location data from the flight have been compared with numerical calculations to get the heat flux from the wall to the liquid. Since the heat flux measurements along the walls of the transparent test cell were not possible, the analysis of the heat transfer coefficient relies therefore on the numerical modeling which was validated with the flight data. The coincidence between experiment and simulation is fairly good and allows presenting the data in form of a Nusselt number which depends on a characteristic Reynolds number and the Prandtl number. The results are useful for further benchmarking of Computational Fluid Dynamics (CFD) codes such as FLOW-3D and FLUENT, and for the design of future upper stage propellant tanks.

  14. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released in June...

  15. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  16. Tinnitus retraining therapy for patients with tinnitus and decreased sound tolerance.

    Science.gov (United States)

    Jastreboff, Pawel J; Jastreboff, Margaret M

    2003-04-01

    Our experience has revealed the following: (1) TRT is applicable for all types of tinnitus, as well as for decreased sound tolerance, with significant improvement of tinnitus occurring in over 80% of the cases, and at least equal success rate for decreased sound tolerance. (2) TRT can provide cure for decreased sound tolerance. (3) TRT does not require frequent clinic visits and has no side effects; however, (4) Special training of health providers involved in this treatment is required for this treatment to be effective.

  17. A Comparison of Wavetable and FM Data Reduction Methods for Resynthesis of Musical Sounds

    Science.gov (United States)

    Horner, Andrew

    An ideal music-synthesis technique provides both high-level spectral control and efficient computation. Simple playback of recorded samples lacks spectral control, while additive sine-wave synthesis is inefficient. Wavetable and frequency-modulation synthesis, however, are two popular synthesis techniques that are very efficient and use only a few control parameters.
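
    As an illustration of why FM synthesis is considered efficient, a basic two-operator FM tone needs only a carrier frequency, a modulator frequency, and a modulation index. The values below are arbitrary examples, not parameters for any particular instrument.

    import numpy as np

    def fm_tone(fc=440.0, fm=220.0, index=3.0, dur_s=1.0, fs=44100):
        """y(t) = sin(2*pi*fc*t + index*sin(2*pi*fm*t)); three control parameters."""
        t = np.arange(int(dur_s * fs)) / fs
        return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))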

  18. Soundscapes and Larval Settlement: Larval Bivalve Responses to Habitat-Associated Underwater Sounds.

    Science.gov (United States)

    Eggleston, David B; Lillis, Ashlee; Bohnenstiehl, DelWayne R

    2016-01-01

    We quantified the effects of habitat-associated sounds on the settlement response of two species of bivalves with contrasting habitat preferences: (1) Crassostrea virginica (oyster), which prefers to settle on other oysters, and (2) Mercenaria mercenaria (clam), which settles on unstructured habitats. Oyster larval settlement in the laboratory was significantly higher when exposed to oyster reef sound compared with either off-reef or no-sound treatments. Clam larval settlement did not vary according to sound treatments. Similar to laboratory results, field experiments showed that oyster larval settlement in "larval housings" suspended above oyster reefs was significantly higher compared with off-reef sites.

  19. Sound absorption of a new oblique-section acoustic metamaterial with nested resonator

    Science.gov (United States)

    Gao, Nansha; Hou, Hong; Zhang, Yanni; Wu, Jiu Hui

    2018-02-01

    This study designs and investigates high-efficiency sound absorption of new oblique-section nested resonators. Impedance tube experiment results show that different combinations of oblique-section nest resonators have tunable low-frequency bandwidth characteristics. The sound absorption mechanism is due to air friction losses in the slotted region and the sample structure resonance. The acousto-electric analogy model demonstrates that the sound absorption peak and bandwidth can be modulated over an even wider frequency range by changing the geometric size and combinations of structures. The proposed structure can be easily fabricated and used in low-frequency sound absorption applications.

  20. DC Electric Field measurement in the Mid-latitude Ionosphere during MSTID by S-520-27 Sounding Rocket Experiments

    Science.gov (United States)

    Ishisaka, K.; Yamamoto, M.; Yokoyama, T.; Tanaka, M.; Abe, T.; Kumamoto, A.

    2015-12-01

    In the middle latitude ionospheric F region, mainly in summer, wave structures of electron density with wavelengths of 100-200 km and periods of one hour are observed. This phenomenon is called Medium Scale Traveling Ionospheric Disturbance (MSTID). MSTID has been observed by GPS receiving networks, and its characteristics have been studied. In the past, MSTID was thought to be generated by the Perkins instability, but the growth rate of that instability is too small to be effective, far smaller than what is observed. Recently, coupling processes between the ionospheric E and F regions have been studied by using two radars and by computer simulations. Through these studies, we now have the hypothesis that MSTID is generated by the combination of E-F region coupling and the Perkins instability. The S-520-27 sounding rocket experiment on the E and F layers was planned in order to verify this hypothesis. The S-520-27 sounding rocket was launched at 23:57 JST on 20th July, 2013 from the JAXA Uchinoura Space Center and reached a height of 316 km. The S-520-27 payload was equipped with an Electric Field Detector (EFD) with two sets of orthogonal double probes to measure the DC electric field in the spin plane of the payload. The electrodes of the two double-probe antennas were used to gather the potentials, which were detected with a high-impedance pre-amplifier using the floating (unbiased) double-probe technique. As a result of the EFD measurements, the natural DC electric field was about +/-5 mV/m, and its direction varied from southeast to east. The electric field was then mapped to the horizontal plane at 280 km height along the geomagnetic field line. In this presentation, we show the detailed results of the DC electric field measurement by the S-520-27 sounding rocket and discuss the correlation between the natural electric field and the TEC variation obtained from GPS-TEC.

  1. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  2. Broadband sound blocking in phononic crystals with rotationally symmetric inclusions.

    Science.gov (United States)

    Lee, Joong Seok; Yoo, Sungmin; Ahn, Young Kwan; Kim, Yoon Young

    2015-09-01

    This paper investigates the feasibility of broadband sound blocking with rotationally symmetric extensible inclusions introduced in phononic crystals. By varying the size of four equally shaped inclusions gradually, the phononic crystal experiences remarkable changes in its band-stop properties, such as shifting/widening of multiple Bragg bandgaps and evolution to resonance gaps. Necessary extensions of the inclusions to block sound effectively can be determined for given incident frequencies by evaluating power transmission characteristics. By arraying finite dissimilar unit cells, the resulting phononic crystal exhibits broadband sound blocking from combinational effects of multiple Bragg scattering and local resonances even with small-numbered cells.

  3. Airspace: Antarctic Sound Transmission

    OpenAIRE

    Polli, Andrea

    2009-01-01

    This paper investigates how sound transmission can contribute to the public understanding of climate change within the context of the Poles. How have such transmission-based projects developed specifically in the Arctic and Antarctic, and how do these works create alternative pathways in order to help audiences better understand climate change? The author has created the media project Sonic Antarctica from a personal experience of the Antarctic. The work combines soundscape recordings and son...

  4. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Full Text Available Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  5. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

    This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situated-ness of sound. Existing discourses on “spatial sound” privile...

  6. Evaluating Environmental Sounds from a Presence Perspective for Virtual Reality Applications

    Directory of Open Access Journals (Sweden)

    Nordahl Rolf

    2010-01-01

    Full Text Available We propose a methodology to design and evaluate environmental sounds for virtual environments. We propose to combine physically modeled sound events with recorded soundscapes. Physical models are used to provide feedback to users' actions, while soundscapes reproduce the characteristic soundmarks of an environment. In this particular case, physical models are used to simulate the act of walking in the botanical garden of the city of Prague, while soundscapes are used to reproduce the particular sound of the garden. The auditory feedback designed was combined with a photorealistic reproduction of the same garden. A between-subject experiment was conducted, where 126 subjects participated, involving six different experimental conditions, including both uni- and bimodal stimuli (auditory and visual). The auditory stimuli consisted of several combinations of auditory feedback, including static sound sources as well as self-induced interactive sounds simulated using physical models. Results show that subjects' motion in the environment is significantly enhanced when dynamic sound sources and sound of egomotion are rendered in the environment.

  7. Active control of radiated sound power from a baffled, rectangular panel

    DEFF Research Database (Denmark)

    Mørkholt, Jakob

    1996-01-01

    Active control of radiated sound power from a rectangular baffled panel by minimisation of an accurate power estimate, using piezoceramic actuators, has been investigated. Computer simulations have shown that minimising a power estimate obtained by discretised integration of the far field intensity with an array of eleven microphones in front of the panel is very close to minimising the actual radiated sound power. Practical experiments where such an array estimate has been minimised using the filtered-X LMS algorithm have shown that substantial reductions of radiated sound power can be obtained over a broad frequency range using few piezoceramic actuators, provided that an accurate estimate of the sound power is available for minimisation.
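
    The filtered-X LMS update used in the experiments can be sketched as below. This is a generic single-channel illustration, not the multichannel panel controller from the paper: the error signal is taken as given rather than generated through the actuator, and all lengths and step sizes are assumptions.

    import numpy as np

    def fxlms(reference, error, sec_path_est, n_taps=64, mu=1e-4):
        """One pass of the FxLMS update w <- w - mu * e[n] * x_filtered[n]."""
        w = np.zeros(n_taps)
        taps = np.zeros(n_taps)
        xf = np.convolve(reference, sec_path_est)[:len(reference)]  # reference filtered by S-hat
        for n in range(len(reference)):
            taps = np.roll(taps, 1)
            taps[0] = xf[n]
            w -= mu * error[n] * taps
        return w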

  8. Source Separation of Heartbeat Sounds for Effective E-Auscultation

    Science.gov (United States)

    Geethu, R. S.; Krishnakumar, M.; Pramod, K. V.; George, Sudhish N.

    2016-03-01

    This paper proposes a cost-effective solution for improving the effectiveness of e-auscultation. Auscultation is the most difficult skill for a doctor, since it can be acquired only through experience. The heart sound mixtures are captured by placing four sensors at appropriate auscultation areas on the body. These sound mixtures are separated into their relevant components by a statistical method, independent component analysis. The separated heartbeat sounds can be further processed or stored for future reference. This idea can be used to build a low-cost, easy-to-use portable instrument that will be beneficial to people who live in remote areas and are unable to take advantage of advanced diagnostic methods.
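
    A minimal sketch of the separation step with scikit-learn's FastICA, assuming a four-channel recording arranged as an (n_samples, 4) array; the array name and ICA settings are placeholders rather than the authors' implementation.

    from sklearn.decomposition import FastICA

    def separate_heart_sounds(mixtures):
        """Estimate independent source signals and the mixing matrix from 4 chest sensors."""
        ica = FastICA(n_components=4, random_state=0)
        sources = ica.fit_transform(mixtures)   # columns: estimated source signals
        return sources, ica.mixing_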

  9. Sound field separation with cross measurement surfaces.

    Directory of Open Access Journals (Sweden)

    Jin Mao

    Full Text Available With conventional near-field acoustical holography, it is impossible to identify sound pressure when the coherent sound sources are located on the same side of the array. This paper proposes a solution, using cross measurement surfaces to separate the sources based on the equivalent source method. Each equivalent source surface is built in the center of the corresponding original source with a spherical surface. According to the different transfer matrices between equivalent sources and points on holographic surfaces, the weighting of each equivalent source from coherent sources can be obtained. Numerical and experimental studies have been performed to test the method. For the sound pressure including noise after separation in the experiment, the calculation accuracy can be improved by reconstructing the pressure with Tikhonov regularization and the L-curve method. On the whole, a single source can be effectively separated from coherent sources using cross measurement.
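
    The regularised reconstruction mentioned at the end can be sketched as a Tikhonov-regularised least-squares solve for the equivalent source strengths. The matrix and vector names are placeholders, and the regularisation parameter is fixed by hand here, whereas the paper selects it from the corner of the L-curve.

    import numpy as np

    def tikhonov_sources(G, p, lam):
        """Solve min ||G q - p||^2 + lam ||q||^2 for equivalent source strengths q,
        given transfer matrix G (measurement points x sources) and measured pressures p."""
        GhG = G.conj().T @ G
        return np.linalg.solve(GhG + lam * np.eye(G.shape[1]), G.conj().T @ p)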

  10. Heart sound segmentation of pediatric auscultations using wavelet analysis.

    Science.gov (United States)

    Castro, Ana; Vinhoza, Tiago T V; Mattos, Sandra S; Coimbra, Miguel T

    2013-01-01

    Auscultation is widely applied in clinical activity; nonetheless, sound interpretation is dependent on clinician training and experience. Heart sound features such as spatial loudness, relative amplitude, murmurs, and localization of each component may be indicative of pathology. In this study we propose a segmentation algorithm to extract heart sound components (S1 and S2) based on their time and frequency characteristics. This algorithm takes advantage of the knowledge of the heart cycle times (systolic and diastolic periods) and of the spectral characteristics of each component, through wavelet analysis. Data collected in a clinical environment, and annotated by a clinician, were used to assess the algorithm's performance. Heart sound components were correctly identified in 99.5% of the annotated events. S1 and S2 detection rates were 90.9% and 93.3% respectively. The median difference between annotated and detected events was 33.9 ms.
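
    A rough sketch of wavelet-based S1/S2 candidate detection in Python (PyWavelets and SciPy): keep a low-frequency approximation of the phonocardiogram, form an energy envelope, and pick peaks separated by a physiologically plausible gap. The wavelet family, decomposition level, and thresholds are illustrative choices, not the parameters reported in the paper.

    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    def candidate_heart_events(pcg, fs, wavelet="db6", level=5, min_gap_s=0.2):
        coeffs = pywt.wavedec(pcg, wavelet, level=level)
        coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]  # keep approximation only
        smooth = pywt.waverec(coeffs, wavelet)[:len(pcg)]
        envelope = smooth ** 2
        peaks, _ = find_peaks(envelope, height=0.2 * envelope.max(),
                              distance=int(min_gap_s * fs))
        return peaks / fs  # candidate S1/S2 event times in seconds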

  11. Development of an Amplifier for Electronic Stethoscope System and Heart Sound Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. J.; Kang, D. K. [Chongju University, Chongju (Korea)]

    2001-05-01

    The conventional stethoscope cannot store its stethoscopic sounds. Therefore a doctor diagnoses a patient with the instantaneous stethoscopic sounds at that time and cannot recall the state of the patient's stethoscopic sounds at the next examination. This prevents accurate and objective diagnosis. If an electronic stethoscope that can store the stethoscopic sound is developed, auscultation will be greatly improved. This study describes an amplifier for an electronic stethoscope system that can extract heart sounds of a fetus as well as an adult and allows us to hear and record the sounds. Using the developed stethoscopic amplifier, clean heart sounds of fetus and adult can be heard in noisy environments, such as a consultation room of a university hospital or a laboratory of a university. Surprisingly, the heart sound of a 22-week fetus was heard through the developed electronic stethoscope. Pitch detection experiments using the detected heart sounds showed that the signal exhibits distinct periodicity. It can be expected that the developed electronic stethoscope can substitute for conventional stethoscopes and that, if a proper analysis method for the stethoscopic signal is developed, a good electronic stethoscope system can be produced. (author). 17 refs., 6 figs.

  12. A Smart Audio on Demand Application on Android Systems

    Directory of Open Access Journals (Sweden)

    Ing-Jr Ding

    2015-05-01

    Full Text Available This paper describes a study of the realization of intelligent Audio on Demand (AOD) processing in an embedded system environment. This study describes the development of innovative Android software that will enhance the user experience of the increasingly popular smart mobile devices now available on the market. The application we developed can accumulate records of the songs that are played and automatically analyze the favorite song types of a user. The application also provides sound-controlled playback functions to make operation more convenient. A large number of different types of music genre were collected to create a sound database and build an intelligent AOD processing mechanism. Formant analysis was used to extract voice features, and the K-means clustering method and the acoustic modeling technology of the Gaussian mixture model (GMM) were used to study and develop the application mechanism. The processes we developed run smoothly on the embedded Android platform.
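
    A minimal sketch of the GMM-based modelling idea using scikit-learn: fit one Gaussian mixture per song type on frame-level acoustic features and score new material against each model. The feature layout, dictionary names, and mixture sizes are assumptions for illustration only.

    from sklearn.mixture import GaussianMixture

    def train_genre_models(features_by_genre, n_components=8):
        """features_by_genre maps a genre name to an (n_frames, n_dims) feature array."""
        return {genre: GaussianMixture(n_components=n_components,
                                       covariance_type="diag",
                                       random_state=0).fit(X)
                for genre, X in features_by_genre.items()}

    def most_likely_genre(models, X_new):
        """Return the genre whose model gives the highest average log-likelihood."""
        return max(models, key=lambda g: models[g].score(X_new))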

  13. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    Science.gov (United States)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous data types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.

  14. Emotional sounds modulate early neural processing of emotional pictures

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2013-10-01

    Full Text Available In our natural environment, emotional information is conveyed by converging visual and auditory information; multimodal integration is of utmost importance. In the laboratory, however, emotion researchers have mostly focused on the examination of unimodal stimuli. Few existing studies on multimodal emotion processing have focused on human communication such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials these pictures were paired with pleasant, unpleasant, and neutral sounds. Sound presentation started 500 ms before picture onset and each stimulus presentation lasted for 2 s. EEG was recorded from 64 channels and ERP analyses focused on the picture onset. In addition, valence and arousal ratings were obtained. Previous findings for the neural processing of emotional pictures were replicated. Specifically, unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP), independent of the accompanying sound valence. For audiovisual stimulation, increased parietal P100 and P200 were found in response to all pictures which were accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced parietal P100 and P200 compared to pairings with congruent sounds. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and, therefore, provide an avenue by which multimodal experience may enhance perception.

  15. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the every-day understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communi-cations, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  16. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  17. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    properties can be modified by sound absorption, refraction, and interference from multi paths caused by reflections. The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms to influence...... communication sounds for airborne acoustics and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals.

  18. Design of Wearable Breathing Sound Monitoring System for Real-Time Wheeze Detection

    Directory of Open Access Journals (Sweden)

    Shih-Hong Li

    2017-01-01

    Full Text Available In the clinic, the wheezing sound is usually considered an indicator symptom reflecting the degree of airway obstruction. Auscultation is the most common way to diagnose wheezing sounds, but it depends subjectively on the experience of the physician. Several previous studies attempted to extract the features of breathing sounds to detect wheezing sounds automatically. However, there is still a lack of suitable monitoring systems for real-time wheeze detection in daily life. In this study, a wearable and wireless breathing sound monitoring system for real-time wheeze detection was proposed. Moreover, a breathing sound analysis algorithm was designed to continuously extract and analyze the features of breathing sounds and provide objective, quantitative information about breathing sounds to professional physicians. Here, normalized spectral integration (NSI) was also designed and applied in wheeze detection. The proposed algorithm requires only short-term data of breathing sounds and low computational complexity to perform real-time wheeze detection, and is suitable for implementation in a commercial portable device with relatively low computing power and memory. The experimental results show that the proposed system provides accurate wheeze detection and might be a useful assisting tool for the analysis of breathing sounds in clinical diagnosis.
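
    The published NSI definition is not reproduced here; the sketch below only illustrates the general idea of integrating normalised spectral power frame by frame, using SciPy's STFT. The analysis band and any decision threshold are assumptions.

    import numpy as np
    from scipy.signal import stft

    def band_power_ratio(breath, fs, band=(400.0, 800.0)):
        """Per-frame ratio of power inside a wheeze-related band to total frame power."""
        f, t, Z = stft(breath, fs=fs, nperseg=1024)
        power = np.abs(Z) ** 2
        in_band = (f >= band[0]) & (f <= band[1])
        ratio = power[in_band].sum(axis=0) / (power.sum(axis=0) + 1e-12)
        return t, ratio  # frames with a persistently high ratio are wheeze candidates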

  19. Sounding rockets explore the ionosphere

    International Nuclear Information System (INIS)

    Mendillo, M.

    1990-01-01

    It is suggested that small, expendable, solid-fuel rockets used to explore ionospheric plasma can offer insight into all the processes and complexities common to space plasma. NASA's sounding rocket program for ionospheric research focuses on the flight of instruments to measure parameters governing the natural state of the ionosphere. Parameters include input functions, such as photons, particles, and composition of the neutral atmosphere; resultant structures, such as electron and ion densities, temperatures and drifts; and emerging signals such as photons and electric and magnetic fields. Systematic study of the aurora is also conducted by these rockets, allowing sampling at relatively high spatial and temporal rates as well as investigation of parameters, such as energetic particle fluxes, not accessible to ground based systems. Recent active experiments in the ionosphere are discussed, and future sounding rocket missions are cited

  20. A data-assimilative ocean forecasting system for the Prince William sound and an evaluation of its performance during sound Predictions 2009

    Science.gov (United States)

    Farrara, John D.; Chao, Yi; Li, Zhijin; Wang, Xiaochun; Jin, Xin; Zhang, Hongchun; Li, Peggy; Vu, Quoc; Olsson, Peter Q.; Schoch, G. Carl; Halverson, Mark; Moline, Mark A.; Ohlmann, Carter; Johnson, Mark; McWilliams, James C.; Colas, Francois A.

    2013-07-01

    The development and implementation of a three-dimensional ocean modeling system for the Prince William Sound (PWS) is described. The system consists of a regional ocean model component (ROMS) forced by output from a regional atmospheric model component (the Weather Research and Forecasting Model, WRF). The ROMS ocean model component has a horizontal resolution of 1 km within PWS and utilizes a recently-developed multi-scale 3DVAR data assimilation methodology along with freshwater runoff from land obtained via real-time execution of a digital elevation model. During the Sound Predictions Field Experiment (July 19-August 3, 2009) the system was run in real-time to support operations and incorporated all available real-time streams of data. Nowcasts were produced every 6 h and a 48-h forecast was performed once a day. In addition, a sixteen-member ensemble of forecasts was executed on most days. All results were published at a web portal (http://ourocean.jpl.nasa.gov/PWS) in real time to support decision making. The performance of the system during Sound Predictions 2009 is evaluated. The ROMS results are first compared with the assimilated data as a consistency check. RMS differences of about 0.7°C were found between the ROMS temperatures and the observed vertical profiles of temperature that are assimilated. The ROMS salinities show greater discrepancies, tending to be too salty near the surface. The overall circulation patterns observed throughout the Sound are qualitatively reproduced, including the following evolution in time. During the first week of the experiment, the weather was quite stormy with strong southeasterly winds. This resulted in strong north to northwestward surface flow in much of the central PWS. Both the observed drifter trajectories and the ROMS nowcasts showed strong surface inflow into the Sound through the Hinchinbrook Entrance and strong generally northward to northwestward flow in the central Sound that was exiting through the Knight

  1. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  2. Perceived loudness of spatially distributed sound sources

    DEFF Research Database (Denmark)

    Song, Woo-keun; Ellermeier, Wolfgang; Minnaar, Pauli

    2005-01-01

    psychoacoustic attributes into account. Therefore, a method for deriving loudness maps was developed in an earlier study [Song, Internoise2004, paper 271]. The present experiment investigates to which extent perceived loudness depends on the distribution of individual sound sources. Three loudspeakers were...... positioned 1.5 m from the centre of the listener’s head, one straight ahead, and two 10 degrees to the right and left, respectively. Six participants matched the loudness of either one, or two simultaneous sounds (narrow-band noises with 1-kHz, and 3.15-kHz centre frequencies) to a 2-kHz, 60-dB SPL narrow......-band noise placed in the frontal loudspeaker. The two sounds were either originating from the central speaker, or from the two offset loudspeakers. It turned out that the subjects perceived the noises to be softer when they were distributed in space. In addition, loudness was calculated from the recordings...

  3. Concepts for evaluation of sound insulation of dwellings - from chaos to consensus?

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Rindel, Jens Holger

    2005-01-01

    Legal sound insulation requirements have existed more than 50 years in some countries, and single-number quantities for evaluation of sound insulation have existed nearly as long. However, the concepts have changed considerably over time from simple arithmetic averaging of frequency bands...... requirements and classification schemes revealed significant differences of concepts. The paper summarizes the history of concepts, the disadvantages of the present chaos and the benefits of consensus concerning concepts for airborne and impact sound insulation between dwellings and airborne sound insulation of facades...... with a trend towards light-weight constructions are contradictory and challenging. This calls for exchange of data and experience, implying a need for harmonized concepts, including use of spectrum adaptation terms. The paper will provide input for future discussions in EAA TC-RBA WG4: "Sound insulation...

  4. Memory for product sounds: the effect of sound and label type.

    Science.gov (United States)

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between auditory, visual, and the semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative for memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinders the memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that the memory performances for product sounds are task-dependent.

  5. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false In Prince William Sound: Prince William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST....1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  6. Performance evaluation of heart sound cancellation in FPGA hardware implementation for electronic stethoscope.

    Science.gov (United States)

    Chao, Chun-Tang; Maneetien, Nopadon; Wang, Chi-Jo; Chiou, Juing-Shian

    2014-01-01

    This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera cyclone II-EP2C70F89 shows that the proposed ALE used 45% resources of the chip. Experiments with the proposed prototype were made using DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives.

  7. Performance Evaluation of Heart Sound Cancellation in FPGA Hardware Implementation for Electronic Stethoscope

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2014-01-01

    Full Text Available This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera cyclone II–EP2C70F89 shows that the proposed ALE used 45% resources of the chip. Experiments with the proposed prototype were made using the DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives.
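
    A software sketch of the adaptive line enhancer idea (an LMS filter whose reference is a delayed copy of the input), showing the signal flow that the FPGA implements; tap count, delay, and step size are illustrative values, not those of the hardware design.

    import numpy as np

    def adaptive_line_enhancer(x, n_taps=32, delay=16, mu=0.01):
        """Output ~ quasi-periodic (heart) component; error ~ residual breath sound."""
        x = np.asarray(x, dtype=float)
        w = np.zeros(n_taps)
        out = np.zeros_like(x)
        err = np.zeros_like(x)
        for n in range(delay + n_taps, len(x)):
            ref = x[n - delay - n_taps + 1:n - delay + 1][::-1]  # delayed tap vector
            out[n] = w @ ref
            err[n] = x[n] - out[n]
            w += mu * err[n] * ref
        return out, err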

  8. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  9. Persistent flow and third-sound waves in the He-II film

    International Nuclear Information System (INIS)

    Verbeek, H.J.

    1980-01-01

    The author describes experiments performed on persistent film-flow in He-II film. Data obtained using the third-sound technique is presented. The experiments demonstrate unequivocally the reality of persistent currents in the He-II film. (Auth.)

  10. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: First, the filtered responses should generate...... an acoustic separation between the control regions. Secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...

  11. Can road traffic mask sound from wind turbines? Response to wind turbine sound at different levels of road traffic sound

    International Nuclear Information System (INIS)

    Pedersen, Eja; Berg, Frits van den; Bakker, Roel; Bouma, Jelte

    2010-01-01

    Wind turbines are favoured in the switch-over to renewable energy. Suitable sites for further developments could be difficult to find as the sound emitted from the rotor blades calls for a sufficient distance to residents to avoid negative effects. The aim of this study was to explore whether road traffic sound could mask wind turbine sound or, in contrast, increase annoyance due to wind turbine noise. Annoyance of road traffic and wind turbine noise was measured in the WINDFARMperception survey in the Netherlands in 2007 (n=725) and related to calculated levels of sound. The presence of road traffic sound did not in general decrease annoyance with wind turbine noise, except when levels of wind turbine sound were moderate (35-40 dB(A) Lden) and the road traffic sound level exceeded that level by at least 20 dB(A). Annoyance with both noises was intercorrelated, but this correlation was probably due to the influence of individual factors. Furthermore, visibility and attitude towards wind turbines were significantly related to noise annoyance of modern wind turbines. The results can be used for the selection of suitable sites, possibly favouring already noise-exposed areas if wind turbine sound levels are sufficiently low.

  12. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  13. Context effects on processing widely deviant sounds in newborn infants

    Directory of Open Access Journals (Sweden)

    Gábor Péter Háden

    2013-09-01

    Full Text Available Detecting and orienting towards sounds carrying new information is a crucial feature of the human brain that supports adaptation to the environment. Rare, acoustically widely deviant sounds presented amongst frequent tones elicit large event-related brain potentials (ERPs) in neonates. Here we tested whether these discriminative ERP responses reflect only the activation of fresh afferent neuronal populations (i.e., neuronal circuits not affected by the tones) or whether they also index the processing of the contextual mismatch between the rare and the frequent sounds. In two separate experiments, we presented sleeping newborns with 150 different environmental sounds and the same number of white noise bursts. Both sounds served either as deviants in an oddball paradigm with a tone as the frequent standard stimulus (Novel/Noise deviant), or as the standard stimulus with the tone as deviant (Novel/Noise standard), or they were delivered alone with the same timing as the deviants in the oddball condition (Novel/Noise alone). Whereas the noise deviants elicited responses similar to those evoked by the same sound presented alone, the responses elicited by environmental sounds in the corresponding conditions morphologically differed from each other. Thus, whereas the ERP response to the noise sounds can be explained by the different refractory state of stimulus-specific neuronal populations, the ERP response to environmental sounds indicated context-sensitive processing. These results provide evidence for an innate tendency of context-dependent auditory processing as well as a basis for the different developmental trajectories of processing acoustical deviance and contextual novelty.

  14. Structure-borne sound structural vibrations and sound radiation at audio frequencies

    CERN Document Server

    Cremer, L; Petersson, Björn AT

    2005-01-01

    "Structure-Borne Sound" is a thorough introduction to structural vibrations with emphasis on audio frequencies and the associated radiation of sound. The book presents in-depth discussions of fundamental principles and basic problems, in order to enable the reader to understand and solve his own problems. It includes chapters dealing with measurement and generation of vibrations and sound, various types of structural wave motion, structural damping and its effects, impedances and vibration responses of the important types of structures, as well as with attenuation of vibrations and sound radiation.

  15. The German scientific balloon and sounding rocket projects

    International Nuclear Information System (INIS)

    Dalh, A.F.

    1978-01-01

    This report contains information on the sounding rocket projects: experiment preparation for Spacelab (astronomy), aeronomy, magnetosphere, and material science. Except for material science, the scientific balloon projects are performed in the same scientific fields, but with a strong emphasis on astronomical research. Tables are used to provide as complete a survey as possible of the projects carried out since the last symposium in Elmau and of the plans for the future until 1981. The scientific balloon and sounding rocket projects form a small but successful part of the German space research programme. (author)

  16. Decreased sound tolerance: hyperacusis, misophonia, diplacousis, and polyacousis.

    Science.gov (United States)

    Jastreboff, Pawel J; Jastreboff, Margaret M

    2015-01-01

    Definitions, potential mechanisms, and treatments for decreased sound tolerance, hyperacusis, misophonia, and diplacousis are presented with an emphasis on the associated physiologic and neurophysiological processes and principles. A distinction is made between subjects who experience these conditions versus patients who suffer from them. The role of the limbic and autonomic nervous systems and other brain systems involved in cases of bothersome decreased sound tolerance is stressed. The neurophysiological model of tinnitus is outlined with respect to how it may contribute to our understanding of these phenomena and their treatment. © 2015 Elsevier B.V. All rights reserved.

  17. NASA Sounding Rocket Program Educational Outreach

    Science.gov (United States)

    Rosanova, G.

    2013-01-01

    Educational and public outreach is a major focus area for the National Aeronautics and Space Administration (NASA). The NASA Sounding Rocket Program (NSRP) shares in the belief that NASA plays a unique and vital role in inspiring future generations to pursue careers in science, mathematics, and technology. To fulfill this vision, the NSRP engages in a variety of educator training workshops and student flight projects that provide unique and exciting hands-on rocketry and space flight experiences. Specifically, the Wallops Rocket Academy for Teachers and Students (WRATS) is a one-week tutorial laboratory experience for high school teachers to learn the basics of rocketry, as well as build an instrumented model rocket for launch and data processing. The teachers are thus armed with the knowledge and experience to subsequently inspire the students at their home institution. Additionally, the NSRP has partnered with the Colorado Space Grant Consortium (COSGC) to provide a "pipeline" of space flight opportunities to university students and professors. Participants begin by enrolling in the RockOn! Workshop, which guides fledgling rocketeers through the construction and functional testing of an instrumentation kit. This is then integrated into a sealed canister and flown on a sounding rocket payload, which is recovered for the students to retrieve and process their data post flight. The next step in the "pipeline" involves unique, user-defined RockSat-C experiments in a sealed canister that allow participants more independence in developing, constructing, and testing spaceflight hardware. These experiments are flown and recovered on the same payload as the RockOn! Workshop kits. Ultimately, the "pipeline" culminates in the development of an advanced, user-defined RockSat-X experiment that is flown on a payload which provides full exposure to the space environment (not in a sealed canister), and includes telemetry and attitude control capability. The RockOn! and Rock

  18. [Computer-aided Diagnosis and New Electronic Stethoscope].

    Science.gov (United States)

    Huang, Mei; Liu, Hongying; Pi, Xitian; Ao, Yilu; Wang, Zi

    2017-05-30

    Auscultation is an important method for the early diagnosis of cardiovascular and respiratory disease. This paper presents a new computer-aided electronic auscultation system, consisting of an electronic stethoscope based on a condenser microphone and the accompanying intelligent analysis software. Combining Bluetooth, OLED and SD card storage technologies, the stethoscope implements real-time heart and lung sound auscultation in three modes, recording and playback, auscultation volume control, and wireless transmission. The PC-based analysis software is written in C# and uses SQL Server as the back-end database. It plays back the auscultation sound and displays its waveform; by calculating the heart rate and extracting the characteristic parameters T1, T2, T12 and T11, it can assess whether the heart sound is normal and then generate a diagnosis report. Finally, the auscultation sound and diagnosis report can be emailed to other doctors for remote diagnosis. The whole system is fully functional, highly portable and easy to use, and should help promote the use of electronic stethoscopes in hospitals as well as in auscultation teaching and other settings.
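
    The abstract does not spell out how the heart rate is computed from the auscultation signal; a common approach, sketched below purely as an assumption, is to band-pass the phonocardiogram, take its envelope and detect the S1 peaks, the heart rate following from the median interval between them. The function name, cut-off frequencies and thresholds are illustrative, not taken from the paper.

```python
# Hedged sketch: heart rate from a phonocardiogram via envelope peak picking.
# Filter cut-offs and peak-picking thresholds are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def estimate_heart_rate(pcg, fs):
    """pcg: 1-D heart sound recording, fs: sampling rate in Hz; returns bpm or None."""
    # Band-pass around the main heart sound energy (roughly 25-150 Hz).
    b, a = butter(4, [25 / (fs / 2), 150 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pcg)
    # Smooth amplitude envelope from the analytic signal.
    envelope = np.abs(hilbert(filtered))
    # S1 candidates: prominent peaks at least ~0.4 s apart (caps the rate at ~150 bpm).
    peaks, _ = find_peaks(envelope, distance=int(0.4 * fs),
                          prominence=0.3 * envelope.max())
    if len(peaks) < 2:
        return None
    beat_interval = np.median(np.diff(peaks)) / fs   # seconds per beat
    return 60.0 / beat_interval                      # beats per minute
```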

  19. An Inexpensive and Versatile Version of Kundt's Tube for Measuring the Speed of Sound in Air

    Science.gov (United States)

    Papacosta, Pangratios; Linscheid, Nathan

    2016-01-01

    Experiments that measure the speed of sound in air are common in high schools and colleges. In the Kundt's tube experiment, a horizontal air column is adjusted until a resonance mode is achieved for a specific frequency of sound. When this happens, the cork dust in the tube is disturbed at the displacement antinode regions. The location of the…
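
    As a reminder of the arithmetic behind the method: adjacent dust heaps mark displacement antinodes half a wavelength apart, so the speed of sound follows directly from the driving frequency and the measured spacing. The snippet below is a generic illustration of that relation, not code from the article.

```python
# Speed of sound from a Kundt's tube resonance: adjacent dust heaps (displacement
# antinodes) are half a wavelength apart, so v = f * lambda = f * 2 * spacing.
def speed_of_sound_kundt(frequency_hz, antinode_spacing_m):
    wavelength = 2.0 * antinode_spacing_m
    return frequency_hz * wavelength

# Example: a 1000 Hz drive with heaps every 0.172 m gives ~344 m/s.
print(speed_of_sound_kundt(1000.0, 0.172))
```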

  20. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  1. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    Science.gov (United States)

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the

  2. Developmental Changes in Locating Voice and Sound in Space

    Science.gov (United States)

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7 month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  3. Sound as a supportive design intervention for improving health care experience in the clinical ecosystem: A qualitative study.

    Science.gov (United States)

    Iyendo, Timothy Onosahwo

    2017-11-01

    Most prior hospital noise research treats sound merely as noise to be abated, rather than as an informative or orientational element. This paper stimulates scientific research into the effect of sound interventions on physical and mental health care in the clinical environment. Data sources comprised relevant World Health Organization guidelines and the results of a literature search of ISI Web of Science, ProQuest Central, MEDLINE, PubMed, Scopus, JSTOR and Google Scholar. Noise induces stress and impedes the recovery process. Pleasant natural sound interventions, such as singing birds, gentle wind and ocean waves, revealed benefits that contribute to perceived restoration of attention and stress recovery in patients and staff. Clinicians should consider pleasant natural sound as a low-risk, non-pharmacological and unobtrusive intervention to be implemented in routine care for speedier recovery of patients undergoing medical procedures. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. SoundScapes - Beyond Interaction... in search of the ultimate human-centred interface

    DEFF Research Database (Denmark)

    Brooks, Tony

    2006-01-01

    that can also benefit communication. To achieve this a new generation of intuitive natural interfaces will be required and SoundScapes (see below) is a step toward this goal to discover the ultimate interface for matching the human experience to technology. Emergent hypothesis that have developed...... as a result of the SoundScapes research will be discussed. Introduction to SoundScapes SoundScapes is a contemporary art concept that has become widely known as an interdisciplinary platform for knowledge exchange, innovative product creation of creative and scientific work that uses non-invasive sensor...... Resonance. The multimedia content is adaptable so that the environment is tailored for each participant according to a user profile. This full body movement or the smallest of gesture results in human data input to SoundScapes. The same technology that enables this empowerment is used for performance art...

  5. Techniques and instrumentation for the measurement of transient sound energy flux

    Science.gov (United States)

    Watkinson, P. S.; Fahy, F. J.

    1983-12-01

    Sound intensity distributions and sound powers of essentially continuous sources, such as automotive engines, electric motors, production-line machinery, furnaces, earth-moving machinery and various types of process plant, were studied. Although such systems are important sources of community disturbance and, to a lesser extent, of industrial health hazard, the most serious sources of hearing hazard in industry are machines operating on an impact principle, such as drop forges, hammers and punches. Controlled experiments to identify major noise source regions and mechanisms are difficult because it is normally impossible to install them in quiet, anechoic environments. The potential for sound intensity measurement to provide a means of overcoming these difficulties has given promising results, indicating the possibility of separating directly radiated and reverberant sound fields. However, because of the complexity of transient sound fields, a fundamental investigation is necessary to establish the practicability of intensity field decomposition, which is basic to source characterization techniques.

  6. Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA

    Science.gov (United States)

    Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng

    2011-12-01

    The impact of a high pressure water-jet on targets of different materials produces different mixtures of reflected sound. In order to accurately reconstruct the distribution of the reflected sound signals along the linear detection line and to effectively separate the environmental noise, the mixed sound signals acquired by a linear microphone array were processed by ICA. The basic principle of ICA and the FastICA algorithm are described in detail. A simulation experiment was designed: the environmental noise was simulated with band-limited white noise, and the reflected sound signal with a pulse signal. The attenuation of the reflected sound over different propagation distances was simulated by weighting the signal with different coefficients. The mixed signals of the linear microphone array were synthesized from these simulated signals and then whitened and separated by ICA. The results verified that separation of the environmental noise and reconstruction of the sound distribution along the detection line can be achieved effectively.
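
    The separation stage can be illustrated with a small simulation along the lines the abstract describes: a pulse train standing in for the reflected jet sound, band-limited noise standing in for the environment, mixed and then unmixed with FastICA. The mixing matrix and signal parameters are arbitrary stand-ins, not values from the study.

```python
# Hedged sketch of the ICA separation stage: mix a pulse-like "reflection" signal
# with band-limited noise, then recover the components with FastICA. All values
# are illustrative; the paper's array geometry and signals are not reproduced.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs, duration = 8000, 2.0
t = np.arange(int(fs * duration)) / fs

pulses = np.zeros_like(t)                 # source 1: periodic pulses (reflected sound)
pulses[::fs // 10] = 1.0

b, a = butter(4, [0.05, 0.4], btype="band")
noise = filtfilt(b, a, rng.standard_normal(t.size))   # source 2: band-limited noise

sources = np.c_[pulses, noise]                        # shape (n_samples, 2)
mixing = np.array([[1.0, 0.6], [0.4, 1.0]])           # arbitrary mixing matrix
observations = sources @ mixing.T                     # two simulated "microphones"

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observations)           # estimated independent components
```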

  7. Development of Optophone with No Diaphragm and Application to Sound Measurement in Jet Flow

    Directory of Open Access Journals (Sweden)

    Yoshito Sonoda

    2012-01-01

    Full Text Available The optophone with no diaphragm, which can detect sound waves without disturbing the air flow or the sound field, is presented as a novel sound measurement technique, and the present status of its development is reviewed in this paper. The method is principally based on Fourier optics, and the sound signal is obtained by detecting the ultra-small diffracted light generated by phase modulation of the beam by sound. The principle and theory, originally developed as a plasma diagnostic technique to measure electron density fluctuations in nuclear fusion research, are briefly introduced. Based on the theoretical analysis, the properties and merits of wave-optical sound detection are presented, and the fundamental experiments and results obtained so far are reviewed. It is shown that sounds from about 100 Hz to 100 kHz can be detected simultaneously by a visible laser beam, and that the method is very useful for sound measurement in aeroacoustics. Finally, the main remaining problems of the optophone for practical use in sound and noise measurement, and the technology expected in the future, are briefly outlined.

  8. Turbine sound may influence the metamorphosis behaviour of estuarine crab megalopae.

    Science.gov (United States)

    Pine, Matthew K; Jeffs, Andrew G; Radford, Craig A

    2012-01-01

    It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21-31% compared to silent control treatments, 38-47% compared to tidal turbine sound treatments, and 46-60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment.

  9. Turbine sound may influence the metamorphosis behaviour of estuarine crab megalopae.

    Directory of Open Access Journals (Sweden)

    Matthew K Pine

    Full Text Available It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21-31% compared to silent control treatments, 38-47% compared to tidal turbine sound treatments, and 46-60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment.

  10. Takete and Maluma in Action: A Cross-Modal Relationship between Gestures and Sounds.

    Directory of Open Access Journals (Sweden)

    Kazuko Shinohara

    Full Text Available Despite Saussure's famous observation that sound-meaning relationships are in principle arbitrary, we now have a substantial body of evidence that sounds themselves can have meanings, patterns often referred to as "sound symbolism". Previous studies have found that particular sounds can be associated with particular meanings, and also with particular static visual shapes. Less well studied is the association between sounds and dynamic movements. Using a free elicitation method, the current experiment shows that several sound symbolic associations between sounds and dynamic movements exist: (1) front vowels are more likely to be associated with small movements than with large movements; (2) front vowels are more likely to be associated with angular movements than with round movements; (3) obstruents are more likely to be associated with angular movements than with round movements; (4) voiced obstruents are more likely to be associated with large movements than with small movements. All of these results are compatible with the results of the previous studies of sound symbolism using static images or meanings. Overall, the current study supports the hypothesis that particular dynamic motions can be associated with particular sounds. Building on the current results, we discuss a possible practical application of these sound symbolic associations in sports instructions.

  11. Takete and Maluma in Action: A Cross-Modal Relationship between Gestures and Sounds.

    Science.gov (United States)

    Shinohara, Kazuko; Yamauchi, Naoto; Kawahara, Shigeto; Tanaka, Hideyuki

    Despite Saussure's famous observation that sound-meaning relationships are in principle arbitrary, we now have a substantial body of evidence that sounds themselves can have meanings, patterns often referred to as "sound symbolism". Previous studies have found that particular sounds can be associated with particular meanings, and also with particular static visual shapes. Less well studied is the association between sounds and dynamic movements. Using a free elicitation method, the current experiment shows that several sound symbolic associations between sounds and dynamic movements exist: (1) front vowels are more likely to be associated with small movements than with large movements; (2) front vowels are more likely to be associated with angular movements than with round movements; (3) obstruents are more likely to be associated with angular movements than with round movements; (4) voiced obstruents are more likely to be associated with large movements than with small movements. All of these results are compatible with the results of the previous studies of sound symbolism using static images or meanings. Overall, the current study supports the hypothesis that particular dynamic motions can be associated with particular sounds. Building on the current results, we discuss a possible practical application of these sound symbolic associations in sports instructions.

  12. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear, but our uses for light and sound go far beyond simply seeing a photo or hearing a song. Lasers, concentrated beams of light, are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  13. What's in a Name? Sound Symbolism and Gender in First Names.

    Directory of Open Access Journals (Sweden)

    David M Sidhu

    Full Text Available Although the arbitrariness of language has been considered one of its defining features, studies have demonstrated that certain phonemes tend to be associated with certain kinds of meaning. A well-known example is the Bouba/Kiki effect, in which nonwords like bouba are associated with round shapes while nonwords like kiki are associated with sharp shapes. These sound symbolic associations have thus far been limited to nonwords. Here we tested whether or not the Bouba/Kiki effect extends to existing lexical stimuli; in particular, real first names. We found that the roundness/sharpness of the phonemes in first names impacted whether the names were associated with round or sharp shapes in the form of character silhouettes (Experiments 1a and 1b). We also observed an association between femaleness and round shapes, and maleness and sharp shapes. We next investigated whether this association would extend to the features of language and found the proportion of round-sounding phonemes was related to name gender (Analysis of Category Norms). Finally, we investigated whether sound symbolic associations for first names would be observed for other abstract properties; in particular, personality traits (Experiment 2). We found that adjectives previously judged to be either descriptive of a figuratively 'round' or a 'sharp' personality were associated with names containing either round- or sharp-sounding phonemes, respectively. These results demonstrate that sound symbolic associations extend to existing lexical stimuli, providing a new example of non-arbitrary mappings between form and meaning.

  14. Numerical value biases sound localization

    OpenAIRE

    Golob, Edward J.; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R.

    2017-01-01

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perce...

  15. Method for measuring violin sound radiation based on bowed glissandi and its application to sound synthesis.

    Science.gov (United States)

    Perez Carrillo, Alfonso; Bonada, Jordi; Patynen, Jukka; Valimaki, Vesa

    2011-08-01

    This work presents a method for measuring and computing violin-body directional frequency responses, which are used for violin sound synthesis. The approach is based on a frame-weighted deconvolution of excitation and response signals. The excitation, consisting of bowed glissandi, is measured with piezoelectric transducers built into the bridge. Radiation responses are recorded in an anechoic chamber with multiple microphones placed at different angles around the violin. The proposed deconvolution algorithm computes impulse responses that, when convolved with any source signal (captured with the same transducer), produce a highly realistic violin sound very similar to that of a microphone recording. The use of motion sensors allows for tracking violin movements. Combining this information with the directional responses and using a dynamic convolution algorithm improves the listening experience by incorporating the effect of the violinist's motion in stereo.
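
    The frame-weighted deconvolution itself is not detailed in the abstract; a generic regularized frequency-domain deconvolution, sketched below as a simplification, conveys the basic idea of recovering a radiation impulse response from the bridge excitation signal and a microphone recording. The regularization constant and lengths are arbitrary.

```python
# Hedged sketch: estimate an impulse response h with response ~= excitation * h,
# via regularized spectral division (a simplification of the paper's
# frame-weighted deconvolution; epsilon and ir_length are arbitrary choices).
import numpy as np

def deconvolve_ir(excitation, response, ir_length, epsilon=1e-3):
    """excitation: bridge-sensor signal; response: microphone signal;
    ir_length: number of impulse-response samples to keep."""
    n = len(excitation) + len(response)            # zero-padded FFT length
    X = np.fft.rfft(excitation, n)
    Y = np.fft.rfft(response, n)
    # Wiener-style regularization avoids division by near-zero excitation bins.
    H = (Y * np.conj(X)) / (np.abs(X) ** 2 + epsilon * np.max(np.abs(X) ** 2))
    h = np.fft.irfft(H, n)
    return h[:ir_length]
```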

  16. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  17. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  18. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  19. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  20. Feature-Specific Event-Related Potential Effects to Action- and Sound-Related Verbs during Visual Word Recognition.

    Science.gov (United States)

    Popp, Margot; Trumpp, Natalie M; Kiefer, Markus

    2016-01-01

    Grounded cognition theories suggest that conceptual representations essentially depend on modality-specific sensory and motor systems. Feature-specific brain activation across different feature types such as action or audition has been intensively investigated in nouns, while studies of feature-specific conceptual category differences in verbs have mainly focused on body-part-specific effects. The present work aimed at assessing whether feature-specific event-related potential (ERP) differences between action and sound concepts, as previously observed in nouns, can also be found within the word class of verbs. In Experiment 1, participants were visually presented with carefully matched sound and action verbs within a lexical decision task, which provides implicit access to word meaning and minimizes strategic access to semantic word features. Experiment 2 tested whether pre-activating the verb concept in a context phase, in which the verb is presented with a related context noun, modulates subsequent feature-specific action vs. sound verb processing within the lexical decision task. In Experiment 1, ERP analyses revealed a differential ERP polarity pattern for action and sound verbs at parietal and central electrodes similar to previous results in nouns. Pre-activation of the meaning of verbs in the preceding context phase in Experiment 2 resulted in a polarity reversal of feature-specific ERP effects in the lexical decision task compared with Experiment 1. This parallels analogous earlier findings for primed action- and sound-related nouns. In line with grounded cognition theories, our ERP study provides evidence for differential processing of action and sound verbs similar to earlier observations for concrete nouns. Although the localizational value of ERPs must be viewed with caution, our results indicate that the meaning of verbs is linked to different neural circuits depending on conceptual feature relevance.

  1. A three-layer magnetic shielding for the MAIUS-1 mission on a sounding rocket

    International Nuclear Information System (INIS)

    Kubelka-Lange, André; Herrmann, Sven; Grosse, Jens; Lämmerzahl, Claus; Rasel, Ernst M.; Braxmaier, Claus

    2016-01-01

    Bose-Einstein-Condensates (BECs) can be used as a very sensitive tool for experiments on fundamental questions in physics like testing the equivalence principle using matter wave interferometry. Since the sensitivity of these experiments in ground-based environments is limited by the available free fall time, the QUANTUS project started to perform BEC interferometry experiments in micro-gravity. After successful campaigns in the drop tower, the next step is a space-borne experiment. The MAIUS-mission will be an atom-optical experiment that will show the feasibility of experiments with ultra-cold quantum gases in microgravity in a sounding rocket. The experiment will create a BEC of 10^5 87Rb atoms in less than 5 s and will demonstrate application of basic atom interferometer techniques over a flight time of 6 min. The hardware is specifically designed to match the requirements of a sounding rocket mission. Special attention is thereby paid to appropriate magnetic shielding from varying magnetic fields during the rocket flight, since the experiment procedures are very sensitive to external magnetic fields. A three-layer magnetic shielding provides a high shielding effectiveness factor of at least 1000 for an undisturbed operation of the experiment. The design of this magnetic shielding, the magnetic properties, simulations, and tests of its suitability for a sounding rocket flight are presented in this article.

  2. A three-layer magnetic shielding for the MAIUS-1 mission on a sounding rocket

    Energy Technology Data Exchange (ETDEWEB)

    Kubelka-Lange, André, E-mail: andre.kubelka@zarm.uni-bremen.de; Herrmann, Sven; Grosse, Jens; Lämmerzahl, Claus [Center of Applied Space Technology and Microgravity (ZARM), University of Bremen, Am Fallturm, 28359 Bremen (Germany); Rasel, Ernst M. [Institut für Quantenoptik, Leibniz Universität Hannover, Welfengarten 1, 30167 Hannover (Germany); Braxmaier, Claus [Center of Applied Space Technology and Microgravity (ZARM), University of Bremen, Am Fallturm, 28359 Bremen (Germany); DLR Institute for Space Systems, Robert-Hooke-Str. 7, 28359 Bremen (Germany)

    2016-06-15

    Bose-Einstein-Condensates (BECs) can be used as a very sensitive tool for experiments on fundamental questions in physics like testing the equivalence principle using matter wave interferometry. Since the sensitivity of these experiments in ground-based environments is limited by the available free fall time, the QUANTUS project started to perform BEC interferometry experiments in micro-gravity. After successful campaigns in the drop tower, the next step is a space-borne experiment. The MAIUS-mission will be an atom-optical experiment that will show the feasibility of experiments with ultra-cold quantum gases in microgravity in a sounding rocket. The experiment will create a BEC of 10^5 87Rb atoms in less than 5 s and will demonstrate application of basic atom interferometer techniques over a flight time of 6 min. The hardware is specifically designed to match the requirements of a sounding rocket mission. Special attention is thereby paid to appropriate magnetic shielding from varying magnetic fields during the rocket flight, since the experiment procedures are very sensitive to external magnetic fields. A three-layer magnetic shielding provides a high shielding effectiveness factor of at least 1000 for an undisturbed operation of the experiment. The design of this magnetic shielding, the magnetic properties, simulations, and tests of its suitability for a sounding rocket flight are presented in this article.

  3. Second harmonic sound field after insertion of a biological tissue sample

    Science.gov (United States)

    Zhang, Dong; Gong, Xiu-Fen; Zhang, Bo

    2002-01-01

    Second harmonic sound field after inserting a biological tissue sample is investigated by theory and experiment. The sample, whose acoustical properties are different from those of the surrounding medium (distilled water), is inserted perpendicular to the sound axis. By using the superposition of Gaussian beams and the KZK equation in quasilinear and parabolic approximations, the second harmonic field after insertion of the sample can be derived analytically and expressed as a linear combination of self- and cross-interaction of the Gaussian beams. Egg white, egg yolk, porcine liver, and porcine fat are used as the samples and inserted in the sound field radiated from a 2 MHz uniformly excited focusing source. Axial normalized sound pressure curves of the second harmonic wave before and after inserting the sample are measured and compared with the theoretical results calculated with 10 terms of Gaussian beam functions.

  4. Investigation of fourth sound propagation in HeII in the presence of superflow

    International Nuclear Information System (INIS)

    Andrei, Y.E.

    1980-01-01

    The temperature dependence of a superflow-induced downshift of the fourth sound velocity in HeII confined in various restrictive media was measured. We found that the magnitude of the downshift strongly depends on the restrictive medium, whereas the temperature dependence is universal. The results are interpreted in terms of local superflow velocities approaching the Landau critical velocity. This model provides an understanding of the nature of the downshift and correctly predicts its temperature dependence. The results show that the Landau excitation model, even when used at high velocities, where interactions between elementary excitations are substantial, yields good agreement with experiment when a first order correction is introduced to account for these interactions. In a separate series of experiments, fourth sound-like propagation in HeII in a grafoil-filled resonator was observed. The sound velocity was found to be more than an order of magnitude smaller than that of ordinary fourth sound. This significant reduction is explained in terms of a model in which the pore structure in grafoil is pictured as an ensemble of coupled Helmholtz resonators.

  5. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity...... and initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. A European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs...... 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24...

  6. The Britannica Guide to Sound and Light

    CERN Document Server

    2010-01-01

    Audio and visual cues facilitate some of our most powerful sensory experiences and embed themselves deeply into our memories and subconscious. Sound and light waves interact with our ears and eyes, our biological interpreters, creating a textural experience and relationship with the world around us. This well-researched volume explores the science behind acoustics and optics and the broad application they have to everything from listening to music and watching television to ultrasonic and laser technologies that are crucial to the medical field.

  7. An Antropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, to a talk about anthropology of sound, sound studies, musical canons and ideology.

  8. Sound-proof Sandwich Panel Design via Metamaterial Concept

    Science.gov (United States)

    Sui, Ni

    Sandwich panels consisting of hollow core cells and two face-sheets bonded on both sides have been widely used as lightweight and strong structures in practical engineering applications, but with poor acoustic performance, especially in the low frequency regime. Basic sound-proofing methods for sandwich panel design fall into two categories: sound insulation and sound absorption. Motivated by the metamaterial concept, this dissertation presents two sandwich panel designs that avoid weight or size penalties: a lightweight yet sound-proof honeycomb acoustic metamaterial that can be used as core material for honeycomb sandwich panels to block sound and break the mass law, realizing minimum sound transmission; and a sandwich panel design based on coupled Helmholtz resonators that can achieve perfect sound absorption without sound reflection. Based on the honeycomb sandwich panel, the mechanical properties of the honeycomb core structure were studied first. By incorporating a thin membrane on top of each honeycomb core, the traditional honeycomb core turns into a honeycomb acoustic metamaterial. The basic theory of this kind of membrane-type acoustic metamaterial is demonstrated with a lumped model of an infinite periodic oscillator system, and the negative dynamic effective mass density of the clamped membrane is analyzed under the membrane resonance condition. The evanescent wave mode caused by negative dynamic effective mass density and impedance methods are utilized to interpret the physical phenomenon of honeycomb acoustic metamaterials at resonance. The honeycomb metamaterials can dramatically improve low-frequency sound transmission loss below the first resonant frequency of the membrane. The membrane properties, the membrane tension and the number of attached membranes affect the sound transmission loss, as observed in numerical simulations and validated by experiments. The sandwich panel which incorporates the honeycomb metamaterial as

  9. Turbine Sound May Influence the Metamorphosis Behaviour of Estuarine Crab Megalopae

    Science.gov (United States)

    Pine, Matthew K.; Jeffs, Andrew G.; Radford, Craig A.

    2012-01-01

    It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21–31% compared to silent control treatments, 38–47% compared to tidal turbine sound treatments, and 46–60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment. PMID:23240063

  10. Direct Measurement of the Speed of Sound Using a Microphone and a Speaker

    Science.gov (United States)

    Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.

    2014-01-01

    We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free open source digital audio editor and recording computer software application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is…
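
    Given time-of-flight measurements at several speaker-microphone distances, the speed of sound is simply the slope of distance versus delay; the short sketch below, with invented example values rather than the paper's data, shows the linear fit one would apply to the measured delays.

```python
# Hedged sketch: fit distance vs. time-of-flight to estimate the speed of sound.
# The distances and delays are invented example values, not data from the paper.
import numpy as np

distances = np.array([0.50, 1.00, 1.50, 2.00, 2.50])                # metres
delays = np.array([0.00146, 0.00291, 0.00437, 0.00583, 0.00728])    # seconds

slope, intercept = np.polyfit(delays, distances, 1)
print(f"speed of sound ~ {slope:.1f} m/s")    # slope of d(t), about 343 m/s here
```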

  11. Sound velocity of tantalum under shock compression in the 18–142 GPa range

    Energy Technology Data Exchange (ETDEWEB)

    Xi, Feng, E-mail: xifeng@caep.cn; Jin, Ke; Cai, Lingcang, E-mail: cai-lingcang@aliyun.com; Geng, Huayun; Tan, Ye; Li, Jun [National Key Laboratory of Shock Waves and Detonation Physics, Institute of Fluid Physics, CAEP, P.O. Box 919-102 Mianyang, Sichuan 621999 (China)

    2015-05-14

    Dynamic compression experiments on tantalum (Ta) within a shock pressure range of 18–142 GPa were conducted, driven by explosives, a two-stage light gas gun, and a powder gun, respectively. The time-resolved Ta/LiF (lithium fluoride) interface velocity profiles were recorded with a displacement interferometer system for any reflector. Sound velocities of Ta were obtained from peak-state time duration measurements with the step-sample technique and the direct-reverse impact technique. The uncertainties of the measured sound velocities were analyzed carefully, which suggests that the symmetrical impact method with step samples is more accurate for sound velocity measurement, and that the most important parameter in this type of experiment is an accurate sample/window particle velocity profile, especially an accurate peak-state time duration. From these carefully analyzed sound velocity data, no evidence of a phase transition was found up to the shock melting pressure of Ta.

  12. X-Ray Radiographic Observation of Directional Solidification Under Microgravity: XRMON-GF Experiments on MASER12 Sounding Rocket Mission

    Science.gov (United States)

    Reinhart, G.; NguyenThi, H.; Bogno, A.; Billia, B.; Houltz, Y.; Loth, K.; Voss, D.; Verga, A.; dePascale, F.; Mathiesen, R. H.

    2012-01-01

    The European Space Agency (ESA) - Microgravity Application Promotion (MAP) programme entitled XRMON (In situ X-Ray MONitoring of advanced metallurgical processes under microgravity and terrestrial conditions) aims to develop and perform in situ X-ray radiography observations of metallurgical processes in microgravity and terrestrial environments. The use of X-ray imaging methods makes it possible to study alloy solidification processes with spatio-temporal resolutions at the scales of relevance for microstructure formation. XRMON has been selected for a MASER 12 sounding rocket experiment, scheduled for autumn 2011. Although the microgravity duration is typically six minutes, this short time is sufficient to investigate a solidification experiment with X-ray radiography. This communication reports on the preliminary results obtained with the experimental set-up developed by SSC (Swedish Space Corporation). The presented results, dealing with directional solidification of Al-Cu, confirm the value of performing in situ characterization to analyse dynamical phenomena during solidification processes.

  13. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    Science.gov (United States)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the vehicle surface can help reveal the noise generation mechanism, analyze acoustic fatigue, and guide measures for noise insulation and reduction. To rapidly reconstruct time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the adverse influence of flow on sound propagation is suppressed by the shear flow correction, which yields corrected acoustic propagation time delays and paths. These corrected delays and paths, together with the microphone array signals, are then fed into the AP-TR to reconstruct more accurate sound source signals in the presence of airflow. As an analytical method, AP-TR offers a supplementary way to reconstruct sound source signals in 3D space in an environment with airflow, in place of numerical TR. Experiments on reconstructing the source signals of a pair of loudspeakers were conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Theoretical and experimental comparisons between AP-TR and time-domain beamforming for reconstructing the sound source signal are also discussed.
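
    The paper treats a shear flow; as a simpler stand-in, the sketch below computes the flow-corrected propagation delay for a uniform flow by solving |d - u t x| = c t for t, which is the kind of corrected time delay a back-propagation scheme needs. The uniform-flow simplification and all names are assumptions for illustration only.

```python
# Hedged sketch: propagation time from source to microphone in a *uniform* flow of
# speed u along +x (a simplification of the paper's shear flow correction).
# Solving |d - u*t*x_hat| = c*t for t gives the convected time delay used to
# align array signals before back-propagating them to the source position.
import numpy as np

def convected_delay(d, u, c=343.0):
    """d: 3-vector from source to microphone (m); u: flow speed along +x (m/s)."""
    d = np.asarray(d, dtype=float)
    dx, dist2 = d[0], np.dot(d, d)
    return (-u * dx + np.sqrt(u**2 * dx**2 + (c**2 - u**2) * dist2)) / (c**2 - u**2)

# Downstream propagation is faster than upstream for the same 1 m path at u = 30 m/s.
print(convected_delay([1.0, 0.0, 0.0], u=30.0))    # ~ 1/(343+30) s
print(convected_delay([-1.0, 0.0, 0.0], u=30.0))   # ~ 1/(343-30) s
```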

  14. Experimenting with Brass Musical Instruments.

    Science.gov (United States)

    LoPresto, Michael C.

    2003-01-01

    Describes experiments to address the properties of brass musical instruments that can be used to demonstrate sound in any level physics course. The experiments demonstrate in a quantitative fashion the effects of the mouthpiece and bell on the frequencies of sound waves and thus the musical pitches produced. (Author/NB)

  15. The propagation of sound in narrow street canyons

    Science.gov (United States)

    Iu, K. K.; Li, K. M.

    2002-08-01

    This paper addresses an important problem of predicting sound propagation in narrow street canyons with width less than 10 m, which are commonly found in a built-up urban district. Major noise sources are, for example, air conditioners installed on building facades and powered mechanical equipment for repair and construction work. Interference effects due to multiple reflections from building facades and ground surfaces are important contributions in these complex environments. Although the studies of sound transmission in urban areas can be traced back to as early as the 1960s, the resulting mathematical and numerical models are still unable to predict sound fields accurately in city streets. This is understandable because sound propagation in city streets involves many intriguing phenomena such as reflections and scattering at the building facades, diffusion effects due to recessions and protrusions of building surfaces, geometric spreading, and atmospheric absorption. This paper describes the development of a numerical model for the prediction of sound fields in city streets. To simplify the problem, a typical city street is represented by two parallel reflecting walls and a flat impedance ground. The numerical model is based on a simple ray theory that takes account of multiple reflections from the building facades. The sound fields due to the point source and its images are summed coherently such that mutual interference effects between contributing rays can be included in the analysis. Indoor experiments are conducted in an anechoic chamber. Experimental data are compared with theoretical predictions to establish the validity and usefulness of this simple model. Outdoor experimental measurements have also been conducted to further validate the model. copyright 2002 Acoustical Society of America.
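
    The ray model described here, a point source between two parallel reflecting facades with image sources summed coherently, can be sketched compactly. The snippet below is a simplified illustration with rigid walls and no ground reflection, not the paper's full model with an impedance ground; geometry and frequency values are arbitrary.

```python
# Hedged sketch: coherent image-source sum for a point source between two parallel
# rigid walls a distance `width` apart (single frequency, ground reflection omitted).
import numpy as np

def canyon_pressure(src_x, rcv_x, rcv_dist, freq, width, n_images=50, c=343.0):
    """src_x, rcv_x: positions across the street (m); rcv_dist: distance along the
    street (m); returns the complex pressure summed over wall image sources."""
    k = 2 * np.pi * freq / c
    p = 0.0 + 0.0j
    for n in range(-n_images, n_images + 1):
        # Images of a source between mirrors at x = 0 and x = width: 2*n*width +/- src_x.
        for img_x in (2 * n * width + src_x, 2 * n * width - src_x):
            r = np.hypot(img_x - rcv_x, rcv_dist)
            p += np.exp(-1j * k * r) / r           # coherent sum of spherical waves
    return p

print(abs(canyon_pressure(src_x=2.0, rcv_x=5.0, rcv_dist=20.0, freq=500.0, width=8.0)))
```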

  16. Acoustic effects of oil-production activities on bowhead and white whales visible during spring migration near Pt. Barrow, Alaska-1990 phase: sound propagation and whale responses to playbacks of continuous drilling noise from an ice platform, as studied in pack ice conditions. Final report

    International Nuclear Information System (INIS)

    Richardson, W.J.; Greene, C.R.; Koski, W.R.; Smultea, M.A.; Cameron, G.

    1991-10-01

    The report concerns the effects of underwater noise from simulated oil production operations on the movements and behavior of bowhead and white whales migrating around northern Alaska in spring. An underwater sound projector suspended from pack ice was used to introduce recorded drilling noise and other test sounds into leads through the pack ice. These sounds were received and measured at various distances to determine the rate of sound attenuation with distance and frequency. The movements and behavior of bowhead and white whales approaching the operating projector were studied by aircraft- and ice-based observers. Some individuals of both species were observed to approach well within the ensonified area. However, behavioral changes and avoidance reactions were evident when the received sound level became sufficiently high. Reactions to aircraft are also discussed

  17. Mixed-order Ambisonics recording and playback for improving horizontal directionality

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Marschall, Marton; Käsbach, Johannes

    2011-01-01

    Planar (2D) and periphonic (3D) higher-order Ambisonics (HOA) systems are widely used to reproduce spatial properties of acoustic scenarios. Mixed-order Ambisonics (MOA) systems combine the benefit of higher order 2D systems, i.e. a high spatial resolution over a larger usable frequency bandwidth......, with a lower order 3D system to reproduce elevated sound sources. In order to record MOA signals, the locations of the microphones on a hard sphere were optimized to provide a robust MOA encoding. A detailed analysis of the encoding and decoding process showed that MOA can improve both the spatial resolution
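
    For orientation, a commonly used mixed-order scheme keeps all spherical-harmonic components up to the 3D order N and adds only the horizontal (sectorial) components of orders N+1 up to the 2D order M, giving (N+1)^2 + 2(M - N) channels. This formula is stated here as general background on MOA, not quoted from the paper; the helper below simply evaluates it.

```python
# Background sketch (not from the paper): channel counts for planar, periphonic and
# mixed-order Ambisonics under the common "horizontal extension" mixed-order scheme.
def hoa_2d_channels(m):
    return 2 * m + 1                   # planar (2D) HOA of order m

def hoa_3d_channels(n):
    return (n + 1) ** 2                # periphonic (3D) HOA of order n

def moa_channels(m, n):
    """Full 3D up to order n plus horizontal-only orders n+1..m (requires m >= n)."""
    assert m >= n
    return hoa_3d_channels(n) + 2 * (m - n)

print(moa_channels(7, 3))              # horizontal order 7 with 3D order 3 -> 24 channels
```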

  18. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound

  19. Spatial avoidance to experimental increase of intermittent and continuous sound in two captive harbour porpoises.

    Science.gov (United States)

    Kok, Annebelle C M; Engelberts, J Pamela; Kastelein, Ronald A; Helder-Hoek, Lean; Van de Voorde, Shirley; Visser, Fleur; Slabbekoorn, Hans

    2018-02-01

    The continuing rise in underwater sound levels in the oceans leads to disturbance of marine life. It is thought that one of the main impacts of sound exposure is the alteration of foraging behaviour of marine species, for example by deterring animals from a prey location, or by distracting them while they are trying to catch prey. So far, only limited knowledge is available on both mechanisms in the same species. The harbour porpoise (Phocoena phocoena) is a relatively small marine mammal that could quickly suffer fitness consequences from a reduction of foraging success. To investigate effects of anthropogenic sound on their foraging efficiency, we tested whether experimentally elevated sound levels would deter two captive harbour porpoises from a noisy pool into a quiet pool (Experiment 1) and reduce their prey-search performance, measured as prey-search time in the noisy pool (Experiment 2). Furthermore, we tested the influence of the temporal structure and amplitude of the sound on the avoidance response of both animals. Both individuals avoided the pool with elevated sound levels, but they did not show a change in search time for prey when trying to find a fish hidden in one of three cages. The combination of temporal structure and SPL caused variable patterns. When the sound was intermittent, increased SPL caused increased avoidance times. When the sound was continuous, avoidance was equal for all SPLs above a threshold of 100 dB re 1 μPa. Hence, we found no evidence for an effect of sound exposure on search efficiency, but sounds of different temporal patterns did cause spatial avoidance with distinct dose-response patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    Full Text Available The relationship between meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages we find commonalities among sound shapes for words referring to same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  1. Sleep disturbance caused by meaningful sounds and the effect of background noise

    Science.gov (United States)

    Namba, Seiichiro; Kuwano, Sonoko; Okamoto, Takehisa

    2004-10-01

    To study noise-induced sleep disturbance, a new procedure called "noise interrupted method" has been developed. The experiment is conducted in the bedroom of the house of each subject. The sounds are reproduced with a mini-disk player which has an automatic reverse function. If the sound is disturbing and subjects cannot sleep, they are allowed to switch off the sound 1 h after they start to try to sleep. This switch off (noise interrupted behavior) is an important index of sleep disturbance. The next morning they fill in a questionnaire in which quality of sleep, disturbance of sounds, the time when they switched off the sound, etc. are asked. The results showed a good relationship between L and the percentages of the subjects who could not sleep in an hour and between L and the disturbance reported in the questionnaire. This suggests that this method is a useful tool to measure the sleep disturbance caused by noise under well-controlled conditions.

  2. Effects of musical training on sound pattern processing in high-school students.

    Science.gov (United States)

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

    Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited at different stimulus onset asynchronies (SOAs) in musicians and non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training facilitates detection of auditory patterns, allowing sequential sound patterns to be automatically recognized over longer time periods than in non-musical counterparts.

  3. Detecting Temporal Change in Dynamic Sounds: On the Role of Stimulus Duration, Speed, and Emotion

    Directory of Open Access Journals (Sweden)

    Annett eSchirmer

    2016-01-01

    Full Text Available For dynamic sounds, such as vocal expressions, duration often varies alongside speed. Compared to longer sounds, shorter sounds unfold more quickly. Here, we asked whether listeners implicitly use this confound when representing temporal regularities in their environment. In addition, we explored the role of emotions in this process. Using a mismatch negativity (MMN) paradigm, we asked participants to watch a silent movie while passively listening to a stream of task-irrelevant sounds. In Experiment 1, one surprised and one neutral vocalization were compressed and stretched to create stimuli of 378 and 600 ms duration. Stimuli were presented in four blocks, two of which used surprised and two of which used neutral expressions. In one surprised and one neutral block, short and long stimuli served as standards and deviants, respectively. In the other two blocks, the assignment of standards and deviants was reversed. We observed a climbing MMN-like negativity shortly after deviant onset, which suggests that listeners implicitly track sound speed and detect speed changes. Additionally, this MMN-like effect emerged earlier and was larger for long than short deviants, suggesting greater sensitivity to duration increments or slowing down than to decrements or speeding up. Last, deviance detection was facilitated in surprised relative to neutral blocks, indicating that emotion enhances temporal processing. Experiment 2 was comparable to Experiment 1 with the exception that sounds were spectrally rotated to remove vocal emotional content. This abolished the emotional processing benefit, but preserved the other effects. Together, these results provide insights into listener sensitivity to sound speed and raise the possibility that speed biases duration judgments implicitly in a feed-forward manner. Moreover, this bias may be amplified for duration increments relative to decrements and within an emotional relative to a neutral stimulus context.

  4. The production and perception of emotionally expressive walking sounds: similarities between musical performance and everyday motor activity.

    Directory of Open Access Journals (Sweden)

    Bruno L Giordano

    Full Text Available Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.

  5. Four odontocete species change hearing levels when warned of impending loud sound.

    Science.gov (United States)

    Nachtigall, Paul E; Supin, Alexander Ya; Pacini, Aude F; Kastelein, Ronald A

    2018-03-01

    Hearing sensitivity change was investigated when a warning sound preceded a loud sound in the false killer whale (Pseudorca crassidens), the bottlenose dolphin (Tursiops truncatus), the beluga whale (Delphinapterus leucas) and the harbor porpoise (Phocoena phocoena). Hearing sensitivity was measured using pip-train test stimuli and auditory evoked potential recording. When the test/warning stimuli preceded a loud sound, hearing thresholds before the loud sound increased relative to the baseline by 13 to 17 dB. Experiments with multiple frequencies of exposure and shift provided evidence of different amounts of hearing change depending on frequency, indicating that the hearing sensation level changes were not likely due to a simple stapedial reflex. © 2017 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  6. The effect of looming and receding sounds on the perceived in-depth orientation of depth-ambiguous biological motion figures.

    Directory of Open Access Journals (Sweden)

    Ben Schouten

    Full Text Available BACKGROUND: The focus in the research on biological motion perception traditionally has been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps. METHODOLOGY/PRINCIPAL FINDINGS: In Experiment 1 orthographic frontal/back projections of plws were presented either without sound or with sounds of which the intensity level was rising (looming), falling (receding) or stationary. Despite instructions to ignore the sounds and to only report the visually perceived in-depth orientation, plws accompanied with looming sounds were more often judged to be facing the viewer whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual level rather than at the decisional level, in Experiment 2 observers perceptually compared orthographic plws without sound or paired with either looming or receding sounds to plws without sound but with perspective cues making them objectively either facing towards or facing away from the viewer. Judging whether either an orthographic plw or a plw with looming (receding) perspective cues is visually most looming becomes harder (easier) when the orthographic plw is paired with looming sounds. CONCLUSIONS/SIGNIFICANCE: The present results suggest that looming and receding sounds alter the judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds

  7. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants.

    Directory of Open Access Journals (Sweden)

    Yi Zheng

    Full Text Available Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

  8. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    Full Text Available In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of ˝ecological sound art˝. This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  9. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

    Few studies have dealt with students’ preconceptions of sounds. The current research employs Reiner et al. (2000) substance schema to reveal new insights about students’ difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students’ thinking, but also examines how students use the schema’s properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more global coherent consistencies among the properties that the students use to explain the sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students’ conceptions of sound do not seem to fit Reiner et al.’s schema in all respects. Our results also indicate that students’ conceptualization of sound lack internal consistency. Analyzing our results with respect to local and global coherence, we found students’ conception of sound is close to diSessa’s “loosely connected, fragmented collection of ideas.” The notion that sound is perceived only as a “sort of a material,” we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion concerning the implications of the results for instruction.

  10. Effects of 3D sound on visual scanning

    NARCIS (Netherlands)

    Veltman, J.A.; Bronkhorst, A.W.; Oving, A.B.

    2000-01-01

    An experiment was conducted in a flight simulator to explore the effectiveness of a 3D sound display as support to visual information from a head down display (HDD). Pilots had to perform two main tasks in separate conditions: intercepting and following a target jet. Performance was measured for

  11. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    Science.gov (United States)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds which they make. Thus developing a technique to localize sound sources amidst loud noise will support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique which searches for unperceivable sound in loud noise environments. Two speakers simultaneously played a noise of a generator and a voice decreased by 20 dB (= 1/100 of power) from the generator noise at an outdoor space where cicadas were making noise. The sound signal was received by a horizontally set linear microphone array 1.05 m in length and consisting of 15 microphones. The direction and the distance of the voice were computed and the sound of the voice was extracted and played back as an audible sound by array signal processing.
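
    The abstract describes localizing a faint voice with a horizontally set, 1.05 m linear array of 15 microphones via array signal processing, but does not give the algorithm itself. The sketch below shows one conventional approach, delay-and-sum beamforming over a set of candidate directions; the sample rate, speed of sound and steering grid are assumed values, not taken from the paper.

      # Generic delay-and-sum beamformer for a linear microphone array; the
      # paper's actual search/extraction processing is not specified here, and
      # the sample rate, speed of sound and steering grid are assumed values.
      import numpy as np

      C = 343.0      # speed of sound in air, m/s (assumed)
      FS = 16000     # sample rate, Hz (assumed)
      N_MICS = 15
      MIC_X = np.linspace(0.0, 1.05, N_MICS)   # 1.05 m linear array, 15 microphones

      def steered_power(signals, angle_deg):
          """Array output power steered towards angle_deg (signals: N_MICS x samples)."""
          delays = MIC_X * np.sin(np.radians(angle_deg)) / C    # per-microphone delay, s
          shifts = np.round(delays * FS).astype(int)
          n = signals.shape[1] - abs(shifts).max()
          aligned = np.stack([signals[m, shifts.max() - s: shifts.max() - s + n]
                              for m, s in enumerate(shifts)])
          return float(np.mean(np.sum(aligned, axis=0) ** 2))

      def estimate_direction(signals, angles=np.arange(-90, 91)):
          """Return the steering angle (degrees) that maximises the output power."""
          powers = [steered_power(signals, a) for a in angles]
          return int(angles[int(np.argmax(powers))])

    In a delay-and-sum scheme of this kind, the summed (aligned) channels can also be played back directly, which corresponds loosely to the extracted, audible voice mentioned in the abstract.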

  12. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

    Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking, with several examples shown. Various aspects of musical sound spectra such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis are covered: "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment". Methods for timbre morphing are given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.

  13. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to discuss analytically spatial aspects of recorded sound William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  14. Keeping Timbre in Mind: Working Memory for Complex Sounds that Can't Be Verbalized

    Science.gov (United States)

    Golubock, Jason L.; Janata, Petr

    2013-01-01

    Properties of auditory working memory for sounds that lack strong semantic associations and are not readily verbalized or sung are poorly understood. We investigated auditory working memory capacity for lists containing 2-6 easily discriminable abstract sounds synthesized within a constrained timbral space, at delays of 1-6 s (Experiment 1), and…

  15. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology for the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. It is an attempt to determine techniques of art analysis for approaches to the study of video games, including the aesthetics of their sounds. The article offers a range of research methods, considering video game scoring as a contemporary creative practice.

  16. Second sound velocities in superfluid 3He-4He solutions

    International Nuclear Information System (INIS)

    Dikina, L.S.; Kotenev, G.Ya.; Rudavskij, Eh.Ya.

    1978-01-01

    The velocities of second sound in superfluid 3He-4He solutions were measured by the pulse method in the temperature range from 1.3 K to T_lambda and for 3He concentrations up to 13%. The results obtained, supplemented by those available before, give a complete description of the concentration and temperature dependences of the second sound velocity in superfluid 3He-4He solutions. A comprehensive comparison of the experimental data on the velocity of second sound with theoretical calculations for superfluid solutions with arbitrary 3He content is performed. Good agreement is found between experiment and theory. The experimental data obtained are used to determine the potential which governs the properties of the superfluid solutions.

  17. Microscopic theory of longitudinal sound velocity in charge ordered manganites

    International Nuclear Information System (INIS)

    Rout, G C; Panda, S

    2009-01-01

    A microscopic theory of longitudinal sound velocity in a manganite system is reported here. The manganite system is described by a model Hamiltonian consisting of charge density wave (CDW) interaction in the e_g band, an exchange interaction between spins of the itinerant e_g band electrons and the core t_2g electrons, and the Heisenberg interaction of the core level spins. The magnetization and the CDW order parameters are considered within mean-field approximations. The phonon Green's function was calculated by Zubarev's technique and hence the longitudinal velocity of sound was finally calculated for the manganite system. The results show that the elastic spring involved in the velocity of sound exhibits strong stiffening in the CDW phase with a decrease in temperature as observed in experiments.

  18. Microscopic theory of longitudinal sound velocity in charge ordered manganites

    Energy Technology Data Exchange (ETDEWEB)

    Rout, G C [Condensed Matter Physics Group, PG Department of Applied Physics and Ballistics, FM University, Balasore 756 019 (India); Panda, S, E-mail: gcr@iopb.res.i [Trident Academy of Technology, F2/A, Chandaka Industrial Estate, Bhubaneswar 751 024 (India)

    2009-10-14

    A microscopic theory of longitudinal sound velocity in a manganite system is reported here. The manganite system is described by a model Hamiltonian consisting of charge density wave (CDW) interaction in the e_g band, an exchange interaction between spins of the itinerant e_g band electrons and the core t_2g electrons, and the Heisenberg interaction of the core level spins. The magnetization and the CDW order parameters are considered within mean-field approximations. The phonon Green's function was calculated by Zubarev's technique and hence the longitudinal velocity of sound was finally calculated for the manganite system. The results show that the elastic spring involved in the velocity of sound exhibits strong stiffening in the CDW phase with a decrease in temperature as observed in experiments.

  19. Neuromorphic Audio-Visual Sensor Fusion on a Sound-Localising Robot

    Directory of Open Access Journals (Sweden)

    Vincent Yue-Sek Chan

    2012-02-01

    Full Text Available This paper presents the first robotic system featuring audio-visual sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localisation through self-motion and visual feedback, using an adaptive ITD-based sound localisation algorithm. After training, the robot can localise sound sources (white or pink noise) in a reverberant environment with an RMS error of 4 to 5 degrees in azimuth. In the second part of the paper, we investigate the source binding problem. An experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. The results show that this technique can be quite effective, despite its simplicity.
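
    The robot's localisation is described as an adaptive ITD-based algorithm trained through self-motion and visual feedback; the learning component is not detailed in the abstract. The sketch below only illustrates the underlying ITD idea: estimate the interaural lag by cross-correlation and map it to azimuth with a simple free-field model. The ear spacing, sample rate and sign convention are assumptions for illustration.

      # Basic ITD estimate by cross-correlation, then a simple free-field
      # mapping to azimuth; the robot's adaptive, learned mapping is not shown.
      import numpy as np

      def estimate_itd(left, right, fs):
          """Interaural time difference in seconds (positive: left channel leads)."""
          corr = np.correlate(right, left, mode="full")
          lag_samples = np.argmax(corr) - (len(left) - 1)
          return lag_samples / fs

      def itd_to_azimuth(itd_s, ear_distance_m=0.2, c=343.0):
          """Map an ITD to an azimuth angle (degrees) with a simple free-field model."""
          s = np.clip(itd_s * c / ear_distance_m, -1.0, 1.0)
          return float(np.degrees(np.arcsin(s)))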

  20. Sensory suppression of brain responses to self-generated sounds is observed with and without the perception of agency.

    Science.gov (United States)

    Timm, Jana; Schönwiesner, Marc; Schröger, Erich; SanMiguel, Iria

    2016-07-01

    Stimuli caused by our own movements are given special treatment in the brain. Self-generated sounds evoke a smaller brain response than externally generated ones. This attenuated response may reflect a predictive mechanism to differentiate the sensory consequences of one's own actions from other sensory input. It may also relate to the feeling of being the agent of the movement and its effects, but little is known about how sensory suppression of brain responses to self-generated sounds is related to judgments of agency. To address this question, we recorded event-related potentials in response to sounds initiated by button presses. In one condition, participants perceived agency over the production of the sounds, whereas in another condition, participants experienced an illusory lack of agency caused by changes in the delay between actions and effects. We compared trials in which the timing of button press and sound was physically identical, but participants' agency judgment differed. Results show reduced amplitudes of the auditory N1 component in response to self-generated sounds irrespective of agency experience, whilst P2 effects correlate with the perception of agency. Our findings suggest that suppression of the auditory N1 component to self-generated sounds does not depend on adaptation to specific action-effect time delays and does not determine agency judgments; the suppression of the P2 component, however, might relate more directly to the experience of agency. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. The Dual-channel Extreme Ultraviolet Continuum Experiment: Sounding Rocket EUV Observations of Local B Stars to Determine Their Potential for Supplying Intergalactic Ionizing Radiation

    Science.gov (United States)

    Erickson, Nicholas; Green, James C.; France, Kevin; Stocke, John T.; Nell, Nicholas

    2018-06-01

    We describe the scientific motivation and technical development of the Dual-channel Extreme Ultraviolet Continuum Experiment (DEUCE). DEUCE is a sounding rocket payload designed to obtain the first flux-calibrated spectra of two nearby B stars in the EUV 650-1150Å bandpass. This measurement will help in understanding the ionizing flux output of hot B stars, calibrating stellar models and commenting on the potential contribution of such stars to reionization. DEUCE consists of a grazing incidence Wolter II telescope, a normal incidence holographic grating, and the largest (8” x 8”) microchannel plate detector ever flown in space, covering the 650-1150Å band in medium and low resolution channels. DEUCE will launch on December 1, 2018 as NASA/CU sounding rocket mission 36.331 UG, observing Epsilon Canis Majoris, a B2 II star.

  2. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade an increased attention...... has been paid to, for instance, a category such as ‘sound art’ together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term as musical sound – a recurring example being ‘noise’....

  3. Experiments in Area of Musical Sound in the Chamber and Instrumental Works by Rodion Shchedrin

    Directory of Open Access Journals (Sweden)

    Zaytseva Marina

    2016-10-01

    Full Text Available The article substantiates the peculiarities of Rodion Shchedrin's musical thinking. Through an analysis of such piano works by Rodion Shchedrin as "Imitating Albéniz" and "Humoresque", arranged for violin and piano by D. M. Tsyganov, as well as "Balalaika" for solo violin without a bow, the composer's innovations in the field of violin sound are identified. It is shown that the search for new expressive violin-coloristic resources was driven by the composer's desire to discover new worlds of sound and to create an original work that would masterfully realise the most complex creative tasks.

  4. Investigation of genesis of gallop sounds in dogs by quantitative phonocardiography and digital frequency analysis.

    Science.gov (United States)

    Aubert, A E; Denys, B G; Meno, F; Reddy, P S

    1985-05-01

    Several investigators have noted external gallop sounds to be of higher amplitude than their corresponding internal sounds (S3 and S4). In this study we hoped to determine if S3 and S4 are transmitted in the same manner as S1. In 11 closed-chest dogs, external (apical) and left ventricular pressures and sounds were recorded simultaneously with transducers with identical sensitivity and frequency responses. Volume and pressure overload and positive and negative inotropic drugs were used to generate gallop sounds. Recordings were made in the control state and after the various interventions. S3 and S4 were recorded in 17 experiments each. The amplitude of the external S1 was uniformly higher than that of internal S1, and internal gallop sounds were inconspicuous. With use of Fourier transforms, the gain function was determined by comparing internal to external S1. By inverse transform, the amplitude of the internal gallop sounds was predicted from external sounds. The internal sounds of significant amplitude were predicted in many instances, but the actual recordings showed no conspicuous sounds. The absence of internal gallop sounds of expected amplitude as calculated from the external gallop sounds and the gain function derived from the comparison of internal and external S1 makes it very unlikely that external gallop sounds are derived from internal sounds.
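
    The method described (deriving a gain function by comparing internal and external S1, then predicting internal gallop sounds from external ones by inverse transform) can be sketched as a frequency-domain transfer-function estimate. The segmentation into equal-length windows and the regularisation constant below are assumptions, not details from the study.

      # Sketch of the gain-function idea: a spectral gain is estimated from the
      # internal and external S1 segments, then applied to the external gallop
      # sound to predict the internal one. Segments are assumed to be of equal
      # length; the regularisation constant eps is illustrative.
      import numpy as np

      def estimate_gain(external_s1, internal_s1, eps=1e-8):
          """Spectral gain relating the external recording to the internal one."""
          E = np.fft.rfft(external_s1)
          I = np.fft.rfft(internal_s1)
          return I / (E + eps)

      def predict_internal(external_gallop, gain):
          """Predict the internal gallop sound by inverse transform."""
          G = np.fft.rfft(external_gallop) * gain
          return np.fft.irfft(G, n=len(external_gallop))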

  5. Beneath sci-fi sound: primer, science fiction sound design, and American independent cinema

    OpenAIRE

    Johnston, Nessa

    2012-01-01

    Primer is a very low budget science-fiction film that deals with the subject of time travel; however, it looks and sounds distinctly different from other films associated with the genre. While Hollywood blockbuster sci-fi relies on "sound spectacle" as a key attraction, Primer sounds "lo-fi" and screen-centred, mixed to two channel stereo rather than the now industry-standard 5.1 surround sound. Although this is partly a consequence of the economics of its production, the...

  6. The effects of 5.1 sound presentations on the perception of stereoscopic imagery in video games

    Science.gov (United States)

    Cullen, Brian; Galperin, Daniel; Collins, Karen; Hogue, Andrew; Kapralos, Bill

    2013-03-01

    Stereoscopic 3D (S3D) content in games, film and other audio-visual media has been steadily increasing over the past number of years. However, there are still open, fundamental questions regarding its implementation, particularly as it relates to a multi-modal experience that involves sound and haptics. Research has shown that sound has considerable impact on our perception of 2D phenomena, but very little research has considered how sound may influence stereoscopic 3D. Here we present the results of an experiment that examined the effects of 5.1 surround sound (5.1) and stereo loudspeaker setups on depth perception in relation to S3D imagery within a video game environment. Our aim was to answer the question: "can 5.1 surround sound enhance the participant's perception of depth in the stereoscopic field when compared to traditional stereo sound presentations?" In addition, our study examined how the presence or absence of Doppler frequency shift and frequency fall-off audio effects can also influence depth judgment under these conditions. Results suggest that 5.1 surround sound presentations enhance the apparent depth of stereoscopic imagery when compared to stereo presentations. Results also suggest that the addition of audio effects such as Doppler shift and frequency fall-off filters can influence the apparent depth of S3D objects.

  7. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between...... dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity...... is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  8. Interactive Sonification of Spontaneous Movement of Children - Cross-modal Mapping and the Perception of Body Movement Qualities through Sound

    Directory of Open Access Journals (Sweden)

    Emma Frid

    2016-11-01

    Full Text Available In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude, and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data, and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children's spontaneous movement in terms of energy, smoothness and directness indices. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g. expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g. energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a cross

  9. WAVE: Interactive Wave-based Sound Propagation for Virtual Environments.

    Science.gov (United States)

    Mehra, Ravish; Rungta, Atul; Golas, Abhinav; Ming Lin; Manocha, Dinesh

    2015-04-01

    We present an interactive wave-based sound propagation system that generates accurate, realistic sound in virtual environments for dynamic (moving) sources and listeners. We propose a novel algorithm to accurately solve the wave equation for dynamic sources and listeners using a combination of precomputation techniques and GPU-based runtime evaluation. Our system can handle large environments typically used in VR applications, compute spatial sound corresponding to listener's motion (including head tracking) and handle both omnidirectional and directional sources, all at interactive rates. As compared to prior wave-based techniques applied to large scenes with moving sources, we observe significant improvement in runtime memory. The overall sound-propagation and rendering system has been integrated with the Half-Life 2 game engine, Oculus-Rift head-mounted display, and the Xbox game controller to enable users to experience high-quality acoustic effects (e.g., amplification, diffraction low-passing, high-order scattering) and spatial audio, based on their interactions in the VR application. We provide the results of preliminary user evaluations, conducted to study the impact of wave-based acoustic effects and spatial audio on users' navigation performance in virtual environments.
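
    The paper's system combines precomputed wave-equation solutions with GPU-based runtime evaluation, which is not reproduced here. Purely to illustrate what "wave-based" propagation means, the sketch below advances the scalar wave equation on a 2D grid with a standard finite-difference (leapfrog) update; the grid size, time step and source are arbitrary illustrative choices, unrelated to the authors' pipeline.

      # Generic finite-difference (leapfrog) update for the 2D scalar wave
      # equation, only to illustrate wave-based propagation. Grid and step
      # sizes are chosen to satisfy the CFL stability condition.
      import numpy as np

      def fdtd_step(p_prev, p_curr, c, dt, dx):
          """One time step of p_tt = c^2 * laplacian(p) on a periodic grid."""
          lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
                 np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr) / dx ** 2
          return 2.0 * p_curr - p_prev + (c * dt) ** 2 * lap

      # Example: propagate an impulse for 100 steps on a 200 x 200 grid.
      p_prev = np.zeros((200, 200))
      p_curr = np.zeros((200, 200))
      p_curr[100, 100] = 1.0
      for _ in range(100):
          p_prev, p_curr = p_curr, fdtd_step(p_prev, p_curr, c=343.0, dt=1e-4, dx=0.05)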

  10. Reconstruction of sound speed profile through natural generalized inverse technique

    Digital Repository Service at National Institute of Oceanography (India)

    Murty, T.V.R.; Somayajulu, Y.K.; Murty, C.S.

    An acoustic model has been developed for reconstruction of the vertical sound speed profile in a near-stable or stratified ocean. The generalized inverse method is utilised in the model development. Numerical experiments have been carried out to account...
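
    The abstract names the natural generalized inverse as the reconstruction tool but gives no formulation. A minimal sketch of that idea, assuming a linear forward model d = G m relating a layered sound-speed perturbation m to observations d, is the minimum-norm least-squares solution obtained with the Moore-Penrose pseudoinverse; the kernel matrix and data below are synthetic, not from the study.

      # Minimal sketch, assuming a linear forward model d = G m relating a
      # layered sound-speed perturbation m to observations d; the kernel matrix
      # and data are synthetic, not from the study.
      import numpy as np

      rng = np.random.default_rng(0)
      n_layers, n_obs = 20, 12
      true_profile = np.sin(np.linspace(0.0, np.pi, n_layers))     # "true" perturbation
      G = rng.random((n_obs, n_layers))                            # observation kernels
      d = G @ true_profile + 0.01 * rng.standard_normal(n_obs)     # noisy data

      # Natural generalized inverse = minimum-norm least-squares solution.
      m_hat = np.linalg.pinv(G) @ d
      print(np.round(m_hat[:5], 3))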

  11. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

    This article is an analysis of two sound art performances that took place June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors of this article suggest exploring the events through approaching them as ‘situations’ (Doherty 2009). With this approach it becomes possible to engage...... and combine theories from several fields. Aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces: Visual, auditory, performative, social, spatial and durational dimensions become......

  12. Listen to the band! How sound can realize group identity and enact intergroup domination.

    Science.gov (United States)

    Shayegh, John; Drury, John; Stevenson, Clifford

    2017-03-01

    Recent research suggests that sound appraisal can be moderated by social identity. We validate this finding, and also extend it, by examining the extent to which sound can also be understood as instrumental in intergroup relations. We interviewed nine members of a Catholic enclave in predominantly Protestant East Belfast about their experiences of an outgroup (Orange Order) parade, where intrusive sound was a feature. Participants reported experiencing the sounds as a manifestation of the Orange Order identity and said that it made them feel threatened and anxious because they felt it was targeted at them by the outgroup (e.g., through aggressive volume increases). There was also evidence that the sounds produced community disempowerment, which interviewees explicitly linked to the invasiveness of the music. Some interviewees described organizing to collectively 'drown out' the bands' sounds, an activity which appeared to be uplifting. These findings develop the elaborated social identity model of empowerment, by showing that intergroup struggle and collective self-objectification can operate through sound as well as through physical actions. © 2016 The British Psychological Society.

  13. Cascaded Amplitude Modulations in Sound Texture Perception

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2017-01-01

    In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as "beating" in the envelope-frequency domain. We developed an auditory texture...... model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures (stimuli generated using time-averaged statistics measured from real-world textures).... In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model

  14. Behavioral changes in response to sound exposure and no spatial avoidance of noisy conditions in captive zebrafish.

    Science.gov (United States)

    Neo, Yik Yaw; Parie, Lisa; Bakker, Frederique; Snelderwaard, Peter; Tudorache, Christian; Schaaf, Marcel; Slabbekoorn, Hans

    2015-01-01

    Auditory sensitivity in fish serves various important functions, but also makes fish susceptible to noise pollution. Human-generated sounds may affect behavioral patterns of fish, both in natural conditions and in captivity. Fish are often kept for consumption in aquaculture, on display in zoos and hobby aquaria, and for medical sciences in research facilities, but little is known about the impact of ambient sounds in fish tanks. In this study, we conducted two indoor exposure experiments with zebrafish (Danio rerio). The first experiment demonstrated that exposure to moderate sound levels (112 dB re 1 μPa) can affect the swimming behavior of fish by changing group cohesion, swimming speed and swimming height. Effects were brief for both continuous and intermittent noise treatments. In the second experiment, fish could influence exposure to higher sound levels by swimming freely between an artificially noisy fish tank (120-140 dB re 1 μPa) and another with ambient noise levels (89 dB re 1 μPa). Despite initial startle responses, and a brief period in which many individuals in the noisy tank dived down to the bottom, there was no spatial avoidance or noise-dependent tank preference at all. The frequent exchange rate of about 60 fish passages per hour between tanks was not affected by continuous or intermittent exposures. In conclusion, small groups of captive zebrafish were able to detect sounds at relatively low sound levels and adjust their behavior accordingly. Relatively high sound levels were disturbing at least at the onset, but did not lead to spatial avoidance. Further research is needed to show whether zebrafish are unable to avoid noisy areas or are simply not bothered by them. Quantitatively, these data are not directly applicable to other fish species or other fish tanks, but they do indicate that sound exposure may affect fish behavior in any captive condition.

  15. Sound effects: Multimodal input helps infants find displaced objects.

    Science.gov (United States)

    Shinskey, Jeanne L

    2017-09-01

    Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition. Statement of contribution What is already known on this subject Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input. What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. Object processing becomes more

  16. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  17. Determining the speed of sound in the air by sound wave interference

    Science.gov (United States)

    Silva, Abel A.

    2017-07-01

    Mechanical waves propagate through material media. Sound is an example of a mechanical wave. In fluids like air, sound waves propagate through successive longitudinal perturbations of compression and decompression. Audible sound frequencies for human ears range from 20 to 20 000 Hz. In this study, the speed of sound v in the air is determined using the identification of maxima of interference from two synchronous waves at frequency f. The values of v were corrected to 0 °C. The experimental average value v_exp = 336 ± 4 m s-1 was found, which is 1.5% larger than the reference value. The standard deviation of 4 m s-1 (1.2% of v_exp) is an improved value obtained by use of the concept of the central limit theorem. The proposed procedure to determine the speed of sound in the air aims to be an academic activity for physics classes of scientific and technological courses in college.
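
    The arithmetic behind the experiment can be summarised in a few lines: the wavelength is inferred from the positions of the interference maxima, the speed follows from v = f·lambda, and the result is referred to 0 °C using the standard temperature dependence v(T) = v0·sqrt(1 + T/273.15). The frequency, wavelength and temperature below are illustrative, not the article's data.

      # Illustrative arithmetic only (not the article's data): wavelength from
      # the maxima analysis, v = f * lambda, then referral to 0 degrees C.
      import math

      f = 3400.0           # source frequency in Hz (assumed)
      wavelength = 0.1018  # wavelength inferred from the maxima positions, m (assumed)
      room_temp_c = 25.0   # air temperature during the measurement (assumed)

      v_measured = f * wavelength                                    # speed at room temperature
      v_at_0c = v_measured / math.sqrt(1.0 + room_temp_c / 273.15)  # referred to 0 degrees C
      print(round(v_measured, 1), round(v_at_0c, 1))                 # ~346.1, ~331.3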

  18. Fluid Sounds

    DEFF Research Database (Denmark)

    Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970's, developed into various established research and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field, embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation...

  19. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.
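
    The stimuli were spectrally degraded with a four-channel vocoder. The study's exact processing parameters are not given in the abstract; the sketch below shows a generic noise-excited channel vocoder of that kind (band splitting, envelope extraction, envelope-modulated noise bands, summation), with illustrative band edges and filter orders.

      # Generic four-channel noise vocoder (band-split, envelope extraction,
      # envelope-modulated noise bands, summation). Band edges and filter
      # orders are illustrative; the study's exact parameters are not given.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      def noise_vocode(x, fs, edges=(100.0, 600.0, 1500.0, 3000.0, 6000.0)):
          rng = np.random.default_rng(0)
          out = np.zeros(len(x))
          for lo, hi in zip(edges[:-1], edges[1:]):
              sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
              band = sosfiltfilt(sos, x)                            # analysis band
              env = np.abs(hilbert(band))                           # band envelope
              noise = sosfiltfilt(sos, rng.standard_normal(len(x))) # band-limited noise
              out += env * noise                                    # envelope-modulated noise
          return out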

  20. Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration

    OpenAIRE

    Park, Saebyul; Ban, Seonghoon; Hong, Dae Ryong; Yeo, Woon Seung

    2013-01-01

    SSN (Sound Surfing Network) is a performance system that provides a new musical experience by incorporating mobile phone-based spatial sound control into collaborative music performance. SSN enables both the performer and the audience to manipulate the spatial distribution of sound using the smartphones of the audience as a distributed speaker system. Proposing a new perspective on the social aspect of music appreciation, SSN will provide a new possibility for mobile music performances in the context of in...

  1. Irrelevant sound disrupts speech production: exploring the relationship between short-term memory and experimentally induced slips of the tongue.

    Science.gov (United States)

    Saito, Satoru; Baddeley, Alan

    2004-10-01

    To explore the relationship between short-term memory and speech production, we developed a speech error induction technique. The technique, which was adapted from a Japanese word game, exposed participants to an auditory distractor word immediately before the utterance of a target word. In Experiment 1, the distractor words that were phonologically similar to the target word led to a greater number of errors in speaking the target than did the dissimilar distractor words. Furthermore, the speech error scores were significantly correlated with memory span scores. In Experiment 2, memory span scores were again correlated with the rate of the speech errors that were induced from the task-irrelevant speech sounds. Experiment 3 showed a strong irrelevant-sound effect in the serial recall of nonwords. The magnitude of the irrelevant-sound effects was not affected by phonological similarity between the to-be-remembered nonwords and the irrelevant-sound materials. Analysis of recall errors in Experiment 3 also suggested that there were no essential differences in recall error patterns between the dissimilar and similar irrelevant-sound conditions. We proposed two different underlying mechanisms in immediate memory, one operating via the phonological short-term memory store and the other via the processes underpinning speech production.

  2. Cascaded Amplitude Modulations in Sound Texture Perception

    Directory of Open Access Journals (Sweden)

    Richard McWalter

    2017-09-01

    Full Text Available Sound textures, such as crackling fire or chirping crickets, represent a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that the perception of texture is mediated by time-averaged summary statistics measured from early auditory representations. In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as “beating” in the envelope-frequency domain. We developed an auditory texture model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures—stimuli generated using time-averaged statistics measured from real-world textures. In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model deviants that lacked second-order modulation rate sensitivity. Lastly, the discriminability of textures that included second-order amplitude modulations appeared to be perceived using a time-averaging process. Overall, our results demonstrate that the inclusion of second-order modulation analysis generates improvements in the perceived quality of synthetic textures compared to the first-order modulation analysis considered in previous approaches.
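
    The model is described as a cascade of modulation filterbanks sensitive to second-order ("beating") amplitude modulations. The sketch below conveys only the cascade idea, not the authors' model: take the sound's envelope, band-pass it at a first modulation rate, then take the envelope of that output and band-pass it again at a slower rate; all rates and filter settings are assumed values.

      # Sketch of the cascade idea only (not the authors' model): envelope,
      # first modulation band, envelope of that band, second (slower) band.
      # Rates and filter settings are assumed; in practice the envelope would
      # typically be downsampled before modulation filtering.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      def modulation_band(env, fs, lo, hi):
          sos = butter(2, [lo, hi], btype="band", fs=fs, output="sos")
          return sosfiltfilt(sos, env)

      def second_order_modulation(x, fs, first=(20.0, 40.0), second=(2.0, 6.0)):
          env1 = np.abs(hilbert(x))                      # first-order envelope
          m1 = modulation_band(env1, fs, *first)         # e.g. a ~30 Hz modulation band
          env2 = np.abs(hilbert(m1))                     # envelope of that band
          return modulation_band(env2, fs, *second)      # slow "beating" component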

  3. How does Architecture Sound for Different Musical Instrument Performances?

    DEFF Research Database (Denmark)

    Saher, Konca; Rindel, Jens Holger

    2006-01-01

    This paper discusses how consideration of sound, in particular a specific musical instrument, impacts the design of a room. Properly designed architectural acoustics is fundamental to improving the listening experience of an instrument in rooms in a conservatory. Six discrete instruments (violin, c...... different instruments and the choir experience that could fit into the same category of room. For all calculations and the auralizations, a computational model is used: ODEON 7.0.

  4. Sound Exposure of Symphony Orchestra Musicians

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Pedersen, Ellen Raben; Juhl, Peter Møller

    2011-01-01

    Background: Assessment of sound exposure by noise dosimetry can be challenging, especially when measuring the exposure of classical orchestra musicians, where sound originates from many different instruments. A new measurement method of bilateral sound exposure of classical musicians was developed...... and used to characterize sound exposure of the left and right ear simultaneously in two different symphony orchestras. Objectives: To measure binaural sound exposure of professional classical musicians and to identify possible exposure risk factors of specific musicians. Methods: Sound exposure was measured...... Musicians were exposed up to LAeq8h of 92 dB, and a majority of musicians were exposed to sound levels exceeding...... dBA, and their left ear was exposed 4.6 dB more than the right ear. Percussionists were exposed to high sound peaks >115 dBC, but less continuous sound exposure was observed in this group.
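
    The exposure figures above are equivalent continuous A-weighted levels. As a reminder of how such values are computed (a generic sketch, not the study's measurement chain), LAeq is the level of the mean squared A-weighted pressure re 20 µPa, and a measured LAeq can be normalised to an 8-hour working day to give LAeq,8h; the example numbers are illustrative.

      # Generic computation of an equivalent continuous A-weighted level and
      # its normalisation to an 8-hour exposure; reference pressure 20 uPa.
      # The example values are illustrative, not the study's data.
      import numpy as np

      P_REF = 20e-6   # Pa

      def laeq(p_a_weighted):
          """LAeq over the measurement period from A-weighted pressure samples (Pa)."""
          return 10.0 * np.log10(np.mean((np.asarray(p_a_weighted) / P_REF) ** 2))

      def laeq_8h(laeq_measured, duration_hours):
          """Normalise a measured LAeq to an 8-hour working day (LAeq,8h)."""
          return laeq_measured + 10.0 * np.log10(duration_hours / 8.0)

      # Example: a 3-hour rehearsal measured at 95 dB LAeq.
      print(round(laeq_8h(95.0, 3.0), 1))   # about 90.7 dB LAeq,8h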

  5. Dual-task interference effects on cross-modal numerical order and sound intensity judgments: the more the louder?

    Science.gov (United States)

    Alards-Tomalin, Doug; Walker, Alexander C; Nepon, Hillary; Leboe-McGowan, Launa C

    2017-09-01

    In the current study, cross-task interactions between number order and sound intensity judgments were assessed using a dual-task paradigm. Participants first categorized numerical sequences composed of Arabic digits as either ordered (ascending, descending) or non-ordered. Following each number sequence, participants then had to judge the intensity level of a target sound. Experiment 1 emphasized processing the two tasks independently (serial processing), while Experiments 2 and 3 emphasized processing the two tasks simultaneously (parallel processing). Cross-task interference occurred only when the task required parallel processing and was specific to ascending numerical sequences, which led to a higher proportion of louder sound intensity judgments. In Experiment 4 we examined whether this unidirectional interaction was the result of participants misattributing enhanced processing fluency experienced on ascending sequences as indicating a louder target sound. The unidirectional finding could not be entirely attributed to misattributed processing fluency, and may also be connected to experientially derived conceptual associations between ascending number sequences and greater magnitude, consistent with conceptual mapping theory.

  6. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    Science.gov (United States)

    Wolf, Gail Marie

    2016-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…

  7. (and Sound) of SiMPE

    DEFF Research Database (Denmark)

    Erkut, Cumhur; Jylhä, Antti; Serafin, Stefania

    application stores. As proposed in the ACM Curriculum report, these application domains are attractive in education, especially in computer science and interaction design. The main question is how to systematically integrate the rapidly evolving knowledge, know-how, tools, and techniques of mobile (audio......) programming and interaction design into university curricula. In this paper, we provide an account of our own development and teaching experience. We highlight the outcomes, which are exploratory applications challenging the state-of-the-art in mobile application development based on non-speech sound...

  8. The Swedish sounding rocket programme

    International Nuclear Information System (INIS)

    Bostroem, R.

    1980-01-01

    Within the Swedish Sounding Rocket Program the scientific groups perform experimental studies of magnetospheric and ionospheric physics, upper atmosphere physics, astrophysics, and material sciences in zero g. New projects are planned for studies of auroral electrodynamics using high altitude rockets, investigations of noctilucent clouds, and active release experiments. These will require increased technical capabilities with respect to payload design, rocket performance and ground support as compared with the current program. Coordination with EISCAT and the planned Viking satellite is essential for the future projects. (Auth.)

  9. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

    The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using...... both physical and digital models at three distinct scales. The results suggest hyperboloid geometry, while difficult to fabricate, facilitates sound scattering....

  10. The influence of ski helmets on sound perception and sound localisation on the ski slope

    Directory of Open Access Journals (Sweden)

    Lana Ružić

    2015-04-01

    Full Text Available Objectives: The aim of the study was to investigate whether a ski helmet interferes with sound localization and the time of sound perception in the frontal plane. Material and Methods: Twenty-three participants (age 30.7±10.2) were tested on the slope in 2 conditions, with and without wearing the ski helmet, with 6 different spatially distributed sound stimuli per condition. Each of the subjects had to react as soon as possible when hearing the sound and to signal the correct side of the sound's arrival. Results: The results showed a significant difference in the ability to localize the specific ski sounds; 72.5±15.6% of correct answers without a helmet vs. 61.3±16.2% with a helmet (p < 0.01). However, the performance on this test did not depend on whether the subjects were used to wearing a helmet (p = 0.89). In identifying the timing at which the sound was first perceived, the results were also in favor of the subjects not wearing a helmet. The subjects reported hearing the ski sound cues at 73.4±5.56 m without a helmet vs. 60.29±6.34 m with a helmet (p < 0.001). In that case the results did depend on previously used helmets (p < 0.05), meaning that regular usage of helmets might help to diminish the attenuation of sound identification that occurs because of the helmets. Conclusions: Ski helmets might limit the ability of a skier to localize the direction of sounds of danger and might interfere with the moment at which the sound is first heard.

  11. Ultracold Fermi and Bose gases and Spinless Bose Charged Sound Particles

    Directory of Open Access Journals (Sweden)

    Minasyan V.

    2011-10-01

    Full Text Available We propose a novel approach for investigating the motion of a Bose or Fermi liquid (or gas) which consists of decoupled electrons and ions in the uppermost hyperfine state. Hence, we use the concept of the fluctuation motion of “charged fluid particles” or “charged fluid points” representing a charged longitudinal elastic wave. In turn, this elastic wave is quantized by spinless longitudinal Bose charged sound particles with rest mass m and charge e₀. The existence of spinless Bose charged sound particles allows us to present a new model for the description of a Bose or Fermi liquid via a non-ideal Bose gas of charged sound particles. In this respect, we introduce a new postulate for the superfluid component of a Bose or Fermi liquid determined by means of charged sound particles in the condensate, which may explain the results of experiments connected with ultra-cold Fermi gases of spin-polarized hydrogen, ⁶Li and ⁴⁰K, and such a Bose gas as ⁸⁷Rb in the uppermost hyperfine state, where the Bose-Einstein condensation of charged sound particles is realized by tuning the magnetic field.

  12. 77 FR 37318 - Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort...

    Science.gov (United States)

    2012-06-21

    ...-AA00 Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort... Coast Guard will enforce a Safety Zone for the Sound of Independence event in the Santa Rosa Sound, Fort... during the Sound of Independence. During the enforcement period, entry into, transiting or anchoring in...

  13. Coronal Radio Sounding Experiments with Mars Express: Scintillation Spectra during Low Solar Activity

    International Nuclear Information System (INIS)

    Efimov, A. I.; Lukanina, L. A.; Samoznaev, L. N.; Rudash, V. K.; Chashei, I. V.; Bird, M. K.; Paetzold, M.; Tellmann, S.

    2010-01-01

    Coronal radio sounding observations were carried out with the radio science experiment MaRS on the ESA spacecraft Mars Express during the period from 25 August to 22 October 2004. Differential frequency and log-amplitude fluctuations of the dual-frequency signals were recorded during a period of low solar activity. The data are applicable to low heliographic latitudes, i.e. to slow solar wind. The mean frequency fluctuation and power law index of the frequency fluctuation temporal spectra are determined as a function of heliocentric distance. The radial dependence of the frequency fluctuation spectral index α reflects the previously documented flattening of the scintillation power spectra in the solar wind acceleration region. Temporal spectra of S-band and X-band normalized log-amplitude fluctuations were investigated over the range of fluctuation frequencies 0.01 Hz<ν<0.5 Hz, where the spectral density is approximately constant. The radial variation of the spectral density is analyzed and compared with Ulysses 1991 data, a period of high solar activity. Ranging measurements are presented and compared with frequency and log-amplitude scintillation data. Evidence for a weak increase in the fractional electron density turbulence level is obtained in the range 10-40 solar radii.

  14. Blast noise classification with common sound level meter metrics.

    Science.gov (United States)

    Cvengros, Robert M; Valente, Dan; Nykaza, Edward T; Vipperman, Jeffrey S

    2012-08-01

    A common set of signal features measurable by a basic sound level meter is analyzed, and the quality of information carried in subsets of these features is examined for the ability to discriminate military blast and non-blast sounds. The analysis is based on over 120 000 human-classified signals compiled from seven different datasets. The study implements linear and Gaussian radial basis function (RBF) support vector machines (SVM) to classify blast sounds. Using the orthogonal centroid dimension reduction technique, intuition is developed about the distribution of blast and non-blast feature vectors in high-dimensional space. Recursive feature elimination (SVM-RFE) is then used to eliminate features containing redundant information and to rank features according to their ability to separate blasts from non-blasts. Finally, the accuracy of the linear and RBF SVM classifiers is listed for each of the experiments in the dataset, and the weights are given for the linear SVM classifier.
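    A rough sketch of such a linear SVM-RFE pipeline is shown below, using scikit-learn rather than the authors' code; the synthetic feature matrix, the number of retained features and the regularization constant are placeholders, not values from the paper.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import train_test_split
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      # Hypothetical feature matrix: rows = signals, columns = sound-level-meter
      # metrics; y = 1 for blast, 0 for non-blast (synthetic stand-in data).
      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 12))
      y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # Linear SVM with recursive feature elimination (SVM-RFE): features are
      # ranked by the magnitude of the linear weights and pruned iteratively.
      selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=5, step=1)
      clf = make_pipeline(StandardScaler(), selector)
      clf.fit(X_train, y_train)

      print("selected features:", np.where(selector.support_)[0])
      print("test accuracy:", clf.score(X_test, y_test))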

  15. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  16. Using K-Nearest Neighbor Classification to Diagnose Abnormal Lung Sounds

    Directory of Open Access Journals (Sweden)

    Chin-Hsing Chen

    2015-06-01

    Full Text Available A reported 30% of people worldwide have abnormal lung sounds, including crackles, rhonchi, and wheezes. To date, the traditional stethoscope remains the most popular tool used by physicians to diagnose such abnormal lung sounds; however, many problems arise with the use of a stethoscope, including the effects of environmental noise, the inability to record and store lung sounds for follow-up or tracking, and the physician’s subjective diagnostic experience. This study developed a digital stethoscope to help physicians overcome these problems when diagnosing abnormal lung sounds. In this digital system, mel-frequency cepstral coefficients (MFCCs) were used to extract the features of lung sounds, and then the K-means algorithm was used for feature clustering, to reduce the amount of data for computation. Finally, the K-nearest neighbor method was used to classify the lung sounds. The proposed system can also be used for home care: if the percentage of abnormal lung sound frames is > 30% of the whole test signal, the system can automatically warn the user to visit a physician for diagnosis. We also used bend sensors together with an amplification circuit, Bluetooth, and a microcontroller to implement a respiration detector. The respiratory signal extracted by the bend sensors can be transmitted to the computer via Bluetooth to calculate the respiratory cycle, for real-time assessment. If an abnormal status is detected, the device will warn the user automatically. Experimental results indicated that the error in respiratory cycles between measured and actual values was only 6.8%, illustrating the potential of our detector for home care applications.
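    The record outlines a three-stage pipeline (MFCC features, K-means data reduction, K-nearest-neighbor classification with a 30% abnormal-frame threshold). A compact sketch of that pipeline follows, assuming librosa and scikit-learn; the synthetic signals, cluster counts and class labels are stand-ins, not the paper's data or settings.

      import numpy as np
      import librosa
      from sklearn.cluster import KMeans
      from sklearn.neighbors import KNeighborsClassifier

      sr = 4000
      def mfcc_frames(y):
          """Per-frame MFCC feature vectors (rows) for one lung-sound recording."""
          return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

      # Stand-in recordings (synthetic noise); real use would load recordings instead.
      rng = np.random.default_rng(0)
      normal   = mfcc_frames(rng.normal(size=4 * sr))
      abnormal = mfcc_frames(rng.normal(size=4 * sr) * np.sin(2 * np.pi * 2 * np.arange(4 * sr) / sr))

      # K-means compresses each class to a few prototype vectors, reducing computation.
      protos, labels = [], []
      for cls, frames in enumerate((normal, abnormal)):
          km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(frames)
          protos.append(km.cluster_centers_)
          labels += [cls] * 8
      knn = KNeighborsClassifier(n_neighbors=3).fit(np.vstack(protos), labels)

      # Classify a new recording frame by frame; warn if >30% of frames are abnormal.
      test = mfcc_frames(rng.normal(size=4 * sr))
      if np.mean(knn.predict(test) == 1) > 0.30:
          print("warning: consider visiting a physician")
      else:
          print("no abnormal pattern detected")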

  17. Measurement of sound velocity made easy using harmonic resonant frequencies with everyday mobile technology

    Science.gov (United States)

    Hirth, Michael; Kuhn, Jochen; Müller, Andreas

    2015-02-01

    Recent articles about smartphone experiments have described their applications as experimental tools in different physical contexts [1-4]. They have established that smartphones facilitate experimental setups, thanks to the small size and diverse functions of mobile devices, in comparison to setups with computer-based measurements. In the experiment described in this article, the experimental setup is reduced to a minimum. The objective of the experiment is to determine the speed of sound with a high degree of accuracy using everyday tools. An article published recently proposes a time-of-flight method where sound or acoustic pulses are reflected at the ends of an open tube [5]. In contrast, the following experiment idea is based on the harmonic resonant frequencies of such a tube, simultaneously triggered by a noise signal.
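    For reference, the resonance method reduces to a simple calculation: for a tube of length L the spacing between adjacent resonant frequencies is Δf = v/(2L) for both open-open and closed-open tubes (end corrections neglected), so the speed of sound follows from the measured peak spacing. A small sketch with made-up peak readings:

      import numpy as np

      # Hypothetical measurement: resonant peaks (Hz) read from a smartphone
      # spectrum-analyser app for a tube of length L, excited by broadband noise.
      L = 0.50                                                  # tube length in metres (assumed)
      peaks = np.array([343.0, 686.5, 1029.0, 1372.5, 1715.0])  # example peak frequencies

      # Spacing between adjacent resonances is v / (2 L), independent of tube type.
      delta_f = np.mean(np.diff(peaks))
      v = 2 * L * delta_f
      print(f"speed of sound ≈ {v:.1f} m/s")                    # ≈ 343 m/s for this example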

  18. Phonological Encoding in Speech-Sound Disorder: Evidence from a Cross-Modal Priming Experiment

    Science.gov (United States)

    Munson, Benjamin; Krause, Miriam O. P.

    2017-01-01

    Background: Psycholinguistic models of language production provide a framework for determining the locus of language breakdown that leads to speech-sound disorder (SSD) in children. Aims: To examine whether children with SSD differ from their age-matched peers with typical speech and language development (TD) in the ability phonologically to…

  19. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  20. Sounds of silence: How to animate virtual worlds with sound

    Science.gov (United States)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  1. Joint efforts to harmonize sound insulation descriptors and classification schemes in Europe (COST TU0901)

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2010-01-01

    Sound insulation descriptors, regulatory requirements and classification schemes in Europe represent a high degree of diversity. One implication is very little exchange of experience of housing design and construction details for different levels of sound insulation; another is trade barriers...... for building systems and products. Unfortunately, there is evidence for a development in the "wrong" direction. For example, sound classification schemes for dwellings exist in nine countries. There is no sign of increasing harmonization, rather the contrary, as more countries are preparing proposals with new......, new housing must meet the needs of the people and offer comfort. Also for existing housing, sound insulation aspects should be taken into account when renovating housing; otherwise the renovation is not “sustainable”. A joint European Action, COST TU0901 "Integrating and Harmonizing Sound Insulation...

  2. Selective attention to sound location or pitch studied with event-related brain potentials and magnetic fields.

    Science.gov (United States)

    Degerman, Alexander; Rinne, Teemu; Särkkä, Anna-Kaisa; Salmi, Juha; Alho, Kimmo

    2008-06-01

    Event-related brain potentials (ERPs) and magnetic fields (ERFs) were used to compare brain activity associated with selective attention to sound location or pitch in humans. Sixteen healthy adults participated in the ERP experiment, and 11 adults in the ERF experiment. In different conditions, the participants focused their attention on a designated sound location or pitch, or pictures presented on a screen, in order to detect target sounds or pictures among the attended stimuli. In the Attend Location condition, the location of sounds varied randomly (left or right), while their pitch (high or low) was kept constant. In the Attend Pitch condition, sounds of varying pitch (high or low) were presented at a constant location (left or right). Consistent with previous ERP results, selective attention to either sound feature produced a negative difference (Nd) between ERPs to attended and unattended sounds. In addition, ERPs showed a more posterior scalp distribution for the location-related Nd than for the pitch-related Nd, suggesting partially different generators for these Nds. The ERF source analyses found no source distribution differences between the pitch-related Ndm (the magnetic counterpart of the Nd) and location-related Ndm in the superior temporal cortex (STC), where the main sources of the Ndm effects are thought to be located. Thus, the ERP scalp distribution differences between the location-related and pitch-related Nd effects may have been caused by activity of areas outside the STC, perhaps in the inferior parietal regions.

  3. How Pleasant Sounds Promote and Annoying Sounds Impede Health: A Cognitive Approach

    Directory of Open Access Journals (Sweden)

    Tjeerd C. Andringa

    2013-04-01

    Full Text Available This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of the perceiver can be understood in terms of core affect and motivation. This conceptual basis allows the formulation of a detailed cognitive model describing how sonic content, related to indicators of safety and danger, either allows full freedom over mind-states or forces the activation of a vigilance function with associated arousal. The model leads to a number of detailed predictions that can be used to provide existing soundscape approaches with a solid cognitive science foundation that may lead to novel approaches to soundscape design. These will take into account that louder sounds typically contribute to distal situational awareness while subtle environmental sounds provide proximal situational awareness. The role of safety indicators, mediated by proximal situational awareness and subtle sounds, should become more important in future soundscape research.

  4. Costs of suppressing emotional sound and countereffects of a mindfulness induction: an experimental analog of tinnitus impact.

    Directory of Open Access Journals (Sweden)

    Hugo Hesser

    Full Text Available Tinnitus is the experience of sounds without an appropriate external auditory source. These auditory sensations are intertwined with emotional and attentional processing. Drawing on theories of mental control, we predicted that suppressing an affectively negative sound mimicking the psychoacoustic features of tinnitus would result in decreased persistence in a mentally challenging task (mental arithmetic) that required participants to ignore the same sound, but that receiving a mindfulness exercise would reduce this effect. Normal-hearing participants (N = 119) were instructed to suppress an affectively negative sound under cognitive load or were given no such instructions. Next, participants received either a mindfulness induction or an attention control task. Finally, all participants worked with mental arithmetic while exposed to the same sound. The length of time participants could persist in the second task served as the dependent variable. As hypothesized, results indicated that an auditory suppression rationale reduced time of persistence relative to no such rationale, and that a mindfulness induction counteracted this detrimental effect. The study may offer new insights into the mechanisms involved in the development of tinnitus interference. Implications are also discussed in the broader context of attention control strategies and the effects of emotional sound on task performance. The ironic processes of mental control may have an analog in the experience of sounds.

  5. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...

  6. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS such as filled pauses will require future research.
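    One plausible (not necessarily the authors') realisation of HMM-based LSS/NLSS classification is to train one Gaussian HMM per class on acoustic feature sequences and assign new segments to the higher-scoring model; the sketch below assumes the hmmlearn package and uses synthetic feature vectors in place of real acoustic features.

      import numpy as np
      from hmmlearn import hmm

      rng = np.random.default_rng(0)

      # Stand-in acoustic feature sequences (e.g. frame-wise spectral features);
      # real use would compute language-agnostic acoustic features from audio.
      lss_train  = rng.normal(loc=0.0, size=(2000, 8))
      nlss_train = rng.normal(loc=1.5, size=(2000, 8))

      # One HMM models language speech sounds (LSS), one models non-language
      # speech sounds (NLSS); a segment is assigned to the better-scoring model.
      lss_model  = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20).fit(lss_train)
      nlss_model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20).fit(nlss_train)

      segment = rng.normal(loc=1.5, size=(50, 8))          # unseen frames to classify
      label = "NLSS" if nlss_model.score(segment) > lss_model.score(segment) else "LSS"
      print(label)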

  7. Subband Approach to Bandlimited Crosstalk Cancellation System in Spatial Sound Reproduction

    Science.gov (United States)

    Bai, Mingsian R.; Lee, Chih-Chung

    2006-12-01

    A crosstalk cancellation system (CCS) plays a vital role in spatial sound reproduction using multichannel loudspeakers. However, this technique is still not in widespread practical use due to its heavy computational load. To reduce the computational load, a bandlimited CCS based on a subband filtering approach is presented in this paper. A pseudo-quadrature mirror filter (QMF) bank is employed in the implementation of the CCS filters, which are bandlimited to 6 kHz, where human localization is most sensitive. In addition, a frequency-dependent regularization scheme is adopted in designing the CCS inverse filters. To justify the proposed system, subjective listening experiments were undertaken in an anechoic room. The experiments include two parts: a source localization test and a sound quality test. Analysis of variance (ANOVA) is applied to process the data and assess the statistical significance of the subjective experiments. The results indicate that the bandlimited CCS performed comparably to the fullband CCS, whereas the computational load was reduced by approximately eighty percent.
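    The frequency-dependent regularization mentioned here can be illustrated with a toy two-by-two plant inversion; the sketch below shows only the regularized matrix inversion per frequency bin, not the paper's QMF subband implementation, and all transfer functions and regularization values are invented.

      import numpy as np

      n_fft, fs = 512, 48000
      freqs = np.fft.rfftfreq(n_fft, 1 / fs)

      # Toy 2x2 plant H(f): direct paths stronger than crosstalk paths, small delays.
      def toy_path(gain, delay_ms):
          return gain * np.exp(-2j * np.pi * freqs * delay_ms * 1e-3)

      H = np.empty((freqs.size, 2, 2), dtype=complex)
      H[:, 0, 0] = H[:, 1, 1] = toy_path(1.0, 3.0)      # ipsilateral paths
      H[:, 0, 1] = H[:, 1, 0] = toy_path(0.4, 3.2)      # contralateral (crosstalk) paths

      # Frequency-dependent regularization: heavier above ~6 kHz, where the CCS is
      # band-limited, to keep the inverse filters well behaved.
      beta = np.where(freqs < 6000, 1e-3, 1e-1)

      C = np.empty_like(H)
      for k in range(freqs.size):
          Hk = H[k]
          # Tikhonov-regularized inverse: C = (H^H H + beta I)^-1 H^H
          C[k] = np.linalg.solve(Hk.conj().T @ Hk + beta[k] * np.eye(2), Hk.conj().T)

      print(C.shape)   # one 2x2 cancellation matrix per frequency bin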

  8. Sounding rocket experiments during the IMS period at Syowa Station, Antarctica

    International Nuclear Information System (INIS)

    Hirasawa, T.; Nagata, T.

    1979-01-01

    During the IMS period, 19 sounding rockets were launched into auroras at various stages of polar substorms from Syowa Station (Geomag. lat. = -69.6°, Geomag. long. = 77.1°), Antarctica. Through the successful rocket flights, significant physical quantities in auroras were obtained: 19 profiles of electron density and temperature, 11 energy spectra of precipitating electrons, 15 frequency spectra of VLF and HF plasma waves and 4 vertical profiles of electric and magnetic fields. These rocket data have been analyzed and compared with the coordinated ground-based observation data for studies of polar substorms. (author)

  9. OMNIDIRECTIONAL SOUND SOURCE

    DEFF Research Database (Denmark)

    1996-01-01

    A sound source comprising a loudspeaker (6) and a hollow coupler (4) with an open inlet which communicates with and is closed by the loudspeaker (6) and an open outlet, said coupler (4) comprising rigid walls which cannot respond to the sound pressures produced by the loudspeaker (6). According...

  10. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods to determine the velocity measurements are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  11. Meaning From Environmental Sounds: Types of Signal-Referent Relations and Their Effect on Recognizing Auditory Icons

    Science.gov (United States)

    Keller, Peter; Stevens, Catherine

    2004-01-01

    This article addresses the learnability of auditory icons, that is, environmental sounds that refer either directly or indirectly to meaningful events. Direct relations use the sound made by the target event whereas indirect relations substitute a surrogate for the target. Across 3 experiments, different indirect relations (ecological, in which…

  12. Time-of-Flight Measurement of the Speed of Sound in a Metal Bar

    Science.gov (United States)

    Ganci, Salvatore

    2016-01-01

    A simple setup was designed for a "time-of-flight" measurement of the sound speed in a metal bar. The experiment requires low cost components and is very simple to understand by students. A good use of it is as a demonstration experiment.

  13. A Statistical and Spectral Model for Representing Noisy Sounds with Short-Time Sinusoids

    Directory of Open Access Journals (Sweden)

    Myriam Desainte-Catherine

    2005-07-01

    Full Text Available We propose an original model for noise analysis, transformation, and synthesis: the CNSS model. Noisy sounds are represented with short-time sinusoids whose frequencies and phases are random variables. This spectral and statistical model represents information about the spectral density of frequencies. This perceptually relevant property is modeled by three mathematical parameters that define the distribution of the frequencies. The model also represents the spectral envelope. The mathematical parameters are defined, and the analysis algorithms to extract these parameters from sounds are introduced. Then algorithms for generating sounds from the parameters of the model are presented. Applications of this model include tools for composers, psychoacoustic experiments, and pedagogy.
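    A toy synthesis loop in the spirit of this description is sketched below; the frame length, number of sinusoids and frequency distribution are illustrative assumptions rather than the CNSS parameters.

      import numpy as np

      fs, frame, hop, n_sines = 16000, 1024, 512, 40
      rng = np.random.default_rng(0)
      window = np.hanning(frame)

      def synth_frame(f_low, f_high):
          """One windowed frame: a sum of sinusoids with random frequencies and phases."""
          freqs = rng.uniform(f_low, f_high, n_sines)      # frequency distribution (assumed uniform)
          phases = rng.uniform(0, 2 * np.pi, n_sines)
          t = np.arange(frame) / fs
          return window * np.sum(np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]), axis=0)

      # Overlap-add 50 frames of band-limited "noise" between 500 Hz and 2 kHz.
      out = np.zeros(hop * 50 + frame)
      for i in range(50):
          out[i * hop : i * hop + frame] += synth_frame(500.0, 2000.0)
      out /= np.max(np.abs(out))                           # normalise for playback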

  14. Lymphocytes on sounding rocket flights.

    Science.gov (United States)

    Cogoli-Greuter, M; Pippia, P; Sciola, L; Cogoli, A

    1994-05-01

    Cell-cell interactions and the formation of cell aggregates are important events in the mitogen-induced lymphocyte activation. The fact that the formation of cell aggregates is only slightly reduced in microgravity suggests that cells are moving and interacting also in space, but direct evidence was still lacking. Here we report on two experiments carried out on a flight of the sounding rocket MAXUS 1B, launched in November 1992 from the base of Esrange in Sweden. The rocket reached the altitude of 716 km and provided 12.5 min of microgravity conditions.

  15. Predator sound playbacks reveal strong avoidance responses in a fight strategist baleen whale

    NARCIS (Netherlands)

    Curé, C.; Doksæter Sivle, L.; Visser, F.; Wensveen, P.J.; Isojunno, S.; Harris, C.M.; Kvadsheim, P.H.; Lam, F.P.; Miller, P.J.O.

    2015-01-01

    Anti-predator strategies are often defined as ‘flight’ or ‘fight’, based upon prey anatomical adaptations for size, morphology and weapons, as well as observed behaviours in the presence of predators. The humpback whale Megaptera novaeangliae is considered a ‘fight’ specialist based upon anatomy

  16. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    Science.gov (United States)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated from an infinite vibrating plate attenuates exponentially with increasing distance from the surface of the vibrating plate. However, a discontinuity of the sound field exists at the edge of a finite vibrating plate in practice, which broadens the wavenumber spectrum. A sound wave radiates beyond the evanescent sound field because of this broadening of the wavenumber spectrum. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on a window function that is utilized in the field of signal analysis for reducing the broadening of the frequency spectrum. An optimization calculation is necessary to design a window function suitable for suppressing sound radiation while securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. Therefore, we investigated a suitable method for calculating the sound pressure level in the far field to confirm how the distribution of sound pressure level varies with the window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinitely far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by the method used to calculate the distribution of the sound pressure level at an infinitely far field using the wavenumber spectrum on the vibrating plate. According to the result of comparing the distributions of the sound pressure level in the cases with and without the window function, it was confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was
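    The role of the window function can be illustrated with a one-dimensional toy example: compared with a uniform velocity distribution over a finite plate, a tapered distribution (here a Hann window, purely as an assumption) has a much lower wavenumber spectrum outside the main lobe, which is what suppresses radiation beyond the evanescent region. The plate size and sample count below are invented.

      import numpy as np

      n, L = 512, 0.2                            # samples across the plate, plate width (m)
      uniform = np.ones(n)                       # abrupt edges -> wide wavenumber spectrum
      windowed = np.hanning(n)                   # tapered edges -> reduced broadening

      def wavenumber_spectrum(v):
          V = np.abs(np.fft.rfft(v, 8 * n))      # zero-padded spatial FFT
          V = np.maximum(V, 1e-12 * V.max())     # avoid log of exact zeros
          return 20 * np.log10(V / V.max())

      # Peak level (dB) well outside the main lobe of either distribution.
      print("uniform  sidelobe level:", wavenumber_spectrum(uniform)[40:].max(), "dB")
      print("windowed sidelobe level:", wavenumber_spectrum(windowed)[40:].max(), "dB")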

  17. On second and fourth sound in helium II and their application as acoustical probes of superfluid turbulence

    International Nuclear Information System (INIS)

    Goeje, M.P. de.

    1986-01-01

    Second sound, in which the normal and the superfluid fractions move in opposite directions, is very suitable as a probe of superfluid turbulence. Owing to viscous effects, the application of second sound is restricted to relatively high frequencies in relatively wide tubes. Up to now, no attempts have been reported in the literature to use fourth sound as a probe in narrow tubes - fourth sound being the sound mode in which only the superfluid fraction takes part. This thesis is divided into two parts. The first part describes the use of second sound as a probe to investigate superfluid turbulence generated by a heat flow in a relatively wide flow tube. Part two treats an investigation of the damping of a fourth-sound oscillator, as well as the question of the extent to which fourth sound can be used as a probe of superfluid turbulence in relatively narrow capillaries. In both experiments standing waves have been used, generated in a Helmholtz oscillator. (Auth.)

  18. Convection measurement package for space processing sounding rocket flights. [low gravity manufacturing - fluid dynamics

    Science.gov (United States)

    Spradley, L. W.

    1975-01-01

    The effects on heated fluids of nonconstant accelerations, rocket vibrations, and spin rates, was studied. A system is discussed which can determine the influence of the convective effects on fluid experiments. The general suitability of sounding rockets for performing these experiments is treated. An analytical investigation of convection in an enclosure which is heated in low gravity is examined. The gravitational body force was taken as a time-varying function using anticipated sounding rocket accelerations, since accelerometer flight data were not available. A computer program was used to calculate the flow rates and heat transfer in fluids with geometries and boundary conditions typical of space processing configurations. Results of the analytical investigation identify the configurations, fluids and boundary values which are most suitable for measuring the convective environment of sounding rockets. A short description of fabricated fluid cells and the convection measurement package is given. Photographs are included.

  19. Interactive Sonification of Spontaneous Movement of Children-Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound.

    Science.gov (United States)

    Frid, Emma; Bresin, Roberto; Alborno, Paolo; Elblaus, Ludvig

    2016-01-01

    In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system constituting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children's spontaneous movement in terms of energy-, smoothness- and directness-index. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a

  1. Sound produced by an oscillating arc in a high-pressure gas

    Science.gov (United States)

    Popov, Fedor K.; Shneider, Mikhail N.

    2017-08-01

    We suggest a simple theory to describe the sound generated by small periodic perturbations of a cylindrical arc in a dense gas. Theoretical analysis was done within the framework of the non-self-consistent channel arc model and supplemented with time-dependent gas dynamic equations. It is shown that an arc with power amplitude oscillations on the order of several percent is a source of sound whose intensity is comparable with external ultrasound sources used in experiments to increase the yield of nanoparticles in the high pressure arc systems for nanoparticle synthesis.

  2. 33 CFR 334.410 - Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Albemarle Sound, Pamlico Sound... AND RESTRICTED AREA REGULATIONS § 334.410 Albemarle Sound, Pamlico Sound, and adjacent waters, NC; danger zones for naval aircraft operations. (a) Target areas—(1) North Landing River (Currituck Sound...

  3. EXTRACTION OF SPATIAL PARAMETERS FROM CLASSIFIED LIDAR DATA AND AERIAL PHOTOGRAPH FOR SOUND MODELING

    Directory of Open Access Journals (Sweden)

    S. Biswas

    2012-07-01

    Full Text Available Prediction of outdoor sound levels in 3D space is important for noise management, soundscaping, etc. Outdoor sound levels can be predicted using sound propagation models which need terrain parameters. The existing practices of incorporating terrain parameters into models are often limited due to inadequate data or the inability to determine accurate sound transmission paths through a terrain. This leads to poor accuracy in modelling. LIDAR data and Aerial Photographs (or satellite images) provide an opportunity to incorporate high-resolution data into sound models. To realize this, identification of buildings and other objects and their use for extraction of terrain parameters are fundamental. However, developing a suitable technique to incorporate terrain parameters from classified LIDAR data and Aerial Photographs into sound modelling is a challenge. Determination of terrain parameters along various transmission paths of sound from a sound source to a receiver becomes very complex in an urban environment due to the presence of varied and complex urban features. This paper presents a technique to identify the principal paths through which sound transmits from source to receiver. Further, the identified principal paths are incorporated inside the sound model for sound prediction. Techniques based on plane cutting and line tracing are developed for determining principal paths and terrain parameters, which use various information, e.g., building corners and edges, triangulated ground, tree points and locations of source and receiver. The techniques developed are validated through a field experiment. Finally, the efficacy of the proposed technique is demonstrated by developing a noise map for a test site.
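    As a toy illustration of the line-tracing idea, a direct source-receiver path can be tested against extracted building footprints; the sketch below uses the shapely package with invented 2D coordinates and ignores heights, diffraction and reflections, all of which the paper's technique would handle.

      from shapely.geometry import LineString, Polygon, Point

      # Toy building footprints, as if extracted from classified LIDAR / aerial imagery.
      buildings = [
          Polygon([(10, 10), (20, 10), (20, 20), (10, 20)]),
          Polygon([(30, 5), (35, 5), (35, 25), (30, 25)]),
      ]
      source, receiver = Point(0, 15), Point(50, 15)

      direct = LineString([source, receiver])
      blockers = [b for b in buildings if direct.intersects(b)]
      print("direct path blocked by", len(blockers), "buildings")
      # A propagation model would then trace alternative (diffracted/reflected) paths
      # around the blocking edges and feed their lengths into the sound model.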

  4. Békésy's contributions to our present understanding of sound conduction to the inner ear.

    Science.gov (United States)

    Puria, Sunil; Rosowski, John J

    2012-11-01

    In our daily lives we hear airborne sounds that travel primarily through the external and middle ear to the cochlear sensory epithelium. We also hear sounds that travel to the cochlea via a second sound-conduction route, bone conduction. This second pathway is excited by vibrations of the head and body that result from substrate vibrations, direct application of vibrational stimuli to the head or body, or vibrations induced by airborne sound. The sensation of bone-conducted sound is affected by the presence of the external and middle ear, but is not completely dependent upon their function. Measurements of the differential sensitivity of patients to airborne sound and direct vibration of the head are part of the routine battery of clinical tests used to separate conductive and sensorineural hearing losses. Georg von Békésy designed a careful set of experiments and pioneered many measurement techniques on human cadaver temporal bones, in physical models, and in human subjects to elucidate the basic mechanisms of air- and bone-conducted sound. Looking back one marvels at the sheer number of experiments he performed on sound conduction, mostly by himself without the aid of students or research associates. Békésy's work had a profound impact on the field of middle-ear mechanics and bone conduction fifty years ago when he received his Nobel Prize. Today many of Békésy's ideas continue to be investigated and extended, some have been supported by new evidence, some have been refuted, while others remain to be tested. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
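    The proposed quantity is straightforward to compute once both fields are simulated: the excess sound level is the level difference between the simulated field and the free field, so the overestimated numerical dissipation largely cancels. A minimal sketch with placeholder pressure values:

      import numpy as np

      p_ref = 2e-5                                         # reference pressure, Pa

      def spl(p_rms):
          return 20 * np.log10(p_rms / p_ref)

      p_simulated = np.array([0.11, 0.052, 0.026])         # rms pressure behind a barrier (toy values)
      p_freefield = np.array([0.20, 0.100, 0.050])         # rms pressure, free field (toy values)

      excess = spl(p_simulated) - spl(p_freefield)         # dB relative to free field
      print(np.round(excess, 1))                           # negative values = attenuation by the barrier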

  6. Self-mixing laser Doppler vibrometry with high optical sensitivity application to real-time sound reproduction

    CERN Document Server

    Abe, K; Ko, J Y

    2003-01-01

    Nanometre vibration measurement of an audio speaker and a highly sensitive sound reproduction experiment have been successfully demonstrated by a self-aligned optical feedback vibrometry technique using the self-mixing modulation effect in a laser-diode-pumped microchip solid-state laser. By applying nanometre vibrations to the speaker, which produced nearly inaudible music below 20 dB (200 μPa) sound pressure level, we could reproduce clear sound in real time by the use of a simple frequency modulated wave demodulation circuit with a -120 dB light-intensity feedback ratio.

  7. Self-mixing laser Doppler vibrometry with high optical sensitivity: application to real-time sound reproduction

    International Nuclear Information System (INIS)

    Abe, Kazutaka; Otsuka, Kenju; Ko, Jing-Yuan

    2003-01-01

    Nanometre vibration measurement of an audio speaker and a highly sensitive sound reproduction experiment have been successfully demonstrated by a self-aligned optical feedback vibrometry technique using the self-mixing modulation effect in a laser-diode-pumped microchip solid-state laser. By applying nanometre vibrations to the speaker, which produced nearly inaudible music below 20 dB (200 μPa) sound pressure level, we could reproduce clear sound in real time by the use of a simple frequency modulated wave demodulation circuit with a -120 dB light-intensity feedback ratio

  8. Self-mixing laser Doppler vibrometry with high optical sensitivity: application to real-time sound reproduction

    Energy Technology Data Exchange (ETDEWEB)

    Abe, Kazutaka [Department of Human and Information Science, Tokai University, 1117 Kitakaname, Hiratsuka, Kanagawa (Japan); Otsuka, Kenju [Department of Human and Information Science, Tokai University, 1117 Kitakaname, Hiratsuka, Kanagawa (Japan); Ko, Jing-Yuan [Department of Physics, Tunghai University, 181 Taichung-kang Road, Section 3, Taichung 407, Taiwan (China)

    2003-01-01

    Nanometre vibration measurement of an audio speaker and a highly sensitive sound reproduction experiment have been successfully demonstrated by a self-aligned optical feedback vibrometry technique using the self-mixing modulation effect in a laser-diode-pumped microchip solid-state laser. By applying nanometre vibrations to the speaker, which produced nearly inaudible music below 20 dB (200 μPa) sound pressure level, we could reproduce clear sound in real time by the use of a simple frequency modulated wave demodulation circuit with a -120 dB light-intensity feedback ratio.

  9. How do males of Hypsiboas goianus (Hylidae: Anura) respond to conspecific acoustic stimuli?

    Directory of Open Access Journals (Sweden)

    Alessandro R. Morais

    2015-12-01

    Full Text Available Acoustic communication plays an important role in the social behavior of anurans. Acoustic signals, which can be used in different contexts such as mate attraction and territory defense, may mediate social interactions among individuals. Herein, we used playback experiments to test whether males of Hypsiboas goianus (Lutz, 1968) change their vocal behavior in response to conspecific advertisement calls. Specifically, we used different field playback experiments in which we modified the time interval between advertisement calls to simulate males with distinct states of motivation (Sequences A and B). We did not observe differences in the acoustic response of males of H. goianus between the two types of field playback experiments. On the other hand, we observed that H. goianus males reduce the dominant frequency of the advertisement call and increase the rate of aggressive calls in response to a conspecific competitor. Our results suggest that the acoustic plasticity observed in males of H. goianus represents an aggressive response that serves to repel conspecific individuals.

  10. Sounding out the logo shot

    OpenAIRE

    Nicolai Jørgensgaard Graakjær

    2013-01-01

    This article focuses on how sound in combination with visuals (i.e. ‘branding by’) may possibly affect the signifying potentials (i.e. ‘branding effect’) of products and corporate brands (i.e. ‘branding of’) during logo shots in television commercials (i.e. ‘branding through’). This particular focus adds both to the understanding of sound in television commercials and to the understanding of sound brands. The article firstly presents a typology of sounds. Secondly, this typology is applied...

  11. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement from the fundamental theoretical background to practical applications of the measurement technique.

  12. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement from the fundamental theoretical background to practical applications of the measurement technique.

  13. SoleSound

    DEFF Research Database (Denmark)

    Zanotto, Damiano; Turchet, Luca; Boggs, Emily Marie

    2014-01-01

    This paper introduces the design of SoleSound, a wearable system designed to deliver ecological, audio-tactile, underfoot feedback. The device, which primarily targets clinical applications, uses an audio-tactile footstep synthesis engine informed by the readings of pressure and inertial sensors...... embedded in the footwear to integrate enhanced feedback modalities into the authors' previously developed instrumented footwear. The synthesis models currently implemented in the SoleSound simulate different ground surface interactions. Unlike similar devices, the system presented here is fully portable...

  14. Sound engineering for diesel engines; Sound Engineering an Dieselmotoren

    Energy Technology Data Exchange (ETDEWEB)

    Enderich, A.; Fischer, R. [MAHLE Filtersysteme GmbH, Stuttgart (Germany)

    2006-07-01

    The strong acceptance of vehicles powered by turbo-charged diesel engines encourages several manufacturers to think about sportive diesel concepts. The approach of suppressing unpleasant noise by applying dedicated insulation measures is not adequate to satisfy sportive needs: the acoustics cannot follow the engine's performance. This report documents that it is possible to give diesel-powered vehicles a sportive sound characteristic by using an advanced MAHLE motor-sound-system with a pressure-resistant membrane and an integrated load-controlled flap. With this, the specific acoustic disadvantages of the diesel engine, such as "diesel knock" or rough engine running, can be masked. However, the application of a motor-sound-system must not negate the original character of the diesel engine concept, but should accentuate its strong torque characteristic in the middle engine speed range. (orig.)

  15. Experimental investigation of sound absorption properties of perforated date palm fibers panel

    International Nuclear Information System (INIS)

    Elwaleed, A K; Nikabdullah, N; Nor, M J M; Tahir, M F M; Zulkifli, R

    2013-01-01

    This paper presents the sound absorption properties of a perforated panel made from natural waste date palm fiber. A single layer of the date palm fibers was tested in this study for its sound absorption properties. The experimental measurements were carried out using an impedance tube at the acoustics lab, Faculty of Engineering, Universiti Kebangsaan Malaysia. The experiment was conducted for the panel without an air gap, with an air gap, and with a perforated plate facing. Three air gap thicknesses of 10 mm, 20 mm and 30 mm were used between the date palm fiber sample and the rigid backing of the impedance tube. The results showed that when the date palm fiber sample was faced with a perforated plate, the sound absorption coefficient improved at the higher and lower frequency ranges. This increase in sound absorption coincided with a reduction in medium-frequency absorption. However, this could be improved by using different densities or perforated plates with the date palm fiber panel.

  16. Sonic mediations: body, sound, technology

    NARCIS (Netherlands)

    Birdsall, C.; Enns, A.

    2008-01-01

    Sonic Mediations: Body, Sound, Technology is a collection of original essays that represents an invaluable contribution to the burgeoning field of sound studies. While sound is often posited as having a bridging function, as a passive in-between, this volume invites readers to rethink the concept of

  17. System for actively reducing sound

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2005-01-01

    A system for actively reducing sound from a primary noise source, such as traffic noise, comprising: a loudspeaker connector for connecting to at least one loudspeaker for generating anti-sound for reducing said noisy sound; a microphone connector for connecting to at least a first microphone placed

  18. Long-term memory of heterospecific vocalizations by African lions

    Science.gov (United States)

    Grinnell, Jon; van Dyk, Gus; Slotow, Rob

    2005-09-01

    Animals that use and evaluate long-distance signals have the potential to glean valuable information about others in their environment via eavesdropping. In those areas where they coexist, African lions (Panthera leo) are a significant eavesdropper on spotted hyenas (Crocuta crocuta), often using hyena vocalizations to locate and scavenge from hyena kills. This relationship was used to test African lions' long-term memory of the vocalizations of spotted hyenas via playback experiments. Hyena whoops and a control sound (Canis lupus howls) were played to three populations of lions in South Africa: (1) lions with past experience of spotted hyenas; (2) lions with current experience; and (3) lions with no experience. The results strongly suggest that lions have the cognitive ability to remember the vocalizations of spotted hyenas even after 10 years with no contact of any kind with them. Such long-term memory of heterospecific vocalizations may be widespread in species that gain fitness benefits from eavesdropping on others, but where such species are sympatric and often interact it may pass unrecognized as short-term memory instead.

  19. Behavioural changes in response to sound exposure and no spatial avoidance of noisy conditions in captive zebrafish

    Directory of Open Access Journals (Sweden)

    Yik Yaw (Errol) Neo

    2015-02-01

    Auditory sensitivity in fish serves various important functions, but also makes fish susceptible to noise pollution. Human-generated sounds may affect behavioural patterns of fish, both in natural conditions and in captivity. Fish are often kept for consumption in aquaculture, on display in zoos and hobby aquaria, and for medical sciences in research facilities, but little is known about the impact of ambient sounds in fish tanks. In this study, we conducted two indoor exposure experiments with zebrafish (Danio rerio). The first experiment demonstrated that exposure to moderate sound levels (112 dB re 1 μPa) can affect the swimming behaviour of fish by changing group cohesion, swimming speed and swimming height. Effects were brief for both continuous and intermittent noise treatments. In the second experiment, fish could influence exposure to higher sound levels by swimming freely between an artificially noisy fish tank (120-140 dB re 1 μPa) and another with ambient noise levels (89 dB re 1 μPa). Despite initial startle responses, and a brief period in which many individuals in the noisy tank dived down to the bottom, there was no spatial avoidance or noise-dependent tank preference at all. The frequent exchange rate of about 60 fish passages per hour between tanks was not affected by continuous or intermittent exposures. In conclusion, small groups of captive zebrafish were able to detect sounds at relatively low sound levels and adjusted their behaviour accordingly. Relatively high sound levels were disturbing at least at the onset, but did not lead to spatial avoidance. Further research is needed to show whether zebrafish are unable to avoid noisy areas or simply not bothered by them. Quantitatively, these data are not directly applicable to other fish species or other fish tanks, but they do indicate that sound exposure may affect fish behaviour in any captive condition.

  20. The ecological and evolutionary consequences of noise-induced acoustic habitat loss

    Science.gov (United States)

    Tennessen, Jennifer Beissinger

    Anthropogenic threats are facilitating rapid environmental change and exerting novel pressures on the integrity of ecological patterns and processes. Currently, habitat loss is the leading factor contributing to global biodiversity loss. Noise created by human activities is nearly ubiquitous in terrestrial and marine systems, and causes acoustic habitat loss by interfering with species' abilities to freely send and receive critical acoustic biological information. My dissertation investigates how novel sounds from human activities affect ecological and evolutionary processes in space and time in marine and terrestrial systems, and how species may cope with this emerging novel pressure. Using species from both marine and terrestrial systems, I present results from a theoretical investigation, and four acoustic playback experiments combining laboratory studies and field trials, that reveal a range of eco-evolutionary consequences of noise-induced acoustic habitat loss. First, I use sound propagation modeling to assess how marine shipping noise reduces communication space between mother-calf pairs of North Atlantic right whales (Eubalaena glacialis), an important unit of an endangered species. I show that shipping noise poses significant challenges for mother-calf pairs, but that vocal compensation strategies can substantially improve communication space. Next, in a series of acoustic playback experiments I show that road traffic noise impairs breeding migration behavior and physiology of wood frogs (Lithobates sylvaticus). This work reveals the first evidence that traffic noise elicits a physiological stress response and suppresses production of antimicrobial peptides (a component of the innate immune response) in anurans. Further, wood frogs from populations with a history of inhabiting noisy sites mounted reduced physiological stress responses to continuous traffic noise exposure. This research using wood frogs suggests that chronic traffic noise exposure has
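
    To make the notion of communication space more concrete (the sketch below is an illustrative assumption, not the dissertation's propagation model, and all levels are placeholder values), the passive sonar equation with simple spherical spreading already shows how added shipping noise shrinks the range over which a call stays detectable:

```python
# Illustrative sketch (assumed approach, not the dissertation's actual model):
# estimate the maximum range at which a call remains detectable, using the
# passive sonar equation with spherical spreading.  All numbers below are
# placeholder values, not measured quantities from the study.
import numpy as np

def max_communication_range(source_level_db, noise_level_db,
                            detection_threshold_db=10.0,
                            max_range_m=100_000.0):
    """Largest range r (m) where SL - TL(r) - NL >= DT, with TL = 20*log10(r)."""
    ranges = np.logspace(0, np.log10(max_range_m), 10_000)   # 1 m .. max range
    transmission_loss = 20.0 * np.log10(ranges)              # spherical spreading
    excess = source_level_db - transmission_loss - noise_level_db
    audible = ranges[excess >= detection_threshold_db]
    return audible.max() if audible.size else 0.0

# Example: how added ambient noise shrinks the usable range (assumed levels).
quiet = max_communication_range(source_level_db=170, noise_level_db=90)
noisy = max_communication_range(source_level_db=170, noise_level_db=110)
print(f"quiet ambient: ~{quiet/1000:.1f} km, noisy ambient: ~{noisy/1000:.1f} km")
```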

  1. Search for fourth sound propagation in supersolid 4He

    International Nuclear Information System (INIS)

    Aoki, Y.; Kojima, H.; Lin, X.

    2008-01-01

    A systematic study is carried out to search for fourth sound propagation in solid 4He samples below 500 mK down to 40 mK between 25 and 56 bar using the techniques of a heat pulse generator and a titanium superconducting transition edge bolometer. If solid 4He is endowed with superfluidity below 200 mK, as indicated by recent torsional oscillator experiments, theories predict fourth sound propagation in such a supersolid state. If found, fourth sound would provide convincing evidence for superfluidity and a new tool for studying the new phase. The search for a fourth sound-like mode is based on the response of the bolometers to heat pulses traveling through cylindrical samples of solids grown with different crystal qualities. Bolometers with increasing sensitivity are constructed. The heat pulse generator amplitude is reduced to the sensitivity limit to search for any critical velocity effects. The fourth sound velocity is expected to vary as √(ρ_s/ρ). Searches for a signature in the bolometer response with such a characteristic temperature dependence are made. The measured response signal has not so far revealed any signature of a new propagating mode within a temperature excursion of 5 μK from the background signal shape. Possible reasons for this negative result are discussed. Prior to the fourth sound search, the temperature dependence of heat pulse propagation was studied as it transformed from 'second sound' in the normal solid 4He to transverse ballistic phonon propagation. Our work extends the studies of [V. Narayanamurti and R. C. Dynes, Phys. Rev. B 12, 1731 (1975)] to higher pressures and to lower temperatures. The measured transverse ballistic phonon propagation velocity is found to remain constant (within the 0.3% scatter of the data) below 100 mK at all pressures and reveals no indication of an onset of supersolidity. The overall dynamic thermal response of the solid to heat input is found to depend strongly on the sample preparation procedure.
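
    The scaling mentioned in the abstract (reconstructed here from the garbled symbols; the notation below is the standard one, not a quotation of the original paper) is:

```latex
% Expected fourth-sound scaling in a supersolid (standard relation, shown
% only to clarify the garbled expression in the record):
\[
  c_{4} \;\propto\; \sqrt{\frac{\rho_{s}}{\rho}}
\]
% \rho_{s}: superfluid (supersolid) density, \rho: total density; c_{4}
% therefore vanishes as the superfluid fraction goes to zero.
```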

  2. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    Sounds in the natural environment form an important class of biologically relevant nonstationary signals. We propose a dynamic spectral measure to characterize the spectral dynamics of such non-stationary sound signals and classify them based on rate of change of spectral dynamics. We categorize sounds with slowly ...

  3. Time measurements with a mobile device using sound

    Science.gov (United States)

    Wisman, Raymond F.; Spahn, Gabriel; Forinash, Kyle

    2018-05-01

    Data collection is a fundamental skill in science education, one that students generally practice in a controlled setting using equipment only available in the classroom laboratory. However, using smartphones with their built-in sensors and often free apps, many fundamental experiments can be performed outside the laboratory. Taking advantage of these tools often requires creative approaches to data collection and exploring alternative strategies for experimental procedures. As examples, we present several experiments using smartphones and apps that record and analyze sound to measure a variety of physical properties.
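
    As an illustration of the kind of measurement the article has in mind (the code below is an assumed sketch, not the authors' procedure or app): the delay between two impulsive sounds in a recording, for example a clap and its echo, can be read off the waveform and converted into a distance with the speed of sound.

```python
# Illustrative sketch only (assumed approach, not the article's own code):
# estimate the delay between two impulsive sounds (e.g. a clap and its echo)
# in a mono recording, then convert it to a distance with the speed of sound.
import numpy as np
from scipy.io import wavfile

def echo_delay_seconds(path, min_separation_s=0.01):
    rate, samples = wavfile.read(path)           # mono WAV assumed
    env = np.abs(samples.astype(float))
    first = int(np.argmax(env))                  # strongest peak = direct sound
    # skip samples too close to the first peak, then find the echo peak
    start = first + int(min_separation_s * rate)
    second = start + int(np.argmax(env[start:]))
    return (second - first) / rate

# Example use (file name is a placeholder):
# dt = echo_delay_seconds("clap_and_echo.wav")
# distance = 343.0 * dt / 2.0   # round trip at ~343 m/s
```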

  4. Dog-directed speech: why do we use it and do dogs pay attention to it?

    Science.gov (United States)

    Ben-Aderet, Tobey; Gallego-Abenza, Mario; Reby, David; Mathevon, Nicolas

    2017-01-11

    Pet-directed speech is strikingly similar to infant-directed speech, a peculiar speaking pattern with higher pitch and slower tempo known to engage infants' attention and promote language learning. Here, we report the first investigation of potential factors modulating the use of dog-directed speech, as well as its immediate impact on dogs' behaviour. We recorded adult participants speaking in front of pictures of puppies, adult and old dogs, and analysed the quality of their speech. We then performed playback experiments to assess dogs' reaction to dog-directed speech compared with normal speech. We found that human speakers used dog-directed speech with dogs of all ages and that the acoustic structure of dog-directed speech was mostly independent of dog age, except for sound pitch which was relatively higher when communicating with puppies. Playback demonstrated that, in the absence of other non-auditory cues, puppies were highly reactive to dog-directed speech, and that the pitch was a key factor modulating their behaviour, suggesting that this specific speech register has a functional value in young dogs. Conversely, older dogs did not react differentially to dog-directed speech compared with normal speech. The fact that speakers continue to use dog-directed speech with older dogs therefore suggests that this speech pattern may mainly be a spontaneous attempt to facilitate interactions with non-verbal listeners. © 2017 The Author(s).

  5. A noise reduction technique based on nonlinear kernel function for heart sound analysis.

    Science.gov (United States)

    Mondal, Ashok; Saxena, Ishan; Tang, Hong; Banerjee, Poulami

    2017-02-13

    The main difficulty encountered in the interpretation of cardiac sound is interference from noise. The contaminating noise obscures the relevant information that is useful for recognition of heart diseases. The unwanted signals are produced mainly by the lungs and the surrounding environment. In this paper, a novel heart sound de-noising technique has been introduced based on a combined framework of wavelet packet transform (WPT) and singular value decomposition (SVD). The most informative node of the wavelet tree is selected on the criterion of a mutual information measurement. Next, the coefficients corresponding to the selected node are processed by the SVD technique to suppress the noisy component of the heart sound signal. To justify the efficacy of the proposed technique, several experiments have been conducted with a heart sound dataset, including normal and pathological cases at different signal-to-noise ratios. The significance of the method is validated by statistical analysis of the results. The biological information preserved in the de-noised heart sound (HS) signal is evaluated by a k-means clustering algorithm and Fit Factor calculation. The overall results show that the proposed method is superior to the baseline methods.
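
    The processing chain described above (wavelet packet decomposition, selection of one node, SVD-based suppression) can be sketched roughly as follows. This is a simplified illustration under stated assumptions: the node is chosen by maximum energy rather than the authors' mutual-information criterion, and a fixed SVD rank is retained instead of an adaptive threshold.

```python
# Rough sketch of a WPT + SVD de-noising pipeline in the spirit of the paper.
# Assumptions: node selected by maximum energy (placeholder for the authors'
# mutual-information rule), fixed rank kept, signal long enough to frame.
import numpy as np
import pywt

def denoise_heart_sound(signal, wavelet="db6", level=4, keep_rank=3, frame=64):
    """Return a de-noised copy of `signal` (1-D numpy array)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    best = max(nodes, key=lambda n: float(np.sum(n.data ** 2)))  # placeholder rule

    coeffs = best.data
    n_frames = len(coeffs) // frame
    mat = coeffs[: n_frames * frame].reshape(n_frames, frame)

    # Keep only the strongest singular components (assumed to carry the
    # heart-sound structure) and zero out the rest.
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    s[keep_rank:] = 0.0
    cleaned = (u * s) @ vt

    best.data = np.concatenate([cleaned.ravel(), coeffs[n_frames * frame:]])
    return wp.reconstruct(update=False)[: len(signal)]
```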

  6. Heart Sound Localization and Reduction in Tracheal Sounds by Gabor Time-Frequency Masking

    OpenAIRE

    SAATCI, Esra; Akan, Aydın

    2018-01-01

    Background and aim: Respiratory sounds, i.e. tracheal and lung sounds, have been of great interest due to their diagnostic values as well as the potential of their use in the estimation of the respiratory dynamics (mainly airflow). Thus the aim of the study is to present a new method to filter the heart sound interference from the tracheal sounds. Materials and methods: Tracheal sounds and airflow signals were collected by using an accelerometer from 10 healthy subjects. Tracheal sounds were then pr...

  7. Exposure to arousal-inducing sounds facilitates visual search.

    Science.gov (United States)

    Asutay, Erkin; Västfjäll, Daniel

    2017-09-04

    Exposure to affective stimuli could enhance perception and facilitate attention by increasing alertness and vigilance and by decreasing attentional thresholds. However, evidence on the impact of affective sounds on perception and attention is scant. Here, a novel aspect of affective facilitation of attention is studied: whether arousal induced by task-irrelevant auditory stimuli could modulate attention in a visual search. In two experiments, participants performed a visual search task with and without auditory cues that preceded the search. Participants were faster in locating high-salient targets compared to low-salient targets. Critically, search times and search slopes decreased with increasing auditory-induced arousal while searching for low-salient targets. Taken together, these findings suggest that arousal induced by sounds can facilitate attention in a subsequent visual search. This novel finding provides support for the alerting function of the auditory system by showing an auditory-phasic alerting effect in visual attention. The results also indicate that stimulus arousal modulates the alerting effect. Attention and perception are our everyday tools to navigate our surrounding world, and the current findings, showing that affective sounds could influence visual attention, provide evidence that we make use of affective information during perceptual processing.

  8. 27 CFR 9.151 - Puget Sound.

    Science.gov (United States)

    2010-04-01

    27 CFR (Alcohol, Tobacco Products and Firearms), § 9.151 Puget Sound. (a) Name. The name of the viticultural area described in this section is “Puget Sound.” (b) Approved maps. The appropriate maps for determining the boundary of the Puget Sound viticultural area are...

  9. How Pleasant Sounds Promote and Annoying Sounds Impede Health: A Cognitive Approach

    NARCIS (Netherlands)

    Andringa, Tjeerd C.; Lanser, J. Jolie L.

    2013-01-01

    This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of

  10. Of Sound Mind: Mental Distress and Sound in Twentieth-Century Media Culture

    NARCIS (Netherlands)

    Birdsall, C.; Siewert, S.

    2013-01-01

    This article seeks to specify the representation of mental disturbance in sound media during the twentieth century. It engages perspectives on societal and technological change across the twentieth century as crucial for aesthetic strategies developed in radio and sound film production. The analysis

  11. Vibrotactile Identification of Signal-Processed Sounds from Environmental Events Presented by a Portable Vibrator: A Laboratory Study

    Directory of Open Access Journals (Sweden)

    Parivash Ranjbar

    2008-09-01

    Objectives: To evaluate different signal-processing algorithms for tactile identification of environmental sounds in a monitoring aid for the deafblind. Subjects: Two men and three women, sensorineurally deaf or profoundly hearing impaired with experience of vibratory experiments, age 22-36 years. Methods: A closed set of 45 representative environmental sounds were processed using two transposing (TRHA, TR1/3) and three modulating algorithms (AM, AMFM, AMMC) and presented as tactile stimuli using a portable vibrator in three experiments. The algorithms TRHA, TR1/3, AMFM and AMMC had two alternatives (with and without adaptation to vibratory thresholds). In Exp. 1, the sounds were preprocessed and directly fed to the vibrator. In Exp. 2 and 3, the sounds were presented in an acoustic test room, without or with background noise (SNR = +5 dB), and processed in real time. Results: In Exp. 1, Algorithms AMFM and AMFM(A) consistently had the lowest identification scores and were thus excluded from Exp. 2 and 3. TRHA, AM, AMMC, and AMMC(A) showed comparable identification scores (30%-42%), and the addition of noise did not deteriorate the performance. Discussion: Algorithms TRHA, AM, AMMC, and AMMC(A) showed good performance in all three experiments and were robust in noise; they can therefore be used in further testing in real environments.

  12. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    Science.gov (United States)

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

    In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., BARKING dog) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., DRUMMING drum). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, both effects were additive-that is, the semantic interference effects were of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.

  13. A Study to Interpret the Biological Significance of Behavior Associated with 3S Experimental Sonar Exposures

    Science.gov (United States)

    2015-09-30

    species; 2.) quantitative comparison of behavior, and behavioral changes, during sonar presentation and playback of killer whale sounds across the 3S... foraging dives were pre-classified from the remaining dives first by determining a break-point depth in the depth versus duration relationship, and then...AIC point to divide dive depth versus duration relationships (Fig. 2). 50.8% of dives greater than 15m in depth were classified as foraging dives

  14. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

    OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and frequencies among normal hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between the speakers' axes in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p<0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  15. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial.

    Science.gov (United States)

    Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah

    2018-01-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate "Correct"/"Incorrect" feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a "Wizard of Oz" experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human "Wizard" will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.

  16. Coherence of the irrelevant-sound effect: individual profiles of short-term memory and susceptibility to task-irrelevant materials.

    Science.gov (United States)

    Elliott, Emily M; Cowan, Nelson

    2005-06-01

    We examined individual and developmental differences in the disruptive effects of irrelevant sounds on serial recall of printed lists. In Experiment 1, we examined adults (N = 205) receiving eight-item lists to be recalled. Although their susceptibility to disruption of recall by irrelevant sounds was only slightly related to memory span, regression analyses documented highly reliable individual differences in this susceptibility across speech and tone distractors, even with variance from span level removed. In Experiment 2, we examined adults (n = 64) and 8-year-old children (n = 63) receiving lists of a length equal to a predetermined span and one item shorter (span-1). We again found significant relationships between measures of span and susceptibility to irrelevant sounds, although in only two of the measures. We conclude that some of the cognitive processes helpful in performing a span task may not be beneficial in the presence of irrelevant sounds.

  17. Description and Flight Performance Results of the WASP Sounding Rocket

    Science.gov (United States)

    De Pauw, J. F.; Steffens, L. E.; Yuska, J. A.

    1968-01-01

    A general description of the design and construction of the WASP sounding rocket and of the performance of its first flight is presented. The purpose of the flight test was to place the 862-pound (391-kg) spacecraft above 250 000 feet (76.25 km) on a free-fall trajectory for at least 6 minutes in order to study the effect of "weightlessness" on a slosh dynamics experiment. The WASP sounding rocket fulfilled its intended mission requirements. The sounding rocket approximately followed a nominal trajectory. The payload was in free fall above 250 000 feet (76.25 km) for 6.5 minutes and reached an apogee altitude of 134 nautical miles (248 km). Flight data including velocity, altitude, acceleration, roll rate, and angle of attack are discussed and compared to nominal performance calculations. The effect of residual burning of the second stage motor is analyzed. The flight vibration environment is presented and analyzed, including root mean square (RMS) and power spectral density analysis.

  18. The effect of sound speed profile on shallow water shipping sound maps

    NARCIS (Netherlands)

    Sertlek, H.Ö.; Binnerts, B.; Ainslie, M.A.

    2016-01-01

    Sound mapping over large areas can be computationally expensive because of the large number of sources and large source-receiver separations involved. In order to facilitate computation, a simplifying assumption sometimes made is to neglect the sound speed gradient in shallow water. The accuracy of

  19. Sound wave transmission (image)

    Science.gov (United States)

    When sound waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain where they are interpreted by the brain as sound. The hearing mechanisms within the inner ear can ...

  20. Sound & The Society

    DEFF Research Database (Denmark)

    Schulze, Holger

    2014-01-01

    How are those sounds you hear right now socially constructed and evaluated, how are they architecturally conceptualized, and how dependent on urban planning, industrial developments and political decisions are they really? How is your ability to hear intertwined with social interactions and their professional design? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Nina Backmann, Jochen Bonz, Stefan Krebs, Esther Schelander & Holger Schulze

  1. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    1. Introduction  2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium  3. Predicting the Acoustical Properties of Outdoor Ground Surfaces  4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models  5. Predicting Effects of Source Characteristics on Outdoor Sound  6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects  7. Influence of Source Motion on Ground Effect and Diffraction  8. Predicting Effects of Mixed Impedance Ground  9. Predicting the Performance of Outdoor Noise Barriers  10. Predicting Effects of Vegetation, Trees and Turbulence  11. Analytical Approximations including Ground Effect, Refraction and Turbulence  12. Prediction Schemes  13. Predicting Sound in an Urban Environment.

  2. A digital-type fluxgate magnetometer using a sigma-delta digital-to-analog converter for a sounding rocket experiment

    International Nuclear Information System (INIS)

    Iguchi, Kyosuke; Matsuoka, Ayako

    2014-01-01

    One of the design challenges for future magnetospheric satellite missions is optimizing the mass, size, and power consumption of the instruments to meet the mission requirements. We have developed a digital-type fluxgate (DFG) magnetometer that is anticipated to have significantly less mass and volume than the conventional analog-type. Hitherto, the lack of a space-grade digital-to-analog converter (DAC) with good accuracy has prevented the development of a high-performance DFG. To solve this problem, we developed a high-resolution DAC using parts whose performance was equivalent to existing space-grade parts. The developed DAC consists of a 1-bit second-order sigma-delta modulator and a fourth-order analog low-pass filter. We tested the performance of the DAC experimentally and found that it had better than 17-bit resolution in 80% of the measurement range, and the linearity error was 2^−13.3 of the measurement range. We built a DFG flight model (in which this DAC was embedded) for a sounding rocket experiment as an interim step in the development of a future satellite mission. The noise of this DFG was 0.79 nT rms at 0.1–10 Hz, which corresponds to a roughly 17-bit resolution. The results show that the sigma-delta DAC and the DFG had a performance that is consistent with our optimized design, and the noise was as expected from the noise simulation. Finally, we have confirmed that the DFG worked successfully during the flight of the sounding rocket. (paper)
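
    To make the converter architecture concrete, below is a minimal behavioral model of a 1-bit, second-order sigma-delta modulator of the general kind described; the topology and coefficients are generic textbook choices, not the DFG flight design, and the analog fourth-order low-pass filter is only alluded to in a comment.

```python
# Minimal behavioral model of a 1-bit, second-order sigma-delta modulator
# (generic double-integrator topology; coefficients and scaling are textbook
# placeholders, not the flight hardware's actual design).
import numpy as np

def sigma_delta_2nd_order(x):
    """x: input samples in [-1, 1]. Returns the 1-bit output stream (+/-1)."""
    i1 = i2 = 0.0
    bits = np.empty_like(x)
    for n, xn in enumerate(x):
        y = 1.0 if i2 >= 0.0 else -1.0   # 1-bit quantizer
        bits[n] = y
        i1 += xn - y                     # first integrator with feedback
        i2 += i1 - y                     # second integrator with feedback
    return bits

# A low-pass filter applied to `bits` (the role of the analog fourth-order
# filter in the described DAC) recovers a high-resolution version of the
# slowly varying input.
x = 0.5 * np.sin(2 * np.pi * 5 * np.arange(20_000) / 20_000.0)  # oversampled tone
bits = sigma_delta_2nd_order(x)
```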

  3. A digital-type fluxgate magnetometer using a sigma-delta digital-to-analog converter for a sounding rocket experiment

    Science.gov (United States)

    Iguchi, Kyosuke; Matsuoka, Ayako

    2014-07-01

    One of the design challenges for future magnetospheric satellite missions is optimizing the mass, size, and power consumption of the instruments to meet the mission requirements. We have developed a digital-type fluxgate (DFG) magnetometer that is anticipated to have significantly less mass and volume than the conventional analog-type. Hitherto, the lack of a space-grade digital-to-analog converter (DAC) with good accuracy has prevented the development of a high-performance DFG. To solve this problem, we developed a high-resolution DAC using parts whose performance was equivalent to existing space-grade parts. The developed DAC consists of a 1-bit second-order sigma-delta modulator and a fourth-order analog low-pass filter. We tested the performance of the DAC experimentally and found that it had better than 17-bit resolution in 80% of the measurement range, and the linearity error was 2^−13.3 of the measurement range. We built a DFG flight model (in which this DAC was embedded) for a sounding rocket experiment as an interim step in the development of a future satellite mission. The noise of this DFG was 0.79 nT rms at 0.1-10 Hz, which corresponds to a roughly 17-bit resolution. The results show that the sigma-delta DAC and the DFG had a performance that is consistent with our optimized design, and the noise was as expected from the noise simulation. Finally, we have confirmed that the DFG worked successfully during the flight of the sounding rocket.

  4. Interface for Barge-in Free Spoken Dialogue System Based on Sound Field Reproduction and Microphone Array

    Directory of Open Access Journals (Sweden)

    Hinamoto Yoichi

    2007-01-01

    A barge-in free spoken dialogue interface using sound field control and microphone array is proposed. In the conventional spoken dialogue system using an acoustic echo canceller, it is indispensable to estimate a room transfer function, especially when the transfer function is changed by various interferences. However, the estimation is difficult when the user and the system speak simultaneously. To resolve the problem, we propose a sound field control technique to prevent the response sound from being observed. Combined with a microphone array, the proposed method can achieve high elimination performance with no adaptive process. The efficacy of the proposed interface is ascertained in the experiments on the basis of sound elimination and speech recognition.

  5. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as visual phenomena–commercial messages, as for instance banner ads that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content.

  6. Perceptual assessment of quality of urban soundscapes with combined noise sources and water sounds.

    Science.gov (United States)

    Jeon, Jin Yong; Lee, Pyoung Jik; You, Jin; Kang, Jian

    2010-03-01

    In this study, urban soundscapes containing combined noise sources were evaluated through field surveys and laboratory experiments. The effect of water sounds on masking urban noises was then examined in order to enhance the soundscape perception. Field surveys in 16 urban spaces were conducted through soundwalking to evaluate the annoyance of combined noise sources. Synthesis curves were derived for the relationships between noise levels and the percentage of highly annoyed (%HA) and the percentage of annoyed (%A) for the combined noise sources. Qualitative analysis was also made using semantic scales for evaluating the quality of the soundscape, and it was shown that the perception of acoustic comfort and loudness was strongly related to the annoyance. A laboratory auditory experiment was then conducted in order to quantify the total annoyance caused by road traffic noise and four types of construction noise. It was shown that the annoyance ratings were related to the types of construction noise in combination with road traffic noise and the level of the road traffic noise. Finally, water sounds were determined to be the best sounds to use for enhancing the urban soundscape. The level of the water sounds should be similar to or not less than 3 dB below the level of the urban noises.

  7. Principles of underwater sound

    National Research Council Canada - National Science Library

    Urick, Robert J

    1983-01-01

    ... the immediately useful help they need for sonar problem solving. Its coverage is broad, ranging from the basic concepts of sound in the sea to making performance predictions in such applications as depth sounding, fish finding, and submarine detection...

  8. An X-ray Experiment with Two-Stage Korean Sounding Rocket

    Directory of Open Access Journals (Sweden)

    Uk-Won Nam

    1998-12-01

    The test results of the X-ray observation system developed at the Korea Astronomy Observatory over three years (1995-1997) are presented. The instrument, which is composed of detector and signal-processing parts, is designed for future observations of compact X-ray sources. Its performance was tested by mounting it on the two-stage Korean Sounding Rocket, which was launched from the Taean rocket flight center on June 11, 1998 at 10:00 KST. Telemetry data were received from the individual parts of the instrument for 32 and 55.7 sec, respectively, after the launch of the rocket. In this paper, the results of the data analysis based on the telemetry data and a discussion of the performance of the instrument are reported.

  9. Sounding the field: recent works in sound studies.

    Science.gov (United States)

    Boon, Tim

    2015-09-01

    For sound studies, the publication of a 593-page handbook, not to mention the establishment of at least one society - the European Sound Studies Association - might seem to signify the emergence of a new academic discipline. Certainly, the books under consideration here, alongside many others, testify to an intensification of concern with the aural dimensions of culture. Some of this work comes from HPS and STS, some from musicology and cultural studies. But all of it should concern members of our disciplines, as it represents a long-overdue foregrounding of the aural in how we think about the intersections of science, technology and culture.

  10. Non-Wovens as Sound Reducers

    Science.gov (United States)

    Belakova, D.; Seile, A.; Kukle, S.; Plamus, T.

    2018-04-01

    Within the present study, the effect of hemp (40 wt%) and polylactide (60 wt%) non-woven surface density, thickness and number of fibre web layers on the sound absorption coefficient and the sound transmission loss in the frequency range from 50 to 5000 Hz is analysed. The sound insulation properties of the experimental samples have been determined, compared to the ones in practical use, and the possible use of the material has been defined. Non-woven materials are ideally suited for use in acoustic insulation products because the arrangement of fibres produces a porous material structure, which leads to a greater interaction between sound waves and the fibre structure. Of all the tested samples (A, B and D), the non-woven variant B had the highest surface density, exceeding that of sample A by a factor of 1.22 and that of sample D by a factor of 1.15. By placing non-wovens one above the other in 2 layers, it is possible to increase the absorption coefficient of the material, which depending on the frequency corresponds to sound absorption classes C, D and E. Sample A demonstrates the best sound absorption of all the three samples in the frequency range from 250 to 2000 Hz. In the test frequency range from 50 to 5000 Hz, the sound transmission loss varies from 0.76 (Sample D at 63 Hz) to 3.90 (Sample B at 5000 Hz).

  11. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," which is a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  12. Sound Synthesis and Evaluation of Interactive Footsteps and Environmental Sounds Rendering for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-01-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based ...... a soundscape significantly improves the recognition of the simulated environment....

  13. Office noise: Can headphones and masking sound attenuate distraction by background speech?

    Science.gov (United States)

    Jahncke, Helena; Björkeholm, Patrik; Marsh, John E; Odelius, Johan; Sörqvist, Patrik

    2016-11-22

    Background speech is one of the most disturbing noise sources at shared workplaces in terms of both annoyance and performance-related disruption. Therefore, it is important to identify techniques that can efficiently protect performance against distraction. It is also important that the techniques are perceived as satisfactory and are subjectively evaluated as effective in their capacity to reduce distraction. The aim of the current study was to compare three methods of attenuating distraction from background speech: masking a background voice with nature sound through headphones, masking a background voice with other voices through headphones and merely wearing headphones (without masking) as a way to attenuate the background sound. Quiet was deployed as a baseline condition. Thirty students participated in an experiment employing a repeated measures design. Performance (serial short-term memory) was impaired by background speech (1 voice), but this impairment was attenuated when the speech was masked - and in particular when it was masked by nature sound. Furthermore, perceived workload was lowest in the quiet condition and significantly higher in all other sound conditions. Notably, the headphones tested as a sound-attenuating device (i.e. without masking) did not protect against the effects of background speech on performance and subjective work load. Nature sound was the only masking condition that worked as a protector of performance, at least in the context of the serial recall task. However, despite the attenuation of distraction by nature sound, perceived workload was still high - suggesting that it is difficult to find a masker that is both effective and perceived as satisfactory.

  14. Sound Spectrum Influences Auditory Distance Perception of Sound Sources Located in a Room Environment

    Directory of Open Access Journals (Sweden)

    Ignacio Spiousas

    2017-06-01

    Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octave) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth, but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly the change in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). The results obtained in this study show that, depending on
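
    For orientation (standard textbook relations, not results of the study): the overall-intensity cue referred to above follows from free-field spreading of the direct sound, while the direct-to-reverberant energy ratio compares that distance-dependent direct energy with the roughly distance-independent reverberant energy.

```latex
% Standard relations behind the two distance cues discussed (textbook forms,
% shown for orientation, not results reported by the study):
\[
  L_{p}(r) \;=\; L_{p}(r_{0}) - 20\log_{10}\!\frac{r}{r_{0}}
  \qquad\text{(direct sound, free-field spreading)}
\]
\[
  \mathrm{DRR}(r) \;=\; 10\log_{10}\frac{E_{\mathrm{direct}}(r)}{E_{\mathrm{reverb}}}
\]
% Because E_reverb is roughly independent of source distance in a room,
% DRR falls as the listener moves away from the source, while L_p falls
% by about 6 dB per doubling of distance.
```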

  15. Experimental Investigation of Propagation and Reflection Phenomena in Finite Amplitude Sound Beams.

    Science.gov (United States)

    Averkiou, Michalakis Andrea

    Measurements of finite amplitude sound beams are compared with theoretical predictions based on the KZK equation. Attention is devoted to harmonic generation and shock formation related to a variety of propagation and reflection phenomena. Both focused and unfocused piston sources were used in the experiments. The nominal source parameters are piston radii of 6-25 mm, frequencies of 1-5 MHz, and focal lengths of 10-20 cm. The research may be divided into two parts: propagation and reflection of continuous-wave focused sound beams, and propagation of pulsed sound beams. In the first part, measurements of propagation curves and beam patterns of focused pistons in water, both in the free field and following reflection from curved targets, are presented. The measurements are compared with predictions from a computer model that solves the KZK equation in the frequency domain. A novel method for using focused beams to measure target curvature is developed. In the second part, measurements of pulsed sound beams from plane pistons in both water and glycerin are presented. Very short pulses (less than 2 cycles), tone bursts (5-30 cycles), and frequency modulated (FM) pulses (10-30 cycles) were measured. Acoustic saturation of pulse propagation in water is investigated. Self-demodulation of tone bursts and FM pulses was measured in glycerin, both in the near and far fields, on and off axis. All pulse measurements are compared with numerical results from a computer code that solves the KZK equation in the time domain. A quasilinear analytical solution for the entire axial field of a self-demodulating pulse is derived in the limit of strong absorption. Taken as a whole, the measurements provide a broad data base for sound beams of finite amplitude. Overall, outstanding agreement is obtained between theory and experiment.
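
    For reference, the KZK (Khokhlov-Zabolotskaya-Kuznetsov) equation that the computer models solve is usually written as follows (a common textbook form with standard notation, not transcribed from the dissertation):

```latex
% Standard form of the KZK parabolic equation for the sound pressure p
% (common textbook notation, not copied from the dissertation):
\[
  \frac{\partial^{2} p}{\partial z\,\partial\tau}
  \;=\;
  \frac{c_{0}}{2}\,\nabla_{\perp}^{2} p
  \;+\;
  \frac{\delta}{2c_{0}^{3}}\,\frac{\partial^{3} p}{\partial\tau^{3}}
  \;+\;
  \frac{\beta}{2\rho_{0}c_{0}^{3}}\,\frac{\partial^{2} p^{2}}{\partial\tau^{2}}
\]
% z: distance along the beam axis, \tau = t - z/c_0: retarded time,
% \nabla_\perp^2: transverse Laplacian, \delta: diffusivity of sound,
% \beta: coefficient of nonlinearity, \rho_0, c_0: ambient density and
% small-signal sound speed.
```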

  16. Using therapeutic sound with progressive audiologic tinnitus management.

    Science.gov (United States)

    Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A

    2008-09-01

    Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound-broadly categorized as environmental sound, music, and speech-resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).

  17. Assessment of the health effects of low-frequency sounds and infra-sounds from wind farms. ANSES Opinion. Collective expertise report

    International Nuclear Information System (INIS)

    Lepoutre, Philippe; Avan, Paul; Cheveigne, Alain de; Ecotiere, David; Evrard, Anne-Sophie; Hours, Martine; Lelong, Joel; Moati, Frederique; Michaud, David; Toppila, Esko; Beugnet, Laurent; Bounouh, Alexandre; Feltin, Nicolas; Campo, Pierre; Dore, Jean-Francois; Ducimetiere, Pierre; Douki, Thierry; Flahaut, Emmanuel; Gaffet, Eric; Lafaye, Murielle; Martinsons, Christophe; Mouneyrac, Catherine; Ndagijimana, Fabien; Soyez, Alain; Yardin, Catherine; Cadene, Anthony; Merckel, Olivier; Niaudet, Aurelie; Cadene, Anthony; Saddoki, Sophia; Debuire, Brigitte; Genet, Roger

    2017-03-01

    a health effect has not been documented. In this context, ANSES recommends: Concerning studies and research: - verifying whether or not there is a possible mechanism modulating the perception of audible sound at intensities of infra-sound similar to those measured from local residents; - studying the effects of the amplitude modulation of the acoustic signal on the noise-related disturbance felt; - studying the assumption that cochlea-vestibular effects may be responsible for pathophysiological effects; - undertaking a survey of residents living near wind farms enabling the identification of an objective signature of a physiological effect. Concerning information for local residents and the monitoring of noise levels: - enhancing information for local residents during the construction of wind farms and participation in public inquiries undertaken in rural areas; - systematically measuring the noise emissions of wind turbines before and after they are brought into service; - setting up, especially in the event of controversy, continuous noise measurement systems around wind farms (based on experience at airports, for example). Lastly, the Agency reiterates that the current regulations state that the distance between a wind turbine and the first home should be evaluated on a case-by-case basis, taking the conditions of wind farms into account. This distance, of at least 500 metres, may be increased further to the results of an impact assessment, in order to comply with the limit values for noise exposure. Current knowledge of the potential health effects of exposure to infra-sounds and low-frequency noise provides no justification for changing the current limit values or for extending the spectrum of noise currently taken into consideration

  18. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Elena Geangu

    2015-04-01

    Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds' processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds.

  19. Sound as Popular Culture

    DEFF Research Database (Denmark)

    The wide-ranging texts in this book take as their premise the idea that sound is a subject through which popular culture can be analyzed in an innovative way. From an infant’s gurgles over a baby monitor to the roar of the crowd in a stadium to the sub-bass frequencies produced by sound systems in the disco era, sound—not necessarily aestheticized as music—is inextricably part of the many domains of popular culture. Expanding the view taken by many scholars of cultural studies, the contributors consider cultural practices concerning sound not merely as semiotic or signifying processes but as material, physical, perceptual, and sensory processes that integrate a multitude of cultural traditions and forms of knowledge. The chapters discuss conceptual issues as well as terminologies and research methods; analyze historical and contemporary case studies of listening in various sound cultures; and consider

  20. Fourth sound in relativistic superfluidity theory

    International Nuclear Information System (INIS)

    Vil'chinskij, S.I.; Fomin, P.I.

    1995-01-01

    The Lorentz-covariant equations describing the propagation of the fourth sound in the relativistic theory of superfluidity are derived. Expressions for the velocity of the fourth sound are obtained. The character of the oscillations in this sound is determined.