WorldWideScience

Sample records for learning sound recording

  1. Learning with Sound Recordings: A History of Suzuki's Mediated Pedagogy

    Science.gov (United States)

    Thibeault, Matthew D.

    2018-01-01

    This article presents a history of mediated pedagogy in the Suzuki Method, the first widespread approach to learning an instrument in which sound recordings were central. Media are conceptualized as socially constituted: philosophical ideas, pedagogic practices, and cultural values that together form a contingent and changing technological…

  2. The science of sound recording

    CERN Document Server

    Kadis, Jay

    2012-01-01

    The Science of Sound Recording provides more than just an introduction to sound and recording: it allows you to dive right into some of the technical areas that often appear overwhelming to anyone without an electrical engineering or physics background. The Science of Sound Recording helps you build a basic foundation of scientific principles, explaining how recording really works. Packed with valuable must-know information, illustrations, and examples of worked-through equations, this book introduces the theory behind sound recording practices in a logical and prac…

  3. Sound and recording applications and theory

    CERN Document Server

    Rumsey, Francis

    2014-01-01

    Providing vital reading for audio students and trainee engineers, this guide is ideal for anyone who wants a solid grounding in both theory and industry practices in audio, sound and recording. There are many books on the market covering "how to work it" when it comes to audio equipment, but Sound and Recording isn't one of them. Instead, you'll gain an understanding of "how it works" with this approachable guide to audio systems. New to this edition: the digital audio section has been revised substantially to include the latest developments in audio networking (e.g. RAVENNA, AES X-192, AVB), high-resolut…

  4. Noise detection in heart sound recordings.

    Science.gov (United States)

    Zia, Mohammad K; Griffel, Benjamin; Fridman, Vladimir; Saponieri, Cesare; Semmlow, John L

    2011-01-01

    Coronary artery disease (CAD) is the leading cause of death in the United States. Although progression of CAD can be controlled using drugs and diet, it is usually detected in advanced stages when invasive treatment is required. Current methods to detect CAD are invasive and/or costly, hence not suitable as a regular screening tool to detect CAD in early stages. Currently, we are developing a noninvasive and cost-effective system to detect CAD using the acoustic approach. This method identifies sounds generated by turbulent flow through partially narrowed coronary arteries to detect CAD. The limiting factor of this method is sensitivity to noises commonly encountered in the clinical setting. Because the CAD sounds are faint, these noises can easily obscure the CAD sounds and make detection impossible. In this paper, we propose a method to detect and eliminate noise encountered in the clinical setting using a reference channel. We show that our method is effective in detecting noise, which is essential to the success of the acoustic approach.

  5. Optical Reading and Playing of Sound Signals from Vinyl Records

    OpenAIRE

    Hensman, Arnold; Casey, Kevin

    2007-01-01

    While advanced digital music systems such as compact disk players and MP3 have become the standard in sound reproduction technology, critics claim that conversion to digital often results in a loss of sound quality and richness. For this reason, vinyl records remain the medium of choice for many audiophiles involved in specialist areas. The waveform cut into a vinyl record is an exact replica of the analogue version from the original source. However, while some perceive this media as reproduc...

  6. Automatic Bowel Motility Evaluation Technique for Noncontact Sound Recordings

    Directory of Open Access Journals (Sweden)

    Ryunosuke Sato

    2018-06-01

    Information on bowel motility can be obtained via magnetic resonance imaging (MRI) and X-ray imaging. However, these approaches require expensive medical instruments and are unsuitable for frequent monitoring. Bowel sounds (BS) can be conveniently obtained using electronic stethoscopes and have recently been employed for the evaluation of bowel motility. More recently, our group proposed a novel method to evaluate bowel motility on the basis of BS acquired using a noncontact microphone. However, that method required manually detecting BS in the sound recordings, and manual segmentation is inconvenient and time consuming. To address this issue, herein, we propose a new method to automatically evaluate bowel motility from noncontact sound recordings. Using simulations on the sound recordings obtained from 20 human participants, we showed that the proposed method achieves an accuracy of approximately 90% in automatic bowel sound detection when power-normalized cepstral coefficients are used as acoustic-feature inputs to artificial neural networks. Furthermore, we showed that bowel motility can be evaluated based on three acoustic features in the time domain extracted by our method: BS per minute, signal-to-noise ratio, and sound-to-sound interval. The proposed method has the potential to contribute towards the development of noncontact evaluation methods for bowel motility.

  7. Noise detection during heart sound recording using periodicity signatures

    International Nuclear Information System (INIS)

    Kumar, D; Carvalho, P; Paiva, R P; Henriques, J; Antunes, M

    2011-01-01

    Heart sound is a valuable biosignal for diagnosis of a large set of cardiac diseases. Ambient and physiological noise interference is one of the most usual and highly probable incidents during heart sound acquisition. It tends to change the morphological characteristics of heart sound that may carry important information for heart disease diagnosis. In this paper, we propose a new method applicable in real time to detect ambient and internal body noises manifested in heart sound during acquisition. The algorithm is developed on the basis of the periodic nature of heart sounds and physiologically inspired criteria. A small segment of uncontaminated heart sound exhibiting periodicity in time as well as in the time-frequency domain is first detected and applied as a reference signal in discriminating noise from the sound. The proposed technique has been tested with a database of heart sounds collected from 71 subjects with several types of heart disease, inducing several noises during recording. The achieved average sensitivity and specificity are 95.88% and 97.56%, respectively.

  8. Machine learning analysis of binaural rowing sounds

    DEFF Research Database (Denmark)

    Johard, Leonard; Ruffaldi, Emanuele; Hoffmann, Pablo F.

    2011-01-01

    Techniques for machine hearing are increasing in potential due to new application domains. In this work we address the analysis of rowing sounds in a natural context for the purpose of supporting a training system based on virtual environments. This paper presents the acquisition methodology and the evaluation of different machine learning techniques for classifying rowing-sound data. We see that a combination of principal component analysis and shallow networks performs as well as deep architectures, while being much faster to train.

  9. Recent paleoseismicity record in Prince William Sound, Alaska, USA

    Science.gov (United States)

    Kuehl, Steven A.; Miller, Eric J.; Marshall, Nicole R.; Dellapenna, Timothy M.

    2017-12-01

    Sedimentological and geochemical investigations of sediment cores collected in the deep (>400 m) central basin of Prince William Sound, along with geochemical fingerprinting of sediment source areas, are used to identify earthquake-generated sediment gravity flows. Prince William Sound receives sediment from two distinct sources: from offshore (primarily the Copper River) through Hinchinbrook Inlet, and from sources within the Sound (primarily Columbia Glacier). These sources are found to have diagnostic elemental ratios indicative of provenance; Copper River Basin sediments were significantly higher in Sr/Pb and Cu/Pb, whereas Prince William Sound sediments were significantly higher in K/Ca and Rb/Sr. Within the past century, sediment gravity flows deposited within the deep central channel of Prince William Sound have robust geochemical (provenance) signatures that can be correlated with known moderate to large earthquakes in the region. Given the thick Holocene sequence in the Sound (~200 m) and correspondingly high sedimentation rates (>1 cm year-1), this relationship suggests that sediments within the central basin of Prince William Sound may contain an extraordinarily high-resolution record of paleoseismicity in the region.

  10. Low complexity lossless compression of underwater sound recordings.

    Science.gov (United States)

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
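
    The paper's compressor itself is not reproduced in this abstract, but the general idea behind simple lossless audio compression, predicting each sample from the previous one and entropy-coding the small residuals, can be sketched as a first-difference predictor followed by Rice coding. The function names and the parameter k=4 below are illustrative assumptions, not taken from the paper:

```python
def zigzag(v):
    """Map signed ints to non-negative ints: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (v << 1) if v >= 0 else ((-v << 1) - 1)

def unzigzag(u):
    return (u >> 1) if (u & 1) == 0 else -((u + 1) >> 1)

def rice_encode(residuals, k):
    """Rice-code each residual as unary quotient + k-bit remainder."""
    out = []
    for v in residuals:
        u = zigzag(v)
        q, r = u >> k, u & ((1 << k) - 1)
        out.append("1" * q + "0" + format(r, "0{}b".format(k)))
    return "".join(out)

def rice_decode(bits, n, k):
    vals, i = [], 0
    for _ in range(n):
        q = 0
        while bits[i] == "1":
            q += 1
            i += 1
        i += 1                                  # skip the terminating '0'
        r = int(bits[i:i + k], 2)
        i += k
        vals.append(unzigzag((q << k) | r))
    return vals

def compress(samples, k=4):
    # First-difference predictor: residuals are small for smooth signals,
    # so the Rice codes stay short.
    prev, residuals = 0, []
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return rice_encode(residuals, k)

def decompress(bits, n, k=4):
    samples, prev = [], 0
    for d in rice_decode(bits, n, k):
        prev += d
        samples.append(prev)
    return samples
```

    Decompression reverses both stages exactly, so recovery is bit-perfect, which is the defining property of lossless compression mentioned above.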

  11. Learning about the Dynamic Sun through Sounds

    Science.gov (United States)

    Quinn, M.; Peticolas, L. M.; Luhmann, J.; MacCallum, J.

    2008-06-01

    Can we hear the Sun or its solar wind? Not in the sense that they make sound. But we can take the particle, magnetic field, electric field, and image data and turn it into sound to demonstrate what the data tells us. We present work on turning data from the two-satellite NASA mission called STEREO (Solar TErrestrial RElations Observatory) into sounds and music (sonification). STEREO has two satellites orbiting the Sun near Earth's orbit to study the coronal mass ejections (CMEs) from the Corona. One sonification project aims to inspire musicians, museum patrons, and the public to learn more about CMEs by downloading STEREO data and using it to make music. We demonstrate the software and discuss the way in which it was developed. A second project aims to produce a museum exhibit using STEREO imagery and sounds from STEREO data. We demonstrate a "walk across the Sun" created for this exhibit so people can hear the features on solar images. We show how pixel intensity translates into pitches from selectable scales with selectable musical scale size and octave locations. We also share our successes and lessons learned.
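
    The pixel-intensity-to-pitch mapping described above might look something like the following sketch. The pentatonic scale, base note, and octave count are illustrative assumptions, not the exhibit's actual parameters:

```python
MAJOR_PENTATONIC = [0, 2, 4, 7, 9]   # semitone offsets within one octave

def intensity_to_pitch(intensity, scale=MAJOR_PENTATONIC,
                       base_midi=60, octaves=2):
    """Map a 0-255 pixel intensity to a MIDI note and frequency in Hz."""
    steps = len(scale) * octaves                     # notes available
    idx = min(int(intensity / 256 * steps), steps - 1)
    octave, degree = divmod(idx, len(scale))
    midi = base_midi + 12 * octave + scale[degree]
    freq = 440.0 * 2 ** ((midi - 69) / 12)           # equal temperament
    return midi, freq
```

    A dark pixel maps to the base note (MIDI 60, middle C) and the brightest pixel to the top of the selected range, so sweeping across a solar image produces a rising-and-falling melody that traces image features.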

  12. The relevance of visual information on learning sounds in infancy

    NARCIS (Netherlands)

    ter Schure, S.M.M.

    2016-01-01

    Newborn infants are sensitive to combinations of visual and auditory speech. Does this ability to match sounds and sights affect how infants learn the sounds of their native language? And are visual articulations the only type of visual information that can influence sound learning? This…

  13. Records for learning

    DEFF Research Database (Denmark)

    Binder, Thomas

    2005-01-01

    The article presents and discusses findings from a participatory development of new learning practices among intensive care nurses, with an emphasis on the role of place making in informal learning activities.

  14. Tipping point analysis of a large ocean ambient sound record

    Science.gov (United States)

    Livina, Valerie N.; Harris, Peter; Brower, Albert; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2017-04-01

    We study a long (2003-2015) high-resolution (250 Hz) sound pressure record provided by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO) from the hydro-acoustic station Cape Leeuwin (Australia). We transform the hydrophone waveforms into five bands of 10-min-average sound pressure levels (including the third-octave band) and apply tipping point analysis techniques [1-3]. We report the results of the analysis of fluctuations and trends in the data and discuss the big-data challenges in processing this record, including handling data segments of large size and possible HPC solutions. References: [1] Livina et al., GRL 2007; [2] Livina et al., Climate of the Past 2010; [3] Livina et al., Chaos 2015.

  15. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: one of the coolest activities is whacking a spinning metal rod…

  16. Students' Learning of a Generalized Theory of Sound Transmission from a Teaching-Learning Sequence about Sound, Hearing and Health

    Science.gov (United States)

    West, Eva; Wallin, Anita

    2013-04-01

    Learning abstract concepts such as sound often involves an ontological shift, because conceptualizing sound transmission as a process of motion demands abandoning sound transmission as a transfer of matter. Thus, being able to grasp and use a generalized model of sound transmission poses great challenges for students. This study involved 199 students aged 10-14. Their views about sound transmission were investigated before and after teaching by comparing their written answers about sound transfer in different media. The teaching was built on a research-based teaching-learning sequence (TLS), which was developed within a framework of design research. The analysis involved interpreting students' underlying theories of sound transmission, including the different conceptual categories that were found in their answers. The results indicated a shift in students' understandings from the use of a theory of matter before the intervention to embracing a theory of process afterwards. The described pattern was found in all groups of students irrespective of age. Thus, teaching about sound and sound transmission is fruitful already at ages 10-11. However, the older the students, the more advanced their understanding of the process of motion. In conclusion, the use of a TLS about sound, hearing and auditory health promotes students' conceptualization of sound transmission as a process in all grades. The results also imply some crucial points in teaching and learning about the scientific content of sound.

  17. 75 FR 3666 - Digital Performance Right in Sound Recordings and Ephemeral Recordings for a New Subscription...

    Science.gov (United States)

    2010-01-22

    ... additions to Sec. 383.3 read as follows: Sec. 383.3 Royalty fees for public performances of sound recordings.... 383.4 to read as follows: Sec. 383.4 Terms for making payment of royalty fees. (a) Terms in general... Collective, late fees, statements of account, audit and verification of royalty payments and distributions...

  18. Sound-Symbolism Boosts Novel Word Learning

    Science.gov (United States)

    Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…

  19. Wheezing recognition algorithm using recordings of respiratory sounds at the mouth in a pediatric population.

    Science.gov (United States)

    Bokov, Plamen; Mahut, Bruno; Flaud, Patrice; Delclaux, Christophe

    2016-03-01

    Respiratory diseases in children are a common reason for physician visits. A diagnostic difficulty arises when parents hear wheezing that is no longer present during the medical consultation. Thus, an outpatient objective tool for recognition of wheezing is of clinical value. We developed a wheezing recognition algorithm from respiratory sounds recorded with a smartphone placed near the mouth. A total of 186 recordings were obtained in a pediatric emergency department, mostly in toddlers (mean age 20 months). After exclusion of recordings with artefacts and those with a single clinical operator auscultation, 95 recordings with the agreement of two operators on auscultation diagnosis (27 with wheezing and 68 without) were subjected to a two-phase algorithm (signal analysis and pattern classification using machine learning algorithms) to classify records. The best performance (71.4% sensitivity and 88.9% specificity) was observed with a Support Vector Machine-based algorithm. We further tested the algorithm over a set of 39 recordings having a single operator and found a fair agreement (kappa=0.28, CI95% [0.12, 0.45]) between the algorithm and the operator. The main advantage of such an algorithm is its use in contact-free sound recording, thus valuable in the pediatric population. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. 37 CFR 380.3 - Royalty fees for the public performance of sound recordings and for ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the public... EPHEMERAL REPRODUCTIONS § 380.3 Royalty fees for the public performance of sound recordings and for ephemeral recordings. (a) Royalty rates and fees for eligible digital transmissions of sound recordings made...

  1. Segmentation of expiratory and inspiratory sounds in baby cry audio recordings using hidden Markov models.

    Science.gov (United States)

    Aucouturier, Jean-Julien; Nonaka, Yulri; Katahira, Kentaro; Okanoya, Kazuo

    2011-11-01

    The paper describes an application of machine learning techniques to identify expiratory and inspiratory phases from audio recordings of human baby cries. Crying episodes were recorded from 14 infants, spanning four vocalization contexts in their first 12 months of age; recordings from three individuals were annotated manually to identify expiratory and inspiratory sounds and used as training examples to segment automatically the recordings of the other 11 individuals. The proposed algorithm uses a hidden Markov model architecture, in which state likelihoods are estimated either with Gaussian mixture models or by converting the classification decisions of a support vector machine. The algorithm yields up to 95% classification precision (86% average), and it generalizes over different babies, different ages, and vocalization contexts. The technique offers an opportunity to quantify expiration duration, crying rate, and other time-related characteristics of baby crying for screening, diagnosis, and research purposes over large populations of infants.
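
    The decoding step behind such a segmenter can be illustrated with a minimal Viterbi implementation. This is a generic sketch, not the authors' code; the toy log-likelihoods in the usage below stand in for scores that would come from Gaussian mixture models or a calibrated support vector machine.

```python
def viterbi(loglik, log_trans, log_init):
    """Most likely state sequence for frame-wise log-likelihoods.

    loglik[t][s]   : log-likelihood of frame t under state s
    log_trans[p][s]: log-probability of moving from state p to state s
    log_init[s]    : log-probability of starting in state s
    """
    n_frames, n_states = len(loglik), len(log_init)
    delta = [[0.0] * n_states for _ in range(n_frames)]  # best path scores
    back = [[0] * n_states for _ in range(n_frames)]     # backpointers
    for s in range(n_states):
        delta[0][s] = log_init[s] + loglik[0][s]
    for t in range(1, n_frames):
        for s in range(n_states):
            best = max(range(n_states),
                       key=lambda p: delta[t - 1][p] + log_trans[p][s])
            back[t][s] = best
            delta[t][s] = delta[t - 1][best] + log_trans[best][s] + loglik[t][s]
    # Trace the best final state back to the start.
    path = [max(range(n_states), key=lambda s: delta[-1][s])]
    for t in range(n_frames - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

    With "sticky" transitions (log 0.9 ≈ -0.105 for staying, log 0.1 ≈ -2.303 for switching) and likelihoods favoring state 0 for three frames then state 1 for three frames, `viterbi` returns the segmentation `[0, 0, 0, 1, 1, 1]`.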

  2. Sound-symbolism boosts novel word learning

    NARCIS (Netherlands)

    Lockwood, G.F.; Dingemanse, M.; Hagoort, P.

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally

  3. Multichannel sound reinforcement systems at work in a learning environment

    Science.gov (United States)

    Malek, John; Campbell, Colin

    2003-04-01

    Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.

  4. The Technique of the Sound Studio: Radio, Record Production, Television, and Film. Revised Edition.

    Science.gov (United States)

    Nisbett, Alec

    Detailed explanations of the studio techniques used in radio, record, television, and film sound production are presented in as non-technical language as possible. An introductory chapter discusses the physics and physiology of sound. Subsequent chapters detail standards for sound control in the studio; explain the planning and routine of a sound…

  5. Segmentation of heart sound recordings by a duration-dependent hidden Markov model

    International Nuclear Information System (INIS)

    Schmidt, S E; Graff, C; Toft, E; Struijk, J J; Holst-Hansen, C

    2010-01-01

    Digital stethoscopes offer new opportunities for computerized analysis of heart sounds. Segmentation of heart sound recordings into periods related to the first and second heart sounds (S1 and S2) is fundamental in the analysis process. However, segmentation of heart sounds recorded with handheld stethoscopes in clinical environments is often complicated by background noise. A duration-dependent hidden Markov model (DHMM) is proposed for robust segmentation of heart sounds. The DHMM identifies the most likely sequence of physiological heart sounds based on the duration of the events, the amplitude of the signal envelope, and a predefined model structure. The DHMM was developed and tested with heart sounds recorded bedside with a commercially available handheld stethoscope from a population of patients referred for coronary angiography. The DHMM identified 890 S1 and S2 sounds out of 901, which corresponds to 98.8% (CI: 97.8–99.3%) sensitivity in 73 test patients, and 13 misplaced sounds out of 903 identified sounds, which corresponds to 98.6% (CI: 97.6–99.1%) positive predictivity. These results indicate that the DHMM is an appropriate model of the heart cycle and suitable for segmentation of clinically recorded heart sounds.

  6. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain.

    Science.gov (United States)

    Lu, Kai; Vicario, David S

    2014-10-07

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used in human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.

  7. Application of semi-supervised deep learning to lung sound analysis.

    Science.gov (United States)

    Chamberlain, Daniel; Kodgule, Rahul; Ganelin, Daniela; Miglani, Vivek; Fletcher, Richard Ribon

    2016-08-01

    The analysis of lung sounds, collected through auscultation, is a fundamental component of pulmonary disease diagnostics for primary care and general patient monitoring for telemedicine. Despite advances in computation and algorithms, the goal of automated lung sound identification and classification has remained elusive. Over the past 40 years, published work in this field has demonstrated only limited success in identifying lung sounds, with most published studies using only a small number of patients. Here we present a semi-supervised deep learning algorithm for automatically classifying lung sounds from a relatively large number of patients (N=284). Focusing on the two most common lung sounds, wheeze and crackle, we present results from 11,627 sound files recorded from 11 different auscultation locations on these 284 patients with pulmonary disease. 890 of these sound files were labeled to evaluate the model, which is significantly larger than previously published studies. Data was collected with a custom mobile phone application and a low-cost (US$30) electronic stethoscope. On this data set, our algorithm achieves ROC curves with AUCs of 0.86 for wheeze and 0.74 for crackle. Most importantly, this study demonstrates how semi-supervised deep learning can be used with larger data sets without requiring extensive labeling of data.

  8. Similarity and pleasantness assessments of water-fountain sounds recorded in urban public spaces.

    Science.gov (United States)

    Ekman, Maria Rådsten; Lundén, Peter; Nilsson, Mats E

    2015-11-01

    Water fountains are potential tools for soundscape improvement, but little is known about their perceptual properties. To explore this, sounds were recorded from 32 fountains installed in urban parks. The sounds were recorded with a sound-field microphone and were reproduced using an ambisonic loudspeaker setup. Fifty-seven listeners assessed the sounds with regard to similarity and pleasantness. Multidimensional scaling of similarity data revealed distinct groups of soft variable and loud steady-state sounds. Acoustically, the soft variable sounds were characterized by low overall levels and high temporal variability, whereas the opposite pattern characterized the loud steady-state sounds. The perceived pleasantness of the sounds was negatively related to their overall level and positively related to their temporal variability, whereas spectral centroid was weakly correlated to pleasantness. However, the results of an additional experiment, using the same sounds set equal in overall level, found a negative relationship between pleasantness and spectral centroid, suggesting that spectral factors may influence pleasantness scores in experiments where overall level does not dominate pleasantness assessments. The equal-level experiment also showed that several loud steady-state sounds remained unpleasant, suggesting an inherently unpleasant sound character. From a soundscape design perspective, it may be advisable to avoid fountains generating such sounds.

  9. Unsupervised Feature Learning for Heart Sounds Classification Using Autoencoder

    Science.gov (United States)

    Hu, Wei; Lv, Jiancheng; Liu, Dongbo; Chen, Yao

    2018-04-01

    Cardiovascular disease seriously threatens the health of many people. It is usually diagnosed during cardiac auscultation, which is a fast and efficient method of cardiovascular disease diagnosis. In recent years, deep learning approaches using unsupervised learning have made significant breakthroughs in many fields. However, to our knowledge, deep learning has not yet been used for heart sound classification. In this paper, we first use the average Shannon energy to extract the envelope of the heart sounds, then find the highest point of S1 to extract the cardiac cycle. We convert the time-domain signals of the cardiac cycle into spectrograms and apply principal component analysis whitening to reduce the dimensionality of the spectrogram. Finally, we apply a two-layer autoencoder to extract the features of the spectrogram. The experimental results demonstrate that the features from the autoencoder are suitable for heart sound classification.
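
    The first step described above, an average Shannon energy envelope, is a common preprocessing stage in heart sound analysis and can be sketched as follows. The frame and hop sizes are illustrative choices, not the paper's parameters:

```python
import math

def shannon_envelope(signal, frame=20, hop=10):
    """Average Shannon energy envelope of a signal, one value per frame."""
    peak = max(abs(s) for s in signal) or 1.0
    x = [s / peak for s in signal]              # normalize to [-1, 1]
    env = []
    for start in range(0, len(x) - frame + 1, hop):
        e = 0.0
        for s in x[start:start + frame]:
            if s != 0.0:
                e += -(s * s) * math.log(s * s)  # Shannon energy of a sample
        env.append(e / frame)                    # average over the frame
    return env
```

    Shannon energy emphasizes medium-amplitude components over both low-level noise and isolated high peaks, which is why it is favored for locating S1/S2 lobes; the envelope below peaks on the frame containing a heart-sound-like burst embedded in quiet background.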

  10. A description of externally recorded womb sounds in human subjects during gestation.

    Science.gov (United States)

    Parga, Joanna J; Daland, Robert; Kesavan, Kalpashri; Macey, Paul M; Zeltzer, Lonnie; Harper, Ronald M

    2018-01-01

    Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting, and lying supine. Maternal and gestational age, Body Mass Index (BMI), and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims that they mimic the intrauterine environment. Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500-5,000 Hz) and mid-frequency (100-500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10-100 Hz) with gestational age. Variation appeared between mothers, suggesting a resonant chamber role for intra…

  11. Synchronized tapping facilitates learning sound sequences as indexed by the P300.

    Science.gov (United States)

    Kamiyama, Keiko S; Okanoya, Kazuo

    2014-01-01

    The purpose of the present study was to determine whether and how single finger tapping in synchrony with sound sequences contributed to their auditory processing. The participants learned two unfamiliar sound sequences via different methods. In the tapping condition, they learned an auditory sequence while they tapped in synchrony with each sound onset. In the no-tapping condition, they learned another sequence while they kept pressing a key until the sequence ended. After these learning sessions, we presented the two melodies again and recorded event-related potentials (ERPs). During the ERP recordings, 10% of the tones within each melody deviated from the original tones. An analysis of the grand average ERPs showed that deviant stimuli elicited a significant P300 in the tapping but not in the no-tapping condition. In addition, the significance of the P300 effect in the tapping condition increased as the participants showed highly synchronized tapping behavior during the learning sessions. These results indicated that single finger tapping promoted the conscious detection and evaluation of deviants within the learned sequences. The effect was related to individuals' musical ability to coordinate their finger movements along with external auditory events.

  12. Multi-Century Record of Anthropogenic Impacts on an Urbanized Mesotidal Estuary: Salem Sound, MA

    Science.gov (United States)

    Salem, MA, located north of Boston, has a rich, well-documented history dating back to settlement in 1626 CE, but the associated anthropogenic impacts on Salem Sound are poorly constrained. This project utilized dated sediment cores from the sound to assess the proxy record of an...

  13. Sound recordings of road maintenance equipment on the Lincoln National Forest, New Mexico

    Science.gov (United States)

    D. K. Delaney; T. G. Grubb

    2004-01-01

    The purpose of this pilot study was to record, characterize, and quantify road maintenance activity in Mexican spotted owl (Strix occidentalis lucida) habitat to gauge potential sound level exposure for owls during road maintenance activities. We measured sound levels from three different types of road maintenance equipment (rock crusher/loader,...

  14. 37 CFR 270.1 - Notice of use of sound recordings under statutory license.

    Science.gov (United States)

    2010-07-01

    ..., and the primary purpose of the service is not to sell, advertise, or promote particular products or services other than sound recordings, live concerts, or other music-related events. (iv) A new subscription...

  15. 37 CFR 261.3 - Royalty fees for public performances of sound recordings and for ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for public... § 261.3 Royalty fees for public performances of sound recordings and for ephemeral recordings. (a) For the period October 28, 1998, through December 31, 2002, royalty rates and fees for eligible digital...

  16. 37 CFR 262.3 - Royalty fees for public performances of sound recordings and for ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for public... MAKING OF EPHEMERAL REPRODUCTIONS § 262.3 Royalty fees for public performances of sound recordings and for ephemeral recordings. (a) Basic royalty rate. Royalty rates and fees for eligible nonsubscription...

  17. 37 CFR 382.12 - Royalty fees for the public performance of sound recordings and the making of ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the public... Preexisting Satellite Digital Audio Radio Services § 382.12 Royalty fees for the public performance of sound recordings and the making of ephemeral recordings. (a) In general. The monthly royalty fee to be paid by a...

  18. Incidental Learning of Sound Categories is Impaired in Developmental Dyslexia

    Science.gov (United States)

    Gabay, Yafit; Holt, Lori L.

    2015-01-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. PMID:26409017

  19. Incidental learning of sound categories is impaired in developmental dyslexia.

    Science.gov (United States)

    Gabay, Yafit; Holt, Lori L

    2015-12-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. Copyright © 2015 Elsevier Ltd. All rights

  20. Neural dynamics of learning sound-action associations.

    Directory of Open Access Journals (Sweden)

    Adam McNamara

    Full Text Available A motor component is pre-requisite to any communicative act as one must inherently move to communicate. To learn to make a communicative act, the brain must be able to dynamically associate arbitrary percepts to the neural substrate underlying the pre-requisite motor activity. We aimed to investigate whether brain regions involved in complex gestures (ventral pre-motor cortex, Brodmann Area 44) were involved in mediating association between novel abstract auditory stimuli and novel gestural movements. In a functional magnetic resonance imaging (fMRI) study we asked participants to learn associations between previously unrelated novel sounds and meaningless gestures inside the scanner. We used functional connectivity analysis to eliminate the often present confound of 'strategic covert naming' when dealing with BA44 and to rule out effects of non-specific reductions in signal. Brodmann Area 44, a region incorporating Broca's region, showed strong, bilateral, negative correlation of BOLD (blood oxygen level dependent) response with learning of sound-action associations during data acquisition. Left-inferior-parietal-lobule (l-IPL) and bilateral loci in and around visual area V5, right-orbital-frontal-gyrus, right-hippocampus, left-para-hippocampus, right-head-of-caudate, right-insula and left-lingual-gyrus also showed decreases in BOLD response with learning. Concurrent with these decreases in BOLD response, an increasing connectivity between areas of the imaged network as well as the right-middle-frontal-gyrus with rising learning performance was revealed by a psychophysiological interaction (PPI) analysis. The increasing connectivity therefore occurs within an increasingly energy efficient network as learning proceeds. Strongest learning related connectivity between regions was found when analysing BA44 and l-IPL seeds. 
The results clearly show that BA44 and l-IPL are dynamically involved in linking gesture and sound and therefore provide evidence that one of

  1. Graphic recording of heart sounds in high-altitude native subjects

    OpenAIRE

    Rotta, Andrés; Ascenzo C., Jorge

    2014-01-01

    The series of phonocardiograms obtained from normal subjects shows that it is not always possible to record the atrial and third heart sounds, with different authors reporting diverse recording rates. Why the graphic recording of these sounds fails in largely normal individuals has not yet been explained in concrete terms, although various influencing factors have been suggested, such as age, the determinants of the sounds, their transmissibility through the chest wall, and the sensitivity of the recording apparatus.

  2. 37 CFR 383.3 - Royalty fees for public performances of sound recordings and the making of ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for public... SUBSCRIPTION SERVICES § 383.3 Royalty fees for public performances of sound recordings and the making of... regulations for all years 2007 and earlier. Such fee shall be recoupable and credited against royalties due in...

  3. Beaming teaching application: recording techniques for spatial xylophone sound rendering

    DEFF Research Database (Denmark)

    Markovic, Milos; Madsen, Esben; Olesen, Søren Krarup

    2012-01-01

    BEAMING is a telepresence research project aiming at providing a multimodal interaction between two or more participants located at distant locations. One of the BEAMING applications allows a distant teacher to give a xylophone playing lecture to the students. Therefore, rendering of the xylophon...... to spatial improvements mainly in terms of the Apparent Source Width (ASW). Rendered examples are subjectively evaluated in listening tests by comparing them with binaural recording....

  4. Design of an Automatic Octave Sound Analyzer and Recorder

    Science.gov (United States)

    1942-11-21


  5. Difficulty in Learning Similar-Sounding Words: A Developmental Stage or a General Property of Learning?

    Science.gov (United States)

    Pajak, Bozena; Creel, Sarah C.; Levy, Roger

    2016-01-01

    How are languages learned, and to what extent are learning mechanisms similar in infant native-language (L1) and adult second-language (L2) acquisition? In terms of vocabulary acquisition, we know from the infant literature that the ability to discriminate similar-sounding words at a particular age does not guarantee successful word-meaning…

  6. 37 CFR 270.2 - Reports of use of sound recordings under statutory license for preexisting subscription services.

    Science.gov (United States)

    2010-07-01

    ... “Intended Playlists” for each channel and each day of the reported month. The “Intended Playlists” shall...; (2) The channel; (3) The sound recording title; (4) The featured recording artist, group, or... sound recording); (6) The marketing label of the commercially available album or other product on which...

  7. 37 CFR 370.3 - Reports of use of sound recordings under statutory license for preexisting subscription services.

    Science.gov (United States)

    2010-07-01

    ... “Intended Playlists” for each channel and each day of the reported month. The “Intended Playlists” shall...; (2) The channel; (3) The sound recording title; (4) The featured recording artist, group, or... sound recording); (6) The marketing label of the commercially available album or other product on which...

  8. Comparison of snoring sounds between natural and drug-induced sleep recorded using a smartphone.

    Science.gov (United States)

    Koo, Soo Kweon; Kwon, Soon Bok; Moon, Ji Seung; Lee, Sang Hoon; Lee, Ho Byung; Lee, Sang Jun

    2018-08-01

    Snoring is an important clinical feature of obstructive sleep apnea (OSA), and recent studies suggest that the acoustic quality of snoring sounds is markedly different in drug-induced sleep compared with natural sleep. However, considering differences in sound recording methods and analysis parameters, further studies are required. This study explored whether acoustic analysis of drug-induced sleep is useful as a screening test that reflects the characteristics of natural sleep in snoring patients. The snoring sounds of 30 male subjects (mean age = 41.8 years) were recorded using a smartphone during natural and induced sleep, with the site of vibration noted during drug-induced sleep endoscopy (DISE); then, we compared the sound intensity (dB), formant frequencies, and spectrograms of snoring sounds. Regarding the intensity of snoring sounds, there were minor differences within the retrolingual level obstruction group, but there was no significant difference between natural and induced sleep at either obstruction site. There was no significant difference in the F1 and F2 formant frequencies of snoring sounds between natural sleep and induced sleep at either obstruction site. Compared with natural sleep, induced sleep was slightly more irregular, with a stronger intensity on the spectrogram, but the spectrograms showed the same pattern at both obstruction sites. Although further studies are required, the spectrograms and formant frequencies of the snoring sounds of induced sleep did not differ significantly from those of natural sleep, and may be used as a screening test that reflects the characteristics of natural sleep according to the obstruction site. Copyright © 2017 Elsevier B.V. All rights reserved.
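
Intensity comparisons like the one above are conventionally made on a decibel scale, where an amplitude ratio maps to 20·log10 of the ratio. A minimal numpy sketch of that general computation; the signals and reference level are illustrative assumptions, not the study's data:

```python
import numpy as np

def mean_intensity_db(x, ref=1.0):
    """Mean sound intensity of a signal in dB relative to a reference amplitude."""
    rms = np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))  # root-mean-square amplitude
    return 20 * np.log10(rms / ref)

# Two synthetic "snores" differing only in amplitude: 4x amplitude ~ +12 dB.
fs = 8000
t = np.arange(fs) / fs
quiet = 0.1 * np.sin(2 * np.pi * 120 * t)
loud = 0.4 * np.sin(2 * np.pi * 120 * t)
diff_db = mean_intensity_db(loud) - mean_intensity_db(quiet)
```

Because the reference cancels in a difference, comparing two recordings made with the same device needs no absolute calibration, which is what makes uncalibrated smartphone recordings usable for such comparisons.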

  9. The Keyimage Method of Learning Sound-Symbol Correspondences: A Case Study of Learning Written Khmer

    Directory of Open Access Journals (Sweden)

    Elizabeth Lavolette

    2009-01-01

    Full Text Available I documented my strategies for learning sound-symbol correspondences during a Khmer course. I used a mnemonic strategy that I call the keyimage method. In this method, a character evokes an image (the keyimage), which evokes the corresponding sound. For example, the keyimage for the character 2 could be a swan with its head tucked in. This evokes the sound "kaw" that a swan makes, which sounds similar to the Khmer sound corresponding to 2. The method has some similarities to the keyword method. Considering the results of keyword studies, I hypothesize that the keyimage method is more effective than rote learning and that peer-generated keyimages are more effective than researcher- or teacher-generated keyimages, which are more effective than learner-generated ones. In Dr. Andrew Cohen's plenary presentation at the Hawaii TESOL 2007 conference, he mentioned that more case studies are needed on learning strategies (LSs). One reason to study LSs is that what learners do with input to produce output is unclear, and knowing what strategies learners use may help us understand that process (Dornyei, 2005, p. 170). Hopefully, we can use that knowledge to improve language learning, perhaps by teaching learners to use the strategies that we find. With that in mind, I have examined the LSs that I used in studying Khmer as a foreign language, focusing on learning the syllabic alphabet.

  10. 75 FR 14074 - Digital Performance Right in Sound Recordings and Ephemeral Recordings for a New Subscription...

    Science.gov (United States)

    2010-03-24

    ...). The additions to Sec. 383.3 read as follows: Sec. 383.3 Royalty fees for public performances of sound... Sec. 383.4 to read as follows: Sec. 383.4 Terms for making payment of royalty fees. (a) Terms in... payments to the Collective, late fees, statements of account, audit and verification of royalty payments...

  11. 76 FR 56483 - Distribution of 2010 DART Sound Recordings Fund Royalties

    Science.gov (United States)

    2011-09-13

    ... responses to the motion to ascertain whether any claimant entitled to receive such royalty fees has a... LIBRARY OF CONGRESS Copyright Royalty Board [Docket No. 2011-6 CRB DD 2010] Distribution of 2010 DART Sound Recordings Fund Royalties AGENCY: Copyright Royalty Board, Library of Congress. ACTION...

  12. 77 FR 47120 - Distribution of 2011 DART Sound Recordings Fund Royalties

    Science.gov (United States)

    2012-08-07

    ... the motion to ascertain whether any claimant entitled to receive such royalty fees has a reasonable... LIBRARY OF CONGRESS Copyright Royalty Board [Docket No. 2012-3 CRB DD 2011] Distribution of 2011 DART Sound Recordings Fund Royalties AGENCY: Copyright Royalty Board, Library of Congress. ACTION...

  13. 76 FR 45695 - Notice and Recordkeeping for Use of Sound Recordings Under Statutory License

    Science.gov (United States)

    2011-08-01

    ... operating under these licenses are required to, among other things, pay royalty fees and report to copyright... LIBRARY OF CONGRESS Copyright Royalty Board 37 CFR Parts 370 and 382 [Docket No. RM 2011-5] Notice and Recordkeeping for Use of Sound Recordings Under Statutory License AGENCY: Copyright Royalty Board...

  14. DESIGN AND APPLICATION OF SENSOR FOR RECORDING SOUNDS OVER HUMAN EYE AND NOSE

    NARCIS (Netherlands)

    JOURNEE, HL; VANBRUGGEN, AC; VANDERMEER, JJ; DEJONGE, AB; MOOIJ, JJA

    The recording of sounds over the orbit of the eye has been found to be useful in the detection of intracranial aneurysms. A hydrophone for auscultation over the eye has been developed and is tested under controlled conditions. The tests consist of measurement over the eyes in three healthy

  15. Usability of Computerized Lung Auscultation-Sound Software (CLASS) for learning pulmonary auscultation.

    Science.gov (United States)

    Machado, Ana; Oliveira, Ana; Jácome, Cristina; Pereira, Marco; Moreira, José; Rodrigues, João; Aparício, José; Jesus, Luis M T; Marques, Alda

    2018-04-01

    The mastering of pulmonary auscultation requires complex acoustic skills. Computer-assisted learning tools (CALTs) have potential to enhance the learning of these skills; however, few have been developed for this purpose and do not integrate all the required features. Thus, this study aimed to assess the usability of a new CALT for learning pulmonary auscultation. Computerized Lung Auscultation-Sound Software (CLASS) usability was assessed by eight physiotherapy students using computer screen recordings, think-aloud reports, and facial expressions. Time spent in each task, frequency of messages and facial expressions, number of clicks and problems reported were counted. The timelines of the three methods used were matched/synchronized and analyzed. The tasks of exercises and annotation of respiratory sounds required the most clicks (median 132, interquartile range [23-157]; 93 [53-155]; 91 [65-104], respectively) and were where most errors (19; 37; 15%, respectively) and problems (n = 7; 6; 3, respectively) were reported. Each participant reported a median of 6 problems, with a total of 14 different problems found, mainly related to CLASS functionalities (50%). Smile was the only facial expression presented in all tasks (n = 54). CLASS is the only CALT available that meets all the required features for learning pulmonary auscultation. The combination of the three usability methods identified advantages/disadvantages of CLASS and offered guidance for future developments, namely in annotations and exercises. This will allow the improvement of CLASS and enhance students' activities for learning pulmonary auscultation skills.

  16. Initial uncertainty impacts statistical learning in sound sequence processing.

    Science.gov (United States)

    Todd, Juanita; Provost, Alexander; Whitson, Lisa; Mullens, Daniel

    2016-11-01

    This paper features two studies confirming a lasting impact of first learning on how subsequent experience is weighted in early relevance-filtering processes. In both studies participants were exposed to sequences of sound that contained a regular pattern on two different timescales. Regular patterning in sound is readily detected by the auditory system and used to form "prediction models" that define the most likely properties of sound to be encountered in a given context. The presence and strength of these prediction models is inferred from changes in automatically elicited components of auditory evoked potentials. Both studies employed sound sequences that contained both a local and longer-term pattern. The local pattern was defined by a regular repeating pure tone occasionally interrupted by a rare deviating tone (p=0.125) that was physically different (a 30 ms vs. 60 ms duration difference in one condition and a 1000 Hz vs. 1500 Hz frequency difference in the other). The longer-term pattern was defined by the rate at which the two tones alternated probabilities (i.e., the tone that was first rare became common and the tone that was first common became rare). There was no task related to the tones and participants were asked to ignore them while focussing attention on a movie with subtitles. Auditory-evoked potentials revealed long lasting modulatory influences based on whether the tone was initially encountered as rare and unpredictable or common and predictable. The results are interpreted as evidence that probability (or indeed predictability) assigns a differential information-value to the two tones that in turn affects the extent to which prediction models are updated and imposed. These effects are exposed for both common and rare occurrences of the tones. The studies contribute to a body of work that reveals that probabilistic information is not faithfully represented in these early evoked potentials and instead exposes that predictability (or conversely

  17. Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments

    Science.gov (United States)

    Han, Wenjing; Coutinho, Eduardo; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan

    2016-01-01

    Coping with scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data which is often scarce and leads to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation for sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores, and then delivers the candidates with lower scores to human annotators, and those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly less labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human labeled instances is achieved in both of the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances. PMID:27627768

  18. Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments.

    Science.gov (United States)

    Han, Wenjing; Coutinho, Eduardo; Ruan, Huabin; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan

    2016-01-01

    Coping with scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data which is often scarce and leads to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation for sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores, and then delivers the candidates with lower scores to human annotators, and those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly less labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human labeled instances is achieved in both of the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances.
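
The confidence-based routing described in the abstract (low-confidence instances go to human annotators, high-confidence instances are self-labeled by the machine) can be sketched as follows. The threshold and the toy posterior probabilities are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def split_by_confidence(probs, threshold=0.8):
    """Route unlabeled instances: confident ones are machine-labeled
    (self-training); uncertain ones are sent to human annotators (active learning)."""
    probs = np.asarray(probs, dtype=float)
    confidence = probs.max(axis=1)                  # classifier confidence score
    machine_idx = np.where(confidence >= threshold)[0]
    human_idx = np.where(confidence < threshold)[0]
    machine_labels = probs[machine_idx].argmax(axis=1)
    return machine_idx, machine_labels, human_idx

# Toy posterior probabilities for 4 sound clips over 2 classes.
probs = [[0.95, 0.05],   # confident -> machine-labeled as class 0
         [0.55, 0.45],   # uncertain -> human annotator
         [0.10, 0.90],   # confident -> machine-labeled as class 1
         [0.60, 0.40]]   # uncertain -> human annotator
m_idx, m_lab, h_idx = split_by_confidence(probs)
```

In a full loop, both the human-labeled and machine-labeled instances would be folded back into the training set and the classifier retrained, which is how the method reduces the human-annotation budget.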

  19. 37 CFR 260.2 - Royalty fees for the digital performance of sound recordings and the making of ephemeral...

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the digital... Licensee's monthly royalty fee for the public performance of sound recordings pursuant to 17 U.S.C. 114(d..., 2007, a Licensee's monthly royalty fee for the public performance of sound recordings pursuant to 17 U...

  20. Surround by Sound: A Review of Spatial Audio Recording and Reproduction

    Directory of Open Access Journals (Sweden)

    Wen Zhang

    2017-05-01

    Full Text Available In this article, a systematic overview of various recording and reproduction techniques for spatial audio is presented. While binaural recording and rendering is designed to resemble the human two-ear auditory system and reproduce sounds specifically for a listener’s two ears, soundfield recording and reproduction using a large number of microphones and loudspeakers replicate an acoustic scene within a region. These two fundamentally different types of techniques are discussed in the paper. A recent popular area, multi-zone reproduction, is also briefly reviewed in the paper. The paper is concluded with a discussion of the current state of the field and open problems.

  1. 75 FR 16377 - Digital Performance Right in Sound Recordings and Ephemeral Recordings

    Science.gov (United States)

    2010-04-01

    ...). Petitions to Participate were received from: Intercollegiate Broadcast System, Inc./ Harvard Radio...), respectively, and the references to January 1, 2009, have been deleted. Next, for the reasons stated above in... State. (j) Retention of records. Books and records of a Broadcaster and of the Collective relating to...

  2. Enabling Teachers to Develop Pedagogically Sound and Technically Executable Learning Designs

    NARCIS (Netherlands)

    Miao, Yongwu; Van der Klink, Marcel; Boon, Jo; Sloep, Peter; Koper, Rob

    2008-01-01

    Miao, Y., Van der Klink, M., Boon, J., Sloep, P. B., & Koper, R. (2009). Enabling Teachers to Develop Pedagogically Sound and Technically Executable Learning Designs [Special issue: Learning Design]. Distance Education, 30(2), 259-276.

  3. Visual feedback of tongue movement for novel speech sound learning

    Directory of Open Access Journals (Sweden)

    William F Katz

    2015-11-01

    Full Text Available Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one’s own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker’s learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ̠/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers’ productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.

  4. Acoustic analysis of snoring sounds recorded with a smartphone according to obstruction site in OSAS patients.

    Science.gov (United States)

    Koo, Soo Kweon; Kwon, Soon Bok; Kim, Yang Jae; Moon, Ji Seung; Kim, Young Jun; Jung, Sung Hoon

    2017-03-01

    Snoring is a sign of increased upper airway resistance and is the most common symptom suggestive of obstructive sleep apnea. Acoustic analysis of snoring sounds is a non-invasive diagnostic technique and may provide a screening test that can determine the location of obstruction sites. We recorded snoring sounds according to obstruction level, measured by DISE, using a smartphone and focused on the analysis of formant frequencies. The study group comprised 32 male patients (mean age 42.9 years). The spectrogram pattern, intensity (dB), fundamental frequencies (F0), and formant frequencies (F1, F2, and F3) of the snoring sounds were analyzed for each subject. On spectrographic analysis, retropalatal level obstruction tended to produce sharp and regular peaks, while retrolingual level obstruction tended to show peaks with a gradual onset and decay. On formant frequency analysis, F1 (retropalatal level vs. retrolingual level: 488.1 ± 125.8 vs. 634.7 ± 196.6 Hz) and F2 (retropalatal level vs. retrolingual level: 1267.3 ± 306.6 vs. 1723.7 ± 550.0 Hz) of retrolingual level obstructions showed significantly higher values than retropalatal level obstruction. A smartphone can be effective for recording snoring sounds.
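
Formant frequencies such as F1 and F2 are commonly estimated from the roots of a linear predictive coding (LPC) polynomial fitted to a short frame. The abstract does not state which analysis software the authors used, so the following numpy sketch shows the general technique only; the model order and the synthetic resonances are illustrative assumptions:

```python
import numpy as np

def lpc_formants(x, fs, order=6):
    """Estimate resonance (formant) frequencies in Hz from LPC polynomial roots."""
    x = x * np.hamming(len(x))                        # window the analysis frame
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1:n + order]   # autocorrelation, lags 0..order
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])            # autocorrelation method normal equations
    roots = np.roots(np.concatenate(([1.0], -a)))     # poles of the LPC model
    roots = roots[np.imag(roots) > 0]                 # keep one of each conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)
    return sorted(f for f in freqs if f > 50)         # drop near-DC artifacts

# Synthetic frame with damped resonances near 500 Hz and 1500 Hz,
# loosely mimicking the F1/F2 ranges reported above.
fs = 8000
t = np.arange(int(0.04 * fs)) / fs
x = (np.exp(-40 * t) * np.sin(2 * np.pi * 500 * t)
     + 0.5 * np.exp(-40 * t) * np.sin(2 * np.pi * 1500 * t))
formants = lpc_formants(x, fs)
```

In practice formants are tracked over many frames and averaged; a single-frame estimate as above is only the core step.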

  5. Computer analysis of sound recordings from two Anasazi sites in northwestern New Mexico

    Science.gov (United States)

    Loose, Richard

    2002-11-01

Sound recordings were made at a natural outdoor amphitheater in Chaco Canyon and in a reconstructed great kiva at Aztec Ruins. Recordings included computer-generated tones and swept sine waves, classical concert flute, Native American flute, conch shell trumpet, and prerecorded music. Recording equipment included an analog tape deck, a digital MiniDisc recorder, and direct digital recording to a laptop computer disk. Microphones and geophones were used as transducers. The natural amphitheater lies between the ruins of Pueblo Bonito and Chetro Ketl. It is a semicircular arc in a sandstone cliff measuring 500 ft. wide and 75 ft. high. The radius of the arc was verified with aerial photography, and an acoustic ray trace was generated using CAD software. The arc is in an overhanging cliff face and brings distant sounds to a line focus. Along this line, there are unusual acoustic effects at conjugate foci. Time history analysis of recordings from both sites showed that a 60-dB reverb decay lasted from 1.8 to 2.0 s, nearly ideal for public performances of music. Echoes from the amphitheater were perceived to be upshifted in pitch, but this was not seen in FFT analysis. Geophones placed on the floor of the great kiva showed a resonance at 95 Hz.
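A 60-dB decay time like the one reported above is conventionally estimated from an impulse response by Schroeder backward integration. The sketch below assumes that method (the study does not describe its analysis code); the T20 fit region and function name are my own choices.

```python
import numpy as np

def rt60_schroeder(ir, fs):
    """Estimate RT60 (s) from an impulse response: backward-integrate the
    squared IR (Schroeder curve), fit the -5..-25 dB span, extrapolate to 60 dB."""
    edc = np.cumsum(ir[::-1] ** 2)[::-1]           # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])
    t = np.arange(len(ir)) / fs
    mask = (edc_db <= -5.0) & (edc_db >= -25.0)    # T20 evaluation range
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)
    return -60.0 / slope                           # time to fall 60 dB
```

Fitting a limited span well above the noise floor and extrapolating, rather than reading the raw -60 dB crossing, is the standard way to keep the estimate robust in field recordings.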

  6. Enabling People Who Are Blind to Experience Science Inquiry Learning through Sound-Based Mediation

    Science.gov (United States)

    Levy, S. T.; Lahav, O.

    2012-01-01

    This paper addresses a central need among people who are blind, access to inquiry-based science learning materials, which are addressed by few other learning environments that use assistive technologies. In this study, we investigated ways in which learning environments based on sound mediation can support science learning by blind people. We used…

  7. "SMALLab": Virtual Geology Studies Using Embodied Learning with Motion, Sound, and Graphics

    Science.gov (United States)

Johnson-Glenberg, Mina C.; Birchfield, David; Uysal, Sibel

    2009-01-01

    We present a new and innovative interface that allows the learner's body to move freely in a multimodal learning environment. The Situated Multimedia Arts Learning Laboratory ("SMALLab") uses 3D object tracking, real time graphics, and surround-sound to enhance embodied learning. Our hypothesis is that optimal learning and retention occur when…

  8. Automated signal quality assessment of mobile phone-recorded heart sound signals.

    Science.gov (United States)

    Springer, David B; Brennan, Thomas; Ntusi, Ntobeko; Abdelrahman, Hassan Y; Zühlke, Liesl J; Mayosi, Bongani M; Tarassenko, Lionel; Clifford, Gari D

    Mobile phones, due to their audio processing capabilities, have the potential to facilitate the diagnosis of heart disease through automated auscultation. However, such a platform is likely to be used by non-experts, and hence, it is essential that such a device is able to automatically differentiate poor quality from diagnostically useful recordings since non-experts are more likely to make poor-quality recordings. This paper investigates the automated signal quality assessment of heart sound recordings performed using both mobile phone-based and commercial medical-grade electronic stethoscopes. The recordings, each 60 s long, were taken from 151 random adult individuals with varying diagnoses referred to a cardiac clinic and were professionally annotated by five experts. A mean voting procedure was used to compute a final quality label for each recording. Nine signal quality indices were defined and calculated for each recording. A logistic regression model for classifying binary quality was then trained and tested. The inter-rater agreement level for the stethoscope and mobile phone recordings was measured using Conger's kappa for multiclass sets and found to be 0.24 and 0.54, respectively. One-third of all the mobile phone-recorded phonocardiogram (PCG) signals were found to be of sufficient quality for analysis. The classifier was able to distinguish good- and poor-quality mobile phone recordings with 82.2% accuracy, and those made with the electronic stethoscope with an accuracy of 86.5%. We conclude that our classification approach provides a mechanism for substantially improving auscultation recordings by non-experts. This work is the first systematic evaluation of a PCG signal quality classification algorithm (using a separate test dataset) and assessment of the quality of PCG recordings captured by non-experts, using both a medical-grade digital stethoscope and a mobile phone.
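The pipeline described above (quality indices feeding a logistic regression classifier) can be sketched as follows. The two indices here, sample kurtosis and spectral flatness, are illustrative stand-ins for the paper's nine unspecified SQIs, and the plain gradient-descent training loop is an assumption, not the authors' solver.

```python
import numpy as np

def sqi_features(x):
    """Two illustrative signal-quality indices for a 1-D recording."""
    xc = x - x.mean()
    kurtosis = np.mean(xc ** 4) / (np.mean(xc ** 2) ** 2 + 1e-12)
    p = np.abs(np.fft.rfft(xc)) ** 2 + 1e-12
    flatness = np.exp(np.mean(np.log(p))) / np.mean(p)   # near 1 = noise-like
    return np.array([kurtosis, flatness])

def train_logistic(X, y, lr=0.5, steps=3000):
    """Fit weights (with bias) by gradient descent on the logistic loss."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_quality(w, X):
    """1 = good quality, 0 = poor quality."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(int)
```

On synthetic data where "good" recordings are quasi-periodic and "poor" ones are broadband noise, these two features alone separate the classes; real PCG data would of course need the full feature set and a held-out test split as in the paper.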

  9. NOAA Climate Data Record (CDR) of Advanced Microwave Sounding Unit (AMSU)-A Brightness Temperature, Version 1

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Climate Data Record (CDR) for Advanced Microwave Sounding Unit-A (AMSU-A) brightness temperature in "window channels". The data cover a time period from...

  10. [Encapsulated voices : Estonian sound recordings from the German prisoner-of-war camps in 1916-1918] / Tõnu Tannberg

    Index Scriptorium Estoniae

    Tannberg, Tõnu, 1961-

    2013-01-01

    Arvustus: Encapsulated voices : Estonian sound recordings from the German prisoner-of-war camps in 1916-1918 (Das Baltikum in Geschichte und Gegenwart, 5). Hrsg. von Jaan Ross. Böhlau Verlag. Köln, Weimar und Wien 2012

  11. Cognitive Bias for Learning Speech Sounds From a Continuous Signal Space Seems Nonlinguistic

    Directory of Open Access Journals (Sweden)

    Sabine van der Ham

    2015-10-01

When learning language, humans have a tendency to produce more extreme distributions of speech sounds than those observed most frequently: In rapid, casual speech, vowel sounds are centralized, yet cross-linguistically, peripheral vowels occur almost universally. We investigate whether adults’ generalization behavior reveals selective pressure for communication when they learn skewed distributions of speech-like sounds from a continuous signal space. The domain-specific hypothesis predicts that the emergence of sound categories is driven by a cognitive bias to make these categories maximally distinct, resulting in more skewed distributions in participants’ reproductions. However, our participants showed more centered distributions, which goes against this hypothesis, indicating that there are no strong innate linguistic biases that affect learning these speech-like sounds. The centralization behavior can be explained by a lack of communicative pressure to maintain categories.

  12. Practical system for recording spatially lifelike 5.1 surround sound and 3D fully periphonic reproduction

    Science.gov (United States)

    Miller, Robert E. (Robin)

    2005-04-01

In acoustic spaces that are played as extensions of musical instruments, tonality is a major contributor to the experience of reality. Tonality is described as a process of integration in our consciousness over the reverberation time of the room of many sonic arrivals in three dimensions, each directionally coded in a learned response by the listener's unique head-related transfer function (HRTF). Preserving this complex 3D directionality is key to lifelike reproduction of a recording. Conventional techniques such as stereo or 5.1-channel surround sound position the listener at the apex of a triangle or the center of a circle, not the center of the sphere of lifelike hearing. A periphonic reproduction system for music and movie entertainment, Virtual Reality, and Training Simulation termed PerAmbio 3D/2D (Pat. pending) is described in theory and subjective tests that capture the 3D sound field with a microphone array and transform the periphonic signals into ordinary 6-channel media for either decoderless 2D replay on 5.1 systems, or lossless 3D replay with decoder and five additional speakers. PerAmbio 3D/2D is described as a practical approach to preserving the spatial perception of reality, where the listening room and speakers disappear, leaving the acoustical impression of the original venue.

  13. SMALLab: virtual geology studies using embodied learning with motion, sound, and graphics

    NARCIS (Netherlands)

    Johnson-Glenberg, M.C.; Birchfield, D.A.; Uysal, S.

    2009-01-01

    We present a new and innovative interface that allows the learner’s body to move freely in a multimodal learning environment. The Situated Multimedia Arts Learning Laboratory (SMALLab) uses 3D object tracking, real time graphics, and surround‐sound to enhance embodied learning. Our hypothesis is

  14. Learning to Localize Sound with a Lizard Ear Model

    DEFF Research Database (Denmark)

    Shaikh, Danish; Hallam, John; Christensen-Dalsgaard, Jakob

    The peripheral auditory system of a lizard is strongly directional in the azimuth plane due to the acoustical coupling of the animal's two eardrums. This feature by itself is insufficient to accurately localize sound as the extracted directional information cannot be directly mapped to the sound...

  15. Why live recording sounds better: A case study of Schumann’s Träumerei

    Directory of Open Access Journals (Sweden)

Haruka Shoda

    2015-01-01

We explore the concept that artists perform best in front of an audience. The negative effects of performance anxiety are much better known than their related cousin on the other shoulder: the positive effects of social facilitation. The present study, however, reveals a listener's preference for performances recorded in front of an audience. In Study 1, we prepared two types of recordings of Träumerei performed by 13 pianists: recordings in front of an audience and those with no audience. According to the evaluation by 153 listeners, the recordings performed in front of an audience sounded better, suggesting that the presence of an audience enhanced or facilitated the performance. In Study 2, we analyzed pianists' durational and dynamic expressions. According to the functional principal components analyses, we found that the expression of Träumerei consisted of three components: the overall quantity, the cross-sectional contrast between the final and the remaining sections, and the control of the expressive variability. Pianists' expressions were targeted more to the average of the cross-sectional variation in the audience-present than in the audience-absent recordings. In Study 3, we explored a model that explained listeners' responses induced by pianists' acoustical expressions, using path analyses. The final model indicated that the cross-sectional variation of the duration and that of the dynamics determined listeners' evaluations of the quality and the emotionally moving experience, respectively. In line with humans' preferences for commonality, the more average the durational expressions were in live recording, the better the listeners' evaluations were regardless of their musical experiences. Only the well-experienced listeners (at least 16 years of musical training) were moved more by the deviated dynamic expressions in live recording, suggesting a link between the experienced listener's emotional experience and the unique dynamics in

  16. Learning language with the wrong neural scaffolding: The cost of neural commitment to sounds.

    Directory of Open Access Journals (Sweden)

    Amy Sue Finn

    2013-11-01

Does tuning to one’s native language explain the sensitive period for language learning? We explore the idea that tuning to (or becoming more selective for) the properties of one’s native-language could result in being less open (or plastic) for tuning to the properties of a new language. To explore how this might lead to the sensitive period for grammar learning, we ask if tuning to an earlier-learned aspect of language (sound structure) has an impact on the neural representation of a later-learned aspect (grammar). English-speaking adults learned one of two miniature artificial languages over 4 days in the lab. Compared to English, both languages had novel grammar, but only one was comprised of novel sounds. After learning a language, participants were scanned while judging the grammaticality of sentences. Judgments were performed for the newly learned language and English. Learners of the similar-sounds language recruited regions that overlapped more with English. Learners of the distinct-sounds language, however, recruited the Superior Temporal Gyrus (STG) to a greater extent, which was coactive with the Inferior Frontal Gyrus (IFG). Across learners, recruitment of IFG (but not STG) predicted both learning success in tests conducted prior to the scan and grammatical judgment ability during the scan. Data suggest that adults’ difficulty learning language, especially grammar, could be due, at least in part, to the neural commitments they have made to the lower level linguistic components of their native language.

  17. Learning language with the wrong neural scaffolding: the cost of neural commitment to sounds

    Science.gov (United States)

    Finn, Amy S.; Hudson Kam, Carla L.; Ettlinger, Marc; Vytlacil, Jason; D'Esposito, Mark

    2013-01-01

    Does tuning to one's native language explain the “sensitive period” for language learning? We explore the idea that tuning to (or becoming more selective for) the properties of one's native-language could result in being less open (or plastic) for tuning to the properties of a new language. To explore how this might lead to the sensitive period for grammar learning, we ask if tuning to an earlier-learned aspect of language (sound structure) has an impact on the neural representation of a later-learned aspect (grammar). English-speaking adults learned one of two miniature artificial languages (MALs) over 4 days in the lab. Compared to English, both languages had novel grammar, but only one was comprised of novel sounds. After learning a language, participants were scanned while judging the grammaticality of sentences. Judgments were performed for the newly learned language and English. Learners of the similar-sounds language recruited regions that overlapped more with English. Learners of the distinct-sounds language, however, recruited the Superior Temporal Gyrus (STG) to a greater extent, which was coactive with the Inferior Frontal Gyrus (IFG). Across learners, recruitment of IFG (but not STG) predicted both learning success in tests conducted prior to the scan and grammatical judgment ability during the scan. Data suggest that adults' difficulty learning language, especially grammar, could be due, at least in part, to the neural commitments they have made to the lower level linguistic components of their native language. PMID:24273497

  18. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals.

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-Jia; Li, Jian-Jun; Ting, Kin-Hung; Lu, Zhong-Lin; Whitfield-Gabrieli, Susan; Wang, Jun; Lee, Tatia M C

    2017-05-01

    Cross-modal learning requires the use of information from different sensory modalities. This study investigated how the prior visual experience of late blind individuals could modulate neural processes associated with learning of sound localization. Learning was realized by standardized training on sound localization processing, and experience was investigated by comparing brain activations elicited from a sound localization task in individuals with (late blind, LB) and without (early blind, EB) prior visual experience. After the training, EB showed decreased activation in the precuneus, which was functionally connected to a limbic-multisensory network. In contrast, LB showed the increased activation of the precuneus. A subgroup of LB participants who demonstrated higher visuospatial working memory capabilities (LB-HVM) exhibited an enhanced precuneus-lingual gyrus network. This differential connectivity suggests that visuospatial working memory due to the prior visual experience gained via LB-HVM enhanced learning of sound localization. Active visuospatial navigation processes could have occurred in LB-HVM compared to the retrieval of previously bound information from long-term memory for EB. The precuneus appears to play a crucial role in learning of sound localization, disregarding prior visual experience. Prior visual experience, however, could enhance cross-modal learning by extending binding to the integration of unprocessed information, mediated by the cognitive functions that these experiences develop.

  19. Emergence of category-level sensitivities in non-native speech sound learning

    Directory of Open Access Journals (Sweden)

Emily Myers

    2014-08-01

Over the course of development, speech sounds that are contrastive in one’s native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic category boundaries. The end stage of this developmental process is that the perceptual systems that handle acoustic-phonetic information show special tuning to native language contrasts, and as such, category-level information appears to be present at even fairly low levels of the neural processing stream. Research on adults acquiring non-native speech categories offers an avenue for investigating the interplay of category-level information and perceptual sensitivities to these sounds as speech categories emerge. In particular, one can observe the neural changes that unfold as listeners learn not only to perceive acoustic distinctions that mark non-native speech sound contrasts, but also to map these distinctions onto category-level representations. An emergent literature on the neural basis of novel and non-native speech sound learning offers new insight into this question. In this review, I will examine this literature in order to answer two key questions. First, where in the neural pathway does sensitivity to category-level phonetic information first emerge over the trajectory of speech sound learning? Second, how do frontal and temporal brain areas work in concert over the course of non-native speech sound learning? Finally, in the context of this literature I will describe a model of speech sound learning in which rapidly-adapting access to categorical information in the frontal lobes modulates the sensitivity of stable, slowly-adapting responses in the temporal lobes.

  20. Food approach conditioning and discrimination learning using sound cues in benthic sharks.

    Science.gov (United States)

    Vila Pouca, Catarina; Brown, Culum

    2018-07-01

    The marine environment is filled with biotic and abiotic sounds. Some of these sounds predict important events that influence fitness while others are unimportant. Individuals can learn specific sound cues and 'soundscapes' and use them for vital activities such as foraging, predator avoidance, communication and orientation. Most research with sounds in elasmobranchs has focused on hearing thresholds and attractiveness to sound sources, but very little is known about their abilities to learn about sounds, especially in benthic species. Here we investigated if juvenile Port Jackson sharks could learn to associate a musical stimulus with a food reward, discriminate between two distinct musical stimuli, and whether individual personality traits were linked to cognitive performance. Five out of eight sharks were successfully conditioned to associate a jazz song with a food reward delivered in a specific corner of the tank. We observed repeatable individual differences in activity and boldness in all eight sharks, but these personality traits were not linked to the learning performance assays we examined. These sharks were later trained in a discrimination task, where they had to distinguish between the same jazz and a novel classical music song, and swim to opposite corners of the tank according to the stimulus played. The sharks' performance to the jazz stimulus declined to chance levels in the discrimination task. Interestingly, some sharks developed a strong side bias to the right, which in some cases was not the correct side for the jazz stimulus.

  1. Letter-speech sound learning in children with dyslexia : From behavioral research to clinical practice

    NARCIS (Netherlands)

    Aravena, S.

    2017-01-01

    In alphabetic languages, learning to associate speech-sounds with unfamiliar characters is a critical step in becoming a proficient reader. This dissertation aimed at expanding our knowledge of this learning process and its relation to dyslexia, with an emphasis on bridging the gap between

  2. Effects of providing word sounds during printed word learning

    NARCIS (Netherlands)

    Reitsma, P.; Dongen, van A.J.N.; Custers, E.

    1984-01-01

The purpose of this study was to explore the effects of the availability of the spoken sound of words along with the printed forms during reading practice. First-grade children from two normal elementary schools practised reading several unfamiliar words in print. For half of the printed words the

  3. Automatic Segmentation and Deep Learning of Bird Sounds

    NARCIS (Netherlands)

    Koops, Hendrik Vincent; Van Balen, J.M.H.; Wiering, F.

    2015-01-01

    We present a study on automatic birdsong recognition with deep neural networks using the BIRDCLEF2014 dataset. Through deep learning, feature hierarchies are learned that represent the data on several levels of abstraction. Deep learning has been applied with success to problems in fields such as

  4. [Effect of early scream sound stress on learning and memory in female rats].

    Science.gov (United States)

    Hu, Lili; Han, Bo; Zhao, Xiaoge; Mi, Lihua; Song, Qiang; Huang, Chen

    2015-12-01

To investigate the effect of early scream sound stress on the ability of spatial learning and memory, the levels of norepinephrine (NE) and corticosterone (CORT) in serum, and the morphology of the adrenal gland.
Female Sprague-Dawley (SD) rats were treated daily with scream sound from postnatal day 1 (P1) for 21 d. The Morris water maze was used to measure spatial learning and memory ability. The levels of serum NE and CORT were determined by radioimmunoassay. The adrenal glands of the SD rats were collected, fixed in formalin, and then embedded in paraffin. The morphology of the adrenal gland was observed by HE staining.
Exposure to early scream sound decreased escape latency and increased the number of platform crossings in the Morris water maze test (P < 0.05). Early scream sound stress can enhance spatial learning and memory ability in adulthood, which is related to activation of the hypothalamo-pituitary-adrenal axis and sympathetic nervous system.

  5. 37 CFR 382.2 - Royalty fees for the digital performance of sound recordings and the making of ephemeral...

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the digital... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Subscription Services § 382.2 Royalty fees for the... monthly royalty fee for the public performance of sound recordings pursuant to 17 U.S.C. 114(d)(2) and the...

  6. Spontaneous brain activity predicts learning ability of foreign sounds.

    Science.gov (United States)

    Ventura-Campos, Noelia; Sanjuán, Ana; González, Julio; Palomar-García, María-Ángeles; Rodríguez-Pujadas, Aina; Sebastián-Gallés, Núria; Deco, Gustavo; Ávila, César

    2013-05-29

    Can learning capacity of the human brain be predicted from initial spontaneous functional connectivity (FC) between brain areas involved in a task? We combined task-related functional magnetic resonance imaging (fMRI) and resting-state fMRI (rs-fMRI) before and after training with a Hindi dental-retroflex nonnative contrast. Previous fMRI results were replicated, demonstrating that this learning recruited the left insula/frontal operculum and the left superior parietal lobe, among other areas of the brain. Crucially, resting-state FC (rs-FC) between these two areas at pretraining predicted individual differences in learning outcomes after distributed (Experiment 1) and intensive training (Experiment 2). Furthermore, this rs-FC was reduced at posttraining, a change that may also account for learning. Finally, resting-state network analyses showed that the mechanism underlying this reduction of rs-FC was mainly a transfer in intrinsic activity of the left frontal operculum/anterior insula from the left frontoparietal network to the salience network. Thus, rs-FC may contribute to predict learning ability and to understand how learning modifies the functioning of the brain. The discovery of this correspondence between initial spontaneous brain activity in task-related areas and posttraining performance opens new avenues to find predictors of learning capacities in the brain using task-related fMRI and rs-fMRI combined.

  7. The Use of Conceptual Change Text toward Students’ Argumentation Skills in Learning Sound

    Science.gov (United States)

    Sari, B. P.; Feranie, S.; Winarno, N.

    2017-09-01

This research aims to investigate the effect of Conceptual Change Text on students’ argumentation skills in learning the sound concept. Participants came from an international school in Bandung, Indonesia. The method used in this research is a quasi-experimental design; one control group (N=21) and one experimental group (N=21) were involved. The learning model used in both classes is the demonstration model, which includes teacher explanation and examples; the difference lies only in the teaching materials. The experimental group learned with the Conceptual Change Text, while the control group learned with the conventional book used in the school. The results showed that Conceptual Change Text instruction was better than the conventional book at improving students’ argumentation skills on the sound concept. Based on these results, Conceptual Change Text instruction can be an alternative tool to improve students’ argumentation skills significantly.

  8. Sound production in recorder-like instruments : II. a simulation model

    NARCIS (Netherlands)

    Verge, M.P.; Hirschberg, A.; Causse, R.

    1997-01-01

    A simple one-dimensional representation of recorderlike instruments, that can be used for sound synthesis by physical modeling of flutelike instruments, is presented. This model combines the effects on the sound production by the instrument of the jet oscillations, vortex shedding at the edge of the

  9. 75 FR 67777 - Copyright Office; Federal Copyright Protection of Sound Recordings Fixed Before February 15, 1972

    Science.gov (United States)

    2010-11-03

    ... (not an image); Microsoft Word; WordPerfect; Rich Text Format (RTF); or ASCII text file format (not a..., spoken, or other sounds, but not including the sounds accompanying a motion picture or other audiovisual... general, Federal law is better defined, both as to the rights and the exceptions, and more consistent than...

  10. The Sounds of Picturebooks for English Language Learning

    Directory of Open Access Journals (Sweden)

    M. Teresa Fleta Guillén

    2017-05-01

Picturebooks have long been recognised to aid language development in both first and second language acquisition. This paper investigates the relevance of the acoustic elements of picturebooks to raise phonological awareness and to fine-tune listening. In order to enhance the learners’ aural and oral skills for English language development, the paper proposes that listening to stories from picturebooks plays a most important role for raising awareness of the sound system of English in child second-language learners. To provide practical advice for teachers of young learners, this article describes the ways that picturebooks promote listening and speaking and develops criteria to select picturebooks for English instruction focusing on the acoustic elements of language.

  11. Acoustic analyses of speech sounds and rhythms in Japanese- and English-learning infants

    Directory of Open Access Journals (Sweden)

Yuko Yamashita

    2013-02-01

The purpose of this study was to explore developmental changes, in terms of spectral fluctuations and temporal periodicity, in Japanese- and English-learning infants. Three age groups (15, 20, and 24 months) were selected, because infants diversify phonetic inventories with age. Natural speech of the infants was recorded. We utilized a critical-band-filter bank, which simulated the frequency resolution in adults’ auditory periphery. First, the correlations between the critical-band outputs represented by factor analysis were observed in order to see how the critical bands should be connected to each other, if a listener is to differentiate sounds in infants’ speech. In the following analysis, we analyzed the temporal fluctuations of factor scores by calculating autocorrelations. The present analysis identified three factors observed in adult speech at 24 months of age in both linguistic environments. These three factors were shifted to a higher frequency range corresponding to the smaller vocal tract size of the infants. The results suggest that the vocal tract structures of the infants had developed into an adult-like configuration by 24 months of age in both language environments. The number of utterances with a periodic nature of shorter duration increased with age in both environments. This trend was clearer in the Japanese environment.

  12. Reconstruction of mechanically recorded sound from an edison cylinder using three dimensional non-contact optical surface metrology

    Energy Technology Data Exchange (ETDEWEB)

    Fadeyev, V.; Haber, C.; Maul, C.; McBride, J.W.; Golden, M.

    2004-04-20

    Audio information stored in the undulations of grooves in a medium such as a phonograph disc record or cylinder may be reconstructed, without contact, by measuring the groove shape using precision optical metrology methods and digital image processing. The viability of this approach was recently demonstrated on a 78 rpm shellac disc using two dimensional image acquisition and analysis methods. The present work reports the first three dimensional reconstruction of mechanically recorded sound. The source material, a celluloid cylinder, was scanned using color coded confocal microscopy techniques and resulted in a faithful playback of the recorded information.

  13. Fractal dimension to classify the heart sound recordings with KNN and fuzzy c-mean clustering methods

    Science.gov (United States)

    Juniati, D.; Khotimah, C.; Wardani, D. E. K.; Budayasa, K.

    2018-01-01

Heart abnormalities can be detected from heart sounds. A heart sound can be heard directly with a stethoscope or recorded indirectly by a phonocardiograph. This paper presents the application of fractal dimension theory to classify phonocardiograms as a normal heart sound, a murmur, or an extrasystole. The main algorithm used to calculate the fractal dimension was Higuchi’s algorithm. Classification of phonocardiograms involved two steps: feature extraction and classification. For feature extraction, we used the Discrete Wavelet Transform (DWT) to decompose the heart sound signal into several sub-bands depending on the selected level. After decomposition, the signal was processed using the Fast Fourier Transform (FFT) to determine the spectral frequency, and the fractal dimension of the FFT output was calculated using Higuchi’s algorithm. The fractal dimensions of all phonocardiograms were then classified with KNN and fuzzy c-means clustering methods. The best accuracy obtained was 86.17%, using feature extraction by DWT decomposition at level 3, kmax = 50, 5-fold cross-validation, and 5 neighbors in the KNN algorithm. For fuzzy c-means clustering, the accuracy was 78.56%.
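Higuchi's algorithm, the core of the pipeline above, can be sketched in a few lines. This is the textbook form of the algorithm, not the authors' code, and the kmax default and test signals are illustrative.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension: slope of log L(k) versus log(1/k),
    where L(k) is the mean normalized curve length at time scale k."""
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lk = []
    for k in ks:
        lengths = []
        for m in range(k):                      # k interleaved subseries
            idx = np.arange(m, n, k)
            nm = len(idx) - 1                   # number of increments
            if nm < 1:
                continue
            curve = np.sum(np.abs(np.diff(x[idx])))
            lengths.append(curve * (n - 1) / (nm * k) / k)
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
    return slope
```

As a sanity check, a straight line yields a dimension of exactly 1, while white noise approaches 2; signals of intermediate complexity (such as the FFT magnitudes used in the paper) fall in between.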

  14. Medical education of attention: A qualitative study of learning to listen to sound.

    Science.gov (United States)

    Harris, Anna; Flynn, Eleanor

    2017-01-01

    There has been little qualitative research examining how physical examination skills are learned, particularly their sensory and subjective aspects. The authors set out to study how medical students are taught and learn the skills of listening to sound. As part of an ethnographic study in Melbourne, 15 semi-structured in-depth interviews were conducted with students and teachers as a way to reflect explicitly on their learning and teaching. From these interviews, we found that learning the skills of listening to lung sounds was frequently difficult for students, with many experiencing awkwardness, uncertainty, pressure, and intimidation. However, not everyone found this process difficult: those who had studied music often reported finding it easier to attend to the frequency and rhythm of body sounds and to find ways to describe them. By incorporating theoretical insights into "attentiveness" from anthropology and science and technology studies, an approach distinctive in medical education, the article suggests that musical education provides medical students with skills in sensory awareness. Training the senses is a critical aspect of diagnosis that needs to be better addressed in medical education. Practical approaches for improving students' education of attention are proposed.

  15. Comparisons between physics-based, engineering, and statistical learning models for outdoor sound propagation.

    Science.gov (United States)

    Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T

    2016-05-01

    Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations and, more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated against a particular physics-based model, namely a Crank-Nicolson parabolic equation (CNPE), used as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set, sound propagation conditions span from downward refracting to upward refracting, over acoustically hard and soft boundaries, at low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method and the Harmonoise and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.
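The abstract does not reproduce the skill-score formula, but a standard mean-squared-error skill score relative to a reference prediction (here, the homogeneous atmosphere over rigid ground) would look like the following. This is a common definition offered for illustration, not necessarily the exact one used in the paper.

```python
import numpy as np

def skill_score(observed, predicted, reference):
    """MSE-based skill score of a model relative to a reference prediction.

    Returns 1.0 for a perfect model, 0.0 for a model no better than the
    reference, and negative values (like Harmonoise's -7.1% above) for
    models worse than the reference.
    """
    observed = np.asarray(observed, dtype=float)
    mse_model = np.mean((observed - np.asarray(predicted, dtype=float)) ** 2)
    mse_ref = np.mean((observed - np.asarray(reference, dtype=float)) ** 2)
    return 1.0 - mse_model / mse_ref
```

Multiplying by 100 gives the percentage form quoted in the abstract.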

  16. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity

    Science.gov (United States)

    Warlaumont, Anne S.; Finnegan, Megan K.

    2016-01-01

    At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant’s nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model’s frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one’s own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop

  17. Learning foreign sounds in an alien world: videogame training improves non-native speech categorization.

    Science.gov (United States)

    Lim, Sung-joo; Holt, Lori L

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others, and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this for native Japanese speakers learning English /r/-/l/ categories. Training used a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters, rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 h across 5 days exhibited improvements in /r/-/l/ perception on par with those of 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights. Copyright © 2011 Cognitive Science Society, Inc.

  18. Not so primitive: context-sensitive meta-learning about unattended sound sequences.

    Science.gov (United States)

    Todd, Juanita; Provost, Alexander; Whitson, Lisa R; Cooper, Gavin; Heathcote, Andrew

    2013-01-01

    Mismatch negativity (MMN), an evoked response potential elicited when a "deviant" sound violates a regularity in the auditory environment, is integral to auditory scene processing and has been used to demonstrate "primitive intelligence" in auditory short-term memory. Using a new multiple-context and -timescale protocol we show that MMN magnitude displays a context-sensitive modulation depending on changes in the probability of a deviant at multiple temporal scales. We demonstrate a primacy bias causing asymmetric evidence-based modulation of predictions about the environment, and we demonstrate that learning how to learn about deviant probability (meta-learning) induces context-sensitive variation in the accessibility of predictive long-term memory representations that underpin the MMN. The existence of the bias and meta-learning are consistent with automatic attributions of behavioral salience governing relevance-filtering processes operating outside of awareness.

  19. Gateway of Sound: Reassessing the Role of Audio Mastering in the Art of Record Production

    Directory of Open Access Journals (Sweden)

    Carlo Nardi

    2014-06-01

    Audio mastering, notwithstanding an apparent lack of scholarly attention, is a crucial gateway between production and consumption and, as such, is worth further scrutiny, especially in music genres like house or techno, which place great emphasis on sound production qualities. In this article, drawing on personal interviews with mastering engineers and field research in mastering studios in Italy and Germany, I investigate the practice of mastering engineering, paying close attention to the negotiation of techniques and sound aesthetics in relation to changes in industry formats and, in particular, to the growing shift among DJs from vinyl to compressed digital formats. I then discuss the specificity of audio mastering in relation to EDM, insofar as DJs and controllerists conceive of the master not as a finished product destined for listening, but as raw material that can be reworked in performance.

  20. Seawater Temperature and Salinity Moored Time-Series Records, Collected During 2010 and 2011 in Vieques Sound and Virgin Passage (NODC Accession 0088063)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea-Bird SBE37SM MicroCat Conductivity/Temperature (CT) recorders were deployed between March 2010 and April 2011 on shallow water moorings located in Vieques Sound,...

  1. Seawater Temperature and Salinity Moored Time-Series Records, Collected During 2010 and 2011 in Vieques Sound and Virgin Passage (NODC Accession 0077910)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea-Bird SBE37SM MicroCat Conductivity/Temperature (CT) recorders were deployed between March 2010 and April 2011 on shallow water moorings located in Vieques Sound,...

  2. Records management: a basis for organizational learning and innovation

    Directory of Open Access Journals (Sweden)

    Francisco José Aragão Pedroza Cunha

    The understanding of (trans)formations related to organizational learning processes and knowledge recording can promote innovation. The objective of this study was to review the conceptual contributions of several studies regarding Organizational Learning and Records Management and to highlight the importance of knowledge records as an advanced management technique for the development and attainment of innovation. To accomplish this goal, an exploratory and multidisciplinary literature review was conducted. The results indicated that the identification and application of management models to represent knowledge is a challenge for organizations aiming to promote conditions for the creation and use of knowledge in order to transform it into organizational innovation. Organizations can create spaces and environments for local, regional, national, and global exchange with the strategic goal of generating and sharing knowledge, provided they know how to utilize Records Management mechanisms.

  3. How Iconicity Helps People Learn New Words: Neural Correlates and Individual Differences in Sound-Symbolic Bootstrapping

    Directory of Open Access Journals (Sweden)

    Gwilym Lockwood

    2016-07-01

    Sound symbolism is increasingly understood as involving iconicity, or perceptual analogies and cross-modal correspondences between form and meaning, but the search for its functional and neural correlates is ongoing. Here we study how people learn sound-symbolic words, using behavioural, electrophysiological, and individual-difference measures. Dutch participants learned Japanese ideophones (lexical sound-symbolic words) with a translation of either the real meaning (in which form and meaning show cross-modal correspondences) or the opposite meaning (in which form and meaning show cross-modal clashes). Participants were significantly better at identifying the words they learned in the real condition, correctly remembering the real word pairing 86.7% of the time, but the opposite word pairing only 71.3% of the time. Analysing event-related potentials (ERPs) during the test round showed that ideophones in the real condition elicited a greater P3 component and late positive complex than ideophones in the opposite condition. In a subsequent forced-choice task, participants were asked to guess the real translation from two alternatives. They did this with 73.0% accuracy, well above chance level even for words they had encountered in the opposite condition, showing that people are generally sensitive to the sound-symbolic cues in ideophones. Individual-difference measures showed that the ERP effect in the test round of the learning task was greater for participants who were more sensitive to sound symbolism in the forced-choice task. The main driver of the difference was a lower amplitude of the P3 component in response to ideophones in the opposite condition, suggesting that people who are more sensitive to sound symbolism may have more difficulty suppressing conflicting cross-modal information. The findings provide new evidence that cross-modal correspondences between sound and meaning facilitate word learning, while cross-modal clashes make word

  4. Validation of PC-based Sound Card with Biopac for Digitalization of ECG Recording in Short-term HRV Analysis.

    Science.gov (United States)

    Maheshkumar, K; Dilara, K; Maruthy, K N; Sundareswaren, L

    2016-07-01

    Heart rate variability (HRV) analysis is a simple and noninvasive technique capable of assessing autonomic nervous system modulation of heart rate (HR) in healthy as well as disease conditions. The aim of the present study was to compare (validate) HRV computed from temporal series of electrocardiograms (ECG) obtained by a simple analog amplifier with a PC-based sound card (Audacity) against the Biopac MP36 module. Based on the inclusion criteria, 120 healthy participants, including 72 males and 48 females, took part in the study. Following the standard protocol, a 5-min ECG was recorded after 10 min of supine rest, simultaneously by the portable analog amplifier with PC-based sound card and by the Biopac module, using surface electrodes in the Lead II position. All the ECG data were visually screened and found to be free of ectopic beats and noise. RR intervals from both ECG recordings were analyzed separately in the Kubios software, using short-term HRV indices in both the time and frequency domains. The unpaired Student's t-test and the Pearson correlation coefficient test were used for the analysis in the R statistical software. No statistically significant differences were observed between the HRV values obtained from the two devices, and correlation analysis revealed a near-perfect positive correlation (r = 0.99, P < 0.001) between the time- and frequency-domain values from the two devices. On the basis of these results, we suggest that HRV values in the time and frequency domains calculated from RR series obtained with the PC-based sound card are probably as reliable as those obtained with the gold-standard Biopac MP36.
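The short-term time-domain indices compared in such studies, and the correlation used to validate them across devices, can be computed from an RR-interval series in a few lines. This is a generic sketch of the standard HRV definitions, not the Kubios implementation:

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """Common short-term time-domain HRV indices from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_rr": rr.mean(),                           # mean RR interval (ms)
        "sdnn": rr.std(ddof=1),                         # overall variability (ms)
        "rmssd": np.sqrt(np.mean(diff ** 2)),           # beat-to-beat variability (ms)
        "pnn50": 100.0 * np.mean(np.abs(diff) > 50.0),  # % successive diffs > 50 ms
    }

def pearson_r(a, b):
    """Pearson correlation coefficient between two series of HRV indices."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
```

Running `time_domain_hrv` on the RR series from each device and correlating the per-subject indices with `pearson_r` mirrors the validation procedure described above.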

  5. Saved from the Teeth of Time. Folk music on historical sound recordings

    Czech Academy of Sciences Publication Activity Database

    Kratochvíl, Matěj

    2007-01-01

    Vol. 10, No. 3 (2007), pp. 24-26 ISSN 1211-0264 Institutional research plan: CEZ:AV0Z90580513 Keywords: traditional music * recording * wax cylinders * Bohemian music * Moravian music Subject RIV: AC - Archeology, Anthropology, Ethnology

  6. SoundScapes: non-formal learning potentials from interactive VEs

    DEFF Research Database (Denmark)

    Brooks, Tony; Petersson, Eva

    2007-01-01

    Non-formal learning is evident from an inhabited information space that is created from non-invasive multi-dimensional sensor technologies that source human gesture. Libraries of intuitive interfaces empower natural interaction where the gesture is mapped to the multisensory content. Large screen...... and international bodies have consistently recognized SoundScapes which, as a research body of work, is directly responsible for numerous patents. Please note that my full name is Anthony Lewis Brooks. I publish with Anthony Brooks: A. L. Brooks; Tony Brooks.  ...

  7. Production of grooming-associated sounds by chimpanzees (Pan troglodytes) at Ngogo: variation, social learning, and possible functions.

    Science.gov (United States)

    Watts, David P

    2016-01-01

    Chimpanzees (Pan troglodytes) use some communicative signals flexibly and voluntarily, with use influenced by learning. These signals include some vocalizations and also sounds made using the lips, oral cavity, and/or teeth, but not the vocal tract, such as "attention-getting" sounds directed at humans by captive chimpanzees and lip smacking during social grooming. Chimpanzees at Ngogo, in Kibale National Park, Uganda, make four distinct sounds while grooming others. Here, I present data on two of these ("splutters" and "teeth chomps") and consider whether social learning contributes to variation in their production and whether they serve social functions. Higher congruence in the use of these two sounds between dyads of maternal relatives than dyads of non-relatives implies that social learning occurs and mostly involves vertical transmission, but the results are not conclusive and it is unclear which learning mechanisms may be involved. In grooming between adult males, tooth chomps and splutters were more likely in long than in short bouts; in bouts that were bidirectional rather than unidirectional; in grooming directed toward high-ranking males than toward low-ranking males; and in bouts between allies than in those between non-allies. Males were also more likely to make these sounds while they were grooming other males than while they were grooming females. These results are expected if the sounds promote social bonds and induce tolerance of proximity and of grooming by high-ranking males. However, the alternative hypothesis that the sounds are merely associated with motivation to groom, with no additional social function, cannot be ruled out. Limited data showing that bouts accompanied by teeth chomping or spluttering at their initiation were longer than bouts for which this was not the case point toward a social function, but more data are needed for a definitive test. Comparison to other research sites shows that the possible existence of grooming

  8. Computerized Hammer Sounding Interpretation for Concrete Assessment with Online Machine Learning.

    Science.gov (United States)

    Ye, Jiaxing; Kobayashi, Takumi; Iwata, Masaya; Tsuda, Hiroshi; Murakawa, Masahiro

    2018-03-09

    Developing efficient Artificial Intelligence (AI)-enabled systems to substitute for the human role in non-destructive testing is an emerging topic of considerable interest. In this study, we propose a novel hammering response analysis system using online machine learning, which aims at achieving near-human performance in the assessment of concrete structures. Current computerized hammer sounding systems commonly employ lab-scale data to validate the models. In practice, however, the response signal patterns can be far more complicated due to the varying geometric shapes and materials of structures. To deal with a large variety of unseen data, we propose a sequential treatment for response characterization. More specifically, the proposed system can adaptively update itself to approach human performance in hammer sounding data interpretation. To this end, a two-stage framework has been introduced, comprising feature extraction and a model-updating scheme. Various state-of-the-art online learning algorithms have been reviewed and evaluated for the task. For experimental validation, we collected 10,940 response instances from multiple inspection sites; each sample was annotated by human experts with healthy/defective condition labels. The results demonstrate that the proposed scheme achieves favorable assessment accuracy with high efficiency and low computational load.
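The paper evaluates several state-of-the-art online learners; as a minimal illustration of the sequential model-updating idea, here is a plain online perceptron that refines its decision boundary one annotated hammering response at a time. It is a generic stand-in for the updating stage, not one of the algorithms tested in the study.

```python
import numpy as np

class OnlinePerceptron:
    """Minimal online linear classifier: one update per labelled sample.

    Each new expert-annotated feature vector (label +1 = healthy,
    -1 = defective, say) refines the decision boundary without
    retraining on the full history of responses.
    """
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return 1 if np.dot(self.w, x) + self.b >= 0 else -1

    def partial_fit(self, x, y):
        # update the boundary only when the current model misclassifies
        if self.predict(x) != y:
            self.w += self.lr * y * np.asarray(x, dtype=float)
            self.b += self.lr * y
```

Streaming field data through `partial_fit` is what lets such a system adapt to inspection sites whose response patterns differ from the lab-scale training data.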

  9. AGGLOMERATIVE CLUSTERING OF SOUND RECORD SPEECH SEGMENTS BASED ON BAYESIAN INFORMATION CRITERION

    Directory of Open Access Journals (Sweden)

    O. Yu. Kydashev

    2013-01-01

    This paper presents a detailed description of an agglomerative clustering system for speech segments based on the Bayesian information criterion. Numerical experiment results are given for different acoustic features and for both full and diagonal covariance matrices. A diarization error rate (DER) of 6.4% on audio recordings of radio «Svoboda» was achieved with the designed system.
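In BIC-based agglomerative clustering of speech segments, the merge decision is commonly made with the delta-BIC of Chen and Gopalakrishnan: compare modelling two segments with one Gaussian against modelling each with its own. The sketch below assumes that standard formulation with full covariance matrices; the paper's exact variant may differ.

```python
import numpy as np

def delta_bic(x1, x2, lam=1.0):
    """Delta-BIC between 'one Gaussian' and 'two Gaussians' hypotheses
    for two speech-segment feature matrices (frames x dims).

    Positive values favour keeping the segments in separate clusters;
    values <= 0 suggest merging them (Chen & Gopalakrishnan form).
    """
    x = np.vstack([x1, x2])
    n1, n2, n = len(x1), len(x2), len(x1) + len(x2)
    d = x.shape[1]
    logdet = lambda m: np.linalg.slogdet(np.cov(m, rowvar=False))[1]
    # model-complexity penalty: mean vector plus full covariance parameters
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
    return 0.5 * (n * logdet(x) - n1 * logdet(x1) - n2 * logdet(x2)) - penalty
```

Agglomerative clustering then repeatedly merges the pair of segments with the smallest delta-BIC until no pair falls at or below zero.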

  10. ERPs recorded during early second language exposure predict syntactic learning.

    Science.gov (United States)

    Batterink, Laura; Neville, Helen J

    2014-09-01

    Millions of adults worldwide are faced with the task of learning a second language (L2). Understanding the neural mechanisms that support this learning process is an important area of scientific inquiry. However, most previous studies on the neural mechanisms underlying L2 acquisition have focused on characterizing the results of learning, relying upon end-state outcome measures in which learning is assessed after it has occurred, rather than on the learning process itself. In this study, we adopted a novel and more direct approach to investigate neural mechanisms engaged during L2 learning, in which we recorded ERPs from beginning adult learners as they were exposed to an unfamiliar L2 for the first time. Learners' proficiency in the L2 was then assessed behaviorally using a grammaticality judgment task, and ERP data acquired during initial L2 exposure were sorted as a function of performance on this task. High-proficiency learners showed a larger N100 effect to open-class content words compared with closed-class function words, whereas low-proficiency learners did not show a significant N100 difference between open- and closed-class words. In contrast, amplitude of the N400 word category effect correlated with learners' L2 comprehension, rather than predicting syntactic learning. Taken together, these results indicate that learners who spontaneously direct greater attention to open- rather than closed-class words when processing L2 input show better syntactic learning, suggesting a link between selective attention to open-class content words and acquisition of basic morphosyntactic rules. These findings highlight the importance of selective attention mechanisms for L2 acquisition.

  11. Detection of explosive cough events in audio recordings by internal sound analysis.

    Science.gov (United States)

    Rocha, B M; Mendes, L; Couceiro, R; Henriques, J; Carvalho, P; Paiva, R P

    2017-07-01

    We present a new method for the discrimination of explosive cough events, based on a combination of spectral content descriptors and pitch-related features. After the removal of near-silent segments, a vector of event boundaries is obtained and a proposed set of 9 features is extracted for each event. Two data sets, recorded using electronic stethoscopes and comprising a total of 46 healthy subjects and 13 patients, were employed to evaluate the method. The proposed feature set is compared to three other sets of descriptors: a baseline, a combination of both sets, and an automatic selection of the best 10 features from both sets. The combined feature set yields good results on the cross-validated database, attaining a sensitivity of 92.3±2.3% and a specificity of 84.7±3.3%. Moreover, this feature set seems to generalize well when it is trained on a small data set of patients, with a variety of respiratory and cardiovascular diseases, and tested on a larger data set of mostly healthy subjects: a sensitivity of 93.4% and a specificity of 83.4% are achieved in those conditions. These results demonstrate that complementing the proposed feature set with a baseline set is a promising approach.
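The sensitivity and specificity figures quoted above follow the usual event-level definitions, with cough as the positive class; for clarity:

```python
def sensitivity_specificity(y_true, y_pred):
    """Event-level sensitivity and specificity (1 = cough event, 0 = non-cough)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)  # (sensitivity, specificity)
```

Sensitivity measures the fraction of true cough events detected; specificity measures the fraction of non-cough events correctly rejected.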

  12. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. Compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. Moreover, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can transfer to and facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

  13. Learning a Health Knowledge Graph from Electronic Medical Records.

    Science.gov (United States)

    Rotmensch, Maya; Halpern, Yoni; Tlimat, Abdulhakim; Horng, Steven; Sontag, David

    2017-07-20

    Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters and the constructed knowledge graphs were evaluated and validated, with permission, against Google's manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).
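The noisy-OR gate that performed best can be stated compactly: each disease a patient has independently "fires" the symptom with its own activation probability, plus a leak term for causes outside the graph, so the symptom is absent only if every cause fails. A minimal sketch (parameter names are illustrative, not from the paper):

```python
def noisy_or(disease_probs, leak=0.01):
    """P(symptom present) under a noisy-OR gate.

    disease_probs: activation probabilities p_i = P(symptom | only disease i)
    for each disease the patient has; 'leak' models unmodeled causes.
    """
    p_absent = 1.0 - leak
    for p in disease_probs:
        # each parent independently fails to trigger the symptom w.p. (1 - p_i)
        p_absent *= 1.0 - p
    return 1.0 - p_absent
```

Because the factors multiply independently, maximum likelihood estimation of the per-edge parameters from record co-occurrences stays tractable even with hundreds of diseases and symptoms.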

  14. Sound as Affective Design Feature in Multimedia Learning--Benefits and Drawbacks from a Cognitive Load Theory Perspective

    Science.gov (United States)

    Königschulte, Anke

    2015-01-01

    The study presented in this paper investigates the potential effects of including non-speech audio such as sound effects into multimedia-based instruction taking into account Sweller's cognitive load theory (Sweller, 2005) and applied frameworks such as the cognitive theory of multimedia learning (Mayer, 2005) and the cognitive affective theory of…

  15. Copyright and Related Issues Relevant to Digital Preservation and Dissemination of Unpublished Pre-1972 Sound Recordings by Libraries and Archives. CLIR Publication No. 144

    Science.gov (United States)

    Besek, June M.

    2009-01-01

    This report addresses the question of what libraries and archives are legally empowered to do to preserve and make accessible for research their holdings of unpublished pre-1972 sound recordings. The report's author, June M. Besek, is executive director of the Kernochan Center for Law, Media and the Arts at Columbia Law School. Unpublished sound…

  16. Prenatal complex rhythmic music sound stimulation facilitates postnatal spatial learning but transiently impairs memory in the domestic chick.

    Science.gov (United States)

    Kauser, H; Roy, S; Pal, A; Sreenivas, V; Mathur, R; Wadhwa, S; Jain, S

    2011-01-01

    Early experience has a profound influence on brain development, and the modulation of prenatal perceptual learning by external environmental stimuli has been shown in birds, rodents and mammals. In the present study, the effect of prenatal complex rhythmic music sound stimulation on postnatal spatial learning, memory and isolation stress was observed. Auditory stimulation with either music or species-specific sounds or no stimulation (control) was provided to separate sets of fertilized eggs from day 10 of incubation. Following hatching, the chicks at age 24, 72 and 120 h were tested on a T-maze for spatial learning and the memory of the learnt task was assessed 24 h after training. In the posthatch chicks at all ages, the plasma corticosterone levels were estimated following 10 min of isolation. The chicks of all ages in the three groups took less (p memory after 24 h of training, only the music-stimulated chicks at posthatch age 24 h took a significantly longer (p music sounds facilitates spatial learning, though the music stimulation transiently impairs postnatal memory.

  17. Auditory learning through active engagement with sound: Biological impact of community music lessons in at-risk children

    Directory of Open Access Journals (Sweden)

    Nina eKraus

    2014-11-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements in the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1,000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for one year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to an instrumental training class. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. 
These findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity during training and may inform the development of strategies for auditory learning.

  18. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children.

    Science.gov (United States)

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C; Hornickel, Jane; Strait, Dana L; Nicol, Trent; White-Schwoch, Travis

    2014-01-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. 
Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the development of strategies for auditory learning.

  19. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals), and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations), as well as the directional properties of the emitted signal. Many of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of the emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves.

  20. Learning for Everyday Life: Students' standpoints on loud sounds and use of hearing protectors before and after a teaching-learning intervention

    Science.gov (United States)

    West, Eva

    2012-11-01

    Researchers have highlighted the increasing problem of loud sounds among young people in leisure-time environments, recently even emphasizing portable music players, because of the risk of suffering from hearing impairments such as tinnitus. However, there is a lack of studies investigating compulsory-school students' standpoints and explanations in connection with teaching interventions integrating school subject content with auditory health. In addition, there are few health-related studies in the international science education literature. This paper explores students' standpoints on loud sounds including the use of hearing-protection devices in connection with a teaching intervention based on a teaching-learning sequence about sound, hearing and auditory health. Questionnaire data from 199 students, in grades 4, 7 and 8 (aged 10-14), from pre-, post- and delayed post-tests were analysed. Additionally, information on their experiences of tinnitus as well as their listening habits regarding portable music players was collected. The results show that more students make healthier choices in questions of loud sounds after the intervention, and especially among the older ones this result remains or is further improved one year later. There are also signs of positive behavioural change in relation to loud sounds. Significant gender differences are found; generally, the girls show more healthy standpoints and expressions than boys do. If this can be considered to be an outcome of students' improved and integrated knowledge about sound, hearing and health, then this emphasizes the importance of integrating health issues into regular school science.

  1. NOAA Climate Data Record of Microwave Sounding Unit (MSU) and Advanced Microwave Sounding Unit (AMSU-A) Mean Layer Temperature, Version 3.0

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dataset contains three channel-based, monthly gridded atmospheric layer temperature Climate Data Records generated by merging nine MSU NOAA polar orbiting...

  2. Chronic early postnatal scream sound stress induces learning deficits and NMDA receptor changes in the hippocampus of adult mice.

    Science.gov (United States)

    Hu, Lili; Han, Bo; Zhao, Xiaoge; Mi, Lihua; Song, Qiang; Wang, Jue; Song, Tusheng; Huang, Chen

    2016-04-13

Chronic scream sounds during adulthood affect spatial learning and memory, both of which are sexually dimorphic. The long-term effects of chronic early postnatal scream sound stress (SSS) during postnatal days 1-21 (P1-P21) on spatial learning and memory in adult mice, as well as whether these effects are sexually dimorphic, are unknown. Therefore, the present study examines the performance of adult male and female mice in the Morris water maze following exposure to chronic early postnatal SSS. Hippocampal NR2A and NR2B levels as well as NR2A/NR2B subunit ratios were tested using immunohistochemistry. In the Morris water maze, stressed males showed greater impairment in spatial learning and memory than background males; by contrast, stressed and background females performed equally well. NR2B levels in CA1 and CA3 were upregulated, whereas NR2A/NR2B ratios were downregulated in stressed males, but not in females. These data suggest that chronic early postnatal SSS influences spatial learning and memory ability, levels of hippocampal NR2B, and NR2A/NR2B ratios in adult males. Moreover, chronic early stress-induced alterations exert long-lasting effects and appear to affect performance in a sex-specific manner.

  3. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

Full Text Available Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions, however, binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much more weakly across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.

  4. Statistics of natural binaural sounds.

    Science.gov (United States)

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions, however, binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much more weakly across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
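
    The ICA step this abstract describes can be illustrated with a compact FastICA sketch. This is not the authors' code: it is a minimal NumPy implementation (tanh nonlinearity, symmetric decorrelation), and the two-channel mixture in the usage example is synthetic, standing in for a binaural recording.

```python
import numpy as np

def fastica_two_channel(X, n_iter=200, seed=0):
    """Symmetric FastICA (tanh nonlinearity) for a 2-channel recording.

    X: array of shape (2, n_samples), one row per ear/microphone.
    Returns the unmixing matrix W and the estimated sources W @ Z,
    where Z is the whitened input.
    """
    # 1. Center each channel.
    X = X - X.mean(axis=1, keepdims=True)
    # 2. Whiten via eigendecomposition of the channel covariance.
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    # 3. Fixed-point iteration with symmetric decorrelation.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((2, 2))
    for _ in range(n_iter):
        WZ = W @ Z
        g = np.tanh(WZ)
        g_prime = 1.0 - g ** 2
        # FastICA update: W <- E[g(Wz) z^T] - diag(E[g'(Wz)]) W
        W = (g @ Z.T) / Z.shape[1] - np.diag(g_prime.mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^{-1/2} W, via SVD.
        u, _, vt = np.linalg.svd(W)
        W = u @ vt
    return W, W @ Z
```

    Up to sign, scale, and ordering, the rows of the returned source matrix should match the original independent sources; on real binaural audio, the learned unmixing rows play the role of the basis functions analyzed in the paper.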

  5. Federated learning of predictive models from federated Electronic Health Records.

    Science.gov (United States)

    Brisimi, Theodora S; Chen, Ruidi; Mela, Theofanie; Olshevsky, Alex; Paschalidis, Ioannis Ch; Shi, Wei

    2018-04-01

In an era of "big data," computationally efficient and privacy-aware solutions for large-scale machine learning problems become crucial, especially in the healthcare domain, where large amounts of data are stored in different locations and owned by different entities. Past research has focused on centralized algorithms, which assume the existence of a central data repository (database) that stores and can process the data from all participants. Such an architecture, however, can be impractical when data are not centrally located; it does not scale well to very large datasets, and it introduces single-point-of-failure risks which could compromise the integrity and privacy of the data. Given scores of data widely spread across hospitals/individuals, a decentralized, computationally scalable methodology is very much in need. We aim at solving a binary supervised classification problem to predict hospitalizations for cardiac events using a distributed algorithm. We seek to develop a general decentralized optimization framework enabling multiple data holders to collaborate and converge to a common predictive model, without explicitly exchanging raw data. We focus on the soft-margin l1-regularized sparse Support Vector Machine (sSVM) classifier. We develop an iterative cluster Primal Dual Splitting (cPDS) algorithm for solving the large-scale sSVM problem in a decentralized fashion. Such a distributed learning scheme is relevant for multi-institutional collaborations or peer-to-peer applications, allowing the data holders to collaborate, while keeping every participant's data private. We test cPDS on the problem of predicting hospitalizations due to heart diseases within a calendar year based on information in the patients' Electronic Health Records prior to that year. cPDS converges faster than centralized methods at the cost of some communication between agents. It also converges faster and with less communication overhead compared to an alternative distributed algorithm.
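
    The cPDS algorithm itself involves primal-dual splitting machinery beyond a short example, but the decentralized idea, each data holder taking local steps on its private data and exchanging only model parameters, can be sketched with a much simpler scheme (local subgradient steps plus consensus averaging) on the same l1-regularized hinge objective. The function names and the fully connected averaging below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def local_subgrad(w, X, y, lam):
    """Subgradient of mean hinge loss + l1 penalty on one agent's private data."""
    margins = y * (X @ w)
    active = margins < 1.0
    g_hinge = -(X[active] * y[active, None]).sum(axis=0) / len(y)
    return g_hinge + lam * np.sign(w)

def consensus_svm(agents, lam=0.01, step=0.1, rounds=300):
    """Each agent holds (X_i, y_i) privately; only weight vectors are shared.

    Sketch of decentralized training: local step on private data, then
    parameter averaging over a fully connected network (no raw data moves).
    """
    d = agents[0][0].shape[1]
    W = [np.zeros(d) for _ in agents]
    for t in range(rounds):
        # Local subgradient step with a diminishing step size.
        W = [w - step / np.sqrt(t + 1) * local_subgrad(w, X, y, lam)
             for w, (X, y) in zip(W, agents)]
        # Consensus: average with all peers.
        mean = np.mean(W, axis=0)
        W = [mean.copy() for _ in W]
    return W[0]
```

    With a fully connected network this reduces to averaged subgradient descent on the pooled objective; cPDS replaces the averaging with primal-dual updates over a general communication graph, which is what yields its faster convergence.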

  6. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  7. Integrating terrestrial and marine records of the LGM in McMurdo Sound, Antarctica: implications for grounded ice expansion, ice flow, and deglaciation of the Ross Sea Embayment

    Science.gov (United States)

    Christ, A. J.; Marchant, D. R.

    2017-12-01

During the LGM, grounded glacier ice filled the Ross Embayment and deposited glacial drift on volcanic islands and peninsulas in McMurdo Sound, as well as along coastal regions of the Transantarctic Mountains (TAM), including the McMurdo Dry Valleys and Royal Society Range. The flow geometry and retreat history of this ice remain debated, with contrasting views yielding divergent implications for both the fundamental cause of Antarctic ice expansion and the interaction and behavior of ice derived from East and West Antarctica during late Quaternary time. We present terrestrial geomorphologic evidence that enables the reconstruction of former ice elevations, ice-flow paths, and ice-marginal environments in McMurdo Sound. Radiocarbon dates of fossil algae interbedded with ice-marginal sediments provide a coherent timeline for local ice retreat. These data are integrated with marine-sediment records and multi-beam data to reconstruct late glacial dynamics of grounded ice in McMurdo Sound and the western Ross Sea. The combined dataset suggests a dominance of ice flow toward the TAM in McMurdo Sound during all phases of glaciation, with thick, grounded ice at or near its maximum extent between 19.6 and 12.3 calibrated thousands of years before present (cal. ka). Our data show no significant advance of locally derived ice from the TAM into McMurdo Sound, consistent with the assertion that Late Pleistocene expansion of grounded ice in McMurdo Sound, and throughout the wider Ross Embayment, occurs in response to lower eustatic sea level and the resulting advance of marine-based outlet glaciers and ice streams (and perhaps also reduced oceanic heat flux), rather than local increases in precipitation and ice accumulation. 
Finally, when combined with allied data across the wider Ross Embayment, which show that widespread deglaciation outside McMurdo Sound did not commence until 13.1 ka, the implication is that retreat of grounded glacier ice in the Ross Embayment did

  8. Design of a Multi-Week Sound and Motion Recording and Telemetry (SMRT) Tag for Behavioral Studies on Whales

    Science.gov (United States)

    2015-09-30

    Computers to develop a medium-term attachment method for cetaceans involving a set of short barbed darts that anchor in the dermis. In the current project...configuration required for the SMRT tag. Ambient noise monitoring Work is advancing on a paper describing an in situ processing method, developed...in a previous ONR project, for estimating the ambient noise from tag sound samples. In this paper we show that a modified form of spectral analysis

  9. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial.

    Science.gov (United States)

    Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah

    2018-01-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate "Correct"/"Incorrect" feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a "Wizard of Oz" experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human "Wizard" will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.

  10. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial

    Science.gov (United States)

    Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah

    2018-01-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate “Correct”/”Incorrect” feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a “Wizard of Oz” experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human “Wizard” will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children. PMID:29674986

  11. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial

    Directory of Open Access Journals (Sweden)

    Wendy Doubé

    2018-04-01

    Full Text Available Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate “Correct”/”Incorrect” feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a “Wizard of Oz” experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human “Wizard” will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.

  12. Contributions of Letter-Speech Sound Learning and Visual Print Tuning to Reading Improvement: Evidence from Brain Potential and Dyslexia Training Studies

    NARCIS (Netherlands)

    Fraga González, G.; Žarić, G.; Tijms, J.; Bonte, M.; van der Molen, M.W.

We use a neurocognitive perspective to discuss the contribution of learning letter-speech sound (L-SS) associations and visual specialization in the initial phases of reading in dyslexic children. We review findings from associative learning studies on related cognitive skills important for reading.

  13. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds II: single-neuron recordings

    Science.gov (United States)

    Marquardt, Torsten; Stange, Annette; Pecka, Michael; Grothe, Benedikt; McAlpine, David

    2014-01-01

    Recently, with the use of an amplitude-modulated binaural beat (AMBB), in which sound amplitude and interaural-phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al. 2013b), we demonstrated that the human auditory system uses interaural timing differences in the temporal fine structure of modulated sounds only during the rising portion of each modulation cycle. However, the degree to which peripheral or central mechanisms contribute to the observed strong dominance of the rising slope remains to be determined. Here, by recording responses of single neurons in the medial superior olive (MSO) of anesthetized gerbils and in the inferior colliculus (IC) of anesthetized guinea pigs to AMBBs, we report a correlation between the position within the amplitude-modulation (AM) cycle generating the maximum response rate and the position at which the instantaneous IPD dominates the total neural response. The IPD during the rising segment dominates the total response in 78% of MSO neurons and 69% of IC neurons, with responses of the remaining neurons predominantly coding the IPD around the modulation maximum. The observed diversity of dominance regions within the AM cycle, especially in the IC, and its comparison with the human behavioral data suggest that only the subpopulation of neurons with rising slope dominance codes the sound-source location in complex listening conditions. A comparison of two models to account for the data suggests that emphasis on IPDs during the rising slope of the AM cycle depends on adaptation processes occurring before binaural interaction. PMID:24554782

  14. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time series event generated by the heart's mechanical system. From the analysis of the heart sound's S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician diagnose the heart problem. The results show that most known heart sounds were successfully detected. There are some murmur cases where the detection failed. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately and taking into account the recording equipment as well as the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
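
    A deterministic detector of the kind this abstract describes (timing, frequency, and amplitude rules rather than machine learning) might be sketched as follows. The envelope method, threshold ratio, and the systole-shorter-than-diastole timing rule below are generic assumptions for illustration; the paper's actual rules derive from S-transform analysis.

```python
import numpy as np

def detect_heart_sounds(signal, fs, win_ms=50, thresh_ratio=0.3):
    """Label S1/S2 bursts with deterministic amplitude and timing rules.

    Timing heuristic: the S1->S2 gap (systole) is shorter than the
    S2->next-S1 gap (diastole). Assumes the recording starts near S1
    and contains no strong murmurs.
    """
    win = max(1, int(fs * win_ms / 1000))
    # Amplitude rule: smoothed energy envelope, thresholded at a
    # fraction of its peak.
    env = np.convolve(np.asarray(signal, float) ** 2,
                      np.ones(win) / win, mode="same")
    idx = np.flatnonzero(env > thresh_ratio * env.max())
    if idx.size == 0:
        return []
    # Group threshold crossings separated by more than one window into bursts.
    splits = np.flatnonzero(np.diff(idx) > win) + 1
    centers = [int(seg.mean()) for seg in np.split(idx, splits)]
    # Timing rule: a burst is S1 when the gap after it is the shorter one.
    gaps = np.diff(centers)
    labels = []
    for i, c in enumerate(centers):
        if i < len(gaps):
            nxt = gaps[i]
            prv = gaps[i - 1] if i > 0 else nxt + 1  # assume a leading S1
            labels.append((c, "S1" if nxt < prv else "S2"))
        else:  # last burst: alternate from its predecessor
            labels.append((c, "S2" if labels and labels[-1][1] == "S1" else "S1"))
    return labels
```

    On a clean phonocardiogram this returns an alternating list of (sample index, "S1"/"S2") pairs; a frequency rule (e.g., band energy ratios per burst) would be the natural next heuristic to add.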

  15. A Survey of Kurdish Students’ Sound Segment & Syllabic Pattern Errors in the Course of Learning EFL

    Directory of Open Access Journals (Sweden)

    Jahangir Mohammadi

    2014-06-01

Full Text Available This paper is devoted to finding adequate answers to the following queries: (A) What are the segmental and syllabic pattern errors made by Kurdish students in their pronunciation? (B) Can the problematic areas in pronunciation be predicted by a systematic comparison of the sound systems of both native and target languages? (C) Can there be any consistency between the predictions and the results of the error analysis experiments in the same field? To reach the goals of the study the following steps were taken: 1. The sound systems and syllabic patterns of both languages (Kurdish and English) were clearly described on the basis of place and manner of articulation and the combinatory power of clusters. 2. To carry out a contrastive analysis, the sound segments (vowels, consonants and diphthongs) and the syllabic patterns of both languages were compared in order to surface the similarities and differences. 3. The syllabic patterns and sound segments in English that had no counterparts in Kurdish were detected and considered as problematic areas in pronunciation. 4. To countercheck the acquired predictions, an experiment was carried out with 50 male and female pre-university students. Subjects were given some passages to read. The readability index of these passages ranged from 8.775 to 10.432, which is quite suitable in comparison to the readability index of pre-university texts, ranging from 8.675 to 10.475. All samples of sound production were transcribed in IPA, and the syllabic patterns were shown by the symbols ‘V’ and ‘C’, indicating vowels and consonants respectively. An error analysis of the acquired data proved that English sound segments and syllabic patterns with no counterparts in Kurdish resulted in pronunciation errors.

  16. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

In order to analyse spatial aspects of recorded sound, William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm that includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  17. Seeing How It Sounds: Observation, Imitation, and Improved Learning in Piano Playing

    Science.gov (United States)

    Simones, Lilian; Rodger, Matthew; Schroeder, Franziska

    2017-01-01

    This study centers upon a piano learning and teaching environment in which beginners and intermediate piano students (N = 48) learning to perform a specific type of staccato were submitted to three different (group-exclusive) teaching conditions: "audio-only" demonstration of the musical task; observation of the teacher's action…

  18. Classification of pulmonary pathology from breath sounds using the wavelet packet transform and an extreme learning machine.

    Science.gov (United States)

    Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian; Huliraj, N; Revadi, S S

    2017-06-08

Auscultation is a medical procedure used for the initial diagnosis and assessment of lung and heart diseases. From this perspective, we propose assessing the performance of extreme learning machine (ELM) classifiers for the diagnosis of pulmonary pathology using breath sounds. Energy and entropy features were extracted from the breath sounds using the wavelet packet transform. The statistical significance of the extracted features was evaluated by one-way analysis of variance (ANOVA). The extracted features were then input to the ELM classifier. The maximum classification accuracies obtained for the conventional validation (CV) of the energy and entropy features were 97.36% and 98.37%, respectively, whereas the accuracies obtained for the cross validation (CRV) of the energy and entropy features were 96.80% and 97.91%, respectively. In addition, maximum classification accuracies of 98.25% and 99.25% were obtained for the CV and CRV of the ensemble features, respectively. The results indicate that the classification accuracy obtained with the ensemble features was higher than those obtained with the energy and entropy features.
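
    The pipeline in this abstract, wavelet packet energy/entropy features feeding an ELM, can be sketched end-to-end. The Haar packet decomposition and the tiny ELM below are generic textbook versions, not the authors' implementation, and the two-class signals in the usage test are synthetic stand-ins for breath sounds.

```python
import numpy as np

def haar_packet(x, depth):
    """Haar wavelet packet: recursively split every node into low/high half-bands."""
    nodes = [np.asarray(x, float)]
    for _ in range(depth):
        nxt = []
        for n in nodes:
            n = n[: len(n) // 2 * 2]
            nxt.append((n[0::2] + n[1::2]) / np.sqrt(2))  # approximation
            nxt.append((n[0::2] - n[1::2]) / np.sqrt(2))  # detail
        nodes = nxt
    return nodes

def energy_entropy_features(x, depth=3):
    """Per-subband energy and Shannon entropy, as in the described feature set."""
    feats = []
    for n in haar_packet(x, depth):
        e = float(np.sum(n ** 2)) + 1e-12
        p = n ** 2 / e
        feats += [e, float(-np.sum(p * np.log(p + 1e-12)))]
    return np.array(feats)

class ELM:
    """Extreme learning machine: fixed random hidden layer + least-squares readout."""
    def __init__(self, n_hidden=30, seed=0):
        self.n_hidden, self.seed = n_hidden, seed
    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.Win = rng.standard_normal((X.shape[1], self.n_hidden)) * 0.5
        self.b = rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.Win + self.b)
        self.Wout = np.linalg.pinv(H) @ y  # closed-form output weights
        return self
    def predict(self, X):
        return np.tanh(X @ self.Win + self.b) @ self.Wout
```

    The key ELM property shown here is that only the output layer is solved (by pseudo-inverse); the hidden layer stays random, which is why training is fast enough for auscultation screening tools.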

  19. The strategic use of lecture recordings to facilitate an active and self-directed learning approach.

    Science.gov (United States)

    Topale, Luminica

    2016-08-12

New learning technologies have the capacity to dramatically impact how students go about learning and to facilitate an active, self-directed learning approach. In U.S. medical education, students encounter a large volume of content, which must be mastered at an accelerated pace. The added pressure to excel on the USMLE Step 1 licensing exam and competition for residency placements require that students adopt an informed approach to the use of learning technologies so as to enhance rather than to detract from the learning process. The primary aim of this study was to gain a better understanding of how students were using recorded lectures in their learning and how their study habits have been influenced by the technology. Survey research was undertaken using a convenience sample. Students were asked to voluntarily participate in an electronic survey comprising 27 closed-ended multiple-choice questions and one open-ended item. The survey was designed to explore students' perceptions of how recorded lectures affected their choices regarding class participation and impacted their learning, and to gain an understanding of how recorded lectures facilitated a strategic, active learning process. Findings revealed that recorded lectures had little influence on students' choices to participate, and that the perceived benefits of integrating recorded lectures into study practices were related to their facilitation of and impact on efficient, active, and self-directed learning. This study was a useful investigation into how the availability of lecture capture technology influenced medical students' study behaviors and how students were making valuable use of the technology as an active learning tool.

  20. Breaking the Sound Barrier

    Science.gov (United States)

    Brown, Tom; Boehringer, Kim

    2007-01-01

    Students in a fourth-grade class participated in a series of dynamic sound learning centers followed by a dramatic capstone event--an exploration of the amazing Trashcan Whoosh Waves. It's a notoriously difficult subject to teach, but this hands-on, exploratory approach ignited student interest in sound, promoted language acquisition, and built…

  1. Exploring Noise: Sound Pollution.

    Science.gov (United States)

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  2. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  3. Predicting healthcare trajectories from medical records: A deep learning approach.

    Science.gov (United States)

    Pham, Trang; Tran, Truyen; Phung, Dinh; Venkatesh, Svetha

    2017-05-01

    Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, stored in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors and models patient health state trajectories by the memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces methods to handle irregularly timed events by moderating the forgetting and consolidation of memory. DeepCare also explicitly models medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden - diabetes and mental health - the results show improved prediction accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
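
The key mechanism in this abstract, moderating LSTM forgetting by the irregular time gaps between care episodes, can be illustrated with a single cell step. The exponential decay form and the time constant `tau` below are illustrative assumptions, not DeepCare's exact parameterisation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, params, delta_t, tau=90.0):
    """One LSTM step whose forget gate is attenuated by delta_t, the
    time (e.g. in days) elapsed since the previous care episode."""
    Wf, Wi, Wo, Wc, bf, bi, bo, bc = params
    z = np.concatenate([x, h])                         # input + previous state
    f = sigmoid(Wf @ z + bf) * np.exp(-delta_t / tau)  # time-decayed forgetting
    i = sigmoid(Wi @ z + bi)
    o = sigmoid(Wo @ z + bo)
    c_new = f * c + i * np.tanh(Wc @ z + bc)           # old memory fades with the gap
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

With a year-long gap, far less of the old memory cell survives than with a one-day gap, which is the intended behaviour for episodic, irregularly timed records.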

  4. A machine learning approach to create blocking criteria for record linkage.

    Science.gov (United States)

    Giang, Phan H

    2015-03-01

    Record linkage, a part of data cleaning, is recognized as one of the most expensive steps in data warehousing. Most record linkage (RL) systems employ a strategy of using blocking filters to reduce the number of pairs to be matched. A blocking filter consists of a number of blocking criteria. Until recently, blocking criteria were selected manually by domain experts. This paper proposes a new method to automatically learn efficient blocking criteria for record linkage. Our method addresses the lack of sufficient labeled data for training. Unlike previous works, we do not consider a blocking filter in isolation but in the context of an accompanying matcher which is employed after the blocking filter. We show that given such a matcher, the labels (assigned to record pairs) that are relevant for learning are the labels assigned by the matcher (link/nonlink), not the labels assigned objectively (match/unmatch). This conclusion allows us to generate an unlimited amount of labeled data for training. We formulate the problem of learning a blocking filter as a Disjunctive Normal Form (DNF) learning problem and use the Probably Approximately Correct (PAC) learning theory to guide the development of an algorithm to search for blocking filters. We test the algorithm on a real patient master file of 2.18 million records. The experimental results show that compared with filters obtained by educated guess, the optimal learned filters have comparable recall but reduce runtime by an order of magnitude.
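
The blocking idea described above can be sketched directly: each criterion maps a record to a block key, and only pairs that share a block under at least one criterion (a disjunction, echoing the paper's DNF formulation) are handed to the matcher. The records and criteria here are made-up examples, not the paper's learned filters:

```python
from collections import defaultdict
from itertools import combinations

def block_pairs(records, key_fns):
    """Return candidate pairs that agree on at least one blocking key."""
    candidates = set()
    for key_fn in key_fns:                 # each key_fn is one blocking criterion
        blocks = defaultdict(list)
        for idx, rec in enumerate(records):
            blocks[key_fn(rec)].append(idx)
        for ids in blocks.values():        # pairs within a block become candidates
            candidates.update(combinations(ids, 2))
    return candidates

records = [
    {"last": "smith", "zip": "20001"},
    {"last": "smyth", "zip": "20001"},
    {"last": "smith", "zip": "90210"},
    {"last": "jones", "zip": "60601"},
]
criteria = [lambda r: r["last"][:3],       # first three letters of surname
            lambda r: r["zip"]]            # exact ZIP code
pairs = block_pairs(records, criteria)     # 2 candidate pairs instead of all 6
```

Only the pairs surviving the filter are scored by the (expensive) matcher, which is why blocking dominates the runtime savings the paper measures.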

  5. Landslides and megathrust splay faults captured by the late Holocene sediment record of eastern Prince William Sound, Alaska

    Science.gov (United States)

    Finn, S.P.; Liberty, Lee M.; Haeussler, Peter J.; Pratt, Thomas L.

    2015-01-01

    We present new marine seismic‐reflection profiles and bathymetric maps to characterize Holocene depositional patterns, submarine landslides, and active faults beneath eastern and central Prince William Sound (PWS), Alaska, which is the eastern rupture patch of the 1964 Mw 9.2 earthquake. We show evidence that submarine landslides, many of which are likely earthquake triggered, repeatedly released along the southern margin of Orca Bay in eastern PWS. We document motion on reverse faults during the 1964 Great Alaska earthquake and estimate late Holocene slip rates for these growth faults, which splay from the subduction zone megathrust. Regional bathymetric lineations help define the faults that extend 40–70 km in length, some of which show slip rates as great as 3.75  mm/yr. We infer that faults mapped below eastern PWS connect to faults mapped beneath central PWS and possibly onto the Alaska mainland via an en echelon style of faulting. Moderate (Mw>4) upper‐plate earthquakes since 1964 give rise to the possibility that these faults may rupture independently to potentially generate Mw 7–8 earthquakes, and that these earthquakes could damage local infrastructure from ground shaking. Submarine landslides, regardless of the source of initiation, could generate local tsunamis to produce large run‐ups along nearby shorelines. In a more general sense, the PWS area shows that faults that splay from the underlying plate boundary present proximal, perhaps independent seismic sources within the accretionary prism, creating a broad zone of potential surface rupture that can extend inland 150 km or more from subduction zone trenches.

  6. Digital recording as a teaching and learning method in the skills laboratory.

    Science.gov (United States)

    Strand, Ingebjørg; Gulbrandsen, Lise; Slettebø, Åshild; Nåden, Dagfinn

    2017-09-01

    To obtain information on how nursing students react to, think about and learn from digital recording as a learning and teaching method over time. Based on the teaching and learning philosophy of the university college, we used digital recording as a tool in our daily sessions in the skills laboratory. However, most of the studies referred to in the background review had a duration of only a few hours to a number of days. We found it valuable to design a study with a duration of two academic semesters. A descriptive and interpretative design was used. First-year bachelor-level students at the department of nursing participated in the study. Data collection was carried out by employing an 'online questionnaire'. The students answered five written, open-ended questions after each of three practical skill sessions. Kvale and Brinkmann's three levels of understanding were employed in the analysis. The students reported that digital recording affected factors such as feeling safe, secure and confident, and that video recording was essential in learning and training practical skills. The use of cameras proved useful as an expressive tool for peer learning, because video recording enhances self-assessment, reflection, sensing, psychomotor performance and discovery learning. Digital recording enhances the student's awareness when acquiring new knowledge because it activates cognitive and emotional learning. The connection between tutoring, feedback and technology was clear. The digital recorder gives students direct and immediate feedback on their performance in the various practical procedures, and may aid in the transition from theory to practice. Students experienced more self-confidence and a feeling of safety in their performances. © 2016 John Wiley & Sons Ltd.

  7. Brainstem auditory evoked potentials with the use of acoustic clicks and complex verbal sounds in young adults with learning disabilities.

    Science.gov (United States)

    Kouni, Sophia N; Giannopoulos, Sotirios; Ziavra, Nausika; Koutsojannis, Constantinos

    2013-01-01

    'other learning disabilities' and who were characterized as with 'light' dyslexia according to dyslexia tests, no significant delays were found in peak latencies A and C and interpeak latencies A-C in comparison with the control group. Acoustic representation of a speech sound and, in particular, the disyllabic word 'baba' was found to be abnormal, as low as the auditory brainstem. Because ABRs mature in early life, this can help to identify subjects with acoustically based learning problems and apply early intervention, rehabilitation, and treatment. Further studies and more experience with more patients and pathological conditions such as plasticity of the auditory system, cochlear implants, hearing aids, presbycusis, or acoustic neuropathy are necessary until this type of testing is ready for clinical application. © 2013 Elsevier Inc. All rights reserved.

  8. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear, but our uses for light and sound go far beyond simply seeing a photo or hearing a song. Lasers, concentrated beams of light, are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  9. Lessons learned from a health record bank start-up.

    Science.gov (United States)

    Yasnoff, W A; Shortliffe, E H

    2014-01-01

    This article is part of a Focus Theme of METHODS of Information in Medicine on Health Record Banking. In late summer 2010, an organization was formed in greater Phoenix, Arizona (USA), to introduce a health record bank (HRB) in that community. The effort was initiated after market research and was aimed at engaging 200,000 individuals as members in the first year (5% of the population). It was also intended to evaluate a business model that was based on early adoption by consumers and physicians followed by additional revenue streams related to incremental services and secondary uses of clinical data, always with specific permission from individual members, each of whom controlled all access to his or her own data. To report on the details of the HRB experience in Phoenix, to describe the sources of problems that were experienced, and to identify lessons that need to be considered in future HRB ventures. We describe staffing for the HRB effort, the computational platform that was developed, the approach to marketing, the engagement of practicing physicians, and the governance model that was developed to guide the HRB design and implementation. Despite efforts to engage the physician community, limited consumer advertising, and a carefully considered financial strategy, the experiment failed due to insufficient enrollment of individual members. It was discontinued in April 2011. Although the major problem with this HRB project was undercapitalization, we believe this effort demonstrated that basic HRB accounts should be free for members and that physician engagement and participation are key elements in constructing an effective marketing channel. Local community governance is essential for trust, and the included population must be large enough to provide sufficient revenues to sustain the resource in the long term.

  10. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

    consumer videos in conjunction with user studies. We model the soundtrack of each video, regardless of its original duration, as a fixed-sized clip-level summary feature. For each concept, an SVM-based classifier is trained according to three distance measures (Kullback-Leibler, Bhattacharyya, and Mahalanobis distance). Detecting the time of occurrence of a local object (for instance, a cheering sound) embedded in a longer soundtrack is useful and important for applications such as search and retrieval in consumer video archives. We finally present a Markov-model based clustering algorithm able to identify and segment consistent sets of temporal frames into regions associated with different ground-truth labels, and at the same time to exclude a set of uninformative frames shared in common from all clips. The labels are provided at the clip level, so this refinement of the time axis represents a variant of Multiple-Instance Learning (MIL). Quantitative evaluation shows that the performance of our proposed approaches tested on the 60h personal audio archives or 1900 YouTube video clips is significantly better than existing algorithms for detecting these useful concepts in real-world personal audio recordings.

  11. The Origins of Vocal Learning: New Sounds, New Circuits, New Cells

    Science.gov (United States)

    Nottebohm, Fernando; Liu, Wan-Chun

    2010-01-01

    We do not know how vocal learning came to be, but it is such a salient trait in human evolution that many have tried to imagine it. In primates this is difficult because we are the only species known to possess this skill. Songbirds provide a richer and independent set of data. I use comparative data and ask broad questions: How does vocal…

  12. Theory-based Support for Mobile Language Learning: Noticing and Recording

    Directory of Open Access Journals (Sweden)

    Agnes Kukulska-Hulme

    2009-04-01

    Full Text Available This paper considers the issue of 'noticing' in second language acquisition, and argues for the potential of handheld devices to: (i) support language learners in noticing and recording noticed features 'on the spot', to help them develop their second language system; (ii) help language teachers better understand the specific difficulties of individuals or those from a particular language background; and (iii) facilitate data collection by applied linguistics researchers, which can be fed back into educational applications for language learning. We consider: theoretical perspectives drawn from the second language acquisition literature, relating these to the practice of writing language learning diaries; and the potential for learner modelling to facilitate recording and prompting noticing in mobile assisted language learning contexts. We then offer guidelines for developers of mobile language learning solutions to support the development of language awareness in learners.

  13. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Through several audio and audio-video tests we have compared both Foley and real sounds generated by an identical action. The main purpose was to evaluate if sound effects...

  14. The influence of video recordings on beginning therapist’s learning in psychotherapy training

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Olesen, Mette Kirk; Kløve, Astrid

    2010-01-01

    Background: Due to the development of technologies and the low costs, video recordings of psychotherapy sessions have gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies on how these recordings specifically influence the learning process of the beginning therapist. Aim: After a brief discussion of the pros and cons of the use of video recordings this paper presents...

  16. A method for estimating the orientation of a directional sound source from source directivity and multi-microphone recordings: principles and application

    DEFF Research Database (Denmark)

    Guarato, Francesco; Jakobsen, Lasse; Vanderelst, Dieter

    2011-01-01

    Taking into account directivity of real sound sources makes it possible to try solving an interesting and biologically relevant problem: estimating the orientation in three-dimensional space of a directional sound source. The source, of known directivity, produces a broadband signal (in the ultra...

  17. Effects of lips and hands on auditory learning of second-language speech sounds.

    Science.gov (United States)

    Hirata, Yukari; Kelly, Spencer D

    2010-04-01

    Previous research has found that auditory training helps native English speakers to perceive phonemic vowel length contrasts in Japanese, but their performance did not reach native levels after training. Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the authors examined whether multimodal input helps to improve native English speakers' ability to perceive Japanese vowel length contrasts. Sixty native English speakers participated in 1 of 4 types of training: (a) audio-only; (b) audio-mouth; (c) audio-hands; and (d) audio-mouth-hands. Before and after training, participants were given phoneme perception tests that measured their ability to identify short and long vowels in Japanese (e.g., /kato/ vs. /kato/). Although all 4 groups improved from pre- to posttest (replicating previous research), the participants in the audio-mouth condition improved more than those in the audio-only condition, whereas the 2 conditions involving hand gestures did not. Seeing lip movements during training significantly helps learners to perceive difficult second-language phonemic contrasts, but seeing hand gestures does not. The authors discuss possible benefits and limitations of using multimodal information in second-language phoneme learning.

  18. The Use of Music and Other Forms of Organized Sound as a Therapeutic Intervention for Students with Auditory Processing Disorder: Providing the Best Auditory Experience for Children with Learning Differences

    Science.gov (United States)

    Faronii-Butler, Kishasha O.

    2013-01-01

    This auto-ethnographical inquiry used vignettes and interviews to examine the therapeutic use of music and other forms of organized sound in the learning environment of individuals with Central Auditory Processing Disorders. It is an investigation of the traditions of healing with sound vibrations, from its earliest cultural roots in shamanism and…

  19. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology.

  20. Digitizing a sound archive

    DEFF Research Database (Denmark)

    Cone, Louise

    2017-01-01

    In 1990 an artist by the name of William Louis Sørensen was hired by the National Gallery of Denmark to collect important works of art – made from sound. His job was to acquire sound art, but also recordings that captured rare artistic occurrences, music, performances and happenings from both Danish and international artists. His methodology left us with a large collection of unique and inspirational time-based media sound artworks that have, until very recently, been inaccessible. Existing on an array of different media formats, such as open reel tapes, 8-track and 4-track cassettes, VHS...

  1. Implicit learning of predictable sound sequences modulates human brain responses at different levels of the auditory hierarchy

    Directory of Open Access Journals (Sweden)

    Françoise eLecaignard

    2015-09-01

    Deviant stimuli, violating regularities in a sensory environment, elicit the Mismatch Negativity (MMN), largely described in the Event-Related Potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process where prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulations by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule and therefore departs from previous studies that investigate violations of different time-scale regularities. We observed a decrease of the MMN with predictability and, interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing where predictive coding enables implicit extraction of environmental regularities.

  2. The Impact of the 1989 Exxon Valdez Oil Spill on Phytoplankton as Evidenced Through the Sedimentary Dinoflagellate Cyst Records in Prince William Sound (Alaska, USA).

    Science.gov (United States)

    Genest, M.; Pospelova, V.; Williams, J. R.; Dellapenna, T.; Mertens, K.; Kuehl, S. A.

    2016-12-01

    Large volumes of crude oil are extracted from marine environments and transported via the sea, putting coastal communities at a greater risk of oil spills. It is therefore crucial for these communities to properly assess the risk. The first step is to understand the effects of such events on the environment, an effort currently limited by the lack of research on the impact of oil spills on phytoplankton. This first-of-its-kind research aims to identify how one of the major groups of phytoplankton, dinoflagellates, has been affected by the 1989 Exxon Valdez oil spill in Prince William Sound (PWS), Alaska. To do this, sedimentary records of dinoflagellate cysts, produced during dinoflagellate reproduction and preserved in the sediment, were analyzed. Two sediment cores were collected from PWS in 2012. The sediments are mainly composed of silt with a small fraction of clay. Both well-dated with 210Pb and 137Cs, the cores have high sedimentation rates, allowing for an annual to biannual resolution. Core 10 has a sedimentation rate of 1.1 cm yr-1 and provides a continuous record since 1957, while Core 12 has a sedimentation rate of 1.3 cm yr-1 and spans from 1934. The cores were subsampled every centimeter for a total of 110 samples. Samples were treated using a standard palynological processing technique to extract dinoflagellate cysts and 300 cysts were counted per sample. In both cores, cysts were abundant, diverse and well preserved, with the average cyst assemblage characterized by an equal number of cysts produced by autotrophic and heterotrophic dinoflagellates. Of the 40 dinoflagellate cyst taxa, the most abundant are Operculodinium centrocarpum and Brigantedinium spp. Other common species are Spiniferites ramosus, cysts of Pentapharsodinium dalei, Echinidinium delicatum, E. zonneveldiae, E. transparantum, Islandinium minutum, and a thin pale brown Brigantedinium type. Changes in the sedimentary sequence of dinoflagellate cysts were analyzed by determining cyst

  3. Integration of strategy experiential learning in e-module of electronic records management

    Directory of Open Access Journals (Sweden)

    S. Sutirman

    2018-01-01

    This study aims to determine the effectiveness of an e-module of electronic records management integrated with experiential learning strategies in improving student achievement in the cognitive, psychomotor, and affective domains. This is a research and development study, using the Web-Based Instructional Design (WBID) model developed by Davidson-Shivers and Rasmussen. The steps of research and development were analysis, evaluation planning, concurrent design, implementation, and summative evaluation. The approach used in this study combined qualitative and quantitative methods. Data were collected using the Delphi technique, observation, documentation studies and tests, and were analysed both qualitatively and quantitatively. The effectiveness of the product was tested with a quasi-experimental pretest-posttest non-equivalent control group design. The results showed that the e-module of electronic records management integrated with experiential learning strategies can improve student achievement in the cognitive, psychomotor, and affective domains.

  4. Self-experiential learning – a research study into music therapy students’ perspective. Sounds that resonate with the personality

    DEFF Research Database (Denmark)

    Lindvang, Charlotte

    In this paper I present part of my PhD study in music therapy, "A Field of Resonant Learning. Self-experiential training and the development of music therapeutic competencies: a mixed methods investigation of student experiences and professionals' evaluation of their own competencies" ... by investigating how Danish professional music therapists evaluate the impact of their earlier self-experiential training on their current clinical competencies. I focused on presenting the qualitative part of my research, which addresses the first part of the purpose, concerning the students' experiences. Semi-structured qualitative interviews and qualitative music analyses were conducted, using a hermeneutic approach. The nine music therapy students who participated were enrolled in the fifth year of their Master's degree training programme. They were asked to bring a recording of an improvisation...

  5. Home recording for musicians for dummies

    CERN Document Server

    Strong, Jeff

    2008-01-01

    Invaluable advice that will be music to your ears! Are you thinking of getting started in home recording? Do you want to know the latest home recording technologies? Home Recording For Musicians For Dummies will get you recording music at home in no time. It shows you how to set up a home studio, record and edit your music, master it, and even distribute your songs. With this guide, you'll learn how to compare studio-in-a-box, computer-based, and stand-alone recording systems and choose what you need. You'll gain the skills to manage your sound, take full advantage of MIDI, m

  6. Listening panel agreement and characteristics of lung sounds digitally recorded from children aged 1–59 months enrolled in the Pneumonia Etiology Research for Child Health (PERCH) case–control study

    Science.gov (United States)

    Park, Daniel E; Watson, Nora L; Buck, W Chris; Bunthi, Charatdao; Devendra, Akash; Ebruke, Bernard E; Elhilali, Mounya; Emmanouilidou, Dimitra; Garcia-Prats, Anthony J; Githinji, Leah; Hossain, Lokman; Madhi, Shabir A; Moore, David P; Mulindwa, Justin; Olson, Dan; Awori, Juliet O; Vandepitte, Warunee P; Verwey, Charl; West, James E; Knoll, Maria D; O'Brien, Katherine L; Feikin, Daniel R; Hammit, Laura L

    2017-01-01

    Introduction Paediatric lung sound recordings can be systematically assessed, but methodological feasibility and validity are unknown, especially from developing countries. We examined the performance of acoustically interpreting recorded paediatric lung sounds and compared sound characteristics between cases and controls. Methods Pneumonia Etiology Research for Child Health staff in six African and Asian sites recorded lung sounds with a digital stethoscope in cases and controls. Cases aged 1–59 months had WHO severe or very severe pneumonia; age-matched community controls did not. A listening panel assigned examination results of normal, crackle, wheeze, crackle and wheeze or uninterpretable, with adjudication of discordant interpretations. Classifications were recategorised into any crackle, any wheeze or abnormal (any crackle or wheeze) and primary listener agreement (first two listeners) was analysed among interpretable examinations using the prevalence-adjusted, bias-adjusted kappa (PABAK). We examined predictors of disagreement with logistic regression and compared case and control lung sounds with descriptive statistics. Results Primary listeners considered 89.5% of 792 case and 92.4% of 301 control recordings interpretable. Among interpretable recordings, listeners agreed on the presence or absence of any abnormality in 74.9% (PABAK 0.50) of cases and 69.8% (PABAK 0.40) of controls, presence/absence of crackles in 70.6% (PABAK 0.41) of cases and 82.4% (PABAK 0.65) of controls and presence/absence of wheeze in 72.6% (PABAK 0.45) of cases and 73.8% (PABAK 0.48) of controls. Controls, tachypnoea, >3 uninterpretable chest positions, crying, upper airway noises and study site predicted listener disagreement. Among all interpretable examinations, 38.0% of cases and 84.9% of controls were normal (p<0.0001); wheezing was the most common sound (49.9%) in cases. Conclusions Listening panel and case–control data suggest our methodology is feasible, likely valid
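
The PABAK values quoted above follow directly from the observed agreement: once prevalence and bias are adjusted away, chance agreement is fixed at 0.5 and the statistic reduces to 2 * p_o - 1. A one-line check against the figures in the abstract:

```python
def pabak(p_observed):
    """Prevalence-adjusted, bias-adjusted kappa: with chance agreement
    fixed at 0.5, PABAK reduces to 2 * p_o - 1."""
    return 2.0 * p_observed - 1.0

# Figures reported above: 74.9% agreement -> PABAK 0.50,
# 69.8% -> 0.40, 82.4% -> 0.65
values = [round(pabak(p), 2) for p in (0.749, 0.698, 0.824)]
```

Because the adjustment removes dependence on category prevalence, PABAK is a simple linear rescaling of raw agreement, which is why each reported kappa can be recomputed from its percentage.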

  7. Tech-Assisted Language Learning Tasks in an EFL Setting: Use of Hand phone Recording Feature

    Directory of Open Access Journals (Sweden)

    Alireza Shakarami

    2014-09-01

    Full Text Available Technology, advancing in great leaps, has had an undeniable impact on every aspect of our lives in the new millennium, supplying us with new affordances almost daily. Technology and computers appear to be a breakthrough for the twenty-first-century educational system; examples are numerous, among which CALL, CMC, and virtual learning spaces come instantly to mind. Among the newly developed gadgets of today are sophisticated smart hand phones, which have moved far beyond the communication tools they were once designed to be. The development of the hand phone into a widespread multi-tasking gadget has urged researchers to investigate its effect on every aspect of the learning process, including language learning. This study explores the effects of Iranian EFL learners' use of the cell phone audio-recording feature on the development of their speaking skills. Thirty-five sophomore students were enrolled in a pre-/post-test study. Data on their English speaking experience using the audio-recording features of their hand phones were collected. At the end of the semester, the performance of both the treatment and control groups was observed, evaluated, and analyzed, and then examined qualitatively in a subsequent phase. The quantitative outcome lent support to integrating hand phones into the language learning curriculum.

  8. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    Science.gov (United States)

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  9. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2013-01-01

    Handbook for Sound Engineers is the most comprehensive reference available for audio engineers. All audio topics are explored: if you work on anything related to audio you should not be without this book! The 4th edition of this trusted reference has been updated to reflect changes in the industry since the publication of the 3rd edition in 2002 -- including new technologies like software-based recording systems such as Pro Tools and Sound Forge; digital recording using MP3, wave files and others; mobile audio devices such as iPods and MP3 players. Over 40 topic

  10. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down.

  11. Differences in phonetic discrimination stem from differences in psychoacoustic abilities in learning the sounds of a second language: Evidence from ERP research.

    Science.gov (United States)

    Lin, Yi; Fan, Ruolin; Mo, Lei

    2017-01-01

    The scientific community has been divided as to the origin of individual differences in perceiving the sounds of a second language (L2). There are two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. A previous study showed that such individual variability is linked to the perceivers' speech-specific capabilities, rather than the perceivers' psychoacoustic abilities. However, we assume that the selection of participants and the parameters of the sound stimuli might not have been appropriate. Therefore, we adjusted the sound stimuli and recorded event-related potentials (ERPs) from two groups of early, proficient Cantonese (L1)-Mandarin (L2) bilinguals who differed in their mastery of the Mandarin (L2) phonetic contrast /in-ing/, to explore whether the individual differences in perceiving L2 stem from participants' ability to discriminate various pure tones (frequency, duration and pattern). To precisely measure the participants' acoustic discrimination, mismatch negativity (MMN) elicited by the oddball paradigm was recorded in the experiment. The results showed significant differences between good perceivers (GPs) and poor perceivers (PPs) in the three general acoustic conditions (frequency, duration and pattern), and the MMN amplitude for GPs was significantly larger than for PPs. Therefore, our results support a general psychoacoustic origin of individual variability in L2 phonetic mastery.

  12. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption denotes the transformation of sound energy into heat. It is, for instance, employed to design the acoustics of rooms. Noise emitted by machinery and plants must be reduced before it arrives at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components on the walls with well-defined absorption characteristics, adjusted to the corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise-intensive environments into the neighbourhood.

  13. Recording single neurons' action potentials from freely moving pigeons across three stages of learning.

    Science.gov (United States)

    Starosta, Sarah; Stüttgen, Maik C; Güntürkün, Onur

    2014-06-02

    While the subject of learning has attracted immense interest from both behavioral and neural scientists, only relatively few investigators have observed single-neuron activity while animals are acquiring an operantly conditioned response, or when that response is extinguished. But even in these cases, observation periods usually encompass only a single stage of learning, i.e. acquisition or extinction, but not both (exceptions include protocols employing reversal learning; see Bingman et al.(1) for an example). However, acquisition and extinction entail different learning mechanisms and are therefore expected to be accompanied by different types and/or loci of neural plasticity. Accordingly, we developed a behavioral paradigm which institutes three stages of learning in a single behavioral session and which is well suited for the simultaneous recording of single neurons' action potentials. Animals are trained on a single-interval forced choice task which requires mapping each of two possible choice responses to the presentation of different novel visual stimuli (acquisition). After having reached a predefined performance criterion, one of the two choice responses is no longer reinforced (extinction). Following a certain decrement in performance level, correct responses are reinforced again (reacquisition). By using a new set of stimuli in every session, animals can undergo the acquisition-extinction-reacquisition process repeatedly. Because all three stages of learning occur in a single behavioral session, the paradigm is ideal for the simultaneous observation of the spiking output of multiple single neurons. We use pigeons as model systems, but the task can easily be adapted to any other species capable of conditioned discrimination learning.

  14. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    scientists with that of numerical mathematicians studying sonification, psychologists, linguists, bioacousticians, and musicians to illuminate the structure of sound from different angles. Each of these disciplines deals with the use of sound to carry a different sort of information, under different requirements and constraints. By combining their insights, we can learn to understand the structure of sound in general.

  15. From stereoscopic recording to virtual reality headsets: Designing a new way to learn surgery.

    Science.gov (United States)

    Ros, M; Trives, J-V; Lonjon, N

    2017-03-01

    To improve surgical practice, several different approaches to simulation exist. Thanks to wearable technologies, recording 3D movies is now easy. The development of virtual reality headsets makes it possible to imagine a different way of watching these videos: using dedicated software to increase interactivity in a 3D immersive experience. The objective was to record 3D movies from the main surgeon's perspective, to watch the files using virtual reality headsets and to validate the pedagogic interest of the approach. Surgical procedures were recorded using a system combining two side-by-side cameras placed on a helmet. We added two LEDs just below the cameras to enhance luminosity. Two files were obtained in mp4 format and edited using dedicated software to create 3D movies. The files obtained were then played using a virtual reality headset. Surgeons who tried the immersive experience completed a questionnaire to evaluate the interest of this procedure for surgical learning. Twenty surgical procedures were recorded. The movies capture a scene which extends 180° horizontally and 90° vertically. The immersive experience created by the device conveys a genuine feeling of being in the operating room and seeing the procedure first-hand through the eyes of the main surgeon. All surgeons indicated that they believe in the pedagogical interest of this method. We succeeded in recording the main surgeon's point of view in 3D and watching it on a virtual reality headset. This new approach enhances the understanding of surgery; most of the surgeons appreciated its pedagogic value. This method could be an effective learning tool in the future. Copyright © 2016. Published by Elsevier Masson SAS.

  16. Exploring Sound with Insects

    Science.gov (United States)

    Robertson, Laura; Meyer, John R.

    2010-01-01

    Differences in insect morphology and movement during singing provide a fascinating opportunity for students to investigate insects while learning about the characteristics of sound. In the activities described here, students use a free online computer software program to explore the songs of the major singing insects and experiment with making…

  17. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  18. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  19. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  20. Identifying seizure onset zone from electrocorticographic recordings: A machine learning approach based on phase locking value.

    Science.gov (United States)

    Elahian, Bahareh; Yeasin, Mohammed; Mudigoudar, Basanagoud; Wheless, James W; Babajani-Feremi, Abbas

    2017-10-01

    Using a novel technique based on phase locking value (PLV), we investigated the potential for features extracted from electrocorticographic (ECoG) recordings to serve as biomarkers to identify the seizure onset zone (SOZ). We computed the PLV between the phase of the amplitude of high gamma activity (80-150 Hz) and the phase of lower frequency rhythms (4-30 Hz) from ECoG recordings obtained from 10 patients with epilepsy (21 seizures). We extracted five features from the PLV and used a machine learning approach based on logistic regression to build a model that classifies electrodes as SOZ or non-SOZ. More than 96% of electrodes identified as the SOZ by our algorithm were within the resected area in six seizure-free patients. In four non-seizure-free patients, more than 31% of the SOZ electrodes identified by our algorithm were outside the resected area. In addition, we observed that the seizure outcome in non-seizure-free patients correlated with the number of non-resected SOZ electrodes identified by our algorithm. This machine learning approach, based on features extracted from the PLV, effectively identified electrodes within the SOZ. The approach has the potential to assist clinicians in surgical decision-making when pre-surgical intracranial recordings are utilized. Copyright © 2017 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
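    The phase-amplitude PLV described above couples the phase of a low-frequency rhythm to the phase of the high-gamma amplitude envelope. A minimal sketch of that computation using Hilbert transforms (an assumed implementation, not the authors' code; only the band edges follow the abstract):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter between lo and hi (Hz)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def phase_locking_value(x, fs, low=(4, 30), high=(80, 150)):
    """PLV between the phase of the low-frequency rhythm and the phase of
    the amplitude envelope of high-gamma activity (illustrative sketch)."""
    low_phase = np.angle(hilbert(bandpass(x, *low, fs)))
    hg_env = np.abs(hilbert(bandpass(x, *high, fs)))          # high-gamma envelope
    env_phase = np.angle(hilbert(bandpass(hg_env, *low, fs)))  # its low-band phase
    # Resultant length of the phase-difference phasors: 1 = perfect locking.
    return np.abs(np.mean(np.exp(1j * (low_phase - env_phase))))
```

    A high-gamma burst whose envelope is modulated at a frequency inside the low band yields a PLV near 1, while broadband noise yields a value near 0; features for the logistic-regression classifier would then be derived from such PLV values per electrode.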

  1. Clinical Relation Extraction Toward Drug Safety Surveillance Using Electronic Health Record Narratives: Classical Learning Versus Deep Learning.

    Science.gov (United States)

    Munkhdalai, Tsendsuren; Liu, Feifan; Yu, Hong

    2018-04-25

    Medication and adverse drug event (ADE) information extracted from electronic health record (EHR) notes can be a rich resource for drug safety surveillance. Existing observational studies have mainly relied on structured EHR data to obtain ADE information; however, ADEs are often buried in the EHR narratives and not recorded in structured data. To unlock ADE-related information from EHR narratives, there is a need to extract relevant entities and identify relations among them. In this study, we focus on relation identification. This study aimed to evaluate natural language processing and machine learning approaches using the expert-annotated medical entities and relations in the context of drug safety surveillance, and investigate how different learning approaches perform under different configurations. We have manually annotated 791 EHR notes with 9 named entities (eg, medication, indication, severity, and ADEs) and 7 different types of relations (eg, medication-dosage, medication-ADE, and severity-ADE). Then, we explored 3 supervised machine learning systems for relation identification: (1) a support vector machines (SVM) system, (2) an end-to-end deep neural network system, and (3) a supervised descriptive rule induction baseline system. For the neural network system, we exploited the state-of-the-art recurrent neural network (RNN) and attention models. We report the performance by macro-averaged precision, recall, and F1-score across the relation types. Our results show that the SVM model achieved the best average F1-score of 89.1% on test data, outperforming the long short-term memory (LSTM) model with attention (F1-score of 65.72%) as well as the rule induction baseline system (F1-score of 7.47%) by a large margin. The bidirectional LSTM model with attention achieved the best performance among different RNN models. With the inclusion of additional features in the LSTM model, its performance can be boosted to an average F1-score of 77.35%. It shows that
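    The macro-averaged precision, recall, and F1-score reported above are per-relation-type scores averaged with equal weight per type, so rare relations count as much as common ones. A small self-contained sketch (the relation labels here are hypothetical stand-ins, not the study's exact label set):

```python
from collections import Counter

def macro_prf(gold, pred, labels):
    """Macro-averaged precision, recall and F1 across relation types:
    compute P/R/F1 per label, then take the unweighted mean."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1  # predicted label p where it was not
            fn[g] += 1  # missed an instance of gold label g
    precisions, recalls, f1s = [], [], []
    for lab in labels:
        prec = tp[lab] / (tp[lab] + fp[lab]) if tp[lab] + fp[lab] else 0.0
        rec = tp[lab] / (tp[lab] + fn[lab]) if tp[lab] + fn[lab] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

    Macro averaging explains why a rule-induction baseline that only ever predicts the frequent relation types can score a very low macro F1 despite reasonable raw accuracy.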

  2. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: first, the filtered responses should generate an acoustic separation between the control regions; secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...

  3. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement from the fundamental theoretical background to practical applications of the measurement technique.

  4. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement from the fundamental theoretical background to practical applications of the measurement technique.

  5. The Early Years: Becoming Attuned to Sound

    Science.gov (United States)

    Ashbrook, Peggy

    2014-01-01

    Exploration of making and changing sounds is part of the first-grade performance expectation 1-PS4-1, "Plan and conduct investigations to provide evidence that vibrating materials can make sound and that sound can make materials vibrate" (NGSS Lead States 2013, p. 10; see Internet Resource). Early learning experiences build toward…

  6. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.

  7. Machine Learning Methods to Extract Documentation of Breast Cancer Symptoms From Electronic Health Records.

    Science.gov (United States)

    Forsyth, Alexander W; Barzilay, Regina; Hughes, Kevin S; Lui, Dickson; Lorenz, Karl A; Enzinger, Andrea; Tulsky, James A; Lindvall, Charlotta

    2018-02-27

    Clinicians document cancer patients' symptoms in free-text format within electronic health record visit notes. Although symptoms are critically important to quality of life and often herald clinical status changes, computational methods to assess the trajectory of symptoms over time are woefully underdeveloped. To create machine learning algorithms capable of extracting patient-reported symptoms from free-text electronic health record notes. The data set included 103,564 sentences obtained from the electronic clinical notes of 2695 breast cancer patients receiving paclitaxel-containing chemotherapy at two academic cancer centers between May 1996 and May 2015. We manually annotated 10,000 sentences and trained a conditional random field model to predict words indicating an active symptom (positive label), absence of a symptom (negative label), or no symptom at all (neutral label). Sentences labeled by human coders were divided into training, validation, and test data sets. Final model performance was determined on the 20% of test data unused in model development or tuning. The final model achieved precision of 0.82, 0.86, and 0.99 and recall of 0.56, 0.69, and 1.00 for positive, negative, and neutral symptom labels, respectively. The most common positive symptoms were pain, fatigue, and nausea. Machine-based labeling of 103,564 sentences took two minutes. We demonstrate the potential of machine learning to gather, track, and analyze symptoms experienced by cancer patients during chemotherapy. Although our initial model requires further optimization to improve performance, further model building may yield machine learning methods suitable to be deployed in routine clinical care, quality improvement, and research applications. Copyright © 2018 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  8. Month of Conception and Learning Disabilities: A Record-Linkage Study of 801,592 Children.

    Science.gov (United States)

    Mackay, Daniel F; Smith, Gordon C S; Cooper, Sally-Ann; Wood, Rachael; King, Albert; Clark, David N; Pell, Jill P

    2016-10-01

    Learning disabilities have profound, long-lasting health sequelae. Affected children born over the course of 1 year in the United States of America generated an estimated lifetime cost of $51.2 billion. Results from some studies have suggested that autistic spectrum disorder may vary by season of birth, but there have been few studies in which investigators examined whether this is also true of other causes of learning disabilities. We undertook Scotland-wide record linkage of education (annual pupil census) and maternity (Scottish Morbidity Record 02) databases for 801,592 singleton children attending Scottish schools in 2006-2011. We modeled monthly rates using principal sine and cosine transformations of the month number and demonstrated cyclicity in the percentage of children with special educational needs. Rates were highest among children conceived in the first quarter of the year (January-March) and lowest among those conceived in the third (July-September) (8.9% vs 7.6%; P disabilities, and learning difficulties (e.g., dyslexia) and were absent for sensory or motor/physical impairments and mental, physical, or communication problems. Seasonality accounted for 11.4% (95% confidence interval: 9.0, 13.7) of all cases. Some biologically plausible causes of this variation, such as infection and maternal vitamin D levels, are potentially amenable to intervention. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
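    The "principal sine and cosine transformations of the month number" amount to fitting rate ~ b0 + b1*sin(2*pi*m/12) + b2*cos(2*pi*m/12) by least squares, from which the peak month and the amplitude of the seasonal swing can be read off. A sketch with hypothetical monthly rates chosen to peak for first-quarter conceptions (the study's own figures are not reproduced):

```python
import numpy as np

def fit_monthly_cyclicity(months, rates):
    """Least-squares fit of rate ~ b0 + b1*sin(2*pi*m/12) + b2*cos(2*pi*m/12),
    the principal sine/cosine transformation of the month number m."""
    m = np.asarray(months, dtype=float)
    X = np.column_stack([np.ones_like(m),
                         np.sin(2 * np.pi * m / 12),
                         np.cos(2 * np.pi * m / 12)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(rates, dtype=float), rcond=None)
    return beta, X @ beta  # coefficients and fitted monthly curve

months = np.arange(1, 13)
# Hypothetical rates (percent) peaking at month 2, amplitude 0.65 points.
rates = 8.25 + 0.65 * np.cos(2 * np.pi * (months - 2) / 12)
beta, fitted = fit_monthly_cyclicity(months, rates)
peak_month = int(np.argmax(fitted)) + 1
amplitude = float(np.hypot(beta[1], beta[2]))
```

    The seasonal amplitude is sqrt(b1^2 + b2^2) and the phase of the peak follows from atan2(b1, b2); testing b1 = b2 = 0 is the usual check for cyclicity.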

  9. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional-quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  10. Vocal Noise Cancellation From Respiratory Sounds

    National Research Council Canada - National Science Library

    Moussavi, Zahra

    2001-01-01

    Although background noise cancellation for speech or electrocardiographic recordings is well established, when the background noise contains vocal noises and the main signal is a breath sound...

  11. Contributions of Letter-Speech Sound Learning and Visual Print Tuning to Reading Improvement: Evidence from Brain Potential and Dyslexia Training Studies

    Directory of Open Access Journals (Sweden)

    Gorka Fraga González

    2017-01-01

    Full Text Available We use a neurocognitive perspective to discuss the contribution of learning letter-speech sound (L-SS) associations and visual specialization in the initial phases of reading in dyslexic children. We review findings from associative learning studies on related cognitive skills important for establishing and consolidating L-SS associations. Then we review brain potential studies, including our own, that yielded two markers associated with reading fluency. Here we show that the marker related to visual specialization (N170) predicts word and pseudoword reading fluency in children who received additional practice in the processing of morphological word structure. Conversely, L-SS integration (indexed by mismatch negativity, MMN) may only remain important when direct orthography to semantic conversion is not possible, such as in pseudoword reading. In addition, the correlation between these two markers supports the notion that multisensory integration facilitates visual specialization. Finally, we review the role of implicit learning and executive functions in audiovisual learning in dyslexia. Implications for remedial research are discussed and suggestions for future studies are presented.

  12. Sound-Making Actions Lead to Immediate Plastic Changes of Neuromagnetic Evoked Responses and Induced β-Band Oscillations during Perception.

    Science.gov (United States)

    Ross, Bernhard; Barat, Masihullah; Fujioka, Takako

    2017-06-14

    Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press. We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning. SIGNIFICANCE STATEMENT While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association

  13. Heart sounds analysis using probability assessment.

    Science.gov (United States)

    Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P

    2017-07-31

    This paper describes a method for automated discrimination of heart sounds recordings according to the Physionet Challenge 2016. The goal was to decide if the recording refers to normal or abnormal heart sounds or if it is not possible to decide (i.e. 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.
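    The first step described above, detecting S1 and S2 from amplitude envelopes in the 15-90 Hz band, can be sketched as follows. This is a simplified, assumed version rather than the authors' pipeline; the peak-picking threshold and minimum gap are illustrative choices:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def band_envelope(x, fs, lo=15.0, hi=90.0, order=4):
    """Amplitude envelope of the 15-90 Hz band of a heart sound recording,
    via zero-phase band-pass filtering and the Hilbert transform."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def detect_heart_sounds(x, fs, min_gap_s=0.2):
    """Locate candidate S1/S2 events as envelope peaks above half the
    maximum, separated by at least min_gap_s seconds."""
    env = band_envelope(x, fs)
    peaks, _ = find_peaks(env, height=0.5 * env.max(),
                          distance=int(min_gap_s * fs))
    return peaks / fs  # event times in seconds
```

    Once S1/S2 candidates are located, the averaged S1/S2 shape can be built by repeating the envelope computation in the other four bands and time-aligning segments around each detected event.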

  14. Fluid Sounds

    DEFF Research Database (Denmark)

    Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970s, developed into various established research and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field, embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation...

  15. Integration of a mobile-integrated therapy with electronic health records: lessons learned.

    Science.gov (United States)

    Peeples, Malinda M; Iyer, Anand K; Cohen, Joshua L

    2013-05-01

    Responses to the chronic disease epidemic have, to date, predominantly been standardized in their approach. Barriers to better health outcomes remain, and effective management requires that patient-specific data and disease-state knowledge be presented in ways that foster clinical decision-making and patient self-management. Mobile technology provides a new platform for data collection and patient-provider communication. The mobile device represents a personalized platform that is available to the patient on a 24/7 basis. Mobile-integrated therapy (MIT) is the convergence of mobile technology, clinical and behavioral science, and scientifically validated clinical outcomes. In this article, we highlight the lessons learned from the functional integration of a Food and Drug Administration-cleared type 2 diabetes MIT into the electronic health record (EHR) of a multiphysician practice within a large, urban, academic medical center. In-depth interviews were conducted with integration stakeholder groups: mobile and EHR software and information technology teams, clinical end users, project managers, and business analysts. Interviews were summarized and categorized into lessons learned using the Architecture for Integrated Mobility® framework. Findings from the diverse stakeholder group of a MIT-EHR integration project indicate that user workflow, software system persistence, environment configuration, device connectivity and security, organizational processes, and data exchange heuristics are key issues that must be addressed. Mobile-integrated therapy that integrates patient self-management data with medical record data provides the opportunity to understand the potential benefits of bidirectional data sharing and reporting that are most valuable in advancing better health and better care in a cost-effective way that is scalable for all chronic diseases. © 2013 Diabetes Technology Society.

  16. A machine learning-based framework to identify type 2 diabetes through electronic health records.

    Science.gov (United States)

    Zheng, Tao; Xie, Wei; Xu, Liling; He, Xiaoying; Zhang, Ya; You, Mingrong; Yang, Gong; Chen, You

    2017-01-01

    To discover diverse genotype-phenotype associations affiliated with Type 2 Diabetes Mellitus (T2DM) via genome-wide association studies (GWAS) and phenome-wide association studies (PheWAS), more cases (T2DM subjects) and controls (subjects without T2DM) need to be identified (e.g., via Electronic Health Records (EHR)). However, existing expert-based identification algorithms often suffer from a low recall rate and can miss a large number of valuable samples under conservative filtering standards. The goal of this work is to develop a semi-automated, machine learning-based framework, as a pilot study, to liberalize the filtering criteria and improve the recall rate while keeping the false positive rate low. We propose a data-informed framework for identifying subjects with and without T2DM from EHR via feature engineering and machine learning. We evaluate and contrast the identification performance of widely used machine learning models within our framework, including k-Nearest-Neighbors, Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine and Logistic Regression. Our framework was evaluated on 300 patient samples (161 cases, 60 controls and 79 unconfirmed subjects), randomly selected from a 23,281-subject diabetes-related cohort retrieved from a regional distributed EHR repository covering 2012 to 2014. We apply the top-performing machine learning algorithms to the engineered features. We benchmark the accuracy, precision, AUC, sensitivity and specificity of the classification models against the state-of-the-art expert algorithm for identification of T2DM subjects. Our results indicate that the framework achieved high identification performance (∼0.98 average AUC), much higher than the state-of-the-art algorithm (0.71 AUC). Expert algorithm-based identification of T2DM subjects from EHR is often hampered by high missing rates due to conservative selection criteria. Our framework leverages machine learning and feature
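The model comparison described above can be sketched with scikit-learn. The synthetic features, class weights, and cross-validation setup below are stand-ins for the paper's engineered EHR features, not a reproduction of them.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for engineered EHR features (300 subjects, 20 features)
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           weights=[0.3, 0.7], random_state=0)

# the six model families named in the abstract
models = {
    "kNN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "LogReg": LogisticRegression(max_iter=1000),
}

# 5-fold cross-validated AUC for each model
aucs = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
        for name, m in models.items()}
```

Ranking the models by `aucs` mirrors the paper's benchmarking step, though the absolute numbers here depend entirely on the synthetic data.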

  17. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in public housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  18. Nuclear sound

    International Nuclear Information System (INIS)

    Wambach, J.

    1991-01-01

    Nuclei, like more familiar mechanical systems, undergo simple vibrational motion. Among these vibrations, sound modes are of particular interest since they reveal important information on the effective interactions among the constituents and, through extrapolation, on the bulk behaviour of nuclear and neutron matter. Sound wave propagation in nuclei shows strong quantum effects familiar from other quantum systems. Microscopic theory suggests that the restoring forces are caused by the complex structure of the many-Fermion wavefunction and, in some cases, have no classical analogue. The damping of the vibrational amplitude is strongly influenced by phase coherence among the particles participating in the motion. (author)

  19. Automatic bad channel detection in intracranial electroencephalographic recordings using ensemble machine learning.

    Science.gov (United States)

    Tuyisenge, Viateur; Trebaul, Lena; Bhattacharjee, Manik; Chanteloup-Forêt, Blandine; Saubat-Guigui, Carole; Mîndruţă, Ioana; Rheims, Sylvain; Maillard, Louis; Kahane, Philippe; Taussig, Delphine; David, Olivier

    2018-03-01

    Intracranial electroencephalographic (iEEG) recordings contain "bad channels", which show non-neuronal signals. Here, we developed a new method that automatically detects iEEG bad channels using machine learning of seven signal features. The features quantified the signals' variance, spatial-temporal correlation and nonlinear properties. Because the number of bad channels is usually much lower than the number of good channels, we implemented an ensemble bagging classifier, known to be optimal in terms of stability and predictive accuracy for datasets with imbalanced class distributions. This method was applied to stereo-electroencephalographic (SEEG) signals recorded during low-frequency stimulations performed in 206 patients from 5 clinical centers. We found that the classification accuracy was extremely good: it increased with the number of subjects used to train the classifier and reached a plateau at 99.77% for 110 subjects. The classification performance was thus not impacted by the multicentric nature of the data. The proposed method to automatically detect bad channels demonstrated convincing results and can be envisaged for use on larger datasets for automatic quality control of iEEG data. This is the first method proposed to classify bad channels in iEEG and should help improve data selection when reviewing iEEG signals. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
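The core idea, bagged decision trees applied to a small feature vector with heavily imbalanced classes, can be sketched as below. The seven features are replaced by synthetic stand-ins (shifted Gaussians), so this illustrates the classifier setup rather than the authors' feature extraction.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# synthetic stand-in for the 7 signal features; ~5% of channels are "bad"
good = rng.normal(0.0, 1.0, size=(950, 7))
bad = rng.normal(3.0, 1.5, size=(50, 7))          # bad channels: shifted statistics
X = np.vstack([good, bad])
y = np.array([0] * 950 + [1] * 50)

# stratified split preserves the imbalanced class ratio in both sets
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# bagged decision trees (scikit-learn's default base estimator)
clf = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Bagging resamples the training set for each tree, which stabilizes predictions when one class is rare; on real iEEG features the margin between classes would of course be far less clean than in this toy example.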

  20. An Arduino project to record ground motion and to learn on earthquake hazard at high school

    Science.gov (United States)

    Saraò, Angela; Barnaba, Carla; Clocchiatti, Marco; Zuliani, David

    2015-04-01

    Through multidisciplinary work integrating technology education with Earth sciences, we implemented an educational program to raise students' awareness of seismic hazard and to disseminate good practices of earthquake safety. Using free software and low-cost open hardware, the students of a senior class of the high school Liceo Paschini in Tolmezzo (NE Italy) built a seismograph using the Arduino open-source electronics platform and ADXL345 sensors to emulate a low-cost seismometer (e.g. the O-NAVI sensor of the Quake-Catcher Network, http://qcn.stanford.edu). To accomplish their task, the students were directed to use web resources for technical support and troubleshooting. Shell scripts, running on local computers under Linux OS, controlled the recording and display of the data. The main part of the experiment was documented in DokuWiki style. Some propaedeutic lessons in computer science and electronics were needed to build up the necessary skills of the students and to fill the gaps in their background knowledge. In addition, lectures by seismologists and laboratory activity allowed the class to explore different aspects of the physics of earthquakes, particularly of seismic waves, and to become familiar with the topic of seismic hazard through inquiry-based learning. The Arduino seismograph can be used for educational purposes and can display tremors on the school's local network. It can certainly record the ground motion due to a seismic event occurring in the area, but further improvements are necessary for a quantitative analysis of the recorded signals.
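A rough sketch of the downstream processing such a device might feed. The counts-per-g scale is an assumed figure for the ADXL345's full-resolution mode (~3.9 mg/LSB), and the STA/LTA trigger is a standard seismological event detector not mentioned in the abstract; neither reflects the students' actual scripts.

```python
import numpy as np

COUNTS_PER_G = 256.0   # assumed ADXL345 full-resolution scale (~3.9 mg/LSB)

def counts_to_g(raw):
    """Convert raw ADXL345 integer readings to acceleration in g."""
    return np.asarray(raw, dtype=float) / COUNTS_PER_G

def trailing_avg(e, n):
    """Causal moving average with a growing window at the start."""
    c = np.concatenate(([0.0], np.cumsum(e)))
    i = np.arange(1, e.size + 1)
    lo = np.maximum(i - n, 0)
    return (c[i] - c[lo]) / (i - lo)

def sta_lta(x, n_sta, n_lta):
    """Short-term/long-term average energy ratio, a classic event detector."""
    e = x ** 2
    return trailing_avg(e, n_sta) / np.maximum(trailing_avg(e, n_lta), 1e-12)

g = counts_to_g([256, -128, 0])                       # 1.0 g, -0.5 g, 0.0 g

# toy record: one minute at 50 Hz, quiet background plus a 2 s "event"
fs = 50
rng = np.random.default_rng(2)
accel = 0.005 * rng.standard_normal(fs * 60)          # background noise, in g
event_t = np.arange(fs * 2) / fs
accel[fs * 30 : fs * 32] += 0.2 * np.sin(2 * np.pi * 5 * event_t)

ratio = sta_lta(accel, n_sta=fs, n_lta=fs * 10)       # 1 s STA, 10 s LTA
triggered = ratio > 5.0                               # declare an event
```

The causal averages matter: at the event onset the long-term window still reflects the quiet background, so the ratio spikes well above the threshold.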

  1. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in public housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice...

  2. Second Sound

    Indian Academy of Sciences (India)

    Second Sound - The Role of Elastic Waves. R Srinivasan. General Article, Resonance – Journal of Science Education, Volume 4, Issue 6, June 1999, pp. 15-19. Permanent link: https://www.ias.ac.in/article/fulltext/reso/004/06/0015-0019 ...

  3. Airspace: Antarctic Sound Transmission

    OpenAIRE

    Polli, Andrea

    2009-01-01

    This paper investigates how sound transmission can contribute to the public understanding of climate change within the context of the Poles. How have such transmission-based projects developed specifically in the Arctic and Antarctic, and how do these works create alternative pathways in order to help audiences better understand climate change? The author has created the media project Sonic Antarctica from a personal experience of the Antarctic. The work combines soundscape recordings and son...

  4. Little Sounds

    Directory of Open Access Journals (Sweden)

    Baker M. Bani-Khair

    2017-10-01

    Full Text Available The Spider and the Fly   You little spider, To death you aspire... Or seeking a web wider, To death all walking, No escape you all fighters… Weak and fragile in shape and might, Whatever you see in the horizon, That is destiny whatever sight. And tomorrow the spring comes, And the flowers bloom, And the grasshopper leaps high, And the frogs happily cry, And the flies smile nearby, To that end, The spider has a plot, To catch the flies by his net, A mosquito has fallen down in his net, Begging him to set her free, Out of that prison, To her freedom she aspires, Begging...Imploring...crying,  That is all what she requires, But the spider vows never let her free, His power he admires, Turning blind to light, And with his teeth he shall bite, Leaving her in desperate might, Unable to move from site to site, Tied up with strings in white, Wrapped up like a dead man, Waiting for his grave at night,   The mosquito says, Oh little spider, A stronger you are than me in power, But listen to my words before death hour, Today is mine and tomorrow is yours, No escape from death... Whatever the color of your flower…     Little sounds The Ant The ant is a little creature with a ferocious soul, Looking and looking for more and more, You can simply crush it like dead mold, Or you can simply leave it alone, I wonder how strong and strong they are! Working day and night in a small hole, Their motto is work or whatever you call… A big boon they have and joy in fall, Because they found what they store, A lesson to learn and memorize all in all, Work is something that you should not ignore!   The butterfly: I’m the butterfly Beautiful like a blue clear sky, Or sometimes look like snow, Different in colors, shapes and might, But something to know that we always die, So fragile, weak and thin, Lighter than a glimpse and delicate as light, Something to know for sure… Whatever you have in life and all these fields, You are not happier than a butterfly

  5. Video-recorded simulated patient interactions: can they help develop clinical and communication skills in today's learning environment?

    Science.gov (United States)

    Seif, Gretchen A; Brown, Debora

    2013-01-01

    It is difficult to provide real-world learning experiences for students to master clinical and communication skills. The purpose of this paper is to describe a novel instructional method using self- and peer-assessment, reflection, and technology to help students develop effective interpersonal and clinical skills. The teaching method is described by the constructivist learning theory and incorporates the use of educational technology. The learning activities were incorporated into the pre-clinical didactic curriculum. The students participated in two video-recording assignments and performed self-assessments on each and had a peer-assessment on the second video-recording. The learning activity was evaluated through the self- and peer-assessments and an instructor-designed survey. This evaluation identified several themes related to the assignment, student performance, clinical behaviors and establishing rapport. Overall the students perceived that the learning activities assisted in the development of clinical and communication skills prior to direct patient care. The use of video recordings of a simulated history and examination is a unique learning activity for preclinical PT students in the development of clinical and communication skills.

  6. Sound Science

    Science.gov (United States)

    Sickel, Aaron J.; Lee, Michele H.; Pareja, Enrique M.

    2010-01-01

    How can a teacher simultaneously teach science concepts through inquiry while helping students learn about the nature of science? After pondering this question in their own teaching, the authors developed a 5E learning cycle lesson (Bybee et al. 2006) that concurrently embeds opportunities for fourth-grade students to (a) learn a science concept,…

  7. PREFACE: Aerodynamic sound Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates, as is well known, from Lighthill's two papers of 1952 and 1954. I have heard that Lighthill was motivated to write the papers by the jet noise emitted by the newly commercialized jet-engined airplanes of that time. The technology of aerodynamic sound is ultimately directed at environmental problems, and the theory should therefore always be applied to newly emerging public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University organized the first one. Most of the Japanese authors in this issue are members of the annual symposium, and I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium, and I would like to thank the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays and is expected to reform the theoretical model of the generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research: they apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound and is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the

  8. Automatically quantifying the scientific quality and sensationalism of news records mentioning pandemics: validating a maximum entropy machine-learning model.

    Science.gov (United States)

    Hoffman, Steven J; Justicz, Victoria

    2016-07-01

    To develop and validate a method for automatically quantifying the scientific quality and sensationalism of individual news records. After retrieving 163,433 news records mentioning the Severe Acute Respiratory Syndrome (SARS) and H1N1 pandemics, a maximum entropy model for inductive machine learning was used to identify relationships among 500 randomly sampled news records that correlated with systematic human assessments of their scientific quality and sensationalism. These relationships were then computationally applied to automatically classify 10,000 additional randomly sampled news records. The model was validated by randomly sampling 200 records and comparing human assessments of them to the computer assessments. The computer model correctly assessed the relevance of 86% of news records, the quality of 65% of records, and the sensationalism of 73% of records, as compared to human assessments. Overall, the scientific quality of SARS and H1N1 news media coverage had potentially important shortcomings, but the coverage was not overly sensationalized, and it slightly improved between the two pandemics. Automated methods can evaluate news records faster, cheaper, and possibly better than humans. The specific procedure implemented in this study can at the very least identify subsets of news records that are far more likely to have particular scientific and discursive qualities. Copyright © 2016 Elsevier Inc. All rights reserved.
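A maximum entropy text classifier is equivalent to (multinomial) logistic regression over word features, so the pipeline can be sketched as below. The snippets and labels are hypothetical, and plain word counts stand in for whatever features the study actually used.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# hypothetical snippets hand-labelled for sensationalism (1) vs. not (0)
texts = [
    "deadly outbreak sparks panic as virus spreads unchecked",
    "terrifying new strain could kill millions, experts fear",
    "health officials report new influenza cases in three regions",
    "study finds vaccine reduces transmission in clinical trial",
    "killer pandemic threatens to overwhelm hospitals everywhere",
    "agency publishes updated guidance on respiratory infections",
]
labels = [1, 1, 0, 0, 1, 0]

# multinomial logistic regression is the standard maximum entropy classifier
model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

pred = model.predict(["experts fear deadly virus panic"])[0]
```

In the study, a model of this kind was trained on 500 human-rated records and then applied to classify the remaining corpus automatically.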

  9. Sound Visualisation

    OpenAIRE

    Dolenc, Peter

    2013-01-01

    This thesis describes the construction of a subwoofer case with the added ability to produce special visual effects and display visualizations that match the currently playing sound. For this purpose, multiple lighting elements made of LEDs (light-emitting diodes) were installed on the subwoofer case. The lighting elements are controlled by dedicated software, which was also developed and runs on an STM32F4-Discovery evaluation board inside a ...

  10. Presentations and recorded keynotes of the First European Workshop on Latent Semantic Analysis in Technology Enhanced Learning

    NARCIS (Netherlands)

    Several

    2007-01-01

    Presentations and recorded keynotes at the 1st European Workshop on Latent Semantic Analysis in Technology-Enhanced Learning, March, 29-30, 2007. Heerlen, The Netherlands: The Open University of the Netherlands. Please see the conference website for more information:

  11. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  12. Effects of Sound, Vocabulary, and Grammar Learning Aptitude on Adult Second Language Speech Attainment in Foreign Language Classrooms

    Science.gov (United States)

    Saito, Kazuya

    2017-01-01

    This study examines the relationship between different types of language learning aptitude (measured via the LLAMA test) and adult second language (L2) learners' attainment in speech production in English-as-a-foreign-language (EFL) classrooms. Picture descriptions elicited from 50 Japanese EFL learners from varied proficiency levels were analyzed…

  13. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  14. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    During the recording of lung sound (LS) signals from the chest wall of a subject, heart sound (HS) signals always interfere with them. This obscures the features of the lung sound signals and creates confusion about any pathological states of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference in the desired lung sound signals. In this method, the mixed signal is split into several components. Some of these components contain larger proportions of interfering signals, such as heart sounds and environmental noise, and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart and lung sounds. The proposed method is found to be superior in terms of time-domain, frequency-domain, and time-frequency-domain representations, and also in a listening test performed by a pulmonologist.
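The decomposition step can be illustrated with a bare-bones sifting routine on a synthetic two-component mixture. This is a didactic sketch of EMD, not the authors' algorithm: a production implementation would use a full EMD library, proper stopping criteria, and careful boundary handling.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x):
    """One sifting pass: subtract the mean of the extrema envelopes."""
    t = np.arange(x.size)
    maxi = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    mini = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if maxi.size < 4 or mini.size < 4:
        return None                               # too few extrema to continue
    upper = CubicSpline(maxi, x[maxi])(t)         # envelope through maxima
    lower = CubicSpline(mini, x[mini])(t)         # envelope through minima
    return x - (upper + lower) / 2.0

def first_imf(x, n_sifts=8):
    """Extract the first (fastest) intrinsic mode function by repeated sifting."""
    h = x
    for _ in range(n_sifts):
        nxt = sift(h)
        if nxt is None:
            break
        h = nxt
    return h

# toy mixture: a fast "lung-like" component plus a slow "heart-like" one
fs = 1000
t = np.arange(0, 2, 1 / fs)
fast = 0.5 * np.sin(2 * np.pi * 150 * t)     # stands in for lung sound
slow = 1.0 * np.sin(2 * np.pi * 2 * t)       # stands in for heart interference
mixed = fast + slow

imf1 = first_imf(mixed)        # fastest oscillation ends up in the first IMF
residue = mixed - imf1         # slower interference is left in the residue
```

Discarding the components dominated by the interference (here, the residue) and keeping the rest is the filtering idea the paper builds on.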

  15. The effects of prenatal sound stress on the spatial learning and memory of rat's male offspring

    Directory of Open Access Journals (Sweden)

    Barzegar M

    2011-01-01

    Full Text Available Background: Numerous lines of evidence indicate that various environmental stresses during pregnancy affect the physiological behavior of the offspring. This experimental study was designed to investigate the effect of noise stress during the prenatal period of rats on spatial learning and memory and on plasma corticosterone levels in postnatal life. Methods: Three groups of pregnant rats were given daily noise stress with durations of two and/or four hours in the last week of the pregnancy period. The fourth group was left unstressed. The male offspring from the unstressed and stressed groups were assigned as the control and stressed groups. The animals were introduced to a spatial task in the Morris water maze, 4 trials/day for five consecutive days. The probe test was performed on the 5th day of the experiment. The latency and the distance travelled to locate the target platform were assessed as measures of spatial learning. Results: Our results showed that prenatal exposure to noise stress for two and/or four hours a day leads to impaired acquisition of spatial learning in the postnatal animals. The plasma level of corticosterone in the two stressed groups of rats markedly matched their behavioral performance. Prenatal exposure to 1-hour noise stress revealed no effects on the offspring's behavior or plasma corticosterone level. Conclusion: Based on our study results, it seems that the applied range of stress, executed through noise, could increase the plasma corticosterone level and

  16. Monitoring Student Immunization, Screening, and Training Records for Clinical Compliance: An Innovative Use of the Institutional Learning Management System.

    Science.gov (United States)

    Elting, Julie Kientz

    2017-12-13

    Clinical compliance for nursing students is a complex process requiring them to meet facility employee occupational health requirements for immunization, screening, and training prior to patient contact. Nursing programs monitor clinical compliance with in-house management of student records, either paper or electronic, or by contracting with a vendor specializing in online record tracking. Regardless of the method, the nursing program remains fully accountable for student preparation and bears the consequences of errors. This article describes how the institution's own learning management system can be used as an accurate, cost-neutral, user-friendly, and Family Educational Rights and Privacy Act-compliant clinical compliance system.

  17. Sound knowledge

    DEFF Research Database (Denmark)

    Kauffmann, Lene Teglhus

    The aim of the research is to investigate what is considered to 'work as evidence' in health promotion and how the 'evidence discourse' influences social practices in policymaking and in research. I chose 'health promotion' as the field for my research as it utilises knowledge produced in several research disciplines, among these both quantitative and qualitative. I mapped out the institutions, actors, events, and documents that constituted the field of health... From investigating knowledge practices in the field of health promotion, I develop the concept of sound knowledge as knowledge based on reflexive practices. Evidence is commonly understood as the result of a rigorous and standardized research method. However, this anthropological analysis shows that evidence and evidence-based is a hegemonic 'way of knowing' that sometimes transposes everyday reasoning into an epistemological form. Yet the empirical material shows a variety of understandings...

  18. The sound and the fury--bees hiss when expecting danger.

    Directory of Open Access Journals (Sweden)

    Henja-Niniane Wehmann

    Full Text Available Honey bees are important model systems for the investigation of learning and memory and for a better understanding of the neuronal basis of brain function. Honey bees also possess a rich repertoire of tones and sounds, from queen piping and quacking to worker hissing and buzzing. In this study, we tested whether the worker bees' sounds can be used as a measure of learning. We therefore conditioned honey bees aversively to odours in a walking arena and recorded both their sound production and their movement. Bees were presented with two odours, one of which was paired with an electric shock. Initially, the bees did not produce any sound upon odour presentation, but responded to the electric shock with a strong hissing response. After learning, many bees hissed at the presentation of the learned odour, while fewer bees hissed upon presentation of another odour. We also found that hissing and movement away from the conditioned odour are independent behaviours that can co-occur but do not necessarily do so. Our data suggest that hissing can be used as a readout for learning after olfactory conditioning, but that there are large individual differences between bees concerning their hissing reaction. The basis for this variability and the possible ecological relevance of the bees' hissing remain to be investigated.

  19. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    Science.gov (United States)

    Młynarski, Wiktor

    2014-01-01

    To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform—Independent Component Analysis (ICA) trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644

  20. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    Directory of Open Access Journals (Sweden)

    Wiktor eMlynarski

    2014-03-01

    Full Text Available To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform - Independent Component Analysis (ICA) trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment.
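
    The core claim above, that a linear transform of binaural spectrogram features carries decodable spatial information, can be illustrated with a far simpler toy model. The sketch below is not the paper's ICA pipeline: it assumes an invented interaural level difference (ILD) cue with a hypothetical scaling constant `K_ILD`, and fits a one-parameter linear decoder by least squares.

```python
import math
import random

# Toy sketch (not the paper's ICA model): binaural level cues carry spatial
# information that a simple linear transform can decode. We simulate noisy
# interaural level differences (ILD, in dB) for sources at known azimuths,
# then fit a zero-intercept linear decoder ILD -> sin(azimuth).

random.seed(0)
K_ILD = 10.0  # assumed dB scaling of ILD with sin(azimuth); illustrative only

def simulate_ild(azimuth_deg, noise_db=0.5):
    """Return a noisy ILD observation for a source at the given azimuth."""
    return K_ILD * math.sin(math.radians(azimuth_deg)) + random.gauss(0.0, noise_db)

# Training set: sources at known azimuths from -80 to +80 degrees
azimuths = list(range(-80, 81, 10))
features = [simulate_ild(a) for a in azimuths]
targets = [math.sin(math.radians(a)) for a in azimuths]

# Least-squares slope for the one-parameter decoder
slope = sum(f * t for f, t in zip(features, targets)) / sum(f * f for f in features)

def decode_azimuth(ild_db):
    """Decode azimuth (degrees) from a single ILD observation."""
    s = max(-1.0, min(1.0, slope * ild_db))
    return math.degrees(math.asin(s))

print(round(decode_azimuth(simulate_ild(30.0))))  # close to 30 for low noise
```

    The learned slope recovers the assumed 1/K_ILD scaling; in the paper the analogous unmixing is learned unsupervised over full spectrograms rather than fitted to known positions.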

  1. The Sound and the Fury—Bees Hiss when Expecting Danger

    Science.gov (United States)

    Galizia, C. Giovanni

    2015-01-01

    Honey bees are important model systems for the investigation of learning and memory and for a better understanding of the neuronal basics of brain function. Honey bees also possess a rich repertoire of tones and sounds, from queen piping and quacking to worker hissing and buzzing. In this study, we tested whether the worker bees’ sounds can be used as a measure of learning. We therefore conditioned honey bees aversively to odours in a walking arena and recorded both their sound production and their movement. Bees were presented with two odours, one of which was paired with an electric shock. Initially, the bees did not produce any sound upon odour presentation, but responded to the electric shock with a strong hissing response. After learning, many bees hissed at the presentation of the learned odour, while fewer bees hissed upon presentation of another odour. We also found that hissing and movement away from the conditioned odour are independent behaviours that can co-occur but do not necessarily do so. Our data suggest that hissing can be used as a readout for learning after olfactory conditioning, but that there are large individual differences between bees concerning their hissing reaction. The basis for this variability and the possible ecological relevance of the bees’ hissing remain to be investigated. PMID:25747702

  2. The sound and the fury--bees hiss when expecting danger.

    Science.gov (United States)

    Wehmann, Henja-Niniane; Gustav, David; Kirkerud, Nicholas H; Galizia, C Giovanni

    2015-01-01

    Honey bees are important model systems for the investigation of learning and memory and for a better understanding of the neuronal basics of brain function. Honey bees also possess a rich repertoire of tones and sounds, from queen piping and quacking to worker hissing and buzzing. In this study, we tested whether the worker bees' sounds can be used as a measure of learning. We therefore conditioned honey bees aversively to odours in a walking arena and recorded both their sound production and their movement. Bees were presented with two odours, one of which was paired with an electric shock. Initially, the bees did not produce any sound upon odour presentation, but responded to the electric shock with a strong hissing response. After learning, many bees hissed at the presentation of the learned odour, while fewer bees hissed upon presentation of another odour. We also found that hissing and movement away from the conditioned odour are independent behaviours that can co-occur but do not necessarily do so. Our data suggest that hissing can be used as a readout for learning after olfactory conditioning, but that there are large individual differences between bees concerning their hissing reaction. The basis for this variability and the possible ecological relevance of the bees' hissing remain to be investigated.

  3. The Perception of Sounds in Phonographic Space

    DEFF Research Database (Denmark)

    Walther-Hansen, Mads

    This thesis is about the perception of space in recorded music, with particular reference to stereo recordings of popular music. It explores how sound engineers create imaginary musical environments in which sounds appear to listeners in different ways. It also investigates some of the conditions...... The third chapter examines how listeners understand and make sense of phonographic space. In the form of a critique of Pierre Schaeffer and Roger Scruton’s notion of the acousmatic situation, I argue that our experience of recorded music has a twofold focus: the sound-in-itself and the sound’s causality...... the use of metaphors and image schemas in the experience and conceptualisation of phonographic space. With reference to descriptions of recordings by sound engineers, I argue that metaphors are central to our understanding of recorded music. This work is grounded in the tradition of cognitive linguistics......

  4. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...

  5. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
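
    The rendering step described above, convolving a sound with a pair of head-related transfer functions, can be sketched with crude stand-in impulse responses. The HRIRs below are not measured ones: they model only an assumed interaural time and level difference for a source to the listener's right.

```python
# Minimal sketch of binaural rendering: a Delta sound (click) convolved with a
# left/right pair of head-related impulse responses (HRIRs). The HRIRs here are
# invented stand-ins (a pure delay plus gain), not measured transfer functions.

def convolve(signal, kernel):
    """Plain full convolution of two sample lists."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

fs = 44100                      # sampling frequency used in the study (44.1 kHz)
click = [1.0] + [0.0] * 9       # Delta sound: a single unit impulse

# Source to the listener's right: the left ear gets a later, quieter copy.
itd_samples = 30                # ~0.68 ms interaural time difference (assumed)
hrir_right = [0.9]
hrir_left = [0.0] * itd_samples + [0.5]

left = convolve(click, hrir_left)
right = convolve(click, hrir_right)

# The right-ear peak arrives itd_samples earlier than the left-ear peak
print(right.index(max(right)), left.index(max(left)))
```

    Played over headphones, such interaural delay and level cues are what lets listeners in the experiment assign a lateral position to the source.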

  6. Pre-attentive sensitivity to vowel duration reveals native phonology and predicts learning of second-language sounds.

    Science.gov (United States)

    Chládková, Kateřina; Escudero, Paola; Lipski, Silvia C

    2013-09-01

    In some languages (e.g. Czech), changes in vowel duration affect word meaning, while in others (e.g. Spanish) they do not. Yet for other languages (e.g. Dutch), the linguistic role of vowel duration remains unclear. To reveal whether Dutch represents vowel length in its phonology, we compared auditory pre-attentive duration processing in native and non-native vowels across Dutch, Czech, and Spanish. Dutch duration sensitivity patterned with Czech but was larger than Spanish in the native vowel, while it was smaller than Czech and Spanish in the non-native vowel. An interpretation of these findings suggests that in Dutch, duration is used phonemically but it might be relevant for the identity of certain native vowels only. Furthermore, the finding that Spanish listeners are more sensitive to duration in non-native than in native vowels indicates that a lack of duration differences in one's native language could be beneficial for second-language learning. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Sound Synthesis of Objects Swinging through Air Using Physical Models

    Directory of Open Access Journals (Sweden)

    Rod Selfridge

    2017-11-01

    Full Text Available A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.

  8. Developmental changes in brain activation involved in the production of novel speech sounds in children.

    Science.gov (United States)

    Hashizume, Hiroshi; Taki, Yasuyuki; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Asano, Kohei; Takeuchi, Hikaru; Nouchi, Rui; Kotozaki, Yuka; Jeong, Hyeonjeong; Sugiura, Motoaki; Kawashima, Ryuta

    2014-08-01

    Older children are more successful at producing unfamiliar, non-native speech sounds than younger children during the initial stages of learning. To reveal the neuronal underpinning of the age-related increase in the accuracy of non-native speech production, we examined the developmental changes in activation involved in the production of novel speech sounds using functional magnetic resonance imaging. Healthy right-handed children (aged 6-18 years) were scanned while performing an overt repetition task and a perceptual task involving aurally presented non-native and native syllables. Productions of non-native speech sounds were recorded and evaluated by native speakers. The mouth regions in the bilateral primary sensorimotor areas were activated more significantly during the repetition task relative to the perceptual task. The hemodynamic response in the left inferior frontal gyrus pars opercularis (IFG pOp) specific to non-native speech sound production (defined by prior hypothesis) increased with age. Additionally, the accuracy of non-native speech sound production increased with age. These results provide the first evidence of developmental changes in the neural processes underlying the production of novel speech sounds. Our data further suggest that the recruitment of the left IFG pOp during the production of novel speech sounds was possibly enhanced due to the maturation of the neuronal circuits needed for speech motor planning. This, in turn, would lead to improvement in the ability to immediately imitate non-native speech. Copyright © 2014 Wiley Periodicals, Inc.

  9. Searching for learning-dependent changes in the antennal lobe: simultaneous recording of neural activity and aversive olfactory learning in honeybees

    Directory of Open Access Journals (Sweden)

    Edith Roussel

    2010-09-01

    Full Text Available Plasticity in the honeybee brain has been studied using the appetitive olfactory conditioning of the proboscis extension reflex, in which a bee learns the association between an odor and a sucrose reward. In this framework, coupling behavioral measurements of proboscis extension and invasive recordings of neural activity has been difficult because proboscis movements usually introduce brain movements that affect physiological preparations. Here we took advantage of a new conditioning protocol, the aversive olfactory conditioning of the sting extension reflex, which does not generate this problem. We achieved the first simultaneous recordings of conditioned sting extension responses and calcium imaging of antennal lobe activity, thus revealing on-line processing of olfactory information during conditioning trials. Based on behavioral output we distinguished learners and non-learners and analyzed possible learning-dependent changes in antennal lobe activity. We did not find differences between glomerular responses to the CS+ and the CS- in learners. Unexpectedly, we found that during conditioning trials non-learners exhibited a progressive decrease in physiological responses to odors, irrespective of their valence. This effect could neither be attributed to a fitness problem nor to abnormal dye bleaching. We discuss the absence of learning-induced changes in the antennal lobe of learners and the decrease in calcium responses found in non-learners. Further studies will have to extend the search for functional plasticity related to aversive learning to other brain areas and to look at a broader range of temporal scales.

  10. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  11. Experimental analysis of the performance of machine learning algorithms in the classification of navigation accident records

    Directory of Open Access Journals (Sweden)

    REIS, M V. S. de A.

    2017-06-01

    Full Text Available This paper aims to evaluate the use of machine learning techniques in a database of marine accidents. We analyzed and evaluated the main causes and types of marine accidents in the Northern Fluminense region. For this, machine learning techniques were used. The study showed that the modeling can be done in a satisfactory manner using different configurations of classification algorithms, varying the activation functions and training parameters. The SMO (Sequential Minimal Optimization) algorithm showed the best performance result.
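
    As a rough illustration of the classification task (not the paper's SMO/SVM setup, and on invented data), a minimal perceptron can separate toy accident records encoded as binary features:

```python
# Illustrative stand-in for the classification step: the paper used algorithms
# such as SMO (an SVM trainer); here a tiny perceptron separates invented
# accident records. Features and labels are hypothetical, not the paper's data.

# Features: [collision, grounding, fire, night, bad_weather]; label 1 = severe
records = [
    ([1, 0, 0, 1, 1], 1),
    ([1, 0, 0, 0, 0], 0),
    ([0, 1, 0, 1, 1], 1),
    ([0, 1, 0, 0, 0], 0),
    ([0, 0, 1, 1, 1], 1),
    ([0, 0, 1, 0, 0], 0),
]

weights = [0.0] * 5
bias = 0.0
for _ in range(20):  # a few perceptron epochs; the toy set is separable
    for x, y in records:
        pred = 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0
        err = y - pred
        weights = [w + 0.1 * err * v for w, v in zip(weights, x)]
        bias += 0.1 * err

accuracy = sum(
    (1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0) == y
    for x, y in records
) / len(records)
print(accuracy)  # 1.0 on this linearly separable toy set
```

    A real study would of course evaluate on held-out records; this only shows the shape of the learning loop.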

  12. US market. Sound below the line

    Energy Technology Data Exchange (ETDEWEB)

    Iken, Joern

    2012-07-01

    The American Wind Energy Association AWEA is publishing warnings almost daily. The lack of political support is endangering jobs. The year 2011 broke no records, but there was a sound plus in expansion figures. (orig.)

  13. Musical Sounds, Motor Resonance, and Detectable Agency

    Directory of Open Access Journals (Sweden)

    Jacques Launay

    2015-09-01

    Full Text Available This paper discusses the paradox that while human music making evolved and spread in an environment where it could only occur in groups, it is now often apparently an enjoyable asocial phenomenon. Here I argue that music is, by definition, sound that we believe has been in some way organized by a human agent, meaning that listening to any musical sounds can be a social experience. There are a number of distinct mechanisms by which we might associate musical sound with agency. While some of these mechanisms involve learning motor associations with that sound, it is also possible to have a more direct relationship from musical sound to agency, and the relative importance of these potentially independent mechanisms should be further explored. Overall, I conclude that the apparent paradox of solipsistic musical engagement is in fact unproblematic, because the way that we perceive and experience musical sounds is inherently social.

  14. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can
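
    The segmentation step, determining where sound events begin and end from changes in acoustic features, can be approximated far more crudely than with the dissertation's dynamic Bayesian network. The sketch below simply thresholds short-time energy on a synthetic signal; the frame length and threshold are arbitrary choices.

```python
import math

# Simplified stand-in for audio event segmentation: instead of a dynamic
# Bayesian network over acoustic features, threshold short-time energy to find
# where a synthetic "event" (a tone burst in silence) begins and ends.

fs = 8000
signal = [0.0] * fs                      # 1 s of silence
for n in range(2000, 5000):              # a 375 ms tone burst in the middle
    signal[n] = 0.5 * math.sin(2 * math.pi * 440 * n / fs)

frame = 80                               # 10 ms frames
energies = [
    sum(s * s for s in signal[i:i + frame]) / frame
    for i in range(0, len(signal), frame)
]
threshold = 0.01
active = [e > threshold for e in energies]

# Boundaries: frames where activity switches on or off
onsets = [i for i in range(1, len(active)) if active[i] and not active[i - 1]]
offsets = [i for i in range(1, len(active)) if not active[i] and active[i - 1]]
print(onsets, offsets)  # onset near frame 25 (~sample 2000), offset near frame 63
```

    Real environmental audio needs the richer feature tracking described above, since events rarely sit on a silent background.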

  15. Press learning: The potential of podcasting through pause, record, play and stop

    Directory of Open Access Journals (Sweden)

    Tara Brabazon

    2016-09-01

    Full Text Available Podcasts are entering their second decade. However, this article does not present a chronological narrative of this history or focus groups exploring their effectiveness. Instead, this paper probes the enlivening capacity of podcasting when inserted into the much wider discourse of sonic media. My research probes the impact on teaching and learning when cutting away four of our five senses to focus on auditory culture, sonic media, hearing and listening. This research shows the value of ‘blind listening,’ cutting away the eyes and visual literacy, to activate more complex modes of learning.

  16. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  17. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  18. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  19. Language Assessment of Latino English Learning Children: A Records Abstraction Study

    Science.gov (United States)

    Kraemer, Robert; Fabiano-Smith, Leah

    2017-01-01

    The researchers examined how speech-language pathologists (SLPs) in a small northern California school district assessed Spanish speaking English learning (EL) Latino children suspected of language impairments. Specifically we sought to (1) determine whether SLPs adhered to federal, state, and professional guidelines during initial assessments and…

  20. Record, Replay, Reflect: Videotaped Lessons Accelerate Learning for Teachers and Coaches

    Science.gov (United States)

    Knight, Jim; Bradley, Barbara A.; Hock, Michael; Skrtic, Thomas M.; Knight, David; Brasseur-Hock, Irma; Clark, Jean; Ruggles, Marilyn; Hatton, Carol

    2012-01-01

    New technologies can dramatically change the way people live and work. Jet engines transformed travel. Television revolutionized news and entertainment. Computers and the Internet have transformed just about everything else. And now small video cameras have the potential to transform professional learning. Recognizing the potential of this new…

  1. Machine Learning Approaches for Detecting Diabetic Retinopathy from Clinical and Public Health Records.

    Science.gov (United States)

    Ogunyemi, Omolola; Kermah, Dulcie

    2015-01-01

    Annual eye examinations are recommended for diabetic patients in order to detect diabetic retinopathy and other eye conditions that arise from diabetes. Medically underserved urban communities in the US have annual screening rates that are much lower than the national average and could benefit from informatics approaches to identify unscreened patients most at risk of developing retinopathy. Using clinical data from urban safety net clinics as well as public health data from the CDC's National Health and Nutrition Examination Survey, we examined different machine learning approaches for predicting retinopathy from clinical or public health data. All datasets utilized exhibited a class imbalance. Classifiers learned on the clinical data were modestly predictive of retinopathy with the best model having an AUC of 0.72, sensitivity of 69.2% and specificity of 55.9%. Classifiers learned on public health data were not predictive of retinopathy. Successful approaches to detecting latent retinopathy using machine learning could help safety net and other clinics identify unscreened patients who are most at risk of developing retinopathy and the use of ensemble classifiers on clinical data shows promise for this purpose.
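
    The AUC of 0.72 quoted above summarizes ranking quality in a way that is insensitive to the class imbalance the authors note. A minimal sketch of how AUC is computed from classifier scores (toy numbers, not the study's data): it is the fraction of positive/negative pairs the classifier ranks correctly.

```python
# AUC as pairwise ranking accuracy: the probability that a randomly chosen
# positive case outscores a randomly chosen negative case (ties count 0.5).

def auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Imbalanced toy set: few retinopathy-positive patients, many negatives
pos = [0.9, 0.7, 0.6]
neg = [0.8, 0.5, 0.4, 0.3, 0.2, 0.1]
print(auc(pos, neg))  # 16 of 18 pairs ranked correctly
```

    Unlike raw accuracy, this pairwise measure does not reward a classifier for simply predicting the majority (unscreened-negative) class.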

  2. Detecting the temporal structure of sound sequences in newborn infants

    NARCIS (Netherlands)

    Háden, G.P.; Honing, H.; Török, M.; Winkler, I.

    2015-01-01

    Most high-level auditory functions require one to detect the onset and offset of sound sequences as well as registering the rate at which sounds are presented within the sound trains. By recording event-related brain potentials to onsets and offsets of tone trains as well as to changes in the

  3. Real, foley or synthetic? An evaluation of everyday walking sounds

    DEFF Research Database (Denmark)

    Götzen, Amalia De; Sikström, Erik; Grani, Francesco

    2013-01-01

    in using foley sounds for a film track. In particular this work focuses on walking sounds: five different scenes of a walking person were video recorded and each video was then mixed with the three different kind of sounds mentioned above. Subjects were asked to recognise and describe the action performed...

  4. Review of sound card photogates

    International Nuclear Information System (INIS)

    Gingl, Zoltan; Mingesz, Robert; Mellar, Janos; Makra, Peter

    2011-01-01

    Photogates are probably the most commonly used electronic instruments to aid experiments in the field of mechanics. Although they are offered by many manufacturers, they can be too expensive to be widely used in all classrooms, in multiple experiments or even at home experimentation. Today all computers have a sound card - an interface for analogue signals. It is possible to make very simple yet highly accurate photogates for cents, while much more sophisticated solutions are also available at a still very low cost. In our paper we show several experimentally tested ways of implementing sound card photogates in detail, and we also provide full-featured, free, open-source photogate software as a much more efficient experimentation tool than the usually used sound recording programs. Further information is provided on a dedicated web page, www.noise.physx.u-szeged.hu/edudev.
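
    A sound-card photogate reduces timing to counting samples: the gate's light sensor drives the audio input, so a passing object shows up as a level drop in the recording, with a timing resolution of one sample period. The sketch below uses a synthetic recording and an assumed 2 cm flag width; a real measurement would read the samples from the sound card input.

```python
# Timing sketch for a sound-card photogate. The "recording" is synthetic:
# a high level while the beam is open, near zero while a flag blocks it.

fs = 44100                                             # sound-card sampling rate
gate_low = 0.05                                        # blocked-beam threshold

samples = [0.9] * 1000 + [0.0] * 441 + [0.9] * 1000    # beam blocked for 441 samples

blocked = [i for i, s in enumerate(samples) if s < gate_low]
duration_s = (blocked[-1] - blocked[0] + 1) / fs       # 441 / 44100 = 10 ms

flag_width_m = 0.02                                    # assumed 2 cm flag on the object
speed = flag_width_m / duration_s
print(round(duration_s * 1000, 2), round(speed, 2))    # 10.0 ms, 2.0 m/s
```

    At 44.1 kHz the timing granularity is about 23 microseconds, which is why even this cents-cheap arrangement can be highly accurate for mechanics experiments.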

  5. Cell Phone Video Recording Feature as a Language Learning Tool: A Case Study

    Science.gov (United States)

    Gromik, Nicolas A.

    2012-01-01

    This paper reports on a case study conducted at a Japanese national university. Nine participants used the video recording feature on their cell phones to produce weekly video productions. The task required that participants produce one 30-second video on a teacher-selected topic. Observations revealed the process of video creation with a cell…

  6. Computer-aided auscultation learning system for nursing technique instruction.

    Science.gov (United States)

    Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih

    2008-01-01

    Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a sound simulator equipped mannequin is used to group teach auscultation techniques via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies facilitates simulating this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. This system provides teachers with signal recording and processing of lung sounds and immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated for verifying the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.

  7. Chaotic dynamics of respiratory sounds

    International Nuclear Information System (INIS)

    Ahlstrom, C.; Johansson, A.; Hult, P.; Ask, P.

    2006-01-01

    There is a growing interest in nonlinear analysis of respiratory sounds (RS), but little has been done to justify the use of nonlinear tools on such data. The aim of this paper is to investigate the stationarity, linearity and chaotic dynamics of recorded RS. Two independent data sets from 8 + 8 healthy subjects were recorded and investigated. The first set consisted of lung sounds (LS) recorded with an electronic stethoscope and the other of tracheal sounds (TS) recorded with a contact accelerometer. Recurrence plot analysis revealed that both LS and TS are quasistationary, with the parts corresponding to inspiratory and expiratory flow plateaus being stationary. Surrogate data tests could not provide statistically sufficient evidence regarding the nonlinearity of the data. The null hypothesis could not be rejected in 4 out of 32 LS cases and in 15 out of 32 TS cases. However, the Lyapunov spectra, the correlation dimension (D_2) and the Kaplan-Yorke dimension (D_KY) all indicate chaotic behavior. The Lyapunov analysis showed that the sum of the exponents was negative in all cases and that the largest exponent was found to be positive. The results are partly ambiguous, but provide some evidence of chaotic dynamics of RS, both concerning LS and TS. The results motivate continuous use of nonlinear tools for analysing RS data.
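
    A positive largest Lyapunov exponent, the chaos indicator used above, can be estimated by averaging log |f'(x)| along an orbit. The sketch below applies this to the logistic map at r = 4, where the exact value ln 2 ≈ 0.693 is known; it illustrates the method only, not an analysis of respiratory sounds.

```python
import math

# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x) at r = 4,
# estimated as the orbit average of log |f'(x)| with f'(x) = r*(1 - 2x).
# The analytic value at r = 4 is ln 2.

r = 4.0
x = 0.3
total = 0.0
n = 100000
for _ in range(n):
    x = r * x * (1.0 - x)
    total += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'| along the orbit

lyapunov = total / n
print(round(lyapunov, 3))  # close to ln 2 ≈ 0.693
```

    For measured signals such as lung sounds there is no known map f, so the exponent must instead be estimated from divergence of nearby trajectories in a reconstructed state space.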

  8. Chaotic dynamics of respiratory sounds

    Energy Technology Data Exchange (ETDEWEB)

    Ahlstrom, C. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden) and Biomedical Engineering, Orebro University Hospital, S-70185 Orebro (Sweden)]. E-mail: christer@imt.liu.se; Johansson, A. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden); Hult, P. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden); Biomedical Engineering, Orebro University Hospital, S-70185 Orebro (Sweden); Ask, P. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden); Biomedical Engineering, Orebro University Hospital, S-70185 Orebro (Sweden)

    2006-09-15

    There is a growing interest in nonlinear analysis of respiratory sounds (RS), but little has been done to justify the use of nonlinear tools on such data. The aim of this paper is to investigate the stationarity, linearity and chaotic dynamics of recorded RS. Two independent data sets from 8 + 8 healthy subjects were recorded and investigated. The first set consisted of lung sounds (LS) recorded with an electronic stethoscope and the other of tracheal sounds (TS) recorded with a contact accelerometer. Recurrence plot analysis revealed that both LS and TS are quasistationary, with the parts corresponding to inspiratory and expiratory flow plateaus being stationary. Surrogate data tests could not provide statistically sufficient evidence regarding the nonlinearity of the data. The null hypothesis could not be rejected in 4 out of 32 LS cases and in 15 out of 32 TS cases. However, the Lyapunov spectra, the correlation dimension (D_2) and the Kaplan-Yorke dimension (D_KY) all indicate chaotic behavior. The Lyapunov analysis showed that the sum of the exponents was negative in all cases and that the largest exponent was found to be positive. The results are partly ambiguous, but provide some evidence of chaotic dynamics of RS, both concerning LS and TS. The results motivate continuous use of nonlinear tools for analysing RS data.

  9. Seeing 'where' through the ears: effects of learning-by-doing and long-term sensory deprivation on localization based on image-to-sound substitution.

    Directory of Open Access Journals (Sweden)

    Michael J Proulx

    Full Text Available BACKGROUND: Sensory substitution devices for the blind translate inaccessible visual information into a format that intact sensory pathways can process. We here tested image-to-sound conversion-based localization of visual stimuli (LEDs and objects) in 13 blindfolded participants. METHODS AND FINDINGS: Subjects were assigned to different roles as a function of two variables: visual deprivation (blindfolded continuously (Bc) for 24 hours per day for 21 days; blindfolded for the tests only (Bt)) and system use (system not used (Sn); system used for tests only (St); system used continuously for 21 days (Sc)). The effect of learning-by-doing was assessed by comparing the performance of eight subjects (BtSt) who only used the mobile substitution device for the tests to that of three subjects who, in addition, practiced with it for four hours daily in their normal life (BtSc and BcSc); two subjects who did not use the device at all (BtSn and BcSn) allowed assessment of its use in the tasks we employed. The impact of long-term sensory deprivation was investigated by blindfolding three of those participants throughout the three-week-long experiment (BcSn, BcSn/c, and BcSc); the other ten subjects were only blindfolded during the tests (BtSn, BtSc, and the eight BtSt subjects). Expectedly, the two subjects who never used the substitution device, while fast in finding the targets, had chance accuracy, whereas subjects who used the device were markedly slower but showed much better accuracy, which improved significantly across our four testing sessions. The three subjects who freely used the device daily as well as during tests were faster and more accurate than those who used it during tests only; however, long-term blindfolding did not notably influence performance. CONCLUSIONS: Together, the results demonstrate that the device allowed blindfolded subjects to increasingly know where something was by listening, and indicate that practice in naturalistic conditions

  10. A method for creating teaching movie clips using screen recording software: usefulness of teaching movies as self-learning tools for medical students

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seong Su [The Catholic University of Korea, Suwon (Korea, Republic of)

    2007-04-15

    I wanted to describe a method for creating teaching movies using screen recording, and to see whether self-learning movies are useful for medical students. Teaching movies were created by directly recording the screen activity and voice narration during the interpretation of educational cases; we used a PACS system and screen recording software (CamStudio, Rendersoft, U.S.A.). The usefulness of the teaching movies for self-learning of abdominal CT anatomy was evaluated by the medical students. Creating teaching movie clips with screen recording software was simple and easy. Survey responses were collected from 43 medical students. The content of the teaching movies was adequately understandable (52%) and useful for learning (47%). Only 23% of the students agreed that these movies helped motivate them to learn. Teaching movies were more useful than still photographs of the teaching image files. The students wanted teaching movies on the cross-sectional CT anatomy of different body regions (82%) and for understanding the radiological interpretation of various diseases (42%). Creating a teaching movie by directly screen-recording a radiologist's interpretation process is easy and simple. The teaching video clips convey a radiologist's interpretation process, or the explanation of teaching cases, with his/her own voice narration, and they are an effective self-learning tool for medical students and residents.

  11. A method for creating teaching movie clips using screen recording software: usefulness of teaching movies as self-learning tools for medical students

    International Nuclear Information System (INIS)

    Hwang, Seong Su

    2007-01-01

    I wanted to describe a method for creating teaching movies using screen recording, and to see whether self-learning movies are useful for medical students. Teaching movies were created by directly recording the screen activity and voice narration during the interpretation of educational cases; we used a PACS system and screen recording software (CamStudio, Rendersoft, U.S.A.). The usefulness of the teaching movies for self-learning of abdominal CT anatomy was evaluated by the medical students. Creating teaching movie clips with screen recording software was simple and easy. Survey responses were collected from 43 medical students. The content of the teaching movies was adequately understandable (52%) and useful for learning (47%). Only 23% of the students agreed that these movies helped motivate them to learn. Teaching movies were more useful than still photographs of the teaching image files. The students wanted teaching movies on the cross-sectional CT anatomy of different body regions (82%) and for understanding the radiological interpretation of various diseases (42%). Creating a teaching movie by directly screen-recording a radiologist's interpretation process is easy and simple. The teaching video clips convey a radiologist's interpretation process, or the explanation of teaching cases, with his/her own voice narration, and they are an effective self-learning tool for medical students and residents.

  12. Specially Designed Sound-Boxes Used by Students to Perform School-Lab Sensor–Based Experiments, to Understand Sound Phenomena

    Directory of Open Access Journals (Sweden)

    Stefanos Parskeuopoulos

    2011-02-01

    Full Text Available The research presented herein investigates and records students’ perceptions of sound phenomena and their improvement during a specialised laboratory practice utilizing ICT and a simple experimental apparatus especially designed for teaching. This school-lab apparatus and its operation are also described herein. Seventy-one first- and second-grade vocational-school students, aged 16 to 20, participated in the research. They were divided into groups of 4-5 students, each of which worked for 6 hours in order to complete all assigned activities. Data collection was carried out through personal interviews as well as questionnaires distributed before and after the instructive intervention. The results show that students’ active involvement with the simple teaching apparatus, through which the effects of sound waves are made visible, helps them comprehend sound phenomena. It also altered considerably their initial misconceptions about sound propagation. The results are presented diagrammatically herein, and some important observations are made relating to the teaching and learning of scientific concepts concerning sound.

  13. A Deep Learning Approach to Examine Ischemic ST Changes in Ambulatory ECG Recordings.

    Science.gov (United States)

    Xiao, Ran; Xu, Yuan; Pelter, Michele M; Mortara, David W; Hu, Xiao

    2018-01-01

    Patients with suspected acute coronary syndrome (ACS) are at risk of transient myocardial ischemia (TMI), which can lead to serious morbidity or even mortality. Early detection of myocardial ischemia can reduce damage to heart tissue and improve patient outcomes. Significant ST change in the electrocardiogram (ECG) is an important marker for detecting myocardial ischemia during the rule-out phase of potential ACS. However, current ECG monitoring software is vastly underused due to excessive false alarms. The present study aims to tackle this problem by combining a novel image-based approach with deep learning techniques to improve the detection accuracy of significant ST depression change. The obtained convolutional neural network (CNN) model yields an average area under the curve (AUC) of 89.6% on an independent testing set. At selected optimal cutoff thresholds, the proposed model yields a mean sensitivity of 84.4% while maintaining specificity at 84.9%.
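The abstract does not specify the network architecture; as a hedged illustration of the general approach (a small CNN over image-like ECG inputs producing a probability of significant ST depression), a minimal PyTorch sketch might look like the following. The input size, layer sizes, and output interpretation are assumptions, not the paper's model:

```python
import torch
import torch.nn as nn

class STChangeCNN(nn.Module):
    """Minimal CNN over image-like ECG inputs (illustrative architecture,
    not the one from the paper)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Assumes 64x64 inputs: two 2x poolings leave a 16x16 map with 16 channels.
        self.classifier = nn.Linear(16 * 16 * 16, 1)

    def forward(self, x):
        h = self.features(x)
        # Probability of significant ST depression change.
        return torch.sigmoid(self.classifier(h.flatten(1)))

model = STChangeCNN()
batch = torch.randn(4, 1, 64, 64)   # 4 hypothetical ECG "images"
probs = model(batch)                # shape (4, 1), values in (0, 1)
```

Training such a model would minimize binary cross-entropy against expert-labeled ST-change annotations; the thresholds reported in the abstract would then be chosen on a validation set.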

  14. Sound wave transmission (image)

    Science.gov (United States)

    When sound waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain, where they are interpreted as sound. The hearing mechanisms within the inner ear can ...

  15. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  16. Principles of underwater sound

    National Research Council Canada - National Science Library

    Urick, Robert J

    1983-01-01

    ... the immediately useful help they need for sonar problem solving. Its coverage is broad, ranging from the basic concepts of sound in the sea to making performance predictions in such applications as depth sounding, fish finding, and submarine detection...

  17. THE SOUND OF CINEMA: TECHNOLOGY AND CREATIVITY

    Directory of Open Access Journals (Sweden)

    Poznin Vitaly F.

    2017-12-01

    Full Text Available Technology is a means of creating any product, but in the onscreen arts it is also one of the elements that create the artistic space of a film. Considering the main stages of the development of cinematography, this article explores the influence of sound-recording technology on the creation of the special artistic and physical space of film: the beginning of the use of sound in movies; the mastering of the artistic means of the audiovisual work; the expansion of the spatial characteristics of screen sound; and sound in modern cinema. Today, thanks to new technologies, sound in cinema forms a specific quasi-realistic landscape, greatly enhancing the impact of the virtual screen images on the viewer.

  18. Visual bias in subjective assessments of automotive sounds

    DEFF Research Database (Denmark)

    Ellermeier, Wolfgang; Legarth, Søren Vase

    2006-01-01

    In order to evaluate how strong the influence of visual input on sound quality evaluation may be, a naive sample of 20 participants was asked to judge interior automotive sound recordings while simultaneously being exposed to pictures of cars. Twenty-two recordings of second-gear acceleration

  19. Taiwan's perspective on electronic medical records' security and privacy protection: lessons learned from HIPAA.

    Science.gov (United States)

    Yang, Che-Ming; Lin, Herng-Ching; Chang, Polun; Jian, Wen-Shan

    2006-06-01

    The protection of patients' health information is a very important concern in the information age. The purpose of this study is to ascertain what constitutes an effective legal framework for protecting both the security and the privacy of health information, especially electronic medical records. Bills regarding electronic medical data protection have been proposed around the world, including the Health Insurance Portability and Accountability Act (HIPAA) of the U.S. The trend toward a centralized bill that focuses on managing computerized health information is the part that needs our further attention. Under the sponsorship of Taiwan's Department of Health (DOH), our expert panel drafted the "Medical Information Security and Privacy Protection Guidelines", which identify nine principles and comprise 12 articles, in the hope that medical organizations will have an effective reference for managing their medical information in a confidential and secure fashion, especially in electronic transactions.

  20. An Anthropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, for a talk about the anthropology of sound, sound studies, musical canons and ideology.

  1. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    properties can be modified by sound absorption, refraction, and interference from multipath caused by reflections. The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms to influence communication sounds for airborne acoustics, and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals.

  2. Contrast of Hemispheric Lateralization for Oro-Facial Movements between Learned Attention-Getting Sounds and Species-Typical Vocalizations in Chimpanzees: Extension in a Second Colony

    Science.gov (United States)

    Wallez, Catherine; Schaeffer, Jennifer; Meguerditchian, Adrien; Vauclair, Jacques; Schapiro, Steven J.; Hopkins, William D.

    2012-01-01

    Studies involving oro-facial asymmetries in nonhuman primates have largely demonstrated a right-hemispheric dominance for communicative signals and the conveyance of emotional information. A recent study on chimpanzees reported the first evidence of a significant left-hemispheric dominance when using attention-getting sounds and a rightward bias for…

  3. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic sounds, to judge the presence or absence of abnormality based on the magnitude of the synchronized components. A synchronized-component sampling means can remove resonance sounds and other acoustic sounds generated asynchronously with the rotation, based on the knowledge that the acoustic components generated in a normal state are a sort of resonance sound and are not precisely synchronized with the rotation speed. On the other hand, abnormal sounds of a rotating body are often driven by forces accompanying the rotation, so abnormal sounds can be detected by extracting only the rotation-synchronized components. Since the normal acoustic sounds currently being generated are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing can be avoided and, as a result, abnormal-sound detection sensitivity can be improved. Further, since the occurrence of the abnormal sound is discriminated from the actually detected sounds, other frequency components which are forecast but not actually generated are not removed, which further improves detection sensitivity. (N.H.)
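The rotation-synchronized component extraction described above can be approximated by synchronous (time-domain) averaging: the signal is sliced into one-revolution frames which are then averaged, so components locked to the rotation add coherently while asynchronous sounds cancel. A minimal numpy sketch, with hypothetical sample and rotation rates:

```python
import numpy as np

def synchronous_average(signal, samples_per_rev, n_revs):
    """Average over rotation periods: components synchronized with the rotation
    add coherently; asynchronous sounds and noise average toward zero."""
    frames = signal[: samples_per_rev * n_revs].reshape(n_revs, samples_per_rev)
    return frames.mean(axis=0)

fs = 1000                        # sample rate, Hz (hypothetical)
rev_hz = 10                      # pump rotation rate, Hz (hypothetical)
samples_per_rev = fs // rev_hz
t = np.arange(0, 10.0, 1 / fs)
sync_component = 0.3 * np.sin(2 * np.pi * 3 * rev_hz * t)  # 3rd rotation harmonic
noise = np.random.default_rng(0).standard_normal(len(t))   # asynchronous sounds
avg = synchronous_average(sync_component + noise, samples_per_rev, 100)
# `avg` retains the rotation-synchronized waveform; broadband noise is
# attenuated by roughly 1/sqrt(100).
```

In practice a tachometer signal would be used to resample the sound to a constant number of samples per revolution before averaging, since real rotation speeds drift.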

  4. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

    The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using both physical and digital models at three distinct scales. The results suggest that hyperboloid geometry, while difficult to fabricate, facilitates sound scattering.

  5. Infra-sound cancellation and mitigation in wind turbines

    Science.gov (United States)

    Boretti, Albert; Ordys, Andrew; Al Zubaidy, Sarim

    2018-03-01

    The infra-sound spectra recorded inside homes located even several kilometres from wind turbine installations are characterized by large pressure fluctuations in the low-frequency range. There is a significant body of literature suggesting that inaudible low-frequency sounds are sensed by humans and affect wellbeing through different mechanisms. These mechanisms include amplitude modulation of heard sounds, stimulation of subconscious pathways, causing endolymphatic hydrops, and possibly potentiating noise-induced hearing loss. We suggest the study of active infra-sound cancellation and mitigation to address these low-frequency noise issues. Loudspeakers generate pressure-wave components of the same amplitude and frequency but opposite phase as the recorded infra-sound. They also produce pressure-wave components within the audible range that reduce the perception of the residual infra-sound.
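The anti-phase cancellation principle is plain superposition: a wave of the same amplitude and frequency but opposite phase sums with the original to near zero. A toy numpy demonstration of a static single-tone case (a real system would need reference microphones and adaptive filtering to track the turbine noise, which this sketch does not attempt):

```python
import numpy as np

fs = 1000                                         # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)
infra = 0.5 * np.sin(2 * np.pi * 2.0 * t)          # 2 Hz infra-sound tone
anti = 0.5 * np.sin(2 * np.pi * 2.0 * t + np.pi)   # same amplitude and
                                                   # frequency, opposite phase
residual = infra + anti                            # near-complete destructive
                                                   # interference
```

In an adaptive implementation the anti-phase signal would be updated sample by sample (e.g., with a filtered-x LMS algorithm) to follow amplitude and phase drift in the measured infra-sound.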

  6. The cinematic soundscape: conceptualising the use of sound in Indian films

    OpenAIRE

    Budhaditya Chattopadhyay

    2012-01-01

    This article examines the trajectories of sound practice in Indian cinema and conceptualises the use of sound since the advent of talkies. By studying and analysing a number of sound films from different technological phases of direct recording, magnetic recording and present-day digital recording, the article proposes three corresponding models that are developed on the basis of observations on the use of sound in Indian cinema. These models take their point of departure in specific phases...

  7. Toward Inverse Control of Physics-Based Sound Synthesis

    Science.gov (United States)

    Pfalz, A.; Berdahl, E.

    2017-05-01

    Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.
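The inverse-control task can be framed as sequence regression: given frames of the synthesizer's output audio, predict the gesture signal that produced them. A minimal PyTorch sketch under assumed frame and gesture dimensions (the paper's actual network and training setup are not given here):

```python
import torch
import torch.nn as nn

class InverseController(nn.Module):
    """Maps frames of synthesized audio back to the gesture signal that
    (hypothetically) produced them. All sizes are illustrative."""
    def __init__(self, frame_size=64, hidden=128, gesture_dim=2):
        super().__init__()
        self.lstm = nn.LSTM(frame_size, hidden, batch_first=True)
        self.out = nn.Linear(hidden, gesture_dim)

    def forward(self, audio_frames):      # (batch, time, frame_size)
        h, _ = self.lstm(audio_frames)
        return self.out(h)                # (batch, time, gesture_dim)

model = InverseController()
audio = torch.randn(8, 100, 64)          # 8 sequences of 100 audio frames
gestures = model(audio)
# Training would minimize the error between predicted and recorded gestures,
# using paired (gesture, synthesized audio) data from the physical model:
loss = nn.functional.mse_loss(gestures, torch.zeros_like(gestures))
```

Once trained, feeding the network a target sound yields a gesture trajectory that, when played through the physics-based synthesizer, should approximately reproduce that sound.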

  8. Recording brain waves at the supermarket: what can we learn from a shopper's brain?

    Science.gov (United States)

    Sands, Stephen F; Sands, J Andrew

    2012-01-01

    cognitive and emotional activity and are complementary. EEG is more sensitive to time-locked events (i.e., story lines), whereas fMRI is more sensitive to the brain regions involved. The application of neuroscience in BTL campaigns is significantly more difficult to achieve. Participants move unconstrained in a shopping environment while EEG and eye movements are monitored. In this scenario, fMRI is not possible. fMRI can be used with virtual store mock-ups, but it is expensive and seldom used. We have developed a technology that allows for the measurement of EEG in an unobtrusive manner. The intent is to record the brain waves of participants during their day-to-day shopping experience. A miniaturized video recorder, EEG amplifiers, and eye-tracking systems are used. Digital signal processing is employed to remove the substantial artifacts generated by eye movements and motion. Eye fixations identify specific viewings of products and displays, and they are used to synchronize the behavior with the EEG response. The location of EEG sources is determined using source-reconstruction software.

  9. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  10. New Applications of Learning Machines

    DEFF Research Database (Denmark)

    Larsen, Jan

    * Machine learning framework for sound search * Genre classification * Music separation * MIMO channel estimation and symbol detection

  11. Organizational learning in the implementation and adoption of national electronic health records: case studies of two hospitals participating in the National Programme for Information Technology in England.

    Science.gov (United States)

    Takian, Amirhossein; Sheikh, Aziz; Barber, Nicholas

    2014-09-01

    To explore the role of organizational learning in enabling implementation and supporting adoption of electronic health record systems in two English hospitals. In the course of conducting our prospective, sociotechnical evaluation of the implementation and adoption of electronic health records in 12 "early adopter" hospitals across England, we identified two hospitals implementing virtually identical versions of the same "off-the-shelf" software (Millennium) within a comparable timeframe. We undertook a longitudinal qualitative case study-based analysis of these two hospitals (referred to hereafter as Alpha and Omega) and their implementation experiences. Data included the following: 63 in-depth interviews with various groups of internal and external stakeholders; 41 hours of on-site observation; and content analysis of 218 documents of various types. Analysis was both inductive and deductive, the latter being informed by the "sociotechnical changing" theoretical perspective. Although Alpha and Omega shared a number of contextual similarities, our evaluation revealed fundamental differences between the hospitals in their visions of the electronic health record and in their implementation strategies, which resulted in distinct local consequences of electronic health record implementation and impacted adoption. During our evaluation, neither hospital saw the hoped-for organizational benefits of introducing the electronic health record, such as speeding up tasks. Nonetheless, the Millennium software proved easier to use at Omega. Interorganizational learning was at the heart of this difference. Despite the turbulent overall national "roll out" of electronic health record systems into English hospitals, considerable opportunities for organizational learning were offered by the sequential delivery of the electronic health record software into "early adopter" hospitals. We argue that understanding the process of organizational learning and its

  12. Sound For Animation And Virtual Reality

    Science.gov (United States)

    Hahn, James K.; Docter, Pete; Foster, Scott H.; Mangini, Mark; Myers, Tom; Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)

    1995-01-01

    Sound is an integral part of the experience in computer animation and virtual reality. In this course, we will present some of the important technical issues in sound modeling, rendering, and synchronization as well as the "art" and business of sound that are being applied in animations, feature films, and virtual reality. The central theme is to bring leading researchers and practitioners from various disciplines to share their experiences in this interdisciplinary field. The course will give the participants an understanding of the problems and techniques involved in producing and synchronizing sounds, sound effects, dialogue, and music. The problem spans a number of domains including computer animation and virtual reality. Since sound has been an integral part of animations and films much longer than for computer-related domains, we have much to learn from traditional animation and film production. By bringing leading researchers and practitioners from a wide variety of disciplines, the course seeks to give the audience a rich mixture of experiences. It is expected that the audience will be able to apply what they have learned from this course in their research or production.

  13. Paper-Based Medical Records: the Challenges and Lessons Learned from Studying Obstetrics and Gynaecological Post-Operation Records in a Nigerian Hospital

    Directory of Open Access Journals (Sweden)

    Adekunle Yisau Abdulkadir

    2010-10-01

    Full Text Available AIM: With the background knowledge that auditing of medical records (MR) for adequacy and completeness is necessary if they are to be useful and reliable in continuing patient care, in protecting the legal interests of the patient, physicians, and hospital, and in meeting requirements for research, we scrutinized the theatre records of our hospital to identify routine omissions or deficiencies, and correctable errors, in our MR system. METHOD: Obstetrics and gynaecological post-operation theatre records between January 2006 and December 2008 were quantitatively and qualitatively analyzed for details that included: hospital number; patient's age; diagnosis; surgery performed; type and mode of anesthesia; date of surgery; patient's ward; anesthetists' names; surgeons' and attending nurses' names; and abbreviations used. Analysis was performed with SPSS 15.0 for Windows. RESULTS: Hardly any of the 1270 surgeries during the study period were documented without an omission or an abbreviation. Hospital numbers and patients' ages were not documented in 21.8% (n=277) and 59.1% (n=750) of cases, respectively. Diagnoses and surgeries were recorded with varying abbreviations in about 96% of instances. Surgical team names were mostly abbreviated or given as initials only. CONCLUSION: To improve the quality of paper-based medical records, regular auditing, training and good orientation of medical personnel in good record practices, and discouraging large-volume record books to reduce paper damage and sheet loss from handling, are necessary; otherwise what we record today may be neither useful nor available tomorrow. [TAF Prev Med Bull 2010; 9(5): 427-432]

  14. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  15. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity ... and initiate, where needed, improvement of the sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and society. A European COST Action, TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24

  16. Misconceptions About Sound Among Engineering Students

    Science.gov (United States)

    Pejuan, Arcadi; Bohigas, Xavier; Jaén, Xavier; Periago, Cristina

    2012-12-01

    Our first objective was to detect misconceptions about the microscopic nature of sound among senior university students enrolled in different engineering programmes (from chemistry to telecommunications). We sought to determine how these misconceptions are expressed (qualitative aspect) and, only very secondarily, to gain a general idea of the extent to which they are held (quantitative aspect). Our second objective was to explore other misconceptions about wave aspects of sound. We have also considered the degree of consistency in the model of sound used by each student. Forty students answered a questionnaire including open-ended questions. Based on their free, spontaneous answers, the main results were as follows: a large majority of students answered most of the questions regarding the microscopic model of sound according to the scientifically accepted model; however, only a small number answered consistently. The main model misconception found was the notion that sound is propagated through the travelling of air particles, even in solids. Misconceptions and mental-model inconsistencies tended to depend on the engineering programme in which the student was enrolled. However, students in general were inconsistent also in applying their model of sound to individual sound properties. The main conclusion is that our students have not truly internalised the scientifically accepted model that they have allegedly learnt. This implies a need to design learning activities that take these findings into account in order to be truly efficient.

  17. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single-unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), the experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long-duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) in response to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation; modulation lasted ... The modulation produced by sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting that modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  18. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods to determine the velocity measurements are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  19. Finding Important Terms for Patients in Their Electronic Health Records: A Learning-to-Rank Approach Using Expert Annotations

    Science.gov (United States)

    Zheng, Jiaping; Yu, Hong

    2016-01-01

    Background Many health organizations allow patients to access their own electronic health record (EHR) notes through online patient portals as a way to enhance patient-centered care. However, EHR notes are typically long and contain abundant medical jargon that can be difficult for patients to understand. In addition, many medical terms in patients’ notes are not directly related to their health care needs. One way to help patients better comprehend their own notes is to reduce information overload and help them focus on medical terms that matter most to them. Interventions can then be developed by giving them targeted education to improve their EHR comprehension and the quality of care. Objective We aimed to develop a supervised natural language processing (NLP) system called Finding impOrtant medical Concepts most Useful to patientS (FOCUS) that automatically identifies and ranks medical terms in EHR notes based on their importance to the patients. Methods First, we built an expert-annotated corpus. For each EHR note, 2 physicians independently identified medical terms important to the patient. Using the physicians’ agreement as the gold standard, we developed and evaluated FOCUS. FOCUS first identifies candidate terms from each EHR note using MetaMap and then ranks the terms using a support vector machine-based learn-to-rank algorithm. We explored rich learning features, including distributed word representation, Unified Medical Language System semantic type, topic features, and features derived from consumer health vocabulary. We compared FOCUS with 2 strong baseline NLP systems. Results Physicians annotated 90 EHR notes and identified a mean of 9 (SD 5) important terms per note. The Cohen’s kappa annotation agreement was .51. The 10-fold cross-validation results show that FOCUS achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.940 for ranking candidate terms from EHR notes to identify important terms. When including term
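    The abstract above describes a support vector machine-based learn-to-rank step. The paper's actual features and training procedure are not reproduced here, but the core pairwise-ranking idea can be sketched in plain NumPy: turn (important, unimportant) term pairs into feature-difference vectors and minimize a hinge loss so that important terms score higher. All data and dimensions below are synthetic stand-ins, not the FOCUS corpus or its feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 40 candidate terms, 5 features each
# (think embedding / semantic-type / topic scores); label 1 = important.
X = rng.normal(size=(40, 5))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 1.0])   # hidden "importance" direction
y = (X @ true_w + 0.1 * rng.normal(size=40) > 0).astype(int)

# Pairwise transform: for every (important, unimportant) pair the ranker
# should score the important term higher -- the core idea of a ranking SVM.
pos, neg = X[y == 1], X[y == 0]
diffs = (pos[:, None, :] - neg[None, :, :]).reshape(-1, X.shape[1])

# Minimize a hinge loss on the score differences by gradient descent.
w = np.zeros(X.shape[1])
lr, lam = 0.01, 0.001
for _ in range(500):
    margins = diffs @ w
    active = margins < 1.0                        # pairs violating the margin
    grad = -diffs[active].sum(axis=0) / len(diffs) + lam * w
    w -= lr * grad

# Pairwise accuracy: fraction of pairs ranked in the correct order.
pairwise_acc = float(np.mean(diffs @ w > 0))
```

    Sorting terms by `X @ w` then yields a ranked list; a production system would use a proper ranking-SVM solver rather than this toy loop.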

  20. The Effect of a Learning Environment Using an Electronic Health Record (EHR) on Undergraduate Nursing Students' Behavioral Intention to Use an EHR

    Science.gov (United States)

    Foley, Shawn

    2011-01-01

    The purpose of this study was to explore the effect of a learning environment using an Electronic Health Record (EHR) on undergraduate nursing students' behavioral intention (BI) to use an EHR. BI is defined by Davis (1989) in the Technology Acceptance Model (TAM) as the degree to which a person has formulated conscious plans to perform or not…

  1. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the everyday understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  2. EUVS Sounding Rocket Payload

    Science.gov (United States)

    Stern, Alan S.

    1996-01-01

    During the first half of this year (CY 1996), the EUVS project began preparations of the EUVS payload for the upcoming NASA sounding rocket flight 36.148CL, slated for launch on July 26, 1996 to observe and record a high-resolution (approx. 2 Å FWHM) EUV spectrum of the planet Venus. These preparations were designed to improve the spectral resolution and sensitivity performance of the EUVS payload as well as prepare the payload for this upcoming mission. The following is a list of the EUVS project activities that have taken place since the beginning of this CY: (1) Applied a fresh, new SiC optical coating to our existing 2400 groove/mm grating to boost its reflectivity; (2) modified the Ranicon science detector to boost its detective quantum efficiency with the addition of a repeller grid; (3) constructed a new entrance slit plane to achieve 2 Å FWHM spectral resolution; (4) prepared and held the Payload Initiation Conference (PIC) with the assigned NASA support team from Wallops Island for the upcoming 36.148CL flight (PIC held on March 8, 1996; see Attachment A); (5) began wavelength calibration activities of EUVS in the laboratory; (6) made arrangements for travel to WSMR to begin integration activities in preparation for the July 1996 launch; (7) paper detailing our previous EUVS Venus mission (NASA flight 36.117CL) published in Icarus (see Attachment B); and (8) continued data analysis of the previous EUVS mission 36.137CL (Spica occultation flight).

  3. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  4. Evaluating Environmental Sounds from a Presence Perspective for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf

    2010-01-01

    We propose a methodology to design and evaluate environmental sounds for virtual environments. We propose to combine physically modeled sound events with recorded soundscapes. Physical models are used to provide feedback to users’ actions, while soundscapes reproduce the characteristic soundmarks...... as well as self-induced interactive sounds simulated using physical models. Results show that subjects’ motion in the environment is significantly enhanced when dynamic sound sources and sound of egomotion are rendered in the environment....

  5. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.

  6. Swallowing sound detection using hidden markov modeling of recurrence plot features

    Energy Technology Data Exchange (ETDEWEB)

    Aboofazeli, Mohammad [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: umaboofa@cc.umanitoba.ca; Moussavi, Zahra [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: mousavi@ee.umanitoba.ca

    2009-01-30

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.
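    The two records above rely on two ingredients that are easy to illustrate: state-space reconstruction by the method of delays and features of the resulting recurrence plot. The sketch below computes one such feature (the recurrence rate) on synthetic signals; the embedding dimension, delay, and threshold are arbitrary choices for the example, not the paper's, and the hidden Markov modeling stage is omitted.

```python
import numpy as np

def takens_embed(x, dim=3, delay=5):
    """State-space reconstruction by the method of delays."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

def recurrence_plot(traj, eps):
    """Binary recurrence matrix: 1 where two trajectory points lie within eps."""
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    return (d < eps).astype(int)

def recurrence_rate(x, eps=0.5):
    """One simple recurrence-plot feature (the paper uses three such features)."""
    return recurrence_plot(takens_embed(x), eps).mean()

# Toy signals: a quasi-periodic tone and broadband noise stand in for
# structured vs. unstructured sound segments (not real tracheal data).
rng = np.random.default_rng(1)
t = np.arange(400)
tone = np.sin(2 * np.pi * t / 25)
noise = rng.normal(size=400)

rr_tone = recurrence_rate(tone)
rr_noise = recurrence_rate(noise)
# A periodic trajectory revisits the same states far more often than noise does,
# so rr_tone comes out markedly larger than rr_noise.
```

    In the paper's pipeline, sequences of such features would then be scored by class-specific HMMs and segmented with the Viterbi algorithm.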

  7. SOUND VELOCITY and Other Data from USS STUMP DD-978) (NCEI Accession 9400069)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The sound velocity data in this accession were collected from USS STUMP DD-978 by US Navy. The sound velocity in water is analog profiles data that was recorded in...

  8. The influence of meaning on the perception of speech sounds.

    Science.gov (United States)

    Kazanina, Nina; Phillips, Colin; Idsardi, William

    2006-07-25

    As part of knowledge of language, an adult speaker possesses information on which sounds are used in the language and on the distribution of these sounds in a multidimensional acoustic space. However, a speaker must know not only the sound categories of his language but also the functional significance of these categories, in particular, which sound contrasts are relevant for storing words in memory and which sound contrasts are not. Using magnetoencephalographic brain recordings with speakers of Russian and Korean, we demonstrate that a speaker's perceptual space, as reflected in early auditory brain responses, is shaped not only by bottom-up analysis of the distribution of sounds in his language but also by more abstract analysis of the functional significance of those sounds.

  9. Composing Sound Identity in Taiko Drumming

    Science.gov (United States)

    Powell, Kimberly A.

    2012-01-01

    Although sociocultural theories emphasize the mutually constitutive nature of persons, activity, and environment, little attention has been paid to environmental features organized across sensory dimensions. I examine sound as a dimension of learning and practice, an organizing presence that connects the sonic with the social. This ethnographic…

  10. Redesigning Space for Interdisciplinary Connections: The Puget Sound Science Center

    Science.gov (United States)

    DeMarais, Alyce; Narum, Jeanne L.; Wolfson, Adele J.

    2013-01-01

    Mindful design of learning spaces can provide an avenue for supporting student engagement in STEM subjects. Thoughtful planning and wide participation in the design process were key in shaping new and renovated spaces for the STEM community at the University of Puget Sound. The finished project incorporated Puget Sound's mission and goals as well…

  11. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  12. Sound propagation in cities

    NARCIS (Netherlands)

    Salomons, E.; Polinder, H.; Lohman, W.; Zhou, H.; Borst, H.

    2009-01-01

    A new engineering model for sound propagation in cities is presented. The model is based on numerical and experimental studies of sound propagation between street canyons. Multiple reflections in the source canyon and the receiver canyon are taken into account in an efficient way, while weak

  13. OMNIDIRECTIONAL SOUND SOURCE

    DEFF Research Database (Denmark)

    1996-01-01

    A sound source comprising a loudspeaker (6) and a hollow coupler (4) with an open inlet which communicates with and is closed by the loudspeaker (6) and an open outlet, said coupler (4) comprising rigid walls which cannot respond to the sound pressures produced by the loudspeaker (6). According...

  14. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  15. Poetry Pages. Sound Effects.

    Science.gov (United States)

    Fina, Allan de

    1992-01-01

    Explains how elementary teachers can help students understand onomatopoeia, suggesting that they define onomatopoeia, share examples of it, read poems and have students discuss onomatopoeic words, act out common household sounds, write about sound effects, and create choral readings of onomatopoeic poems. Two appropriate poems are included. (SM)

  16. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The results showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. These patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  17. ANALYZING THE PATTERN OF L1 SOUNDS ON L2 SOUNDS PRODUCED BY JAVANESE STUDENTS OF STKIP PGRI JOMBANG

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The results showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. These patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  18. Symbolic Play and Novel Noun Learning in Deaf and Hearing Children: Longitudinal Effects of Access to Sound on Early Precursors of Language

    Science.gov (United States)

    Quittner, Alexandra L.; Cejas, Ivette; Wang, Nae-Yuh; Niparko, John K.; Barker, David H.

    2016-01-01

    In the largest longitudinal study of young deaf children before and three years after cochlear implantation, we compared symbolic play and novel noun learning to age-matched hearing peers. Participants were 180 children from six cochlear implant centers and 96 hearing children. Symbolic play was measured during five minutes of videotaped, structured solitary play. Play was coded as "symbolic" if the child used substitution (e.g., a wooden block as a bed). Novel noun learning was measured in 10 trials using a novel object and a distractor. Children with cochlear implants were delayed relative to normal-hearing children in their use of symbolic play; however, those implanted before age two performed significantly better than those implanted later. Children with cochlear implants were also delayed in novel noun learning (median delay 1.54 years), with minimal evidence of catch-up growth. Quality of parent-child interactions was positively related to performance on the novel noun learning task, but not the symbolic play task. Early implantation was beneficial for both the achievement of symbolic play and novel noun learning. Further, maternal sensitivity and linguistic stimulation by parents positively affected noun learning skills, although children with cochlear implants still lagged behind their hearing peers. PMID:27228032

  19. Symbolic Play and Novel Noun Learning in Deaf and Hearing Children: Longitudinal Effects of Access to Sound on Early Precursors of Language.

    Science.gov (United States)

    Quittner, Alexandra L; Cejas, Ivette; Wang, Nae-Yuh; Niparko, John K; Barker, David H

    2016-01-01

    In the largest longitudinal study of young deaf children before and three years after cochlear implantation, we compared symbolic play and novel noun learning to age-matched hearing peers. Participants were 180 children from six cochlear implant centers and 96 hearing children. Symbolic play was measured during five minutes of videotaped, structured solitary play. Play was coded as "symbolic" if the child used substitution (e.g., a wooden block as a bed). Novel noun learning was measured in 10 trials using a novel object and a distractor. Children with cochlear implants were delayed relative to normal-hearing children in their use of symbolic play; however, those implanted before age two performed significantly better than those implanted later. Children with cochlear implants were also delayed in novel noun learning (median delay 1.54 years), with minimal evidence of catch-up growth. Quality of parent-child interactions was positively related to performance on the novel noun learning task, but not the symbolic play task. Early implantation was beneficial for both the achievement of symbolic play and novel noun learning. Further, maternal sensitivity and linguistic stimulation by parents positively affected noun learning skills, although children with cochlear implants still lagged behind their hearing peers.

  20. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  1. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between...... dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity...... is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  2. March 1964 Prince William Sound, USA Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Prince William Sound magnitude 9.2 Mw earthquake on March 28, 1964 at 03:36 GMT (March 27 at 5:36 pm local time), was the largest U.S. earthquake ever recorded...

  3. A Regularized Deep Learning Approach for Clinical Risk Prediction of Acute Coronary Syndrome Using Electronic Health Records.

    Science.gov (United States)

    Huang, Zhengxing; Dong, Wei; Duan, Huilong; Liu, Jiquan

    2018-05-01

    Acute coronary syndrome (ACS), as a common and severe cardiovascular disease, is a leading cause of death and the principal cause of serious long-term disability globally. Clinical risk prediction of ACS is important for early intervention and treatment. Existing ACS risk scoring models are based mainly on a small set of hand-picked risk factors and often dichotomize predictive variables to simplify the score calculation. This study develops a regularized stacked denoising autoencoder (SDAE) model to stratify clinical risks of ACS patients from a large volume of electronic health records (EHR). To capture characteristics of patients at similar risk levels, and to preserve the discriminating information across different risk levels, two constraints are added to the SDAE to make the reconstructed feature representations contain more risk information about patients, which contributes to a better clinical risk prediction result. We validate our approach on a real clinical dataset consisting of 3464 ACS patient samples. The performance of our approach for predicting ACS risk remains robust, reaching 0.868 AUC and 0.73 accuracy. The obtained results show that the proposed approach achieves a competitive performance compared to state-of-the-art models in dealing with the clinical risk prediction problem. In addition, our approach can extract informative risk factors of ACS via a reconstructive learning strategy. Some of these extracted risk factors are not only consistent with existing medical domain knowledge, but also contain suggestive hypotheses that could be validated by further investigation in the medical domain.
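    The paper's regularized, stacked model is beyond a short sketch, but its basic building block, a denoising autoencoder that corrupts its input and learns to reconstruct the clean version, can be written in a few lines of NumPy. The data below are synthetic binary "EHR-like" vectors, the network is a single tied-weight layer, and the paper's risk-level constraints and stacking are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for EHR feature vectors: binary indicators of findings.
X = (rng.random((200, 20)) < 0.3).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid = X.shape[1], 8
W = rng.normal(scale=0.1, size=(n_in, n_hid))    # tied encoder/decoder weights
b, c = np.zeros(n_hid), np.zeros(n_in)

def reconstruct(data):
    return sigmoid(sigmoid(data @ W + b) @ W.T + c)

err_before = float(np.mean((reconstruct(X) - X) ** 2))

lr = 0.5
for _ in range(300):
    # Denoising step: randomly zero 20% of inputs, reconstruct the clean X.
    X_noisy = X * (rng.random(X.shape) > 0.2)
    H = sigmoid(X_noisy @ W + b)                  # encoder
    R = sigmoid(H @ W.T + c)                      # decoder
    # Backpropagation of the mean-squared reconstruction error.
    dR = (R - X) * R * (1 - R) / len(X)
    dH = dR @ W * H * (1 - H)
    W -= lr * (X_noisy.T @ dH + dR.T @ H)         # tied weights: two terms
    b -= lr * dH.sum(axis=0)
    c -= lr * dR.sum(axis=0)

err_after = float(np.mean((reconstruct(X) - X) ** 2))
# Training should reduce the reconstruction error on the clean inputs.
```

    Stacking means training a second such layer on the hidden codes `H` of the first; the paper additionally constrains the codes so that patients at similar risk levels get similar representations.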

  4. Human-assisted sound event recognition for home service robots.

    Science.gov (United States)

    Do, Ha Manh; Sheng, Weihua; Liu, Meiqin

    This paper proposes and implements an open framework of active auditory learning for a home service robot serving elderly people living alone at home. The framework was developed to realize various auditory perception capabilities while enabling a remote human operator to take part in the sound event recognition process for elderly care. The home service robot is able to estimate the sound source position and collaborate with the human operator in sound event recognition while protecting the privacy of the elderly. Our experimental results validated the proposed framework and evaluated its auditory perception capabilities and human-robot collaboration in sound event recognition.
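    The abstract does not specify how the robot estimates sound source position; a common two-microphone baseline is a time-difference-of-arrival (TDOA) estimate taken from the peak of the cross-correlation between the channels. The sketch below illustrates that baseline on synthetic signals; the sample rate, delay, and noise level are invented for the example and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 16000                        # sample rate in Hz (assumed for the example)
sig = rng.normal(size=2048)       # stand-in for a detected sound event
true_delay = 17                   # samples by which microphone 2 lags mic 1

mic1 = sig
mic2 = np.concatenate([np.zeros(true_delay), sig[:-true_delay]])
mic2 = mic2 + 0.05 * rng.normal(size=len(mic2))   # sensor noise

# TDOA from the peak of the cross-correlation; given the microphone
# spacing, this delay maps to an angle of arrival.
corr = np.correlate(mic2, mic1, mode="full")
lags = np.arange(-len(mic1) + 1, len(mic2))
est_delay = int(lags[np.argmax(corr)])
tdoa_seconds = est_delay / fs
```

    Real systems typically use a whitened variant (GCC-PHAT) for robustness in reverberant rooms, but the peak-picking step is the same.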

  5. The frequency range of TMJ sounds.

    Science.gov (United States)

    Widmalm, S E; Williams, W J; Djurdjanovic, D; McKay, D C

    2003-04-01

    There are conflicting opinions about the frequency range of temporomandibular joint (TMJ) sounds. Some authors claim that the upper limit is about 650 Hz. The aim was to test the hypothesis that TMJ sounds may contain frequencies well above 650 Hz but that significant amounts of their energy are lost if the vibrations are recorded using contact sensors and/or travel far through the head tissues. Time-frequency distributions of 172 TMJ clickings (three subjects) were compared between recordings with one microphone in the ear canal and a skin contact transducer above the clicking joint and between recordings from two microphones, one in each ear canal. The energy peaks of the clickings recorded with a microphone in the ear canal on the clicking side were often well above 650 Hz and always in a significantly higher area (range 117-1922 Hz, P 375 Hz) or in microphone recordings from the opposite ear canal (range 141-703 Hz). Future studies are required to establish normative frequency range values of TMJ sounds but need methods also capable of recording the high frequency vibrations.

  6. Investigation of genesis of gallop sounds in dogs by quantitative phonocardiography and digital frequency analysis.

    Science.gov (United States)

    Aubert, A E; Denys, B G; Meno, F; Reddy, P S

    1985-05-01

    Several investigators have noted external gallop sounds to be of higher amplitude than their corresponding internal sounds (S3 and S4). In this study we hoped to determine if S3 and S4 are transmitted in the same manner as S1. In 11 closed-chest dogs, external (apical) and left ventricular pressures and sounds were recorded simultaneously with transducers with identical sensitivity and frequency responses. Volume and pressure overload and positive and negative inotropic drugs were used to generate gallop sounds. Recordings were made in the control state and after the various interventions. S3 and S4 were recorded in 17 experiments each. The amplitude of the external S1 was uniformly higher than that of internal S1 and internal gallop sounds were inconspicuous. With use of Fourier transforms, the gain function was determined by comparing internal to external S1. By inverse transform, the amplitude of the internal gallop sounds was predicted from external sounds. The internal sounds of significant amplitude were predicted in many instances, but the actual recordings showed no conspicuous sounds. The absence of internal gallop sounds of expected amplitude as calculated from the external gallop sounds and the gain function derived from the comparison of internal and external S1 make it very unlikely that external gallop sounds are derived from internal sounds.
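    The gain-function approach described above, estimating a transfer function by comparing internal and external S1 spectra and then applying its inverse to other sounds, can be illustrated with FFTs. The three-tap filter standing in for the chest wall and the noise-like test signals below are hypothetical; only the spectral-ratio mechanics follow the abstract, and a small regularization term is added to keep the inversion stable.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024

# Toy stand-ins: an "internal" S1 and its "external" chest-wall version,
# related by a hypothetical transmission filter h.
internal_s1 = rng.normal(size=n)
h = np.array([0.5, 0.3, 0.2])
external_s1 = np.convolve(internal_s1, h, mode="same")

# Gain function: spectral ratio of the simultaneously recorded S1 pair.
eps = 1e-12
H = np.fft.rfft(external_s1) / (np.fft.rfft(internal_s1) + eps)

# Predict the internal version of another sound (a "gallop") from its
# external recording by dividing out the gain and inverse-transforming.
# The regularized (Wiener-style) division avoids blow-ups at weak bins.
internal_s3 = rng.normal(size=n)
external_s3 = np.convolve(internal_s3, h, mode="same")
E3 = np.fft.rfft(external_s3)
predicted_internal_s3 = np.fft.irfft(
    E3 * np.conj(H) / (np.abs(H) ** 2 + 1e-3), n=n
)
```

    The predicted waveform closely tracks the true internal signal here because the toy "chest wall" is a fixed linear filter; the paper's point is that real gallop sounds did not behave this way.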

  7. The Role of the Syllable in Foreign Language Learning: Improving Oral Production through Dual-Coded, Sound-Synchronised, Typographic Annotations

    Science.gov (United States)

    Stenton, Anthony

    2013-01-01

    The CNRS-financed authoring system SWANS (Synchronised Web Authoring Notation System), now used in several CercleS centres, was developed by teams from four laboratories as a personalised learning tool for the purpose of making available knowledge about lexical stress patterns and mother-tongue interference in L2 speech production--helping…

  8. Going on Safari: The Design and Development of an Early Years Literacy iPad Application to Support Letter-Sound Learning

    Science.gov (United States)

    McKenzie, Sophie; Spence, Aaron; Nicholas, Maria

    2018-01-01

    This paper explores the design, development and evaluation of an early childhood literacy iPad application, focusing on the English Alphabet, called "A to Z Safari" trialled in Australian classrooms. A to Z Safari was designed to assist students in the early years of schooling with learning the alphabet and building on their knowledge of…

  9. Sound transmission reduction with intelligent panel systems

    Science.gov (United States)

    Fuller, Chris R.; Clark, Robert L.

    1992-01-01

    Experimental and theoretical investigations are performed of the use of intelligent panel systems to control sound transmission and radiation. An intelligent structure is defined as a structural system with integrated actuators and sensors under the guidance of an adaptive, learning-type controller. The system configuration is based on the Active Structural Acoustic Control (ASAC) concept, in which control inputs are applied directly to the structure to minimize an error quantity related to the radiated sound field. In this case multiple piezoelectric elements are employed as sensors. The importance of optimal sensor shape and location is demonstrated to be of the same order of influence as increasing the number of channels of control.
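    The abstract's adaptive, learning-type controller is not specified in detail; active control systems of this kind are commonly built around LMS-family adaptive filters that drive an error-sensor signal toward zero. The sketch below shows the plain LMS update on a synthetic disturbance path; the plant coefficients and step size are invented, and a real ASAC system would use a filtered-x variant with acoustic error sensors rather than this idealized loop.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)                   # reference disturbance signal
plant = np.array([0.8, -0.4, 0.2])       # invented path from source to sensor
d = np.convolve(x, plant, mode="full")[:n]   # signal at the error sensor

# LMS adaptive filter: adjust weights so the controller output cancels d.
w = np.zeros(3)
mu = 0.01                                 # step size (chosen for stability)
buf = np.zeros(3)                         # last three reference samples
err = np.zeros(n)
for i in range(n):
    buf = np.roll(buf, 1)
    buf[0] = x[i]
    y = w @ buf                           # controller output
    err[i] = d[i] - y                     # residual at the error sensor
    w += mu * err[i] * buf                # LMS weight update

early = float(np.mean(err[:200] ** 2))    # residual power before convergence
late = float(np.mean(err[-200:] ** 2))    # residual power after convergence
```

    Because the toy plant is exactly representable by the three-tap filter, the residual decays essentially to zero; with real structural paths the achievable attenuation is limited by model mismatch and sensor placement, which is the abstract's point about optimal shape and location.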

  10. Physics and music the science of musical sound

    CERN Document Server

    White, Harvey E

    2014-01-01

    Comprehensive and accessible, this foundational text surveys general principles of sound, musical scales, characteristics of instruments, mechanical and electronic recording devices, and many other topics. More than 300 illustrations plus questions, problems, and projects.

  11. Nursing students' self-evaluation using a video recording of foley catheterization: effects on students' competence, communication skills, and learning motivation.

    Science.gov (United States)

    Yoo, Moon Sook; Yoo, Il Young; Lee, Hyejung

    2010-07-01

    An opportunity for a student to evaluate his or her own performance enhances self-awareness and promotes self-directed learning. Using three outcome measures of competency of procedure, communication skills, and learning motivation, the effects of self-evaluation using a video recording of the student's Foley catheterization was investigated in this study. The students in the experimental group (n = 20) evaluated their Foley catheterization performance by reviewing the video recordings of their own performance, whereas students in the control group (n = 20) received written evaluation guidelines only. The results showed that the students in the experimental group had better scores on competency (p communication skills (p performance developed by reviewing a videotape appears to increase the competency of clinical skills in nursing students. Copyright 2010, SLACK Incorporated.

  12. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as visual phenomena–commercial messages, as for instance banner ads that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents...... an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main...... argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content....

  13. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

    This article is an analysis of two sound art performances that took place in June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors suggest exploring the events by approaching them as ‘situations’ (Doherty 2009). The analysis draws on and combines theories from several fields: aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces, in which visual, auditory, performative, social, spatial and durational dimensions become...

  14. Sound as Popular Culture

    DEFF Research Database (Denmark)

    The wide-ranging texts in this book take as their premise the idea that sound is a subject through which popular culture can be analyzed in an innovative way. From an infant’s gurgles over a baby monitor to the roar of the crowd in a stadium to the sub-bass frequencies produced by sound systems in the disco era, sound–not necessarily aestheticized as music–is inextricably part of the many domains of popular culture. Expanding the view taken by many scholars of cultural studies, the contributors consider cultural practices concerning sound not merely as semiotic or signifying processes but as material, physical, perceptual, and sensory processes that integrate a multitude of cultural traditions and forms of knowledge. The chapters discuss conceptual issues as well as terminologies and research methods; analyze historical and contemporary case studies of listening in various sound cultures; and consider...

  15. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...
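    As a rough illustration of the sonification idea (converting data into sound), here is a minimal sketch that maps a data series to pitches and renders it as audio samples. The mapping, note length, and the sample values are hypothetical; this is not the LHC sound project's actual pipeline:

    ```python
    import math

    def sonify(values, f_min=220.0, f_max=880.0, note_dur=0.25, rate=8000):
        """Map each data value linearly to a pitch in [f_min, f_max] and
        render the sequence as a list of audio samples in [-1, 1]."""
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0  # guard against constant data
        samples = []
        for v in values:
            freq = f_min + (v - lo) / span * (f_max - f_min)
            n = int(note_dur * rate)
            samples.extend(math.sin(2 * math.pi * freq * i / rate)
                           for i in range(n))
        return samples

    track = [0.1, 0.5, 0.9, 0.3]  # hypothetical detector readings
    audio = sonify(track)         # 4 notes of 2000 samples each
    ```

    Writing `audio` to a WAV file (e.g. with the standard `wave` module) would make the data pattern audible as a rising-and-falling melody.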

  16. Sound Visualization and Holography

    Science.gov (United States)

    Kock, Winston E.

    1975-01-01

    Describes liquid surface holograms including their application to medicine. Discusses interference and diffraction phenomena using sound wave scanning techniques. Compares focussing by zone plate to holographic image development. (GH)

  17. Modern recording techniques

    CERN Document Server

    Huber, David Miles

    2013-01-01

    As the most popular and authoritative guide to recording, Modern Recording Techniques provides everything you need to master the tools and day-to-day practice of music recording and production. From room acoustics and running a session to mic placement and designing a studio, Modern Recording Techniques will give you a solid grounding in both theory and industry practice. Expanded to include the latest digital audio technology, the 7th edition now includes sections on podcasting, new surround sound formats, and HD audio. If you are just starting out or looking for a step up

  18. Jamming and learning

    DEFF Research Database (Denmark)

    Brinck, Lars

    2017-01-01

    -academy students ‘sitting in’. Fieldwork was documented through sound recordings, diaries, and field notes from participant observation and informal interviews. Analyses apply a situated learning theoretical perspective on the band members’ as well as the students’ participation and reveal important learning to take place. Analyses also indicate that the musicians’ changing participation is analytically inseparable from the changing music itself. The study’s final argument is two-fold: revitalising jamming as a studio-recording practice within popular music highlights important aspects of professional musicians’ interactive communication processes, and transferring this artistic endeavour into an educational practice suggests an increased focus on students ‘sitting in’ with professional bands, and teachers playing alongside students.

  19. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How, then, is it possible to design visual objects that interact with sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material reveal the form...... geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form.

  20. Effects of spectral complexity and sound duration on automatic complex-sound pitch processing in humans - a mismatch negativity study.

    Science.gov (United States)

    Tervaniemi, M; Schröger, E; Saher, M; Näätänen, R

    2000-08-18

    The pitch of a spectrally rich sound is known to be more easily perceived than that of a sinusoidal tone. The present study compared the importance of spectral complexity and sound duration in facilitated pitch discrimination. The mismatch negativity (MMN), which reflects automatic neural discrimination, was recorded to a 2.5% pitch change in pure tones with only one sinusoidal frequency component (500 Hz) and in spectrally rich tones with three (500-1500 Hz) and five (500-2500 Hz) harmonic partials. During the recordings, subjects concentrated on watching a silent movie. In separate blocks, stimuli were 100 or 250 ms in duration. The MMN amplitude was enhanced with both spectrally rich sounds when compared with pure tones. The prolonged sound duration did not significantly enhance the MMN. This suggests that increased spectral rather than temporal information facilitates pitch processing of spectrally rich sounds.
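    Stimuli of this kind can be illustrated by summing harmonic partials of a 500-Hz fundamental. The sketch below is a minimal version; the sample rate, duration, and equal partial amplitudes are assumptions, not the study's exact stimulus parameters:

    ```python
    import math

    def harmonic_tone(f0, n_partials, dur=0.1, rate=16000):
        """Sum the first n_partials harmonics of f0, scaled into [-1, 1]."""
        samples = []
        for i in range(int(dur * rate)):
            t = i / rate
            s = sum(math.sin(2 * math.pi * f0 * k * t)
                    for k in range(1, n_partials + 1))
            samples.append(s / n_partials)  # crude normalisation
        return samples

    pure = harmonic_tone(500.0, 1)  # single 500-Hz sinusoidal component
    rich = harmonic_tone(500.0, 3)  # partials at 500, 1000, and 1500 Hz
    ```

    A 2.5% pitch deviant would simply use `f0 = 512.5` with the same partial structure.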

  1. A Survey of First-Year Biology Student Opinions Regarding Live Lectures and Recorded Lectures as Learning Tools

    Science.gov (United States)

    Simcock, D. C.; Chua, W. H.; Hekman, M.; Levin, M. T.; Brown, S.

    2017-01-01

    A cohort of first-year biology students was surveyed regarding their opinions and viewing habits for live and recorded lectures. Most respondents (87%) attended live lectures as a rule (attenders), with 66% attending more than two-thirds of the lectures. In contrast, only 52% accessed recordings and only 13% viewed more than two-thirds of the…

  2. Perceived loudness of spatially distributed sound sources

    DEFF Research Database (Denmark)

    Song, Woo-keun; Ellermeier, Wolfgang; Minnaar, Pauli

    2005-01-01

    psychoacoustic attributes into account. Therefore, a method for deriving loudness maps was developed in an earlier study [Song, Internoise2004, paper 271]. The present experiment investigates to what extent perceived loudness depends on the distribution of individual sound sources. Three loudspeakers were positioned 1.5 m from the centre of the listener’s head, one straight ahead, and two 10 degrees to the right and left, respectively. Six participants matched the loudness of either one or two simultaneous sounds (narrow-band noises with 1-kHz and 3.15-kHz centre frequencies) to a 2-kHz, 60-dB SPL narrow-band noise placed in the frontal loudspeaker. The two sounds originated either from the central speaker or from the two offset loudspeakers. It turned out that the subjects perceived the noises to be softer when they were distributed in space. In addition, loudness was calculated from the recordings...

  3. Investigating the relationship between pressure force and acoustic waveform in footstep sounds

    DEFF Research Database (Denmark)

    Grani, Francesco; Serafin, Stefania; Götzen, Amalia De

    2013-01-01

    In this paper we present an inquiry into the relationships between audio waveforms and ground reaction force in recorded footstep sounds. In an anechoic room, we recorded several footstep sounds produced while walking on creaking wood and gravel. The recordings were performed by using a pair of sandals embedded with six pressure sensors each. An investigation of the relationships between recorded force and footstep sounds is presented, together with several possible applications of the system.

  4. The effects of using flashcards with reading racetrack to teach letter sounds, sight words, and math facts to elementary students with learning disabilities

    Directory of Open Access Journals (Sweden)

    Rachel Erbey

    2011-07-01

    Full Text Available The purpose of this study was to measure the effects of reading racetrack and flashcards when teaching phonics, sight words, and addition facts. The participants for the sight word and phonics portion of this study were two seven-year-old boys in the second grade. Both participants were diagnosed with a learning disability. The third participant was diagnosed with attention deficit hyperactivity disorder by his pediatrician and with a learning disability and traumatic brain injury by his school’s multi-disciplinary team. The dependent measures were corrects and errors when reading from a first grade level sight word list. Math facts were selected based on a 100-addition-fact test for the third participant. The study demonstrated that racetracks paired with the flashcard intervention improved the students’ number of corrects for each subject-matter area (phonics, sight words, and math facts). However, the results show that some students had more success with it than others. These outcomes clearly warrant further research.

  5. Brain activation during anticipation of sound sequences.

    Science.gov (United States)

    Leaver, Amber M; Van Lare, Jennifer; Zielinski, Brandon; Halpern, Andrea R; Rauschecker, Josef P

    2009-02-25

    Music consists of sound sequences that require integration over time. As we become familiar with music, associations between notes, melodies, and entire symphonic movements become stronger and more complex. These associations can become so tight that, for example, hearing the end of one album track can elicit a robust image of the upcoming track while anticipating it in total silence. Here, we study this predictive "anticipatory imagery" at various stages throughout learning and investigate activity changes in corresponding neural structures using functional magnetic resonance imaging. Anticipatory imagery (in silence) for highly familiar naturalistic music was accompanied by pronounced activity in rostral prefrontal cortex (PFC) and premotor areas. Examining changes in the neural bases of anticipatory imagery during two stages of learning conditional associations between simple melodies, however, demonstrates the importance of fronto-striatal connections, consistent with a role of the basal ganglia in "training" frontal cortex (Pasupathy and Miller, 2005). Another striking change in neural resources during learning was a shift between caudal PFC earlier to rostral PFC later in learning. Our findings regarding musical anticipation and sound sequence learning are highly compatible with studies of motor sequence learning, suggesting common predictive mechanisms in both domains.

  6. Sound & The Society

    DEFF Research Database (Denmark)

    Schulze, Holger

    2014-01-01

    How are those sounds you hear right now socially constructed and evaluated, how are they architecturally conceptualized, and how dependent on urban planning, industrial developments and political decisions are they really? How is your ability to hear intertwined with social interactions and their professional design? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Nina Backmann, Jochen Bonz, Stefan Krebs, Esther Schelander & Holger Schulze

  7. Urban Sound Interfaces

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2012-01-01

    This paper draws on the theories of Michel de Certeau and Gaston Bachelard to discuss how media architecture, in the form of urban sound interfaces, can help us perceive the complexity of the spaces we inhabit, by exploring the history and the narratives of the places in which we live. Three sound works are discussed in relation to the iPod, which is considered as a more private way to explore urban environments, and as a way to control the individual perception of urban spaces.

  8. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    1. Introduction  2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium  3. Predicting the Acoustical Properties of Outdoor Ground Surfaces  4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models  5. Predicting Effects of Source Characteristics on Outdoor Sound  6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects  7. Influence of Source Motion on Ground Effect and Diffraction  8. Predicting Effects of Mixed Impedance Ground  9. Predicting the Performance of Outdoor Noise Barriers  10. Predicting Effects of Vegetation, Trees and Turbulence  11. Analytical Approximations including Ground Effect, Refraction and Turbulence  12. Prediction Schemes  13. Predicting Sound in an Urban Environment.

  9. Sound & The Senses

    DEFF Research Database (Denmark)

    Schulze, Holger

    2012-01-01

    How are those sounds you hear right now technically generated and post-produced, how are they aesthetically conceptualized, and how culturally dependent are they really? How is your ability to hear intertwined with all the other senses and their cultural, biographical and technological construction over time? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Jonathan Sterne, AGF a.k.a. Antye Greie, Jens Gerrit Papenburg & Holger Schulze.

  10. Beacons of Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2018-01-01

    The chapter discusses expectations and imaginations vis-à-vis the concert hall of the twenty-first century. It outlines some of the central historical implications of western culture’s haven for sounding music. Based on the author’s study of the Icelandic concert house Harpa, the chapter considers how these implications, together with the prime mover’s visions, have been transformed as private investors and politicians took over. The chapter furthermore investigates the objectives regarding musical sound and the far-reaching demands concerning acoustics that modern concert halls are required...

  11. Neuroplasticity beyond sounds

    DEFF Research Database (Denmark)

    Reybrouck, Mark; Brattico, Elvira

    2015-01-01

    Capitalizing on neuroscience knowledge of how individuals are affected by the sound environment, we propose to adopt a cybernetic and ecological point of view on the musical aesthetic experience, which includes subprocesses such as feature extraction and integration, early affective reactions and motor actions, style mastering and conceptualization, emotion and proprioception, and evaluation and preference. In this perspective, the role of the listener/composer/performer is seen as that of an active "agent" coping in highly individual ways with the sounds. The findings concerning the neural...

  12. Eliciting Sound Memories.

    Science.gov (United States)

    Harris, Anna

    2015-11-01

    Sensory experiences are often considered triggers of memory, most famously a little French cake dipped in lime blossom tea. Sense memory can also be evoked in public history research through techniques of elicitation. In this article I reflect on different social science methods for eliciting sound memories such as the use of sonic prompts, emplaced interviewing, and sound walks. I include examples from my research on medical listening. The article considers the relevance of this work for the conduct of oral histories, arguing that such methods "break the frame," allowing room for collaborative research connections and insights into the otherwise unarticulatable.

  13. SoleSound

    DEFF Research Database (Denmark)

    Zanotto, Damiano; Turchet, Luca; Boggs, Emily Marie

    2014-01-01

    This paper introduces the design of SoleSound, a wearable system designed to deliver ecological, audio-tactile, underfoot feedback. The device, which primarily targets clinical applications, uses an audio-tactile footstep synthesis engine informed by the readings of pressure and inertial sensors embedded in the footwear to integrate enhanced feedback modalities into the authors' previously developed instrumented footwear. The synthesis models currently implemented in SoleSound simulate different ground surface interactions. Unlike similar devices, the system presented here is fully portable...

  14. Otolith research for Puget Sound

    Science.gov (United States)

    Larsen, K.; Reisenbichler, R.

    2007-01-01

    Otoliths are hard structures located in the brain cavity of fish. These structures are formed by a buildup of calcium carbonate within a gelatinous matrix that produces light and dark bands similar to the growth rings in trees. The width of the bands corresponds to environmental factors such as temperature and food availability. As juvenile salmon encounter different environments in their migration to sea, they produce growth increments of varying widths and visible 'checks' corresponding to times of stress or change. The resulting pattern of band variations and check marks leave a record of fish growth and residence time in each habitat type. This information helps Puget Sound restoration by determining the importance of different habitats for the optimal health and management of different salmon populations. The USGS Western Fisheries Research Center (WFRC) provides otolith research findings directly to resource managers who put this information to work.

  15. Automatic Sound Generation for Spherical Objects Hitting Straight Beams Based on Physical Models.

    Science.gov (United States)

    Rauterberg, M.; And Others

    Sounds are the result of one or several interactions between one or several objects at a certain place and in a certain environment; the attributes of every interaction influence the generated sound. The following factors influence users in human/computer interaction: the organization of the learning environment, the content of the learning tasks,…

  16. Heart sounds analysis using probability assessment

    Czech Academy of Sciences Publication Activity Database

    Plešinger, Filip; Viščor, Ivo; Halámek, Josef; Jurčo, Juraj; Jurák, Pavel

    2017-01-01

    Roč. 38, č. 8 (2017), s. 1685-1700 ISSN 0967-3334 R&D Projects: GA ČR GAP102/12/2034; GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01 Institutional support: RVO:68081731 Keywords : heart sounds * FFT * machine learning * signal averaging * probability assessment Subject RIV: FS - Medical Facilities ; Equipment OBOR OECD: Medical engineering Impact factor: 2.058, year: 2016

  17. Do top predators cue on sound production by mesopelagic prey?

    Science.gov (United States)

    Baumann-Pickering, S.; Checkley, D. M., Jr.; Demer, D. A.

    2016-02-01

    Deep-scattering layer (DSL) organisms, comprising a variety of mesopelagic fishes, squids, siphonophores, crustaceans, and other invertebrates, are preferred prey for numerous large marine predators, e.g. cetaceans, seabirds, and fishes. Some DSL species migrate from depth during daylight to feed near the surface at night, transitioning during dusk and dawn. We investigated whether any DSL organisms create sound, particularly during the crepuscular periods. Over several nights in summer 2015, underwater sound was recorded in the San Diego Trough using a high-frequency acoustic recording package (HARP, 10 Hz to 100 kHz), suspended from a drifting surface float. Acoustic backscatter from the DSL was monitored nearby using a calibrated multiple-frequency (38, 70, 120, and 200 kHz) split-beam echosounder (Simrad EK60) on a small boat. DSL organisms produced sound, between 300 and 1000 Hz, and the received levels were highest when the animals migrated past the recorder during ascent and descent. The DSL are globally present, so the observed acoustic phenomenon, if also ubiquitous, has wide-reaching implications. Sound travels farther than light or chemicals and thus can be sensed at greater distances by predators, prey, and mates. If sound is a characteristic feature of pelagic ecosystems, it likely plays a role in predator-prey relationships and overall ecosystem dynamics. Our new finding inspires numerous questions such as: Which, how, and why have DSL organisms evolved to create sound, and for what and under what circumstances do they use it? Is sound production by DSL organisms truly ubiquitous, or does it depend on the local environment and species composition? How may sound production and perception be adapted to a changing environment? Do predators react to changes in sound? Can sound be used to quantify the composition of mixed-species assemblages, component densities and abundances, and hence be used in stock assessment or predictive modeling?

  18. 12 CFR 1732.7 - Record hold.

    Science.gov (United States)

    2010-01-01

    ... Banking OFFICE OF FEDERAL HOUSING ENTERPRISE OVERSIGHT, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT SAFETY AND SOUNDNESS RECORD RETENTION Record Retention Program § 1732.7 Record hold. (a) Definition. For... Enterprise or OFHEO that the Enterprise is to retain records relating to a particular issue in connection...

  19. Pectoral sound generation in the blue catfish Ictalurus furcatus.

    Science.gov (United States)

    Mohajer, Yasha; Ghahramani, Zachary; Fine, Michael L

    2015-03-01

    Catfishes produce pectoral stridulatory sounds by "jerk" movements that rub ridges on the dorsal process against the cleithrum. We recorded sound synchronized with high-speed video to investigate the hypothesis that blue catfish Ictalurus furcatus produce sounds by a slip-stick mechanism, previously described only in invertebrates. Blue catfish produce a variably paced series of sound pulses during abduction sweeps (pulsers) although some individuals (sliders) form longer duration sound units (slides) interspersed with pulses. Typical pulser sounds are evoked by short 1-2 ms movements with a rotation of 2°-3°. Jerks excite sounds that increase in amplitude after motion stops, suggesting constructive interference, which decays before the next jerk. Longer contact of the ridges produces a more steady-state sound in slides. Pulse pattern during stridulation is determined by pauses without movement: the spine moves during about 14 % of the abduction sweep in pulsers (~45 % in sliders) although movement appears continuous to the human eye. Spine rotation parameters do not predict pulse amplitude, but amplitude correlates with pause duration suggesting that force between the dorsal process and cleithrum increases with longer pauses. Sound production, stimulated by a series of rapid movements that set the pectoral girdle into resonance, is caused by a slip-stick mechanism.

  20. Usefulness of bowel sound auscultation: a prospective evaluation.

    Science.gov (United States)

    Felder, Seth; Margel, David; Murrell, Zuri; Fleshner, Phillip

    2014-01-01

    Although the auscultation of bowel sounds is considered an essential component of an adequate physical examination, its clinical value remains largely unstudied and subjective. The aim of this study was to determine whether an accurate diagnosis of normal controls, mechanical small bowel obstruction (SBO), or postoperative ileus (POI) is possible based on bowel sound characteristics. Bowel sounds were prospectively recorded from patients with normal gastrointestinal motility, patients with SBO diagnosed by computed tomography and confirmed at surgery, and patients with POI diagnosed by clinical symptoms and a computed tomography without a transition point. Using an electronic stethoscope, bowel sounds of healthy volunteers (n = 177), patients with SBO (n = 19), and patients with POI (n = 15) were recorded. A total of 10 recordings randomly selected from each category were replayed through speakers, with 15 of the recordings duplicated, to surgical and internal medicine clinicians (n = 41) blinded to the clinical scenario; the clinicians were instructed to categorize each recording as normal, obstructed, ileus, or not sure. The sensitivity, positive predictive value, and intra-rater variability were determined based on the clinicians' ability to properly categorize the recordings when blinded to additional clinical information. Secondary outcomes were the clinicians' perceived level of expertise in interpreting bowel sounds. The overall sensitivity for normal, SBO, and POI recordings was 32%, 22%, and 22%, respectively. The positive predictive value of normal, SBO, and POI recordings was 23%, 28%, and 44%, respectively. Intra-rater reliability of duplicated recordings was 59%, 52%, and 53% for normal, SBO, and POI, respectively. No statistically significant differences were found between the surgical and internal medicine clinicians for sensitivity, positive predictive value, or intra-rater variability. Overall, 44% of clinicians reported that they rarely listened...
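    The sensitivity and positive predictive value reported above follow the standard definitions computed from diagnostic counts. A minimal sketch, using hypothetical tallies rather than the study's raw data:

    ```python
    def sensitivity(tp, fn):
        """Share of recordings of a class that raters labelled correctly."""
        return tp / (tp + fn)

    def ppv(tp, fp):
        """Share of a rater's calls for a class that were actually correct."""
        return tp / (tp + fp)

    # Hypothetical tallies for the 'normal' class (not the study's raw counts):
    # 32 of 100 normal recordings identified; 107 incorrect 'normal' calls.
    print(round(sensitivity(32, 68), 2))   # → 0.32
    print(round(ppv(32, 107), 2))          # → 0.23
    ```

    With counts in these proportions, the definitions reproduce the 32% sensitivity and 23% positive predictive value reported for normal recordings.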

  1. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    Full Text Available The relationship between meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages we find commonalities among sound shapes for words referring to same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  2. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. It is an attempt to determine techniques of art analysis for approaches to the study of video games, including the aesthetics of their sounds. The article offers a range of research methods, considering video game scoring as a contemporary creative practice.

  3. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.

  4. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common for a physical system to resonate at a particular frequency that depends on physical parameters which may change in time. Often, one would like to track this signal automatically as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to use standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.
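    The demodulation step a lock-in amplifier performs can be sketched digitally: multiply the input by quadrature references at the tracked frequency and average. This toy single-frequency version (not the authors' instrument) recovers a tone's amplitude from a noisy mixture:

    ```python
    import math

    def lock_in(signal, ref_freq, rate):
        """Digital lock-in: mix with quadrature references at ref_freq and
        average, recovering the amplitude of that frequency component."""
        n = len(signal)
        x = sum(s * math.cos(2 * math.pi * ref_freq * i / rate)
                for i, s in enumerate(signal)) / n
        y = sum(s * math.sin(2 * math.pi * ref_freq * i / rate)
                for i, s in enumerate(signal)) / n
        return 2.0 * math.hypot(x, y)

    rate = 10000
    # a 0.5-amplitude 700-Hz tone under a louder 50-Hz interferer
    sig = [0.5 * math.sin(2 * math.pi * 700 * i / rate)
           + 1.0 * math.sin(2 * math.pi * 50 * i / rate) for i in range(rate)]
    print(round(lock_in(sig, 700.0, rate), 2))   # → 0.5
    ```

    Because the references are orthogonal to everything except the 700-Hz component, the averaging rejects the interferer; a tracking system would additionally sweep `ref_freq` to follow the resonance as it drifts.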

  5. See This Sound

    DEFF Research Database (Denmark)

    Kristensen, Thomas Bjørnsten

    2009-01-01

    Review of the exhibition See This Sound at the Lentos Kunstmuseum Linz, Austria, which marks the provisional culmination of a collaboration between the Lentos Kunstmuseum and the Ludwig Boltzmann Institute Media.Art.Research. Beyond the exhibition itself, the collaboration is conceived as an ambitious, interdisciplinary...

  6. Photoacoustic Sounds from Meteors.

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, Richard E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tencer, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sweatt, William C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hogan, Roy E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Spurny, Pavel [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic)

    2015-03-01

    High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer’s ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  7. Sound of Stockholm

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2013-01-01

    With only four years behind it, Sound of Stockholm is a relative newcomer to the international festival landscape. The festival reportedly grew out of a greater or lesser frustration that the various associations and organizations of the Swedish experimental music scene were treading on each other's turf, and...

  8. Making Sense of Sound

    Science.gov (United States)

    Menon, Deepika; Lankford, Deanna

    2016-01-01

    From the earliest days of their lives, children are exposed to all kinds of sound, from soft, comforting voices to the frightening rumble of thunder. Consequently, children develop their own naïve explanations largely based upon their experiences with phenomena encountered every day. When new information does not support existing conceptions,…

  9. The Sounds of Metal

    DEFF Research Database (Denmark)

    Grund, Cynthia M.

    2015-01-01

    Two, I propose that this framework allows for at least a theoretical distinction between the way in which extreme metal – e.g. black metal, doom metal, funeral doom metal, death metal – relates to its sound as music and the way in which much other music may be conceived of as being constituted...

  10. The Universe of Sound

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Sound Scultor, Bill Fontana, the second winner of the Prix Ars Electronica Collide@CERN residency award, and his science inspiration partner, CERN cosmologist Subodh Patil, present their work in art and science at the CERN Globe of Science and Innovation on 4 July 2013 at 19:00.

  11. Urban Sound Ecologies

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh; Samson, Kristine

    2013-01-01

    . The article concludes that the ways in which recent sound installations work with urban ecologies vary. While two of the examples blend into the urban environment, the other transfers the concert format and its mode of listening to urban space. Last, and in accordance with recent soundscape research, we point...

  12. A Brief Discussion on the Career Orientation of Higher Vocational Sound Recording Graduates Engaged in Music Editing

    Institute of Scientific and Technical Information of China (English)

    郑晓钰

    2015-01-01

    This paper analyzes the career orientation of higher vocational sound recording graduates engaged in music editing. Starting from the individual skills of the graduates, it points out that different skills lead to different employment paths and different types of music editing work, and proposes an emphasis on cultivating both "specialized" and "interdisciplinary" talents with professional skills.

  13. Effects of sounds of locomotion on speech perception

    Directory of Open Access Journals (Sweden)

    Matz Larsson

    2015-01-01

    Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in pace or walking out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal (the "just follow conversation" or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.

  14. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," which is a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  15. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  16. Sounds of a Star

    Science.gov (United States)

    2001-06-01

    colours show element displacements in opposite directions. Geologists monitor how seismic waves generated by earthquakes propagate through the Earth, and thus learn about the inner structure of our planet. The same technique works for stars. The Sun, our nearest star and a typical middle-aged member of its class, has been investigated in this way since the 1960's. With "solar seismology", astronomers have been able to learn much about the inner parts of the star, and not only the outer layers normally visible to the telescopes. In the Sun, heat is bubbling up from the central regions where enormous amounts of energy are created by nuclear reactions. In the so-called convective zone, the gas is virtually boiling, and hot gas-bubbles are rising with a speed that is close to that of sound. Much like you can hear when water starts to boil, the turbulent convection in the Sun creates noise. These sound waves then propagate through the solar interior and are reflected on the surface, making it oscillate. This "ringing" is well observed in the Sun, where the amplitude and frequency of the oscillations provide astronomers with plenty of information about the physical conditions in the solar interior. From the Sun to the stars There is every reason to believe that our Sun is a quite normal star of its type. Other stars that are similar to the Sun are therefore likely to pulsate in much the same way as the Sun. The search for such oscillations in other solar-like stars has, however, been a long and difficult one. The problem is simply that the pulsations are tiny, so very great precision is needed in the measurements. However, the last few years have seen considerable progress in asteroseismology, and François Bouchy and Fabien Carrier from the Geneva Observatory have now been able to detect unambiguous acoustic oscillations in the solar-twin star, Alpha Centauri A. 
The bright and nearby star Alpha Centauri Alpha Centauri (Alpha Cen) [1] is the brightest star in the constellation

  17. Recognition and characterization of unstructured environmental sounds

    Science.gov (United States)

    Chu, Selina

    2011-12-01

    be used for realistic environmental sound. Natural unstructured environment sounds contain a large variety of sounds, which are in fact noise-like and are not effectively modeled by Mel-frequency cepstral coefficients (MFCCs) or other commonly-used audio features, e.g. energy, zero-crossing, etc. Due to the lack of appropriate features suitable for environmental audio, and to achieve a more effective representation, I proposed a specialized feature extraction algorithm for environmental sounds that utilizes the matching pursuit (MP) algorithm to learn the inherent structure of each type of sound, which we called MP-features. MP-features have been shown to capture and represent sounds from different sources and different ranges where frequency-domain features (e.g., MFCCs) fail, and can be advantageous when combined with MFCCs to improve the overall performance. The third component leads to our investigation of modeling and detecting the background audio. One of the goals of this research is to characterize an environment. Since many events blend into the background, I wanted to look for a way to achieve a general model for any particular environment. Once we have an idea of the background, it will enable us to identify foreground events even if we haven't seen these events before. Therefore, the next step is to investigate learning the audio background model for each environment type, despite the occurrences of different foreground events. In this work, I presented a framework for robust audio background modeling, which includes learning the models for prediction, data knowledge and persistent characteristics of the environment. This approach has the ability to model the background and detect foreground events, as well as the ability to verify whether the predicted background is indeed the background or a foreground event that protracts for a longer period of time. In this work, I also investigated the use of a semi-supervised learning technique to
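
    The matching pursuit step behind MP-features can be sketched with a toy dictionary: greedily pick the atom best correlated with the residual, subtract its contribution, and repeat. Everything below (the cosine dictionary, sizes, and names) is an illustrative assumption, not the thesis's actual dictionary or feature set.

```python
import math

# Bare-bones matching pursuit over a dictionary of unit-norm atoms.
def matching_pursuit(signal, dictionary, n_atoms):
    residual = list(signal)
    chosen = []
    for _ in range(n_atoms):
        # Pick the atom with the largest (absolute) correlation to the residual
        best = max(range(len(dictionary)),
                   key=lambda i: abs(sum(a * r for a, r in zip(dictionary[i], residual))))
        coef = sum(a * r for a, r in zip(dictionary[best], residual))
        # Subtract the chosen atom's contribution and record (index, coefficient)
        residual = [r - coef * a for r, a in zip(residual, dictionary[best])]
        chosen.append((best, coef))
    return chosen, residual

n = 64
def unit_cos(freq_bin):
    atom = [math.cos(2 * math.pi * freq_bin * t / n) for t in range(n)]
    norm = math.sqrt(sum(a * a for a in atom))
    return [a / norm for a in atom]

dictionary = [unit_cos(k) for k in range(1, 9)]
sig = [3.0 * a for a in dictionary[4]]  # a single dictionary atom, scaled by 3
atoms, res = matching_pursuit(sig, dictionary, 1)
```

    The selected (index, coefficient) pairs are what a feature scheme like MP-features would summarize; for this one-atom signal, a single iteration drives the residual to (numerically) zero.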

  18. Product sounds : Fundamentals and application

    NARCIS (Netherlands)

    Ozcan-Vieira, E.

    2008-01-01

    Products are ubiquitous, so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about

  19. Sonic mediations: body, sound, technology

    NARCIS (Netherlands)

    Birdsall, C.; Enns, A.

    2008-01-01

    Sonic Mediations: Body, Sound, Technology is a collection of original essays that represents an invaluable contribution to the burgeoning field of sound studies. While sound is often posited as having a bridging function, as a passive in-between, this volume invites readers to rethink the concept of

  20. System for actively reducing sound

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2005-01-01

    A system for actively reducing sound from a primary noise source, such as traffic noise, comprising: a loudspeaker connector for connecting to at least one loudspeaker for generating anti-sound for reducing said noisy sound; a microphone connector for connecting to at least a first microphone placed

  1. Frequency shifting approach towards textual transcription of heartbeat sounds.

    Science.gov (United States)

    Arvin, Farshad; Doraisamy, Shyamala; Safar Khorasani, Ehsan

    2011-10-04

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows recording a continuous heart sound stream in a text format which can be stored in very little memory in comparison with other audio formats. In addition, text-based data allow applying indexing and searching techniques to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over a long duration of time. This paper proposes a frequency shifting method in order to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique was tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.

  2. Frequency shifting approach towards textual transcription of heartbeat sounds

    Directory of Open Access Journals (Sweden)

    Safar Khorasani Ehsan

    2011-10-01

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows recording a continuous heart sound stream in a text format which can be stored in very little memory in comparison with other audio formats. In addition, text-based data allow applying indexing and searching techniques to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over a long duration of time. This paper proposes a frequency shifting method in order to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique was tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.

  3. Wood for sound.

    Science.gov (United States)

    Wegst, Ulrike G K

    2006-10-01

    The unique mechanical and acoustical properties of wood and its aesthetic appeal still make it the material of choice for musical instruments and the interior of concert halls. Worldwide, several hundred wood species are available for making wind, string, or percussion instruments. Over generations, first by trial and error and more recently by scientific approach, the most appropriate species were found for each instrument and application. Using material property charts on which acoustic properties such as the speed of sound, the characteristic impedance, the sound radiation coefficient, and the loss coefficient are plotted against one another for woods, we analyze and explain why spruce is the preferred choice for soundboards, why tropical species are favored for xylophone bars and woodwind instruments, why violinists still prefer pernambuco over other species as a bow material, and why hornbeam and birch are used in piano actions.
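
    The acoustic quantities named above are simple functions of a wood's Young's modulus E along the grain and its density ρ. A sketch, with rough spruce-like values assumed purely for illustration:

```python
import math

# Standard acoustic material indices used to compare tonewoods.
def acoustic_properties(E_pa, rho):
    c = math.sqrt(E_pa / rho)      # speed of sound along the grain, m/s
    z = math.sqrt(E_pa * rho)      # characteristic impedance, kg/(m^2 s)
    R = math.sqrt(E_pa / rho**3)   # sound radiation coefficient, m^4/(kg s)
    return c, z, R

# Rough literature-style figures for spruce along the grain (illustrative only):
# E ~ 10 GPa, density ~ 400 kg/m^3
c, z, R = acoustic_properties(E_pa=10e9, rho=400.0)
```

    The high radiation coefficient of a low-density, stiff wood like spruce is what makes it efficient at turning string vibration into radiated sound, which is the chart-based argument the paper develops.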

  4. Sounds in context

    DEFF Research Database (Denmark)

    Weed, Ethan

    A sound is never just a sound. It is becoming increasingly clear that auditory processing is best thought of not as a one-way afferent stream, but rather as an ongoing interaction between interior processes and the environment. Even the earliest stages of auditory processing in the nervous system… time-course of contextual influence on auditory processing in three different paradigms: a simple mismatch negativity paradigm with tones of differing pitch, a multi-feature mismatch negativity paradigm in which tones were embedded in a complex musical context, and a cross-modal paradigm, in which… auditory processing of emotional speech was modulated by an accompanying visual context. I then discuss these results in terms of their implication for how we conceive of the auditory processing stream…

  5. Sound for Health

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    From astronomy to biomedical sciences: music and sound as tools for scientific investigation Music and science are probably two of the most intrinsically linked disciplines in the spectrum of human knowledge. Science and technology have revolutionised the way artists work, interact, and create. The impact of innovative materials, new communication media, more powerful computers, and faster networks on the creative process is evident: we all can become artists in the digital era. What is less known is that the arts, and music in particular, are having a profound impact on the way scientists operate and think. From the early experiments by Kepler to the modern data sonification applications in medicine, sound and music are playing an increasingly crucial role in supporting science and driving innovation. In this talk, Dr. Domenico Vicinanza will highlight the complementarity and the natural synergy between music and science, with specific reference to biomedical sciences. Dr. Vicinanza will take t...

  6. Sound in Ergonomics

    Directory of Open Access Journals (Sweden)

    Jebreil Seraji

    1999-03-01

    The word "Ergonomics" is composed of two separate parts, "ergo" and "nomos", and means human factors engineering. Indeed, ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance. It draws on different sciences such as anatomy and physiology, anthropometry, engineering, psychology, biophysics and biochemistry for different ergonomic purposes. Sound, when it amounts to noise pollution, can disturb this balance in human life. Industrial noise caused by factories, traffic jams, media, and modern human activity can affect the health of society. Here we aim to discuss sound from an ergonomic point of view.

  7. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as quadratic, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
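
    The harmonic product spectrum named in the abstract can be sketched compactly: downsample the magnitude spectrum by successive integer factors and multiply, so that the harmonics reinforce the fundamental. The code below is a generic illustration (direct DFT, short synthetic tone, made-up names), not the authors' implementation.

```python
import math

def hps_pitch(signal, sr, n_harmonics=3):
    """Estimate pitch (Hz) via the harmonic product spectrum."""
    n = len(signal)
    # Magnitude spectrum via a direct DFT (fine for a short illustrative frame)
    mags = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    # Multiply downsampled copies of the spectrum: harmonics at 2f, 3f, ...
    # all contribute to the bin of the fundamental f
    limit = len(mags) // n_harmonics
    hps = [math.prod(mags[k * h] for h in range(1, n_harmonics + 1))
           for k in range(limit)]
    k_best = max(range(1, limit), key=lambda k: hps[k])
    return k_best * sr / n

# Synthetic tone: 200 Hz fundamental with two overtones, 50 ms at 8 kHz
sr = 8000
tone = [sum(math.sin(2 * math.pi * 200 * h * t / sr) / h for h in (1, 2, 3))
        for t in range(400)]
```

    A pitch error measure like the one the abstract mentions could then compare the energy at the estimated harmonics against the rest of the spectrum.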

  8. Applying cybernetic technology to diagnose human pulmonary sounds.

    Science.gov (United States)

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) are greater than 120 Hz and the human ear is not sensitive to low frequencies, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and a data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, and the PS signals were decomposed into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vectors of a neural network. We proposed a 2-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy by using a hybrid neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To expand traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waveforms, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.

  9. Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter

    Science.gov (United States)

    Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.

    2017-04-01

    The development of science and technology in the field of healthcare increasingly provides convenience in diagnosing respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and then stored in a sound-format file. These breath sounds are analyzed by health practitioners to diagnose the symptoms of disease or illness. However, the breath sounds are not free from interference signals. Therefore, a noise filter, or signal interference reduction system, is required so that the breath sound components which contain the information signal can be clarified. In this study, we designed a filter called a wavelet transform based filter. The filter designed in this study uses a Daubechies wavelet with four wavelet transform coefficients. Based on testing with the ten types of breath sound data, the largest SNR, 74.3685 dB, was obtained for bronchial sounds.
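
    The wavelet-shrinkage idea behind such a filter can be sketched in a few lines. For brevity the sketch below uses the Haar wavelet and a single decomposition level rather than the study's four-coefficient Daubechies filter, and all names and the synthetic signal are illustrative; the SNR-in-dB formula matches the kind of evaluation the abstract reports.

```python
import math

def haar_denoise(x, threshold):
    """One-level Haar wavelet decomposition, soft-threshold the details, reconstruct."""
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / s for a, b in zip(x[::2], x[1::2])]
    # Soft-threshold the detail band, where broadband noise concentrates
    detail = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def snr_db(clean, estimate):
    """Signal-to-noise ratio in decibels of `estimate` relative to `clean`."""
    sig = sum(c * c for c in clean)
    err = sum((c - e) ** 2 for c, e in zip(clean, estimate))
    return 10.0 * math.log10(sig / err)

# Synthetic "breath sound": a slow sine corrupted by alternating-sign noise
clean = [math.sin(2 * math.pi * 5 * t / 256) for t in range(256)]
noisy = [c + (0.1 if t % 2 == 0 else -0.1) for t, c in enumerate(clean)]
denoised = haar_denoise(noisy, threshold=0.15)
```

    Real filters cascade this over several levels and choose the threshold from the estimated noise level, but the shrink-the-details principle is the same.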

  10. A framework for automatic heart sound analysis without segmentation

    Directory of Open Access Journals (Sweden)

    Tungpimolrut Kanokvate

    2011-02-01

    Abstract Background A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several online databases and recorded with an electronic stethoscope. The geometric mean was used as a performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with a 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion The proposed method showed promising results and high noise robustness for a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients, then further evaluating the method based on this new training set.
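
    The cycle-length step can be illustrated with a toy envelope: the autocorrelation of a periodic envelope peaks at a lag of one cardiac cycle, so the cycle length falls out without labeling individual heart sounds. All names and the synthetic envelope below are illustrative, not the authors' pipeline.

```python
def cycle_length(envelope, min_lag):
    """Return the lag (in samples) that maximizes the autocorrelation."""
    n = len(envelope)
    mean = sum(envelope) / n
    x = [e - mean for e in envelope]

    def acf(lag):
        return sum(x[t] * x[t + lag] for t in range(n - lag))

    # min_lag skips the trivial peak near zero lag
    return max(range(min_lag, n // 2), key=acf)

# Synthetic heart-sound envelope: one bump per 50-sample cycle, 8 cycles
env = [1.0 if t % 50 < 5 else 0.0 for t in range(400)]
```

    With the cycle length known, equal numbers of cycles can be cut from recordings with different heart rates, which is the normalization the abstract describes.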

  11. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  12. Physiological and psychological assessment of sound

    Science.gov (United States)

    Yanagihashi, R.; Ohira, Masayoshi; Kimura, Teiji; Fujiwara, Takayuki

    The psycho-physiological effects of several sound stimulations were investigated to evaluate the relationship between a psychological parameter, such as subjective perception, and a physiological parameter, such as heart rate variability (HRV). Eight female students aged 21-22 years were tested. An electrocardiogram (ECG) and the movement of the chest wall, for estimating respiratory rate, were recorded during three different sound stimulations: (1) music provided by a synthesizer (condition A); (2) bird twitters (condition B); and (3) mechanical sounds (condition C). The percentage powers of the low-frequency (LF; 0.05-0.15 Hz) and high-frequency (HF; 0.15-0.40 Hz) components of the HRV (LF%, HF%) were assessed by a frequency analysis of time-series data for 5 min obtained from R-R intervals in the ECG. Quantitative assessment of subjective perception was also obtained using a visual analog scale (VAS). The HF% and the VAS value for comfort in C were significantly lower than in A and/or B. The respiratory rate and the VAS value for awakening in C were significantly higher than in A and/or B. There was a significant correlation between the HF% and the value of the VAS, and between the respiratory rate and the value of the VAS. These results indicate that mechanical sounds similar to C inhibit the parasympathetic nervous system and promote a feeling that is unpleasant but alert, also suggesting that the HRV reflects subjective perception.
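
    The LF% and HF% measures can be sketched as band powers of an evenly resampled R-R series (real HRV pipelines first interpolate the irregularly spaced tachogram). The band edges below follow the abstract; everything else, including the synthetic series, is an illustrative assumption.

```python
import math

def band_powers(rr, fs):
    """Return (LF%, HF%) of an evenly sampled R-R series `rr` at rate `fs` Hz."""
    n = len(rr)
    mean = sum(rr) / n
    x = [v - mean for v in rr]
    lf = hf = total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        total += p
        if 0.05 <= f < 0.15:       # low-frequency band
            lf += p
        elif 0.15 <= f <= 0.40:    # high-frequency band
            hf += p
    return 100.0 * lf / total, 100.0 * hf / total

# Synthetic tachogram: 1 s mean R-R interval modulated purely at 0.1 Hz (LF band)
fs = 4.0
rr = [1.0 + 0.05 * math.sin(2 * math.pi * 0.1 * t / fs) for t in range(200)]
lf_pct, hf_pct = band_powers(rr, fs)
```

    A purely 0.1 Hz modulation puts essentially all power in the LF band, whereas respiration-locked modulation (~0.25 Hz) would land in HF, the band the study links to parasympathetic activity.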

  13. The sound symbolism bootstrapping hypothesis for language acquisition and language evolution.

    Science.gov (United States)

    Imai, Mutsumi; Kita, Sotaro

    2014-09-19

    Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  14. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  15. Extracting meaning from audio signals - a machine learning approach

    DEFF Research Database (Denmark)

    Larsen, Jan

    2007-01-01

    * Machine learning framework for sound search * Genre classification * Music and audio separation * Wind noise suppression

  16. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound

  17. Digitizing Sound: How Can Sound Waves be Turned into Ones and Zeros?

    Science.gov (United States)

    Vick, Matthew

    2010-10-01

    From MP3 players to cell phones to computer games, we're surrounded by a constant stream of ones and zeros. Do we really need to know how this technology works? While nobody can understand everything, digital technology is increasingly making our lives a collection of "black boxes" that we can use but have no idea how they work. Pursuing scientific literacy should propel us to open up a few of these metaphorical boxes. High school physics offers opportunities to connect the curriculum to sports, art, music, and electricity, but it also offers connections to computers and digital music. Learning activities about digitizing sounds offer wonderful opportunities for technology integration and student problem solving. I used this series of lessons in high school physics after teaching about waves and sound but before optics and total internal reflection so that the concepts could be further extended when learning about fiber optics.
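    The sampling-and-quantization idea at the heart of such lessons fits in a few lines. The following is a hypothetical classroom-style sketch, not material from the article; the rates and bit depth are chosen for readability, not audio quality:

```python
import numpy as np

# Classroom-style sketch of analog-to-digital conversion: sample a 440 Hz
# tone at 8 kHz, then quantize each sample to 4 bits.
fs = 8000          # sampling rate (samples per second)
bits = 4           # quantizer resolution
duration = 0.01    # seconds of signal

t = np.arange(int(fs * duration)) / fs       # discrete sample instants
analog = np.sin(2 * np.pi * 440 * t)         # the "continuous" wave, in [-1, 1]

levels = 2 ** bits                           # 16 quantization levels
codes = np.round((analog + 1) / 2 * (levels - 1)).astype(int)   # integers 0..15
binary = [format(c, "04b") for c in codes]   # the ones and zeros

print(binary[:5])   # the first few 4-bit samples
```

    Playing the quantized signal back and comparing it with the original at different bit depths makes the trade-off between resolution and data size audible as well as visible.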

  18. Onboard Acoustic Recording from Diving Elephant Seals

    National Research Council Canada - National Science Library

    Fletcher, Stacia

    1996-01-01

    The aim of this project was to record sounds impinging on free-ranging northern elephant seals, Mirounga angustirostris, a first step in determining the importance of LFS to these animals as they dive...

  19. Do medical students generate sound arguments during small group discussions in problem-based learning?: an analysis of preclinical medical students' argumentation according to a framework of hypothetico-deductive reasoning.

    Science.gov (United States)

    Ju, Hyunjung; Choi, Ikseon; Yoon, Bo Young

    2017-06-01

    Hypothetico-deductive reasoning (HDR) is an essential learning activity and a learning outcome in problem-based learning (PBL). It is important for medical students to engage in the HDR process through argumentation during their small group discussions in PBL. This study aimed to analyze the quality of preclinical medical students' argumentation according to each phase of HDR in PBL. Participants were 15 first-year preclinical students divided into two small groups. A set of three 2-hour discussion sessions from each of the two groups during a 1-week-long PBL unit on the cardiovascular system was audio-recorded. The arguments constructed by the students were analyzed using a coding scheme, which included four types of argumentation (Type 0: incomplete, Type 1: claim only, Type 2: claim with data, and Type 3: claim with data and warrant). The mean frequency of each type of argumentation according to each HDR phase across the two small groups was calculated. During small group discussions, Type 1 arguments were generated most often (frequency=120.5, 43%), whereas the least common were Type 3 arguments (frequency=24.5, 8.7%) among the four types of arguments. The results of this study revealed that the students predominantly made claims without proper justifications; they often omitted data for supporting their claims or did not provide warrants to connect the claims and data. The findings suggest instructional interventions to enhance the quality of medical students' arguments in PBL, including promoting students' comprehension of the structure of argumentation for HDR processes and questioning.

  20. The role of the intrinsic cholinergic system of the striatum: What have we learned from TAN recordings in behaving animals?

    Science.gov (United States)

    Apicella, Paul

    2017-09-30

    Cholinergic interneurons provide rich local innervation of the striatum and play an important role in controlling behavior, as evidenced by the variety of movement and psychiatric disorders linked to disrupted striatal cholinergic transmission. Much progress has been made in recent years regarding our understanding of how these interneurons contribute to the processing of information in the striatum. In particular, investigation of the activity of presumed striatal cholinergic interneurons, identified as tonically active neurons or TANs in behaving animals, has pointed to their role in the signaling and learning of the motivational relevance of environmental stimuli. Although the bulk of this work has been conducted in monkeys, several studies have also been carried out in behaving rats, but information remains rather disparate across studies and it is still questionable whether rodent TANs correspond to TANs described in monkeys. Consequently, our current understanding of the function of cholinergic transmission in the striatum is challenged by the rapidly growing, but often confusing literature on the relationship between TAN activity and specific behaviors. As regards the precise nature of the information conveyed by the cholinergic TANs, a recent influential view emphasized that these local circuit neurons may play a special role in the processing of contextual information that is important for reinforcement learning and selection of appropriate actions. This review provides a summary of recent progress in TAN physiology from which it is proposed that striatal cholinergic interneurons are crucial elements for flexible switching of behaviors under changing environmental conditions. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  1. Magnetospheric radio sounding

    International Nuclear Information System (INIS)

    Ondoh, Tadanori; Nakamura, Yoshikatsu; Koseki, Teruo; Watanabe, Sigeaki; Murakami, Toshimitsu

    1977-01-01

    Radio sounding of the plasmapause from a geostationary satellite has been investigated to observe time variations of the plasmapause structure and effects of the plasma convection. In the equatorial plane, the plasmapause is located, on average, at 4 R_E (R_E: Earth radius), and the plasma density drops outwards from 10^2-10^3/cm^3 to 1-10/cm^3 over a plasmapause width of about 600 km. Plasmagrams showing the relation between virtual range and sounding frequency are computed by ray tracing of LF-VLF waves transmitted from a geostationary satellite, using model distributions of the electron density in the vicinity of the plasmapause. The general features of the plasmagrams are similar to topside ionograms. The plasmagram has no penetration frequency such as f0F2, but its virtual range increases rapidly with frequency above 100 kHz, since the distance between the satellite and the wave reflection point increases rapidly as the electron density rises inside the plasmapause. The plasmapause sounder on a geostationary satellite has been designed by taking account of an average propagation distance of 2 x 2.6 R_E between the satellite (6.6 R_E) and the plasmapause (4.0 R_E), background noise, range resolution, power consumption, and a receiver S/N of 10 dB. The 13-bit Barker-coded pulses of 0.5 ms baud length should be transmitted in a direction parallel to the orbital plane at frequencies from 10 kHz to 2 MHz with a pulse interval of 0.5 s. Transmitter peak powers of 70 W and 700 W are required, respectively, in geomagnetically quiet and disturbed (strong nonthermal continuum emissions) conditions for a 400 m cylindrical dipole of 1.2 cm diameter on the geostationary satellite. This technique will open a new area of radio sounding in the magnetosphere. (auth.)

  2. Records Management And Private Sector Organizations | Mnjama ...

    African Journals Online (AJOL)

    This article begins by examining the role of records management in private organizations. It identifies the major reason why organizations ought to manage their records effectively and efficiently. Its major emphasis is that a sound records management programme is a pre-requisite to quality management system programme ...

  3. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2015-01-01

    Handbook for Sound Engineers is the most comprehensive reference available for audio engineers, and is a must read for all who work in audio. With contributions from many of the top professionals in the field, including Glen Ballou on interpretation systems, intercoms, assistive listening, and fundamentals and units of measurement, David Miles Huber on MIDI, Bill Whitlock on audio transformers and preamplifiers, Steve Dove on consoles, DAWs, and computers, Pat Brown on fundamentals, gain structures, and test and measurement, Ray Rayburn on virtual systems, digital interfacing, and preamplifiers

  4. Facing Sound - Voicing Art

    DEFF Research Database (Denmark)

    Lønstrup, Ansa

    2013-01-01

    This article is based on examples of contemporary audiovisual art, with a special focus on the Tony Oursler exhibition Face to Face at Aarhus Art Museum ARoS in Denmark in March-July 2012. My investigation involves a combination of qualitative interviews with visitors, observations of the audience's interactions with the exhibition and the artwork in the museum space, and short analyses of individual works of art based on reception aesthetics and phenomenology and inspired by newer writings on sound, voice and listening.

  5. JINGLE: THE SOUNDING SYMBOL

    Directory of Open Access Journals (Sweden)

    Bysko Maxim V.

    2013-12-01

    Full Text Available The article considers the role of jingles in the industrial era, from the emergence of regular radio broadcasting, sound film and television up to modern video games, audio and video podcasts, online broadcasts, and mobile communications. Jingles are examined from the point of view of the theory of symbols: a forward motion is detected in the development of jingles from social symbols (radio callsigns) to individual signs-images (ringtones). The article also highlights the role of technical progress in establishing jingles as important cultural audio elements of modern digital civilization.

  6. Method for measuring violin sound radiation based on bowed glissandi and its application to sound synthesis.

    Science.gov (United States)

    Perez Carrillo, Alfonso; Bonada, Jordi; Patynen, Jukka; Valimaki, Vesa

    2011-08-01

    This work presents a method for measuring and computing violin-body directional frequency responses, which are used for violin sound synthesis. The approach is based on a frame-weighted deconvolution of excitation and response signals. The excitation, consisting of bowed glissandi, is measured with piezoelectric transducers built into the bridge. Radiation responses are recorded in an anechoic chamber with multiple microphones placed at different angles around the violin. The proposed deconvolution algorithm computes impulse responses that, when convolved with any source signal (captured with the same transducer), produce a highly realistic violin sound very similar to that of a microphone recording. The use of motion sensors allows for tracking violin movements. Combining this information with the directional responses and using a dynamic convolution algorithm helps to improve the listening experience by incorporating the effect of the violinist's motion in stereo.
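    The synthesis step this abstract describes, convolving a signal captured at the bridge with a measured impulse response, reduces to a discrete convolution. The sketch below uses synthetic stand-in signals (random noise and an invented decaying response), not actual violin measurements:

```python
import numpy as np

# Sketch of impulse-response-based synthesis: a dry excitation convolved
# with a measured body response yields the radiated sound. Both signals
# below are synthetic placeholders, not measurements.
fs = 44100
rng = np.random.default_rng(0)

excitation = rng.standard_normal(fs // 10)     # 0.1 s stand-in for the bridge signal
decay = np.exp(-np.linspace(0, 8, 2048))
impulse_response = decay * rng.standard_normal(2048)   # decaying stand-in IR

radiated = np.convolve(excitation, impulse_response)   # length: len(x) + len(h) - 1

# A multi-microphone setup would repeat this with one impulse response per
# direction, then mix according to the tracked instrument orientation.
```

    In practice the convolution would be performed block-wise in the frequency domain for efficiency, but the underlying operation is the same.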

  7. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is experimentally studied by using the time difference method. It is found that the sound velocity increases with increasing bubble diameter, and asymptotically approaches the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to scattering at the bubble walls is equivalently described as the effect of an additional length. This simplification reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that an increase in frequency markedly slows down the sound velocity, whereas the latter does not display a strong dependence on the solution concentration.

  8. Pervasive Sound Sensing: A Weakly Supervised Training Approach.

    Science.gov (United States)

    Kelly, Daniel; Caulfield, Brian

    2016-01-01

    Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and less time-consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse-density-based multiple instance learning framework to identify a target sound, and a bag-trimming algorithm which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one, and results show that classifiers trained using the automatically segmented training sets were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving an average F-measure of 0.969 and 0.87 for two weakly supervised datasets.
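    The diverse-density idea behind such weakly supervised training can be sketched in toy form: find a concept point in feature space that lies close to at least one instance in every positive bag and far from all instances in negative bags. This is an illustrative re-implementation of the general technique, not the authors' code; the 1-D "features" and bags below are invented:

```python
import numpy as np

def diverse_density(t, positive_bags, negative_bags, scale=1.0):
    """Noisy-OR diverse density of a candidate concept point t."""
    dd = 1.0
    for bag in positive_bags:                  # a positive bag should contain
        d2 = np.sum((np.asarray(bag) - t) ** 2, axis=1)  # at least one instance near t
        dd *= 1.0 - np.prod(1.0 - np.exp(-d2 / scale))
    for bag in negative_bags:                  # a negative bag should contain
        d2 = np.sum((np.asarray(bag) - t) ** 2, axis=1)  # nothing near t
        dd *= np.prod(1.0 - np.exp(-d2 / scale))
    return dd

# Invented toy data: the "target sound" lives near feature value 5.0.
positive_bags = [[[5.1], [0.2]], [[4.9], [9.0]]]  # weakly labeled clips containing it
negative_bags = [[[0.1], [9.2]]]                  # clips known not to contain it

# Standard DD heuristic: search candidate concepts among positive-bag instances.
candidates = [inst for bag in positive_bags for inst in bag]
best = max(candidates,
           key=lambda c: diverse_density(np.asarray(c), positive_bags, negative_bags))
print(best)   # a candidate near 5.0
```

    Once a concept point is found, weakly labeled clips can be trimmed to the segments whose features lie near it, which is the spirit of the bag-trimming step described above.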

  9. Sounds like Team Spirit

    Science.gov (United States)

    Hoffman, Edward

    2002-01-01

    I recently accompanied my son Dan to one of his guitar lessons. As I sat in a separate room, I focused on the music he was playing and the beautiful, robust sound that comes from a well-played guitar. Later that night, I woke up around 3 am. I tend to have my best thoughts at this hour. The trouble is I usually roll over and fall back asleep. This time I was still awake an hour later, so I got up and jotted some notes down in my study. I was thinking about the pure, honest sound of a well-played instrument. From there my mind wandered into the realm of high-performance teams and successful projects. (I know this sounds weird, but this is the sort of thing I think about at 3 am. Maybe you have your own weird thoughts around that time.) Consider a team in relation to music. It seems to me that a crack team can achieve a beautiful, perfect unity in the same way that a band of brilliant musicians can when they're in harmony with one another. With more than a little satisfaction I have to admit, I started to think about the great work performed for you by the Knowledge Sharing team, including this magazine you are reading. Over the past two years I personally have received some of my greatest pleasures as the APPL Director from the Knowledge Sharing activities - the Masters Forums, NASA Center visits, ASK Magazine. The Knowledge Sharing team expresses such passion for their work, just like great musicians convey their passion in the music they play. In the case of Knowledge Sharing, there are many factors that have made this so enjoyable (and hopefully worthwhile for NASA). Three ingredients come to mind -- ingredients that have produced a signature sound. First, through the crazy, passionate playing of Alex Laufer, Michelle Collins, Denise Lee, and Todd Post, I always know that something startling and original is going to come out of their activities. This team has consistently done things that are unique and innovative. For me, best of all is that they are always

  10. Deficits in Letter-Speech Sound Associations but Intact Visual Conflict Processing in Dyslexia: Results from a Novel ERP-Paradigm

    OpenAIRE

    Bakos, Sarolta; Landerl, Karin; Bartling, Jürgen; Schulte-Körne, Gerd; Moll, Kristina

    2017-01-01

    The reading and spelling deficits characteristic of developmental dyslexia (dyslexia) have been related to problems in phonological processing and in learning associations between letters and speech-sounds. Even when children with dyslexia have learned the letters and their corresponding speech sounds, letter-speech sound associations might still be less automatized compared to children with age-adequate literacy skills. In order to examine automaticity in letter-speech sound associations and...

  11. Towards parameter-free classification of sound effects in movies

    Science.gov (United States)

    Chu, Selina; Narayanan, Shrikanth; Kuo, C.-C. J.

    2005-08-01

    The problem of identifying intense events via multimedia data mining in films is investigated in this work. Movies are mainly characterized by dialog, music, and sound effects. We begin our investigation with detecting interesting events through sound effects. Sound effects are neither speech nor music, but are closely associated with interesting events such as car chases and gun shots. In this work, we utilize low-level audio features, including MFCC and energy, to identify sound effects. It was shown in previous work that the hidden Markov model (HMM) works well for speech/audio signals. However, this technique requires careful choices in designing the model and selecting its parameters. In this work, we introduce a framework that avoids this necessity and works well with semi- and non-parametric learning algorithms.

  12. Sound therapies for tinnitus management.

    Science.gov (United States)

    Jastreboff, Margaret M

    2007-01-01

    Many people with bothersome (suffering) tinnitus notice that their tinnitus changes in different acoustic surroundings: it is more intrusive in silence and less prominent in sound-enriched environments. This observation led to the development of treatment methods for tinnitus utilizing sound. Many of these methods are still under investigation with respect to their specific protocols and effectiveness, and only some have been objectively evaluated in clinical trials. This chapter will review therapies for tinnitus using sound stimulation.

  13. Slit-lamp management in contact lenses laboratory classes: learning upgrade with monitor visualization of webcam video recordings

    Science.gov (United States)

    Arines, Justo; Gargallo, Ana

    2014-07-01

    Training in the use of the slit lamp has always been difficult for students of the degree in Optics and Optometry. Instruments with associated cameras help a lot in this task: they allow teachers to observe and control whether students evaluate ocular health appropriately, to correct usage errors, and to show them how to proceed with a visual demonstration. However, these devices are more expensive than those without an integrated camera connected to a display unit. With the aim of improving students' skills in the management of the slit lamp, we have adapted USB HD webcams (Microsoft Lifecam HD-5000) to the objectives of the slit lamps available in our contact lens laboratory room. The webcams are connected to a PC running Linux Ubuntu 11.0, so this is a low-cost device. Our experience shows that this method has several advantages. It allows us to take good-quality pictures of different ocular health conditions, and we can record videos of eye evaluations and give demonstrations of the instrument. Besides, it increases interaction between students, because they can see what their colleagues are doing and become aware of mistakes, helping and correcting each other. It is a useful tool in the practical exam too. We think that this method supports training in optometric practice and increases students' confidence without a large outlay.

  14. Learning how to rate video-recorded therapy sessions: a practical guide for trainees and advanced clinicians.

    Science.gov (United States)

    McCullough, Leigh; Bhatia, Maneet; Ulvenes, Pal; Berggraf, Lene; Osborn, Kristin

    2011-06-01

    Watching and rating psychotherapy sessions is an important yet often overlooked component of psychotherapy training. This article provides a simple and straightforward guide for using one Website (www.ATOStrainer.com) that provides an automated training protocol for rating psychotherapy sessions. By the end of the article, readers will have the knowledge to go to the Website and begin using this training method as soon as they have a recorded session to view. This article presents (a) an overview of the Achievement of Therapeutic Objectives Scale (ATOS; McCullough et al., 2003a), a research tool used to rate psychotherapy sessions; (b) a description of APA training tapes, available for purchase from APA Books, that have been rated and scored by ATOS-trained clinicians and posted on the Website; (c) step-by-step procedures on how ratings can be done; (d) an introduction to www.ATOStrainer.com, where ratings can be entered and compared with expert ratings; and (e) first-hand personal experiences of the authors using this training method and the benefits it affords both trainees and experienced therapists. This psychotherapy training Website has the potential to be a key resource for graduate students, researchers, and clinicians. Our long-range goal is to promote the growth of our understanding of psychotherapy and to improve the quality of psychotherapy provided for patients.

  15. Challenges in using electronic health record data for CER: experience of 4 learning organizations and solutions applied.

    Science.gov (United States)

    Bayley, K Bruce; Belnap, Tom; Savitz, Lucy; Masica, Andrew L; Shah, Nilay; Fleming, Neil S

    2013-08-01

    To document the strengths and challenges of using electronic health records (EHRs) for comparative effectiveness research (CER). A replicated case study of comparative effectiveness in hypertension treatment was conducted across 4 health systems, with instructions to extract data and document problems encountered using a specified list of required data elements. Researchers at each health system documented successes and challenges, and suggested solutions for addressing challenges. Data challenges fell into 5 categories: missing data, erroneous data, uninterpretable data, inconsistencies among providers and over time, and data stored in noncoded text notes. Suggested strategies to address these issues include data validation steps, use of surrogate markers, natural language processing, and statistical techniques. A number of EHR issues can hamper the extraction of valid data for cross-health system comparative effectiveness studies. Our case example cautions against a blind reliance on EHR data as a single definitive data source. Nevertheless, EHR data are superior to administrative or claims data alone, and are cheaper and timelier than clinical trials or manual chart reviews. All 4 participating health systems are pursuing pathways to more effectively use EHR data for CER. A partnership between clinicians, researchers, and information technology specialists is encouraged as a way to capitalize on the wealth of information contained in the EHR. Future developments in both technology and care delivery hold promise for improvement in the ability to use EHR data for CER.

  16. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade, increased attention has been paid to, for instance, a category such as 'sound art', together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we would traditionally term musical sound – a recurring example being 'noise'.

  17. Sounding out the logo shot

    OpenAIRE

    Nicolai Jørgensgaard Graakjær

    2013-01-01

    This article focuses on how sound in combination with visuals (i.e. ‘branding by’) may possibly affect the signifying potentials (i.e. ‘branding effect’) of products and corporate brands (i.e. ‘branding of’) during logo shots in television commercials (i.e. ‘branding through’). This particular focus adds both to the understanding of sound in television commercials and to the understanding of sound brands. The article firstly presents a typology of sounds. Secondly, this typology is applied...

  18. Jordan Banks Financial Soundness Indicators

    Directory of Open Access Journals (Sweden)

    Imad Kutum

    2015-09-01

    Full Text Available The aim of this research paper is to examine Jordanian banks using financial soundness indicators, in order to establish whether Jordanian banks were affected by the 2007/2008 financial crisis and to determine the underlying reasons. The research was conducted on 25 banks in Jordan listed on the country's securities exchange. The methodology consisted of examining the banks' financial records in order to derive four crucial Basel III ratios: the capital adequacy ratio, the leverage ratio, the liquidity ratio, and total provisions (as % of non-performing loans). The results revealed that, of the four hypotheses under examination, Jordanian banks do not meet the Basel indicator for the capital adequacy ratio, do not meet the indicator for the liquidity ratio, do not meet the indicator for the leverage ratio, and do not meet the indicator for total provisions (as % of non-performing loans). Only one hypothesis was accepted based on the research outcomes; the rest were rejected, since the average trend line did not go below the required Basel III ratio level. The general outcome of the research revealed that Jordanian banks were not significantly affected by the financial crisis.

  19. Frog sound identification using extended k-nearest neighbor classifier

    Science.gov (United States)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

    Frog sound identification based on vocalization is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest-neighbor and mutual-sharing-of-neighborhood concepts, with the aim of improving classification performance. It makes a prediction based on which samples are the nearest neighbors of the testing sample and which samples consider the testing sample as their nearest neighbor. In order to evaluate the classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds have been segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted by Mel Frequency Cepstrum Coefficients (MFCC). The experimental results have shown that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers, KNN, FKNN, KGNN and MKNN, for all cases.
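    The mutual-neighborhood voting that distinguishes EKNN from plain KNN can be sketched as follows. This is an illustrative toy implementation of the general idea with made-up 2-D features (a real system would use MFCC vectors), not the authors' exact algorithm:

```python
import numpy as np
from collections import Counter

def eknn_predict(X, y, query, k=3):
    """Toy EKNN-style vote: neighbors of the query plus mutual neighbors."""
    d_query = np.linalg.norm(X - query, axis=1)
    voters = set(np.argsort(d_query)[:k])        # k nearest neighbors of the query

    # Also add training points that would rank the query among their own
    # k nearest, measured against the other training points plus the query.
    for i in range(len(X)):
        d_i = np.linalg.norm(X - X[i], axis=1)
        d_i[i] = np.inf                          # exclude self
        kth = np.sort(np.append(d_i, d_query[i]))[k - 1]
        if d_query[i] <= kth:
            voters.add(i)

    votes = Counter(y[i] for i in voters)
    return votes.most_common(1)[0][0]

# Invented 2-D "call features": two well-separated species clusters.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [0.3, 0.3],
              [5.0, 5.0], [5.1, 5.2], [4.9, 5.1], [5.2, 4.9]])
y = np.array(["species_a"] * 4 + ["species_b"] * 4)

print(eknn_predict(X, y, np.array([4.8, 5.0])))   # expected: species_b
```

    The extra mutual-neighbor voters make the decision less sensitive to a query that falls on the edge of a cluster, which is the motivation given for EKNN over plain KNN.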

  20. What the Toadfish Ear Tells the Toadfish Brain About Sound.

    Science.gov (United States)

    Edds-Walton, Peggy L

    2016-01-01

    Of the three paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.

  1. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  2. Seismic and Biological Sources of Ambient Ocean Sound

    Science.gov (United States)

    Freeman, Simon Eric

    Sound is the most efficient form of radiation in the ocean. Sounds of seismic and biological origin contain information about the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional 'image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that more seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single-hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional 'map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed.
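    The array processing summarized above rests on delay-and-sum beamforming: element signals are time-aligned for a candidate arrival angle and summed, and the steering angle that maximizes output power estimates the source azimuth. A minimal sketch, with an illustrative line array, sample rate, and sound speed (none of these values are from the dissertation):

```python
import numpy as np

# Illustrative delay-and-sum beamformer: a uniform line array of hydrophones
# estimates the azimuth of an incoming plane wave. Geometry, sample rate, and
# sound speed below are assumptions for the sketch.
C = 1500.0     # nominal sound speed in seawater, m/s
FS = 48000     # sample rate, Hz
N_ELEM = 8     # number of hydrophones
SPACING = 0.5  # element spacing, m

def simulate_plane_wave(azimuth_deg, freq=300.0, n=4096):
    """Element signals for a plane wave arriving from azimuth_deg."""
    t = np.arange(n) / FS
    delays = np.arange(N_ELEM) * SPACING * np.sin(np.radians(azimuth_deg)) / C
    return np.stack([np.cos(2 * np.pi * freq * (t - d)) for d in delays])

def steered_power(signals, steer_deg):
    """Advance each element by its steering delay (phase shift in the FFT
    domain, which handles fractional samples) and sum coherently."""
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / FS)
    summed = np.zeros(n)
    for i in range(N_ELEM):
        d = i * SPACING * np.sin(np.radians(steer_deg)) / C
        spec = np.fft.rfft(signals[i]) * np.exp(2j * np.pi * freqs * d)
        summed += np.fft.irfft(spec, n)
    return np.mean(summed ** 2)

signals = simulate_plane_wave(azimuth_deg=20.0)
angles = np.arange(-90, 91, 2)
best = angles[int(np.argmax([steered_power(signals, a) for a in angles]))]
print(best)  # steered power peaks at the true 20-degree azimuth
```

    Scanning the steered power over azimuth turns the array into a directional "listener"; a two-dimensional array extends the same sum over both angles, which is the basis of the acoustic 'image' the abstract describes.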

  3. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    Full Text Available In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  4. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust, but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...

  5. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For a more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of laminate structures and handy ...

  6. Sound, memory and interruption

    DEFF Research Database (Denmark)

    Pinder, David

    2016-01-01

    This chapter considers how art can interrupt the times and spaces of urban development so they might be imagined, experienced and understood differently. It focuses on the construction of the M11 Link Road through north-east London during the 1990s that demolished hundreds of homes and displaced...... around a thousand people. The highway was strongly resisted and it became the site of one of the country’s longest and largest anti-road struggles. The chapter addresses specifically Graeme Miller’s sound walk LINKED (2003), which for more than a decade has been broadcasting memories and stories...... of people who were violently displaced by the road as well as those who actively sought to halt it. Attention is given to the walk’s interruption of senses of the given and inevitable in two main ways. The first is in relation to the pace of the work and its deployment of slowness and arrest in a context...

  7. Recycling Sounds in Commercials

    DEFF Research Database (Denmark)

    Larsen, Charlotte Rørdam

    2012-01-01

    Commercials offer the opportunity for intergenerational memory and impinge on cultural memory. TV commercials for foodstuffs often make reference to past times as a way of authenticating products. This is frequently achieved using visual cues, but in this paper I would like to demonstrate how...... such references to the past and ‘the good old days’ can be achieved through sounds. In particular, I will look at commercials for Danish non-dairy spreads, especially for OMA margarine. These commercials are notable in that they contain a melody and a slogan – ‘Say the name: OMA margarine’ – that have basically...... remained the same for 70 years. Together these identifiers make OMA an interesting Danish case to study. With reference to Ann Rigney’s memorial practices or mechanisms, the study aims to demonstrate how the auditory aspects of Danish margarine commercials for frying tend to be limited in variety...

  8. The sounds of science

    Science.gov (United States)

    Carlowicz, Michael

    As scientists carefully study some aspects of the ocean environment, are they unintentionally distressing others? That is a question to be answered by Robert Benson and his colleagues in the Center for Bioacoustics at Texas A&M University.With help from a 3-year, $316,000 grant from the U.S. Office of Naval Research, Benson will study how underwater noise produced by naval operations and other sources may affect marine mammals. In Benson's study, researchers will generate random sequences of low-frequency, high-intensity (180-decibel) sounds in the Gulf of Mexico, working at an approximate distance of 1 km from sperm whale herds. Using an array of hydrophones, the scientists will listen to the characteristic clicks and whistles of the sperm whales to detect changes in the animals' direction, speed, and depth, as derived from fluctuations in their calls.

  9. Sound of proteins

    DEFF Research Database (Denmark)

    2007-01-01

    In my group we work with Molecular Dynamics to model several different proteins and protein systems. We submit our modelled molecules to changes in temperature, changes in solvent composition and even external pulling forces. To analyze our simulation results we have so far used visual inspection...... and statistical analysis of the resulting molecular trajectories (as everybody else!). However, recently I started assigning a particular sound frequency to each amino acid in the protein, and by setting the amplitude of each frequency according to the movement amplitude we can "hear" whenever two amino acids...... example of soundfile was obtained from using Steered Molecular Dynamics for stretching the neck region of the scallop myosin molecule (in rigor, PDB-id: 1SR6), in such a way as to cause a rotation of the myosin head. Myosin is the molecule responsible for producing the force during muscle contraction...

  10. Automatic adventitious respiratory sound analysis: A systematic review.

    Science.gov (United States)

    Pramono, Renard Xaviero Adhi; Bowyer, Stuart; Rodriguez-Villegas, Esther

    2017-01-01

    .69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub, squawk, as well as the pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods used varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Direct comparison of the performance of surveyed works cannot be performed, as the input data used by each was different. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. A review of the literature was performed to summarise different analysis approaches, features, and methods used for the analysis. The performance of recent studies showed a high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases.
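    Converting reported detector results to a single accuracy figure, as the review does for data synthesis, can be illustrated with a standard identity (a hedged sketch; the review's exact synthesis procedure is not reproduced here). Assuming a detector's sensitivity, specificity, and the prevalence of the positive class are known:

```python
def accuracy_from_sens_spec(sensitivity, specificity, prevalence):
    """Overall accuracy implied by sensitivity and specificity at a given
    positive-class prevalence (illustrative conversion)."""
    return sensitivity * prevalence + specificity * (1.0 - prevalence)

# e.g. a hypothetical wheeze detector with 90% sensitivity and 80% specificity,
# evaluated on a set where 30% of recordings contain wheezes:
acc = accuracy_from_sens_spec(0.90, 0.80, 0.30)
print(round(acc, 2))  # 0.83
```

    The same accuracy can correspond to very different sensitivity/specificity trade-offs, which is one reason the review cautions against directly comparing figures across works with different data.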

  11. Role of sound stimulation in reprogramming brain connectivity

    Indian Academy of Sciences (India)

    2013-07-17

    Jul 17, 2013 ... higher brain functions such as learning and memory in birds and mammals. ... Sound at an optimum level for a short period may act as an auditory stimulus to ... This could lead to long-term plasticity, which allows fine tuning to ...

  12. Role of Head Teachers in Ensuring Sound Climate

    Science.gov (United States)

    Kor, Jacob; Opare, James K.

    2017-01-01

    The school climate is outlined in the literature as one of the most important within-school factors required for effective teaching and learning. As leaders of any organisation are assigned the role of ensuring a sound climate for work, head teachers also have the task of creating and maintaining an environment conducive to effective academic work…

  13. Video and Sound Production: Flip out! Game on!

    Science.gov (United States)

    Hunt, Marc W.

    2013-01-01

    The author started teaching TV and sound production in a career and technical education (CTE) setting six years ago. The first couple months of teaching provided a steep learning curve for him. He is highly experienced in his industry, but teaching the content presented a new set of obstacles. His students had a broad range of abilities,…

  14. System complexity and (im)possible sound changes

    NARCIS (Netherlands)

    Seinhorst, K.T.

    2016-01-01

    In the acquisition of phonological patterns, learners tend to considerably reduce the complexity of their input. This learning bias may also constrain the set of possible sound changes, which might be expected to contain only those changes that do not increase the complexity of the system. However,

  15. The Development of Spelling-Sound Relationships in a Model of Phonological Reading.

    Science.gov (United States)

    Zorzi, Marco; Houghton, George; Butterworth, Brian

    1998-01-01

    Developmental aspects of spelling-to-sound mapping for English monosyllabic words are investigated with a simple two-layer network model using a simple, general learning rule. The model is trained on both regularly and irregularly spelled words but extracts regular spelling to sound relationships, which it can apply to new words. Training-related…

  16. Early Morphology and Recurring Sound Patterns

    DEFF Research Database (Denmark)

    Kjærbæk, Laila; Basbøll, Hans; Lambertsen, Claus

    Corpus is a longitudinal corpus of spontaneous Child Speech and Child Directed Speech recorded in the children's homes in interaction with their parents or caretaker and transcribed in CHILDES (MacWhinney 2007 a, b), supplemented by parts of Kim Plunkett's Danish corpus (CHILDES) (Plunkett 1985, 1986...... in creating the typologically characteristic syllable structure of Danish with extreme sound reductions (Rischel 2003, Basbøll 2005) presenting a challenge to the language acquiring child (Bleses & Basbøll 2004). Building upon the Danish CDI-studies as well as on the Odense Twin Corpus and experimental data...

  17. Lessons learned from oxygen isotopes in modern precipitation applied to interpretation of speleothem records of paleoclimate from eastern Asia

    Science.gov (United States)

    Dayem, Katherine E.; Molnar, Peter; Battisti, David S.; Roe, Gerard H.

    2010-06-01

    Variability in oxygen isotope ratios collected from speleothems in Chinese caves is often interpreted as a proxy for variability of precipitation, summer precipitation, seasonality of precipitation, and/or the proportion of 18O to 16O of annual total rainfall that is related to a strengthening or weakening of the East Asian monsoon and, in some cases, to the Indian monsoon. We use modern reanalysis and station data to test whether precipitation and temperature variability over China can be related to changes in climate in these distant locales. We find that annual and rainy season precipitation totals in each of central China, south China, and east India have correlation length scales of ∼ 500 km, shorter than the distance between many speleothem records that share similar long-term time variations in δ18O values. Thus the short distances of correlation do not support, though by themselves cannot refute, the idea that apparently synchronous variations in δ18O values at widely spaced (> 500 km) caves in China are due to variations in annual precipitation amounts. We also evaluate connections between climate variables and δ18O values using available instrumental measurements of δ18O values in precipitation. These data, from stations in the Global Network of Isotopes in Precipitation (GNIP), show that monthly δ18O values generally do not correlate well with either local precipitation amount or local temperature, and the degree to which monthly δ18O values do correlate with them varies from station to station. For the few locations that do show significant correlations between δ18O values and precipitation amount, we estimate the differences in precipitation amount that would be required to account for peak-to-peak differences in δ18O values in the speleothems from Hulu and Dongge caves, assuming that δ18O scales with the monthly amount of precipitation or with seasonal differences in precipitation. Insofar as the present-day relationship between δ18O

  18. Designing a Sound Reducing Wall

    Science.gov (United States)

    Erk, Kendra; Lumkes, John; Shambach, Jill; Braile, Larry; Brickler, Anne; Matthys, Anna

    2015-01-01

    Acoustical engineers use their knowledge of sound to design quiet environments (e.g., classrooms and libraries) as well as to design environments that are supposed to be loud (e.g., concert halls and football stadiums). They also design sound barriers, such as the walls along busy roadways that decrease the traffic noise heard by people in…

  19. Thinking The City Through Sound

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2011-01-01

    In Acoustic Territories: Sound Culture and Everyday Life, Brandon LaBelle sets out to chart an urban topology through sound. Working his way through six acoustic territories: underground, home, sidewalk, street, shopping mall and sky/radio, LaBelle investigates tensions and potentials inherent in mo...

  20. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    2010-01-01

    The aim of this article is to shed light on a small part of the research taking place in the textile field. The article describes an ongoing PhD research project on textiles and sound and outlines the project's two main questions: how sound can be shaped by textiles and conversely how textiles can...

  1. Basic semantics of product sounds

    NARCIS (Netherlands)

    Özcan Vieira, E.; Van Egmond, R.

    2012-01-01

    Product experience is a result of sensory and semantic experiences with product properties. In this paper, we focus on the semantic attributes of product sounds and explore the basic components for product sound related semantics using a semantic differential paradigm and factor analysis. With two

  2. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    cate that specialized regions of the brain analyse different types of sounds [1]. Music, ... The left panel of figure 1 shows examples of sound–pressure waveforms from the nat- ... which is shown in the right panels in the spectrographic representation using a 45 Hz .... Plot of SFM(t) vs. time for different environmental sounds.
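    The SFM(t) curve mentioned in this record is the spectral flatness measure: the ratio of the geometric to the arithmetic mean of the power spectrum, approaching 1 for noise-like sounds and 0 for tonal ones. A minimal sketch (signal parameters are illustrative, not from the article):

```python
import numpy as np

# Spectral flatness measure (SFM): geometric mean / arithmetic mean of the
# power spectrum. Broadband noise gives values near 1, pure tones near 0.
def spectral_flatness(x):
    power = np.abs(np.fft.rfft(x)) ** 2
    power = power[power > 0]  # drop exact zeros so the log stays finite
    geometric = np.exp(np.mean(np.log(power)))
    return geometric / np.mean(power)

fs, n = 8000, 4096
t = np.arange(n) / fs
noise = np.random.default_rng(0).standard_normal(n)  # broadband signal
tone = np.sin(2 * np.pi * 440 * t)                   # tonal signal
print(spectral_flatness(noise), spectral_flatness(tone))
```

    Computing this measure over successive short frames yields the SFM(t)-versus-time plots the record refers to for different environmental sounds.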

  3. Emotional sounds modulate early neural processing of emotional pictures

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2013-10-01

    Full Text Available In our natural environment, emotional information is conveyed by converging visual and auditory information; multimodal integration is of utmost importance. In the laboratory, however, emotion researchers have mostly focused on the examination of unimodal stimuli. Few existing studies on multimodal emotion processing have focused on human communication such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials these pictures were paired with pleasant, unpleasant and neutral sounds. Sound presentation started 500 ms before picture onset and each stimulus presentation lasted for 2 s. EEG was recorded from 64 channels and ERP analyses focused on the picture onset. In addition, valence and arousal ratings were obtained. Previous findings for the neural processing of emotional pictures were replicated. Specifically, unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP), independent of the accompanying sound valence. For audiovisual stimulation, increased parietal P100 and P200 were found in response to all pictures which were accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced parietal P100 and P200 compared to pairings with congruent sounds. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and, therefore, provide an avenue by which multimodal experience may enhance perception.

  4. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    Science.gov (United States)

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, whose local role for memory consolidation in auditory cortex is well established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, i.e. from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to a recruitment of

  5. The Aesthetic Experience of Sound

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2005-01-01

    The use of sound in (3D) computer games basically falls in two. Sound is used as an element in the design of the set and as a narrative. As set design sound stages the nature of the environment, it brings it to life. As a narrative it brings us information that we can choose to or perhaps need to react on. In an ecological understanding of hearing our detection of audible information affords us ways of responding to our environment. In my paper I will address both these ways of using sound in relation to computer games. Since a game player is responsible for the unfolding of the game, his exploration of the virtual space laid out before him is pertinent. In this mood of exploration sound is important and heavily contributing to the aesthetic of the experience....

  6. Controlling sound with acoustic metamaterials

    DEFF Research Database (Denmark)

    Cummer, Steven A. ; Christensen, Johan; Alù, Andrea

    2016-01-01

    Acoustic metamaterials can manipulate and control sound waves in ways that are not possible in conventional materials. Metamaterials with zero, or even negative, refractive index for sound offer new possibilities for acoustic imaging and for the control of sound at subwavelength scales....... The combination of transformation acoustics theory and highly anisotropic acoustic metamaterials enables precise control over the deformation of sound fields, which can be used, for example, to hide or cloak objects from incident acoustic energy. Active acoustic metamaterials use external control to create......-scale metamaterial structures and converting laboratory experiments into useful devices. In this Review, we outline the designs and properties of materials with unusual acoustic parameters (for example, negative refractive index), discuss examples of extreme manipulation of sound and, finally, provide an overview...

  7. Wearable Eating Habit Sensing System Using Internal Body Sound

    Science.gov (United States)

    Shuzo, Masaki; Komori, Shintaro; Takashima, Tomoko; Lopez, Guillaume; Tatsuta, Seiji; Yanagimoto, Shintaro; Warisawa, Shin'ichi; Delaunay, Jean-Jacques; Yamada, Ichiro

    Continuous monitoring of eating habits could be useful in preventing lifestyle diseases such as metabolic syndrome. Conventional methods consist of self-reporting and calculating mastication frequency based on the myoelectric potential of the masseter muscle. Both these methods are significant burdens for the user. We developed a non-invasive, wearable sensing system that can record eating habits over a long period of time in daily life. Our sensing system is composed of two bone conduction microphones placed in the ears that send internal body sound data to a portable IC recorder. Applying frequency spectrum analysis on the collected sound data, we could not only count the number of mastications during eating, but also accurately differentiate between eating, drinking, and speaking activities. This information can be used to evaluate the regularity of meals. Moreover, we were able to analyze sound features to classify the types of foods eaten by food texture.
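    The frequency-spectrum analysis described above can be caricatured as comparing energy in different frequency bands of each sound frame. The band edges, labels, and test signals below are purely illustrative assumptions, not the authors' classifier:

```python
import numpy as np

# Illustrative band-energy comparison for a short frame of body-conducted
# sound: low-frequency transients (chewing) vs. mid-band energy (speech).
# Band edges and the sample rate are assumptions for the sketch.
FS = 8000

def band_energy(x, lo, hi):
    """Total power-spectrum energy between lo and hi Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / FS)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

def classify_frame(x):
    low = band_energy(x, 0, 300)     # hypothetical chewing band
    mid = band_energy(x, 300, 3000)  # hypothetical speech band
    return "eating" if low > mid else "speaking"

t = np.arange(2048) / FS
crunch = np.sin(2 * np.pi * 80 * t) * np.exp(-20 * t)  # low-frequency burst
voice = np.sin(2 * np.pi * 800 * t)                    # voiced-band tone
print(classify_frame(crunch), classify_frame(voice))
```

    A real system would add more features (pulse rate for mastication counting, temporal envelope shape), but the band-ratio idea conveys how spectral analysis separates eating, drinking, and speaking activities.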

  8. Sounds of Science

    Science.gov (United States)

    Lott, Kimberly; Lott, Alan; Ence, Hannah

    2018-01-01

    Inquiry-based active learning in science is helpful to all students but especially to those who have a hearing loss. For many deaf or hard of hearing students, the English language may be their second language, with American Sign Language (ASL) being their primary language. Therefore, many of the accommodations for the deaf are similar to those…

  9. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    .... This thesis presents an integrated auditory system for a humanoid robot, currently under development, that will, among other things, learn to localize normal, everyday sounds in a realistic environment...

  10. Sexual dimorphism of sonic apparatus and extreme intersexual variation of sounds in Ophidion rochei (Ophidiidae: first evidence of a tight relationship between morphology and sound characteristics in Ophidiidae

    Directory of Open Access Journals (Sweden)

    Kéver Loïc

    2012-12-01

    Full Text Available Abstract Background Many Ophidiidae are active in dark environments and display complex sonic apparatus morphologies. However, sound recordings are scarce and little is known about acoustic communication in this family. This paper focuses on Ophidion rochei, which is known to display an important sexual dimorphism in the swimbladder and anterior skeleton. The aims of this study were to compare the sound-producing morphology and the resulting sounds in juveniles, females and males of O. rochei. Results Males, females, and juveniles possessed different morphotypes. Females and juveniles contrasted with males because they possessed dramatic differences in the morphology of their sonic muscles, swimbladder, supraoccipital crest, and first vertebrae and associated ribs. Further, they lacked the ‘rocker bone’ typically found in males. Sounds from each morphotype were highly divergent. Males generally produced non-harmonic, multiple-pulsed sounds that lasted for several seconds (3.5 ± 1.3 s) with a pulse period of ca. 100 ms. Juvenile and female sounds were recorded for the first time in ophidiids. Female sounds were harmonic, had a shorter pulse period (±3.7 ms), and never exceeded a few dozen milliseconds (18 ± 11 ms). Moreover, unlike male sounds, female sounds did not have alternating long and short pulse periods. Juvenile sounds were weaker but appear to be similar to female sounds. Conclusions Although it is not possible to distinguish male from female O. rochei externally, they show a sonic apparatus and sounds that are dramatically different. This difference is likely due to their nocturnal habits, which may have favored the evolution of internal secondary sexual characters that help to distinguish males from females and that could facilitate mate choice by females. Moreover, the comparison of different morphotypes in this study shows that these morphological differences result from a peramorphosis that takes place during the development of

  11. Atmospheric limb sounding with imaging FTS

    Science.gov (United States)

    Friedl-Vallon, Felix; Riese, Martin; Preusse, Peter; Oelhaf, Hermann; Fischer, Herbert

    Imaging Fourier transform spectrometers in the thermal infrared are a promising new class of sensors for atmospheric science. The availability of fast and sensitive large focal plane arrays with appropriate spectral coverage in the infrared region allows the conception and construction of innovative sensors for Nadir and Limb geometry. Instruments in Nadir geometry have already reached prototype status (e.g. Geostationary Imaging Fourier Transform Spectrometer / U. Wisconsin and NASA) or are in Phase A study (infrared sounding mission on Meteosat third generation / ESA and EUMETSAT). The first application of the new technical possibilities to atmospheric limb sounding from space, the Imaging Michelson Interferometer for Passive Atmospheric Sounding (IMIPAS), is currently studied by industry in the context of preparatory work for the next set of ESA earth explorers. The scientific focus of the instrument is on the processes controlling the composition of the mid/upper troposphere and lower stratosphere. The instrument concept of IMIPAS has been conceived at the research centres Karlsruhe and Jülich. The development of a precursor instrument (GLORIA-AB) at these research institutions started already in 2005. The instrument will be able to fly on board various airborne platforms. First scientific missions are planned for the second half of the year 2009 on board the new German research aircraft HALO. This airborne sensor serves its own scientific purpose, but it also provides a test bed to learn about this new instrument class and its peculiarities and to learn to exploit and interpret the wealth of information provided by a limb-imaging IR Fourier transform spectrometer. The presentation will discuss design considerations and challenges for GLORIA-AB and put them in the context of the planned satellite application. It will describe the solutions found, present first laboratory figures of merit for the prototype instrument and outline the new scientific

  12. Using science soundly: The Yucca Mountain standard

    International Nuclear Information System (INIS)

    Fri, R.W.

    1995-01-01

    Using sound science to shape government regulation is one of the most hotly argued topics in the ongoing debate about regulatory reform. Even though no one advocates using unsound science, the belief that even the best science will sweep away regulatory controversy is equally foolish. As chair of a National Research Council (NRC) committee that studied the scientific basis for regulating high-level nuclear waste disposal, the author learned that science alone could resolve few of the key regulatory questions. Developing a standard that specifies a socially acceptable limit on the human health effects of nuclear waste releases involves many decisions. As the NRC committee learned in evaluating the scientific basis for the Yucca Mountain standard, a scientifically best decision rarely exists. More often, science can only offer a useful framework and starting point for policy debates. And sometimes, science's most helpful contribution is to admit that it has nothing to say. The Yucca Mountain study clearly illustrates that excessive faith in the power of science is more likely to produce messy frustration than crisp decisions. A better goal for regulatory reform is the sound use of science to clarify and contain the inevitable policy controversy

  13. Beaches and Bluffs of Puget Sound and the Northern Straits

    Science.gov (United States)

    2007-04-01

    sand up to pebbles, cobbles, and occasionally boulders, often also containing shelly material. Puget Sound beaches commonly have two distinct...very limited historic wind records (wave hindcasting). Drift directions indicated in the Atlas have repeatedly been proven inaccurate (Johannessen

  14. Evoked responses to sinusoidally modulated sound in unanaesthetized dogs

    NARCIS (Netherlands)

    Tielen, A.M.; Kamp, A.; Lopes da Silva, F.H.; Reneau, J.P.; Storm van Leeuwen, W.

    1. Responses evoked by sinusoidally amplitude-modulated sound in unanaesthetized dogs have been recorded from inferior colliculus and from auditory cortex structures by means of chronically indwelling stainless steel wire electrodes. 2. Harmonic analysis of the average responses demonstrated

  15. The effect of sound sources on soundscape appraisal

    NARCIS (Netherlands)

    van den Bosch, Kirsten; Andringa, Tjeerd

    2014-01-01

    In this paper we explore how the perception of sound sources (like traffic, birds, and the presence of distant people) influences the appraisal of soundscapes (as calm, lively, chaotic, or boring). We have used 60 one-minute recordings, selected from 21 days (502 hours) in March and July 2010.

  16. Completely reproducible description of digital sound data with cellular automata

    International Nuclear Information System (INIS)

    Wada, Masato; Kuroiwa, Jousuke; Nara, Shigetoshi

    2002-01-01

    A novel method of compressive and completely reproducible description of digital sound data by means of rule dynamics of CA (cellular automata) is proposed. The digital data of spoken words and music recorded with the standard format of a compact disk are reproduced completely by this method with use of only two rules in a one-dimensional CA without loss of information
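    The rule dynamics underlying the method are those of elementary one-dimensional cellular automata, where each cell's next state is read off from a rule number indexed by its three-cell neighbourhood. A minimal update step (this shows only the CA machinery, not the paper's compression scheme):

```python
import numpy as np

# Elementary 1-D binary CA with periodic boundaries: the next state of each
# cell is the bit of `rule` selected by the 3-bit (left, centre, right) index.
def ca_step(state, rule):
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    idx = (left << 2) | (state << 1) | right  # neighbourhood as a 3-bit index
    return (rule >> idx) & 1

state = np.zeros(11, dtype=np.uint8)
state[5] = 1  # single seed cell
for _ in range(3):
    state = ca_step(state, rule=90)  # rule 90: XOR of the two neighbours
print("".join(map(str, state)))  # 00101010100
```

    Three steps of rule 90 from a single seed already show the familiar Sierpinski-style spreading pattern; the paper's contribution is choosing rules whose dynamics reproduce a given digital sound sequence exactly.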

  17. Lung function interpolation by means of neural-network-supported analysis of respiration sounds

    NARCIS (Netherlands)

    Oud, M

    Respiration sounds of individual asthmatic patients were analysed as part of the development of a method for computerised recognition of the degree of airways obstruction. Respiration sounds were recorded during laboratory sessions of allergen-provoked airways obstruction, during several stages

  18. Understanding the Doppler Effect by Analysing Spectrograms of the Sound of a Passing Vehicle

    Science.gov (United States)

    Lubyako, Dmitry; Martinez-Piedra, Gordon; Ushenin, Arthur; Ushenin, Arthur; Denvir, Patrick; Dunlop, John; Hall, Alex; Le Roux, Gus; van Someren, Laurence; Weinberger, Harvey

    2017-01-01

    The purpose of this paper is to demonstrate how the Doppler effect can be analysed to deduce information about a moving source of sound waves. Specifically, we find the speed of a car and the distance of its closest approach to an observer using sound recordings from smartphones. A key focus of this paper is how this can be achieved in a…
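The deduction the abstract describes follows from the standard Doppler relations: the unknown emitted frequency cancels when the approach and recede frequencies are combined. In this sketch the two frequencies are illustrative values read off a spectrogram, not data from the paper:

```python
C_SOUND = 343.0  # speed of sound in air (m/s), approximate at 20 °C

def vehicle_speed(f_approach, f_recede, c=C_SOUND):
    """Source speed from its Doppler-shifted frequencies.

    From f_a = f0*c/(c - v) (approaching) and f_r = f0*c/(c + v)
    (receding), the unknown emitted frequency f0 cancels, giving
    v = c * (f_a - f_r) / (f_a + f_r).
    """
    return c * (f_approach - f_recede) / (f_approach + f_recede)

# Illustrative readings: engine tone at 480 Hz approaching, 420 Hz receding.
v = vehicle_speed(480.0, 420.0)
print(f"{v:.1f} m/s")  # -> 22.9 m/s (about 82 km/h)
```

The closest-approach distance additionally requires timing how fast the frequency sweeps through the transition, which is where the spectrogram's time axis comes in.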

  19. Direct Measurement of the Speed of Sound Using a Microphone and a Speaker

    Science.gov (United States)

    Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.

    2014-01-01

    We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free open source digital audio editor and recording computer software application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is…
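The analysis step the abstract describes, extracting the speed from time-of-flight measurements at several distances, reduces to a linear fit whose slope is the speed of sound. The numbers below are synthetic, noise-free stand-ins for real microphone data:

```python
import numpy as np

# Synthetic, noise-free time-of-flight data; a real experiment would
# read these values from the recorded waveforms in the audio editor.
speed_true = 343.0                                  # m/s, used to generate data
distances = np.array([0.5, 1.0, 1.5, 2.0, 2.5])    # speaker-microphone gap (m)
tof = distances / speed_true                        # time of flight (s)

# Least-squares line through (tof, distance): the slope is the speed.
slope, intercept = np.polyfit(tof, distances, 1)
print(f"speed of sound = {slope:.1f} m/s")  # -> 343.0 m/s
```

With real recordings the points scatter around the line, and the fit averages out the per-measurement timing error.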

  20. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn. For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.

  1. Sound production and pectoral spine locking in a Neotropical catfish (Iheringichthys labrosus, Pimelodidae

    Directory of Open Access Journals (Sweden)

    Javier S. Tellechea

    Full Text Available Catfishes may have two sonic organs: pectoral spines for stridulation and swimbladder drumming muscles. The aim of this study was to characterize the sound production of the catfish Iheringichthys labrosus. Male and female I. labrosus emit two different types of sounds: stridulatory sounds (655.8 ± 230 Hz) consisting of a train of pulses, and drumming sounds (220 ± 46 Hz), which are composed of single-pulse harmonic signals. Stridulatory sounds are emitted during abduction of the pectoral spine. At the base of the spine there is a dorsal process that bears a series of ridges on its latero-ventral surface; by pressing the ridges against the groove (an unspecialized rough surface) during a fin sweep, the animal produces a series of short pulses. Drumming sound is produced by an extrinsic sonic muscle that originates on a flat tendon of the transverse process of the fourth vertebra and inserts on the rostral and ventral surface of the swimbladder. Sounds from both mechanisms are emitted in distress situations. Distress was induced by manipulating fish in a laboratory tank while sounds were recorded. Our results indicate that the catfish initially emits a stridulatory sound, which is followed by a drumming sound. Simultaneous production of stridulatory and drumming sounds was also observed. The drumming sounds were lower in dominant frequency than the stridulatory sounds and also exhibited a small degree of dominant-frequency modulation. Another behaviour observed in this catfish was pectoral spine locking, a reaction that always preceded the distress sound production. As other authors have outlined, our results suggest that in I. labrosus stridulatory and drumming sounds may function primarily as distress calls.

  2. Fourth sound in relativistic superfluidity theory

    International Nuclear Information System (INIS)

    Vil'chinskij, S.I.; Fomin, P.I.

    1995-01-01

    The Lorentz-covariant equations describing propagation of the fourth sound in the relativistic theory of superfluidity are derived. The expressions for the velocity of the fourth sound are obtained. The character of oscillation in sound is determined

  3. Sound production in Onuxodon fowleri (Carapidae) and its amplification by the host shell.

    Science.gov (United States)

    Kéver, Loïc; Colleye, Orphal; Lugli, Marco; Lecchini, David; Lerouvreur, Franck; Herrel, Anthony; Parmentier, Eric

    2014-12-15

    Onuxodon species are well known for living inside pearl oysters. As in other carapids, their anatomy highlights their ability to make sounds but sound production has never been documented in Onuxodon. This paper describes sound production in Onuxodon fowleri as well as the anatomy of the sound production apparatus. Single-pulsed sounds and multiple-pulsed sounds that sometimes last more than 3 s were recorded in the field and in captivity (Makemo Island, French Polynesia). These pulses are characterized by a broadband frequency spectrum from 100 to 1000 Hz. Onuxodon fowleri is mainly characterized by its ability to modulate the pulse period, meaning that this species can produce pulsed sounds and tonal-like sounds using the same mechanism. In addition, the sound can be remarkably amplified by the shell cavity (peak gain can exceed 10 dB for some frequencies). The sonic apparatus of O. fowleri is characterized by a rocker bone in front of the swimbladder, modified vertebrae and epineurals, and two pairs of sonic muscles, one of which (primary sonic muscle) inserts on the rocker bone. The latter structure, which is absent in other carapid genera, appears to be sexually dimorphic suggesting differences in sound production in males and females. Sound production in O. fowleri could be an example of adaptation where an animal exploits features of its environment to enhance communication. © 2014. Published by The Company of Biologists Ltd.

  4. An Exceptional Purity of Sound: Noise Reduction Technology and the Inevitable Noise of Sound Recording

    NARCIS (Netherlands)

    Kromhout, M.

    2014-01-01

    The phenomenon of noise has resisted many attempts at framing it within a singular conceptual framework. Critically questioning the tendency to do so, this article asserts the complexities of different noise-phenomena by analysing a specific technology: technological noise reduction systems. Whereas

  5. Basic live sound reinforcement a practical guide for starting live audio

    CERN Document Server

    Biederman, Raven

    2013-01-01

    Access and interpret manufacturer spec information, find shortcuts for plotting measure and test equations, and learn how to begin your journey towards becoming a live sound professional. Land and perform your first live sound gigs with this guide that gives you just the right amount of information. Don't get bogged down in details intended for complex and expensive equipment and Madison Square Garden-sized venues. Basic Live Sound Reinforcement is a handbook for audio engineers and live sound enthusiasts performing in small venues from one-mike coffee shops to clubs. With their combined ye

  6. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which 'acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ, with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
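The time dilation described in the record uses the ordinary Lorentz factor with c reinterpreted as the speed of sound in the medium. A minimal numeric illustration (the speed values are arbitrary):

```python
import math

C_SOUND = 343.0  # laboratory speed of sound (m/s); plays the role of c

def sonic_gamma(v, c=C_SOUND):
    """Lorentz factor for a sound clock moving at speed v through the medium."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A chain of sound clocks moving at 60% of the speed of sound ticks
# slower by the familiar factor gamma = 1.25.
print(round(sonic_gamma(0.6 * C_SOUND), 4))  # -> 1.25
```

The factor diverges as v approaches the speed of sound, mirroring the role of the light barrier in ordinary special relativity.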

  7. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

    Full Text Available OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and frequencies among normal-hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between the speakers' axes in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p<0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  8. Continuous Re-Exposure to Environmental Sound Cues During Sleep Does Not Improve Memory for Semantically Unrelated Word Pairs.

    Science.gov (United States)

    Donohue, Kelly C; Spencer, Rebecca M C

    2011-06-01

    Two recent studies illustrated that cues present during encoding can enhance recall if re-presented during sleep, suggesting a potential study strategy. Such effects, however, have only been demonstrated with spatial learning, and cue presentation was isolated to slow-wave sleep (SWS). The goal of this study was to examine whether sounds enhance sleep-dependent consolidation of a semantic task if the sounds are re-presented continuously during sleep. Participants encoded a list of word pairs in the evening, and recall was probed following an interval with overnight sleep. Participants encoded the pairs with the sound of "the ocean" from a sound machine. The first group slept with this sound; the second group slept with a different sound ("rain"); and the third group slept with no sound. Sleeping with sound had no impact on subsequent recall. Although a null result, this work provides an important test of the implications of context effects on sleep-dependent memory consolidation.

  9. Fourth sound of holographic superfluids

    International Nuclear Information System (INIS)

    Yarom, Amos

    2009-01-01

    We compute fourth sound for superfluids dual to a charged scalar and a gauge field in an AdS4 background. For holographic superfluids with condensates that have a large scaling dimension (greater than approximately two), we find that fourth sound approaches first sound at low temperatures. For condensates that have a small scaling dimension, it exhibits non-conformal behavior at low temperatures, which may be tied to the non-conformal behavior of the order parameter of the superfluid. We show that by introducing an appropriate scalar potential, conformal invariance can be enforced at low temperatures.

  10. Sound intensity as a function of sound insulation partition

    OpenAIRE

    Cvetkovic , S.; Prascevic , R.

    1994-01-01

    In the modern engineering practice, the sound insulation of the partitions is the synthesis of the theory and of the experience acquired in the procedure of the field and of the laboratory measurement. The science and research public treat the sound insulation in the context of the emission and propagation of the acoustic energy in the media with the different acoustics impedance. In this paper, starting from the essence of physical concept of the intensity as the energy vector, the authors g...

  11. An Integrated Approach to Motion and Sound

    National Research Council Canada - National Science Library

    Hahn, James K; Geigel, Joe; Lee, Jong W; Gritz, Larry; Takala, Tapio; Mishra, Suneil

    1995-01-01

    Until recently, sound has been given little attention in computer graphics and related domains of computer animation and virtual environments, although sounds which are properly synchronized to motion...

  12. THE INTONATION AND SOUND CHARACTERISTICS OF ADVERTISING PRONUNCIATION STYLE

    Directory of Open Access Journals (Sweden)

    Chernyavskaya Elena Sergeevna

    2014-06-01

    Full Text Available The article aims at describing the intonation and sound characteristics of the advertising phonetic style. On the basis of acoustic analysis of transcripts of radio advertising tape recordings broadcast at different radio stations, as well as processing of a representative set of phrases with the help of special computer programs, the author determines the parameters of superfix means. The article proves that the stylistic parameters of the advertising phonetic style are oriented on modern orthoepy, and that the originality of radio advertising sound is determined by two tendencies: the reduction of stressed-vowel duration in the terminal and non-terminal word, and the increase of pre-tonic and post-tonic vowel duration of the non-terminal word in a phrase. The article also shows that the peculiar rhythmic structure of the terminal and non-terminal word in radio advertising is formed by levelling stressed and unstressed sounds in length. The specificity of the intonational structure of an advertising text consists in the following peculiarities: the matching of syntactic and syntagmatic division, which marks out the blocks of semantic models forming the text of radio advertising; the allocation of keywords into separate syntagmas; the design of informative parts of the advertising text by means of symmetric length correlation of minimal speech segments; and the combination of interstyle prosodic elements within the sounding text. The analysis thus leads to the conclusion that the texts of sounding advertising are designed using a special pronunciation style marked by sound duration.

  13. Heart sounds analysis via esophageal stethoscope system in beagles.

    Science.gov (United States)

    Park, Sang Hi; Shin, Young Duck; Bae, Jin Ho; Kwon, Eun Jung; Lee, Tae-Soo; Shin, Ji-Yun; Kim, Yeong-Cheol; Min, Gyeong-Deuk; Kim, Myoung hwan

    2013-10-01

    The esophageal stethoscope is minimally invasive, easy to handle, and provides a great deal of information. The purpose of this study was to investigate the correlation between blood pressure and heart sounds as measured by esophageal stethoscope. Four male beagles weighing 10 to 12 kg were selected as experimental subjects. After general anesthesia, the esophageal stethoscope was inserted. After connecting the microphone, the heart sounds were visualized and recorded through self-developed equipment and software. The amplitudes of S1 and S2 were monitored in real time to examine changes as the blood pressure increased and decreased. The relationship between the ratio of S1 to S2 (S1/S2) and changes in blood pressure due to ephedrine was evaluated. The same experiment was performed with different concentrations of isoflurane. In the inotropic experiment, S1 showed a high correlation with changes in blood pressure. The relationship between S1/S2 and change in blood pressure showed a positive correlation in each experimental subject. In the volatile anesthetic experiment, the heart sounds decreased as MAC increased. Heart sounds were analyzed successfully with the esophageal stethoscope through the self-developed program and equipment. A proportional change in heart sounds was confirmed when blood pressure was changed using inotropics or volatile anesthetics. The esophageal stethoscope can achieve the closest proximity to the heart to hear sounds in a non-invasive manner.

  14. Xinyinqin: a computer-based heart sound simulator.

    Science.gov (United States)

    Zhan, X X; Pei, J H; Xiao, Y H

    1995-01-01

    "Xinyinqin" is the Chinese phoneticized name of the Heart Sound Simulator (HSS). The "qin" in "Xinyinqin" is the Chinese name of a category of musical instruments, reflecting how convenient the HSS is to operate--like playing an electric piano. The HSS is connected to the GAME I/O port of an Apple microcomputer, and sound generation is controlled by a program. Xinyinqin is used as a teaching aid for Diagnostics and has been used in teaching for three years. In this demonstration we introduce the following functions of the HSS: 1) The main program has two modules. The first is the heart auscultation training module: the HSS outputs a heart sound selected by the student. The second module is used to test what the student has learned: the computer randomly simulates a certain heart sound and asks the student to name it, then assesses the answer as "correct" or "incorrect." When the answer is incorrect, the computer outputs that heart sound again for the student to listen to; this process is repeated until she correctly identifies it. 2) The program is convenient to use and easy to control. Pressing the S key outputs a slowed heart rate until the student can clearly identify the rhythm; the heart rate, like the actual rate of a patient, is then restored by hitting any key. Pressing the SPACE BAR stops the heart sound output to allow the teacher to explain something to the student; the teacher resumes the heart sound by hitting any key, and can change the content of the training by hitting the RETURN key. In the future, we plan to simulate more heart sounds and incorporate relevant graphs.

  15. Improving auscultatory proficiency using computer simulated heart sounds

    Directory of Open Access Journals (Sweden)

    Hanan Salah EL-Deen Mohamed EL-Halawany

    2016-09-01

    Full Text Available This study aimed to examine the effects of 'Heart Sounds', a web-based program, on improving fifth-year medical students' auscultation skills in a medical school in Egypt. This program was designed for medical students to master cardiac auscultation skills in addition to their usual clinical medical courses. Pre- and post-tests were performed to assess students' auscultation skill improvement. Upon completing the training, students were required to complete a questionnaire to reflect on the learning experience they developed through the 'Heart Sounds' program. Results from pre- and post-tests revealed a significant improvement in students' auscultation skills. In examining male and female students' pre- and post-test results, we found that both male and female students achieved a remarkable improvement in their auscultation skills. On the other hand, students stated clearly that the learning experience they had with the 'Heart Sounds' program was different from any other traditional way of teaching. They stressed that the program had significantly improved their auscultation skills and enhanced their self-confidence in their ability to practice those skills. It is also recommended that the 'Heart Sounds' learning experience be extended by assessing students' practical improvement in real-life situations.

  16. Visualization of Broadband Sound Sources

    OpenAIRE

    Sukhanov Dmitry; Erzakova Nadezhda

    2016-01-01

    In this paper, a method for imaging wideband audio sources is proposed, based on 2D microphone-array measurements of the sound field taken at the same time at all microphones. The designed microphone array consists of 160 microphones, allowing signals to be digitized at a frequency of 7200 Hz. The measured signals are processed using a special algorithm that makes it possible to obtain a flat image of wideband sound sources. It is shown experimentally that the visualization is not dependent on the...

  17. An open access database for the evaluation of heart sound algorithms.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.

  18. A single episode of high intensity sound inhibits long-term potentiation in the hippocampus of rats.

    Science.gov (United States)

    de Deus, J L; Cunha, A O S; Terzian, A L; Resstel, L B; Elias, L L K; Antunes-Rodrigues, J; Almeida, S S; Leão, R M

    2017-10-26

    Exposure to loud sounds has become increasingly common. The most common consequences of loud sound exposure are deafness and tinnitus, but emotional and cognitive problems are also associated with loud sound exposure. Loud sounds can activate the hypothalamic-pituitary-adrenal axis, resulting in the secretion of corticosterone, which affects hippocampal synaptic plasticity. Previously we have shown that long-term exposure to short episodes of high-intensity sound inhibited hippocampal long-term potentiation (LTP) without affecting spatial learning and memory. Here we aimed to study the impact of short-term loud sound exposure on hippocampal synaptic plasticity and function. We found that a single minute of 110 dB sound inhibits hippocampal Schaffer-CA1 LTP for 24 hours. This effect did not occur with an 80-dB sound exposure, was not correlated with corticosterone secretion and was also observed in the perforant-dentate gyrus synapse. We found that despite the deficit in the LTP these animals presented normal spatial learning and memory and fear conditioning. We conclude that a single episode of high-intensity sound impairs hippocampal LTP, without impairing memory and learning. Our results show that the hippocampus is very responsive to loud sounds, which can have a potential, but not yet identified, impact on its function.

  19. Computerised Analysis of Telemonitored Respiratory Sounds for Predicting Acute Exacerbations of COPD.

    Science.gov (United States)

    Fernandez-Granero, Miguel Angel; Sanchez-Morillo, Daniel; Leon-Jimenez, Antonio

    2015-10-23

    Chronic obstructive pulmonary disease (COPD) is one of the commonest causes of death in the world and poses a substantial burden on healthcare systems and patients' quality of life. The largest component of the related healthcare costs is attributable to admissions due to acute exacerbation (AECOPD). The evidence that might support the effectiveness of the telemonitoring interventions in COPD is limited partially due to the lack of useful predictors for the early detection of AECOPD. Electronic stethoscopes and computerised analyses of respiratory sounds (CARS) techniques provide an opportunity for substantial improvement in the management of respiratory diseases. This exploratory study aimed to evaluate the feasibility of using: (a) a respiratory sensor embedded in a self-tailored housing for ageing users; (b) a telehealth framework; (c) CARS and (d) machine learning techniques for the remote early detection of the AECOPD. In a 6-month pilot study, 16 patients with COPD were equipped with a home base-station and a sensor to daily record their respiratory sounds. Principal component analysis (PCA) and a support vector machine (SVM) classifier were designed to predict AECOPD. 75.8% of exacerbations were detected early, on average 5 ± 1.9 days before medical attention. The proposed method could provide support to patients, physicians and healthcare systems.

  20. Computerised Analysis of Telemonitored Respiratory Sounds for Predicting Acute Exacerbations of COPD

    Directory of Open Access Journals (Sweden)

    Miguel Angel Fernandez-Granero

    2015-10-01

    Full Text Available Chronic obstructive pulmonary disease (COPD) is one of the commonest causes of death in the world and poses a substantial burden on healthcare systems and patients' quality of life. The largest component of the related healthcare costs is attributable to admissions due to acute exacerbation (AECOPD). The evidence that might support the effectiveness of the telemonitoring interventions in COPD is limited partially due to the lack of useful predictors for the early detection of AECOPD. Electronic stethoscopes and computerised analyses of respiratory sounds (CARS) techniques provide an opportunity for substantial improvement in the management of respiratory diseases. This exploratory study aimed to evaluate the feasibility of using: (a) a respiratory sensor embedded in a self-tailored housing for ageing users; (b) a telehealth framework; (c) CARS and (d) machine learning techniques for the remote early detection of the AECOPD. In a 6-month pilot study, 16 patients with COPD were equipped with a home base-station and a sensor to daily record their respiratory sounds. Principal component analysis (PCA) and a support vector machine (SVM) classifier were designed to predict AECOPD. 75.8% of exacerbations were detected early, on average 5 ± 1.9 days before medical attention. The proposed method could provide support to patients, physicians and healthcare systems.
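The pipeline named in these two records, PCA-derived features feeding a classifier, can be sketched on synthetic data. Everything below is a stand-in: the features are random numbers rather than respiratory-sound measurements, and a nearest-centroid rule substitutes for the paper's SVM so the sketch needs only NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 40))       # 200 synthetic "days" x 40 acoustic features
X[:, 0] *= 3.0                       # one informative direction (toy signal)
y = (X[:, 0] > 0).astype(int)        # 1 = day preceding an exacerbation (toy label)

Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]

# PCA via SVD on the training split.
mu = Xtr.mean(axis=0)
_, _, Vt = np.linalg.svd(Xtr - mu, full_matrices=False)
W = Vt[:10].T                        # top-10 principal directions
Ztr, Zte = (Xtr - mu) @ W, (Xte - mu) @ W

# Nearest-centroid classifier (stand-in for the paper's SVM).
c0 = Ztr[ytr == 0].mean(axis=0)
c1 = Ztr[ytr == 1].mean(axis=0)
pred = (np.linalg.norm(Zte - c1, axis=1)
        < np.linalg.norm(Zte - c0, axis=1)).astype(int)
acc = (pred == yte).mean()
print(f"held-out accuracy: {acc:.2f}")
```

The point of the sketch is the shape of the pipeline, fit the projection on training days only, then score held-out days, which is the structure any telemonitoring predictor of this kind must follow.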

  1. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    Science.gov (United States)

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. A Relational Database Model and Tools for Environmental Sound Recognition

    Directory of Open Access Journals (Sweden)

    Yuksel Arslan

    2017-12-01

    Full Text Available Environmental sound recognition (ESR) has become a hot topic in recent years. ESR is mainly based on machine learning (ML), and ML algorithms first require a training database. This database must comprise the sounds to be recognized and other related sounds. An ESR system needs the database during training, testing, and in the production stage. In this paper, we present the design and pilot establishment of a database that will assist all researchers who want to establish an ESR system. This database employs the relational database model, which has not been used for this task before. We explain the design and implementation details of the database and the data collection and loading process. We also describe the tools and the graphical user interfaces developed for a desktop application and for the web.

  3. 27 CFR 9.151 - Puget Sound.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Puget Sound. 9.151 Section... Sound. (a) Name. The name of the viticultural area described in this section is “Puget Sound.” (b) Approved maps. The appropriate maps for determining the boundary of the Puget Sound viticultural area are...

  4. Songbirds use pulse tone register in two voices to generate low-frequency sound

    DEFF Research Database (Denmark)

    Jensen, Kenneth Kragh; Cooper, Brenton G.; Larsen, Ole Næsbye

    2007-01-01

    …, the syrinx, is unknown. We present the first high-speed video records of the intact syrinx during induced phonation. The syrinx of anaesthetized crows shows a vibration pattern of the labia similar to that of the human vocal fry register. Acoustic pulses result from short opening of the labia, and pulse generation alternates between the left and right sound sources. Spontaneously calling crows can also generate similar pulse characteristics with only one sound generator. Airflow recordings in zebra finches and starlings show that pulse tone sounds can be generated unilaterally, synchronously…

  5. Spectral analysis of bowel sounds in intestinal obstruction using an electronic stethoscope.

    Science.gov (United States)

    Ching, Siok Siong; Tan, Yih Kai

    2012-09-07

    To determine the value of bowel sounds analysis using an electronic stethoscope to support a clinical diagnosis of intestinal obstruction. Subjects were patients who presented with a diagnosis of possible intestinal obstruction based on symptoms, signs, and radiological findings. A 3M™ Littmann® Model 4100 electronic stethoscope was used in this study. With the patients lying supine, six 8-second recordings of bowel sounds were taken from each patient from the lower abdomen. The recordings were analysed for sound duration, sound-to-sound interval, dominant frequency, and peak frequency. Clinical and radiological data were reviewed and the patients were classified as having either acute, subacute, or no bowel obstruction. Comparison of bowel sound characteristics was made between these subgroups of patients. In the presence of an obstruction, the site of obstruction was identified and bowel calibre was also measured to correlate with bowel sounds. A total of 71 patients were studied during the period July 2009 to January 2011. Forty patients had acute bowel obstruction (27 small bowel obstruction and 13 large bowel obstruction), 11 had subacute bowel obstruction (eight in the small bowel and three in the large bowel) and 20 had no bowel obstruction (diagnoses of other conditions were made). Twenty-five patients received surgical intervention (35.2%) during the same admission for acute abdominal conditions. A total of 426 recordings were made and 420 recordings were used for analysis. There was no significant difference in sound-to-sound interval, dominant frequency, and peak frequency among patients with acute bowel obstruction, subacute bowel obstruction, and no bowel obstruction. In acute large bowel obstruction, the sound duration was significantly longer (median 0.81 s vs 0.55 s, P = 0.021) and the dominant frequency was significantly higher (median 440 Hz vs 288 Hz, P = 0.003) when compared to acute small bowel obstruction. No significant difference was seen
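    The dominant frequency reported above is, in general terms, the frequency bin with the largest magnitude in the spectrum of a recorded segment. As an illustration only (the study's exact analysis settings are not given in the abstract), a windowed-FFT estimate on a synthetic signal:

```python
import numpy as np

# Illustrative sketch, not the study's code: dominant frequency = location of
# the largest peak in the magnitude spectrum of a segment.
fs = 4000                              # assumed sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)
# synthetic segment: a strong 440 Hz component plus a weaker 120 Hz one
x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)

mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))   # Hann window reduces leakage
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
dominant = freqs[np.argmax(mag)]
print(dominant)  # 440.0
```

With a 0.5 s segment the frequency resolution is fs/N = 2 Hz, which comfortably separates the medians (440 Hz vs 288 Hz) the study reports.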

  6. Combined Amplification and Sound Generation for Tinnitus: A Scoping Review.

    Science.gov (United States)

    Tutaj, Lindsey; Hoare, Derek J; Sereda, Magdalena

    In most cases, tinnitus is accompanied by some degree of hearing loss. Current tinnitus management guidelines recognize the importance of addressing hearing difficulties, with hearing aids being a common option. Sound therapy is the preferred mode of audiological tinnitus management in many countries, including in the United Kingdom. Combination instruments provide a further option for those with an aidable hearing loss, as they combine amplification with a sound generation option. The aims of this scoping review were to catalog the existing body of evidence on combined amplification and sound generation for tinnitus and consider opportunities for further research or evidence synthesis. A scoping review is a rigorous way to identify and review an established body of knowledge in the field for suggestive but not definitive findings and gaps in current knowledge. A wide variety of databases were used to ensure that all relevant records within the scope of this review were captured, including gray literature, conference proceedings, dissertations and theses, and peer-reviewed articles. Data were gathered using scoping review methodology and consisted of the following steps: (1) identifying potentially relevant records; (2) selecting relevant records; (3) extracting data; and (4) collating, summarizing, and reporting results. Searches using 20 different databases covered peer-reviewed and gray literature and returned 5959 records. After exclusion of duplicates and works that were out of scope, 89 records remained for further analysis. A large number of records identified varied considerably in methodology, applied management programs, and type of devices. There were significant differences in practice between different countries and clinics regarding candidature and fitting of combination aids, partly driven by the application of different management programs. Further studies on the use and effects of combined amplification and sound generation for tinnitus are

  7. Integrated wireless fast-scan cyclic voltammetry recording and electrical stimulation for reward-predictive learning in awake, freely moving rats

    Science.gov (United States)

    Li, Yu-Ting; Wickens, Jeffery R.; Huang, Yi-Ling; Pan, Wynn H. T.; Chen, Fu-Yu Beverly; Chen, Jia-Jin Jason

    2013-08-01

    Objective. Fast-scan cyclic voltammetry (FSCV) is commonly used to monitor phasic dopamine release, which is usually performed using tethered recording and for limited types of animal behavior. It is necessary to design a wireless dopamine sensing system for animal behavior experiments. Approach. This study integrates a wireless FSCV system for monitoring the dopamine signal in the ventral striatum with an electrical stimulator that induces biphasic current to excite dopaminergic neurons in awake freely moving rats. The measured dopamine signals are unidirectionally transmitted from the wireless FSCV module to the host unit. To reduce electrical artifacts, an optocoupler and a separate power are applied to isolate the FSCV system and electrical stimulator, which can be activated by an infrared controller. Main results. In the validation test, the wireless backpack system has similar performance in comparison with a conventional wired system and it does not significantly affect the locomotor activity of the rat. In the cocaine administration test, the maximum electrically elicited dopamine signals increased to around 230% of the initial value 20 min after the injection of 10 mg kg-1 cocaine. In a classical conditioning test, the dopamine signal in response to a cue increased to around 60 nM over 50 successive trials while the electrically evoked dopamine concentration decreased from about 90 to 50 nM in the maintenance phase. In contrast, the cue-evoked dopamine concentration progressively decreased and the electrically evoked dopamine was eliminated during the extinction phase. In the histological evaluation, there was little damage to brain tissue after five months chronic implantation of the stimulating electrode. Significance. We have developed an integrated wireless voltammetry system for measuring dopamine concentration and providing electrical stimulation. The developed wireless FSCV system is proven to be a useful experimental tool for the continuous

  8. Sound engineering for diesel engines; Sound Engineering an Dieselmotoren

    Energy Technology Data Exchange (ETDEWEB)

    Enderich, A.; Fischer, R. [MAHLE Filtersysteme GmbH, Stuttgart (Germany)

    2006-07-01

    The strong acceptance of vehicles powered by turbo-charged diesel engines encourages several manufacturers to think about sportive diesel concepts. The approach of suppressing unpleasant noise through distinctive insulation steps is not adequate to satisfy sportive needs: the acoustics cannot follow the engine's performance. This report documents that it is possible to give diesel-powered vehicles a sportive sound characteristic by using an advanced MAHLE motor-sound-system with a pressure-resistant membrane and an integrated load-controlled flap. With this, the specific acoustic disadvantages of the diesel engine, like the "diesel knock" or rough engine running, can be masked. However, the application of a motor-sound-system must not negate the original character of the diesel engine concept, but rather accentuate its strong torque characteristic in the middle engine speed range. (orig.)

  9. Sound field separation with sound pressure and particle velocity measurements

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-01-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array; thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance…

  10. Sounds of silence: How to animate virtual worlds with sound

    Science.gov (United States)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  11. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

    Previous works in the literature about one-tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings, together with microphone positions and source directivity cues, allows information about source position and bearing to be obtained. Moreover, sound sources have been included into sensor systems together… …, one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor…

  12. Subjective evaluation of restaurant acoustics in a virtual sound environment

    DEFF Research Database (Denmark)

    Nielsen, Nicolaj Østergaard; Marschall, Marton; Santurette, Sébastien

    2016-01-01

    Many restaurants have smooth rigid surfaces made of wood, steel, glass, and concrete. This often results in a lack of sound absorption. Such restaurants are notorious for high noise levels during service, which most owners actually desire as representing vibrant eating environments, although surveys report that noise complaints are on par with poor service. This study investigated the relation between objective acoustic parameters and subjective evaluation of acoustic comfort at five restaurants in terms of three parameters: noise annoyance, speech intelligibility, and privacy. At each location, customers filled out questionnaire surveys, acoustic parameters were measured, and recordings of restaurant acoustic scenes were obtained with a 64-channel spherical array. The acoustic scenes were reproduced in a virtual sound environment (VSE) with 64 loudspeakers placed in an anechoic room…

  13. Path length entropy analysis of diastolic heart sounds.

    Science.gov (United States)

    Griffel, Benjamin; Zia, Mohammad K; Fridman, Vladamir; Saponieri, Cesare; Semmlow, John L

    2013-09-01

    Early detection of coronary artery disease (CAD) using the acoustic approach, a noninvasive and cost-effective method, would greatly improve the outcome of CAD patients. To detect CAD, we analyze diastolic sounds for possible CAD murmurs. We observed diastolic sounds to exhibit 1/f structure and developed a new method, path length entropy (PLE) and a scaled version (SPLE), to characterize this structure to improve CAD detection. We compare SPLE results to Hurst exponent, Sample entropy and Multiscale entropy for distinguishing between normal and CAD patients. SPLE achieved a sensitivity-specificity of 80%-81%, the best of the tested methods. However, PLE and SPLE are not sufficient to prove nonlinearity, and evaluation using surrogate data suggests that our cardiovascular sound recordings do not contain significant nonlinear properties. Copyright © 2013 Elsevier Ltd. All rights reserved.
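    The abstract does not define path length entropy, but sample entropy, one of the baselines it is compared against, is standard. A plain-Python sketch (the defaults m = 2 and r = 0.2·SD are conventional choices, not parameters taken from the paper):

```python
import math

def sample_entropy(series, m=2, r_frac=0.2):
    """SampEn(m, r): negative log of the conditional probability that sequences
    similar for m points remain similar at m + 1 points (plain-Python sketch)."""
    mean = sum(series) / len(series)
    sd = (sum((v - mean) ** 2 for v in series) / len(series)) ** 0.5
    r = r_frac * sd                      # tolerance as a fraction of the SD
    def matches(mm):
        tpl = [series[i:i + mm] for i in range(len(series) - mm + 1)]
        return sum(
            1
            for i in range(len(tpl))
            for j in range(i + 1, len(tpl))
            if max(abs(a - b) for a, b in zip(tpl[i], tpl[j])) <= r
        )
    return -math.log(matches(m + 1) / matches(m))

# a perfectly regular series is highly predictable, so SampEn is near zero
print(round(sample_entropy([1, 2] * 30), 3))  # 0.035
```

Lower values indicate more regular (more predictable) signals; irregular diastolic sounds would yield larger values.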

  14. Mobile sound: media art in hybrid spaces

    OpenAIRE

    Behrendt, Frauke

    2010-01-01

    The thesis explores the relationships between sound and mobility through an examination of sound art. The research engages with the intersection of sound, mobility and art through original empirical work and theoretically through a critical engagement with sound studies. In dialogue with the work of De Certeau, Lefebvre, Huhtamo and Habermas in terms of the poetics of walking, rhythms, media archeology and questions of publicness, I understand sound art as an experimental mobil...

  15. SCORE - Sounding-rocket Coronagraphic Experiment

    Science.gov (United States)

    Fineschi, Silvano; Moses, Dan; Romoli, Marco

    The Sounding-rocket Coronagraphic Experiment - SCORE - is a coronagraph for multi-wavelength imaging of the coronal Lyman-alpha lines, HeII 30.4 nm and HI 121.6 nm, and for the broadband visible-light emission of the polarized K-corona. SCORE flew successfully in 2009, acquiring the first images of the HeII line emission from the extended corona. The simultaneous observation of the coronal Lyman-alpha HI 121.6 nm line allowed the first determination of the absolute helium abundance in the extended corona. This presentation will describe the lessons learned from the first flight and will illustrate the preparations and the science perspectives for the second flight, approved by NASA and scheduled for 2016. The SCORE optical design is flexible enough to accommodate different experimental configurations with minor modifications. This presentation will describe one such configuration, which could include a polarimeter for the observation of the expected Hanle effect in the coronal Lyman-alpha HI line. The linear polarization by resonance scattering of coronal permitted line emission in the ultraviolet (UV) can be modified by magnetic fields through the Hanle effect. Thus, space-based UV spectro-polarimetry would provide an additional new tool for the diagnostics of coronal magnetism.

  16. Multidimensionality of Teachers' Graded Responses for Preschoolers' Stylistic Learning Behavior: The Learning-to-Learn Scales

    Science.gov (United States)

    McDermott, Paul A.; Fantuzzo, John W.; Warley, Heather P.; Waterman, Clare; Angelo, Lauren E.; Gadsden, Vivian L.; Sekino, Yumiko

    2011-01-01

    Assessment of preschool learning behavior has become very popular as a mechanism to inform cognitive development and promote successful interventions. The most widely used measures offer sound predictions but distinguish only a few types of stylistic learning and lack sensitive growth detection. The Learning-to-Learn Scales was designed to…

  17. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false In Prince William Sound: Prince William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST....1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  18. Sounding the field: recent works in sound studies.

    Science.gov (United States)

    Boon, Tim

    2015-09-01

    For sound studies, the publication of a 593-page handbook, not to mention the establishment of at least one society - the European Sound Studies Association - might seem to signify the emergence of a new academic discipline. Certainly, the books under consideration here, alongside many others, testify to an intensification of concern with the aural dimensions of culture. Some of this work comes from HPS and STS, some from musicology and cultural studies. But all of it should concern members of our disciplines, as it represents a long-overdue foregrounding of the aural in how we think about the intersections of science, technology and culture.

  19. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
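    As background to the binaural cues the model exploits: the most basic one, the interaural time difference (ITD), can be recovered from two signals by cross-correlation. The sketch below is illustrative and is not the paper's spiking model; the delay and source signal are synthetic:

```python
import numpy as np

# Illustrative only, not the paper's spiking model: recovering an interaural
# time difference (ITD) by cross-correlating two delayed copies of a signal.
fs = 44100
delay = 20                            # samples (~0.45 ms, a plausible ITD)
rng = np.random.default_rng(0)
src = rng.standard_normal(2048)       # synthetic broadband source
left = src
right = np.concatenate([np.zeros(delay), src])[:len(src)]  # delayed copy

corr = np.correlate(right, left, mode="full")
lag = int(np.argmax(corr)) - (len(left) - 1)  # peak offset = ITD in samples
print(lag)  # 20
```

The spiking model in the paper goes well beyond this: it uses spectro-temporal filtering and synchrony across neuron assemblies rather than an explicit correlation peak.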

  20. Learning

    Directory of Open Access Journals (Sweden)

    Mohsen Laabidi

    2014-01-01

    Full Text Available Nowadays learning technologies transformed educational systems with impressive progress of Information and Communication Technologies (ICT. Furthermore, when these technologies are available, affordable and accessible, they represent more than a transformation for people with disabilities. They represent real opportunities with access to an inclusive education and help to overcome the obstacles they met in classical educational systems. In this paper, we will cover basic concepts of e-accessibility, universal design and assistive technologies, with a special focus on accessible e-learning systems. Then, we will present recent research works conducted in our research Laboratory LaTICE toward the development of an accessible online learning environment for persons with disabilities from the design and specification step to the implementation. We will present, in particular, the accessible version “MoodleAcc+” of the well known e-learning platform Moodle as well as new elaborated generic models and a range of tools for authoring and evaluating accessible educational content.

  1. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    Full Text Available This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds, irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  2. Moth hearing and sound communication

    DEFF Research Database (Denmark)

    Nakano, Ryo; Takanashi, Takuma; Surlykke, Annemarie

    2015-01-01

    Active echolocation enables bats to orient and hunt the night sky for insects. As a counter-measure against the severe predation pressure many nocturnal insects have evolved ears sensitive to ultrasonic bat calls. In moths bat-detection was the principal purpose of hearing, as evidenced by comparable hearing physiology with best sensitivity in the bat echolocation range, 20–60 kHz, across moths in spite of diverse ear morphology. Some eared moths subsequently developed sound-producing organs to warn/startle/jam attacking bats and/or to communicate intraspecifically with sound. Not only the sounds for interaction with bats, but also mating signals are within the frequency range where bats echolocate, indicating that sound communication developed after hearing by “sensory exploitation”. Recent findings on moth sound communication reveal that close-range (~ a few cm) communication with low…

  3. Common sole larvae survive high levels of pile-driving sound in controlled exposure experiments.

    Directory of Open Access Journals (Sweden)

    Loes J Bolle

    Full Text Available In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at which (sub-)lethal effects occur is limited for juvenile and adult fish, and virtually non-existent for fish eggs and larvae. A device was developed in which fish larvae can be exposed to underwater sound. It consists of a rigid-walled cylindrical chamber driven by an electro-dynamical sound projector. Samples of up to 100 larvae can be exposed simultaneously to a homogeneously distributed sound pressure and particle velocity field. Recorded pile-driving sounds could be reproduced accurately in the frequency range between 50 and 1000 Hz, at zero-to-peak pressure levels up to 210 dB re 1 µPa² (zero-to-peak pressures up to 32 kPa) and single-pulse sound exposure levels up to 186 dB re 1 µPa²s. The device was used to examine lethal effects of sound exposure in common sole (Solea solea) larvae. Different developmental stages were exposed to various levels and durations of pile-driving sound. The highest cumulative sound exposure level applied was 206 dB re 1 µPa²s, which corresponds to 100 strikes at a distance of 100 m from a typical North Sea pile-driving site. The results showed no statistically significant differences in mortality between exposure and control groups at sound exposure levels which were well above the US interim criteria for non-auditory tissue damage in fish. Although our findings cannot be extrapolated to fish larvae in general, as interspecific differences in vulnerability to sound exposure may occur, they do indicate that previous assumptions and criteria may need to be revised.
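    The cumulative exposure figure in the abstract follows from energy summation over equal pulses, SEL_cum = SEL_single + 10·log10(N):

```python
import math

# Energy summation over N equal-energy pulses:
#   SEL_cum = SEL_single + 10 * log10(N)
# Using the abstract's own figures: 186 dB re 1 uPa^2 s per strike, 100 strikes.
sel_single = 186.0
n_strikes = 100
sel_cum = sel_single + 10 * math.log10(n_strikes)
print(sel_cum)  # 206.0
```

This reproduces the 206 dB re 1 µPa²s cumulative level stated above for 100 strikes.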

  4. Ultrahromatizm as a Sound Meditation

    Directory of Open Access Journals (Sweden)

    Zaytseva Marina

    2016-08-01

    Full Text Available The article scientifically substantiates insights into the theory and practice of using microchromatics in modern musical art, and defines the compositional and expressive possibilities of the microtonal system in the works of composers of the XXI century. It justifies the author's interpretation of the concept of "ultrahromatizm" as a principle of musical thinking connected with the conception of sound space as a space-time continuum. The paper identifies the correlation between the notions of "microchromatism" and "ultrahromatizm". If microchromatism is understood, first and foremost, as the technique of dividing the sound into microparticles, ultrahromatizm is interpreted as a principle of musical and artistic consciousness, as the focus of musical consciousness on the formation of a specific model of sound meditation and understanding of the world.

  5. Sound quality assessment of wood for xylophone bars.

    Science.gov (United States)

    Aramaki, Mitsuko; Baillères, Henri; Brancheriau, Loïc; Kronland-Martinet, Richard; Ystad, Sølvi

    2007-04-01

    Xylophone sounds produced by striking wooden bars with a mallet are strongly influenced by the mechanical properties of the wood species chosen by the xylophone maker. In this paper, we address the relationship between the sound quality based on the timbre attribute of impacted wooden bars and the physical parameters characterizing wood species. For this, a methodology is proposed that associates an analysis-synthesis process and a perceptual classification test. Sounds generated by impacting 59 wooden bars of different species but with the same geometry were recorded and classified by a renowned instrument maker. The sounds were further digitally processed and adjusted to the same pitch before being once again classified. The processing is based on a physical model ensuring the main characteristics of the wood are preserved during the sound transformation. Statistical analysis of both classifications showed the influence of the pitch in the xylophone maker judgement and pointed out the importance of two timbre descriptors: the frequency-dependent damping and the spectral bandwidth. These descriptors are linked with physical and anatomical characteristics of wood species, providing new clues in the choice of attractive wood species from a musical point of view.
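    The two timbre descriptors highlighted above can be estimated from a recording in a straightforward way. The sketch below is illustrative (a synthetic two-partial "bar" sound with assumed frequencies and decay rates), not the authors' analysis-synthesis code: it computes the magnitude-weighted spectral centroid and bandwidth, plus a global damping estimate from the slope of the log RMS envelope:

```python
import numpy as np

# Illustrative sketch, not the authors' code: a synthetic "bar" sound with two
# exponentially decaying partials (frequencies and decay rates are assumed).
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
x = (np.exp(-6 * t) * np.sin(2 * np.pi * 700 * t)
     + 0.5 * np.exp(-12 * t) * np.sin(2 * np.pi * 1900 * t))

# spectral centroid and bandwidth: magnitude-weighted mean and spread
mag = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
centroid = np.sum(freqs * mag) / np.sum(mag)
bandwidth = np.sqrt(np.sum((freqs - centroid) ** 2 * mag) / np.sum(mag))

# global damping estimate: slope of the log RMS envelope over 20 ms frames
frame = fs // 50
rms = np.array([np.sqrt(np.mean(x[i:i + frame] ** 2))
                for i in range(0, len(x) - frame, frame)])
times = np.arange(len(rms)) * frame / fs
alpha = -np.polyfit(times, np.log(rms), 1)[0]   # decay rate in 1/s
print(round(centroid), round(bandwidth), round(alpha, 1))
```

In the study the damping is resolved per frequency (frequency-dependent damping), which requires tracking each partial separately rather than the single global fit shown here.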

  6. Behavioral response of manatees to variations in environmental sound levels

    Science.gov (United States)

    Miksis-Olds, Jennifer L.; Wagner, Tyler

    2011-01-01

    Florida manatees (Trichechus manatus latirostris) inhabit coastal regions because they feed on the aquatic vegetation that grows in shallow waters, which are the same areas where human activities are greatest. Noise produced from anthropogenic and natural sources has the potential to affect these animals by eliciting responses ranging from mild behavioral changes to extreme aversion. Sound levels were calculated from recordings made throughout behavioral observation periods. An information theoretic approach was used to investigate the relationship between behavior patterns and sound level. Results indicated that elevated sound levels affect manatee activity and are a function of behavioral state. The proportion of time manatees spent feeding and milling changed in response to sound level. When ambient sound levels were highest, more time was spent in the directed, goal-oriented behavior of feeding, whereas less time was spent engaged in undirected behavior such as milling. This work illustrates how shifts in activity of individual manatees may be useful parameters for identifying impacts of noise on manatees and might inform population level effects.

  7. The Changing Role of Sound-Symbolism for Small Versus Large Vocabularies.

    Science.gov (United States)

    Brand, James; Monaghan, Padraic; Walker, Peter

    2017-12-12

    Natural language contains many examples of sound-symbolism, where the form of the word carries information about its meaning. Such systematicity is more prevalent in the words children acquire first, but arbitrariness dominates during later vocabulary development. Furthermore, systematicity appears to promote learning category distinctions, which may become more important as the vocabulary grows. In this study, we tested the relative costs and benefits of sound-symbolism for word learning as vocabulary size varies. Participants learned form-meaning mappings for words which were either congruent or incongruent with regard to sound-symbolic relations. For the smaller vocabulary, sound-symbolism facilitated learning individual words, whereas for larger vocabularies sound-symbolism supported learning category distinctions. The changing properties of form-meaning mappings according to vocabulary size may reflect the different ways in which language is learned at different stages of development. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  8. Urban Noise Recorded by Stationary Monitoring Stations

    Science.gov (United States)

    Bąkowski, Andrzej; Radziszewski, Leszek; Dekýš, Vladimir

    2017-10-01

    The paper presents the analysis results of the equivalent sound level recorded by two road traffic noise monitoring stations. The stations were located in Kielce (an example of a medium-size town in Poland) on the roads out of the town in the direction of Łódź and Lublin. The measurements were carried out by stationary stations monitoring the noise and traffic of motor vehicles. The RMS values based on A-weighted sound level were recorded every 1 s in the buffer, and the results were registered every 1 min over the period of investigation. The registered data were the basis for calculating the equivalent sound level for three time intervals: from 6:00 to 18:00, from 18:00 to 22:00 and from 22:00 to 6:00. The analysis included the values of the equivalent sound level recorded for different days of the week, split into 24-h periods, nights, days and evenings. The data analysed included recordings from 2013. The agreement of the distribution of the variable under analysis with the normal distribution was evaluated. It was demonstrated that in most cases (for both roads) there was sufficient evidence to reject the null hypothesis at the significance level of 0.05. It was noted that, compared with the Łódź Road data, the Lublin Road data contained more cases for which the null hypothesis could not be rejected. Uncertainties of the equivalent sound level measurements were compared within the periods under analysis. The standard deviation, the coefficient of variation, the positional coefficient of variation and the quartile deviation were proposed for performing a comparative analysis of the scattering of the obtained data. The investigations indicated that the recorded data varied depending on the traffic routes and time intervals. The differences concerned the values of uncertainties and coefficients of variation of the equivalent sound levels.
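    The equivalent sound level the stations compute from the buffered 1-s samples is an energy average, L_eq = 10·log10((1/N)·Σ 10^(L_i/10)). A sketch with illustrative (not measured) values:

```python
import math

# Energy-averaging of short-term A-weighted levels into an equivalent level
# L_Aeq,T. The sample values below are illustrative, not measured data.
def leq(levels_db):
    """Equivalent continuous sound level (dB) of a list of short-term levels."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db) / len(levels_db))

print(round(leq([65.0, 68.0, 71.0, 66.0]), 1))  # 68.1
```

Because the average is taken on an energy scale, a few loud seconds dominate the result; here the 71 dB sample pulls L_eq well above the arithmetic mean (67.5 dB).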

  9. Machine Learning for Relation Extraction from Unstructured Electronic Medical Records

    Institute of Scientific and Technical Information of China (English)

    倪晓华

    2017-01-01

    Objective: To implement machine learning for relation extraction from electronic medical record (EMR) progress notes. Methods: The Batch Learning PR (Processing Resource) application component of the GATE framework (General Architecture for Text Engineering) was used for machine learning. Results: The relations extracted by machine learning met the expected requirements and showed good practicability. Conclusion: The Batch Learning PR can quickly and automatically extract the required relation information from long free-text passages in EMRs.

  10. Making sound vortices by metasurfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ye, Liping; Qiu, Chunyin, E-mail: cyqiu@whu.edu.cn; Lu, Jiuyang; Tang, Kun; Ke, Manzhu; Peng, Shasha [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education and School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Jia, Han [State Key Laboratory of Acoustics and Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190 (China); Liu, Zhengyou [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education and School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Institute for Advanced Studies, Wuhan University, Wuhan 430072 (China)

    2016-08-15

    Based on the Huygens-Fresnel principle, a metasurface structure is designed to generate a sound vortex beam in an airborne environment. The metasurface is constructed from a thin planar plate perforated with a circular array of deep-subwavelength resonators with the desired phase and amplitude responses. The metasurface approach to making sound vortices is validated by full-wave simulations and experimental measurements. Potential applications of such artificial spiral beams can be anticipated, as exemplified experimentally by the torque exerted on an absorbing disk.

  11. Antenna for Ultrawideband Channel Sounding

    DEFF Research Database (Denmark)

    Zhekov, Stanislav Stefanov; Tatomirescu, Alexandru; Pedersen, Gert F.

    2016-01-01

    A novel compact antenna for ultrawideband channel sounding is presented. The antenna is composed of a symmetrical biconical antenna modified by adding a cylinder and a ring to each cone. A feeding coaxial cable is employed during the simulations in order to evaluate and reduce its impact …

  12. Visualization of Broadband Sound Sources

    Directory of Open Access Journals (Sweden)

    Sukhanov Dmitry

    2016-01-01

    In this paper a method for imaging wideband audio sources is proposed, based on measurements of the sound field made simultaneously at all microphones of a 2D microphone array. The designed array consists of 160 microphones and digitizes signals at a sampling frequency of 7200 Hz. The measured signals are processed with a special algorithm that produces a planar image of the wideband sound sources. It is shown experimentally that the visualization does not depend on the waveform but is determined by the signal bandwidth. The developed system can visualize sources with a resolution of up to 10 cm.
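    The abstract does not name its processing algorithm; a standard way to image broadband sources with a planar microphone array is delay-and-sum beamforming, sketched below under assumed parameters. The 4x4 geometry, 0.5 m pitch, and synthetic source are illustrative stand-ins (the paper's array has 160 microphones); only the 7200 Hz sampling rate is taken from the abstract.

```python
import numpy as np

FS = 7200.0  # sampling rate of the array described in the paper (Hz)
C = 343.0    # speed of sound in air (m/s)

# Hypothetical 4x4 planar array in the z = 0 plane, 0.5 m pitch.
mic_xy = np.array([(x, y) for x in np.arange(4) for y in np.arange(4)]) * 0.5

def delay_and_sum_image(signals, grid_pts, z0):
    """Energy map over candidate (x, y) source positions on a plane at
    distance z0, via integer-sample delay-and-sum beamforming."""
    n_mics, n_samp = signals.shape
    image = np.zeros(len(grid_pts))
    for p, (px, py) in enumerate(grid_pts):
        dists = np.sqrt((mic_xy[:, 0] - px) ** 2 + (mic_xy[:, 1] - py) ** 2 + z0 ** 2)
        lags = np.round((dists - dists.min()) / C * FS).astype(int)
        acc = np.zeros(n_samp)
        for m in range(n_mics):
            # Undo each channel's extra propagation delay, then sum.
            acc[:n_samp - lags[m]] += signals[m, lags[m]:]
        image[p] = np.sum(acc ** 2)
    return image

# Synthetic check: a broadband noise source 0.5 m above the array.
rng = np.random.default_rng(1)
src = rng.standard_normal(2048)
sx, sy, z0 = 0.75, 0.75, 0.5
d = np.sqrt((mic_xy[:, 0] - sx) ** 2 + (mic_xy[:, 1] - sy) ** 2 + z0 ** 2)
lag = np.round((d - d.min()) / C * FS).astype(int)
signals = np.zeros((16, 2048))
for m in range(16):
    signals[m, lag[m]:] = src[:2048 - lag[m]]

grid = np.array([(x, y) for x in np.linspace(0.25, 1.25, 5)
                        for y in np.linspace(0.25, 1.25, 5)])
image = delay_and_sum_image(signals, grid, z0)
# The brightest pixel coincides with the true source position, and the
# result depends on the source bandwidth, not the particular waveform.
```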

  13. The Multisensory Sound Lab: Sounds You Can See and Feel.

    Science.gov (United States)

    Lederman, Norman; Hendricks, Paula

    1994-01-01

    A multisensory sound lab has been developed at the Model Secondary School for the Deaf (District of Columbia). A special floor allows vibrations to be felt, and a spectrum analyzer displays frequencies and harmonics visually. The lab is used for science education, auditory training, speech therapy, music and dance instruction, and relaxation…

  14. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

    The question of whether there is a natural connection between sound and meaning, or whether they are related only by convention, has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming increasingly clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects such as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441 For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.

  15. Offshore dredger sounds: Source levels, sound maps, and risk assessment

    NARCIS (Netherlands)

    Jong, C.A.F. de; Ainslie, M.A.; Heinis, F.; Janmaat, J.

    2016-01-01

    The underwater sound produced during construction of the Port of Rotterdam harbor extension (Maasvlakte 2) was measured, with emphasis on the contribution of the trailing suction hopper dredgers during their various activities: dredging, transport, and discharge of sediment. Measured source levels

  16. The Voice of the Heart: Vowel-Like Sound in Pulmonary Artery Hypertension

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    2018-04-01

    Increased blood pressure in the pulmonary artery is referred to as pulmonary hypertension and is often linked to loud pulmonic valve closures. For the purposes of this paper, it was hypothesized that pulmonary circulation vibrations create sounds similar to those created by the vocal cords during speech, and that subjects with pulmonary artery hypertension (PAH) could have unique sound signatures across four auscultatory sites. Using a digital stethoscope, heart sounds were recorded at the cardiac apex, 2nd left intercostal space (2LICS), 2nd right intercostal space (2RICS), and 4th left intercostal space (4LICS) in subjects undergoing simultaneous cardiac catheterization. From the collected heart sounds, the relative power of the frequency band, the energy of the sinusoid formants, and the entropy were extracted. PAH subjects were differentiated by applying linear discriminant analysis with leave-one-out cross-validation. The entropy of the first sinusoid formant decreased significantly in subjects with a mean pulmonary artery pressure (mPAP) ≥ 25 mmHg versus subjects with an mPAP < 25 mmHg, with a sensitivity of 84% and a specificity of 88.57%, within a 10-s optimized window length for heart sounds recorded at the 2LICS. The reduction in first-sinusoid-formant entropy of heart sounds in PAH subjects suggests the existence of a vowel-like pattern. Pattern analysis revealed a unique sound signature, which could be used in non-invasive screening tools.
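    The entropy feature central to this study can be illustrated with a generic spectral-entropy computation. The paper's exact sinusoid-formant features are not specified in the abstract, so the band limits, sampling rate, and 10-s window below are assumptions, not the authors' protocol:

```python
import numpy as np

def spectral_entropy(x, fs, band=(20.0, 200.0)):
    """Shannon entropy (bits) of the normalized power spectrum in a band.

    Low entropy means power concentrated at few frequencies (a more
    'voiced', vowel-like spectrum); high entropy means noise-like power
    spread across the band. The band limits are illustrative.
    """
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    p = spec[sel] / np.sum(spec[sel])
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

# A pure tone (concentrated spectrum) versus white noise, over a 10-s
# window as in the paper's optimized window length.
fs = 2000.0
t = np.arange(int(10 * fs)) / fs
tone = np.sin(2 * np.pi * 100.0 * t)
noise = np.random.default_rng(2).standard_normal(len(t))
# spectral_entropy(tone, fs) is near zero; the noise value is far larger.
```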

  17. Sound Exposure of Symphony Orchestra Musicians

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Pedersen, Ellen Raben; Juhl, Peter Møller

    2011-01-01

    Background: Assessment of sound exposure by noise dosimetry can be challenging, especially when measuring the exposure of classical orchestra musicians, where sound originates from many different instruments. A new measurement method for the bilateral sound exposure of classical musicians was developed and used to characterize the sound exposure of the left and right ear simultaneously in two different symphony orchestras. Objectives: To measure the binaural sound exposure of professional classical musicians and to identify possible exposure risk factors for specific musicians. Methods: Sound exposure was measured … Results: Musicians were exposed up to an LAeq8h of 92 dB, and a majority of musicians were exposed to sound levels exceeding … dBA; their left ear was exposed 4.6 dB more than the right ear. Percussionists were exposed to high sound peaks >115 dBC, but less continuous sound exposure was observed in this group.
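    The LAeq8h figure quoted above normalizes a measured exposure to a nominal 8-hour working day via the equal-energy rule, LAeq,8h = LAeq,T + 10·log10(T / 8 h). A minimal sketch (the 2.5-hour rehearsal in the example is a hypothetical duration, not a figure from the study):

```python
import math

def laeq_8h(laeq_t_db, exposure_hours):
    """Normalize an LAeq measured over `exposure_hours` to the 8-hour
    working day using the equal-energy rule:
    LAeq,8h = LAeq,T + 10 * log10(T / 8).
    """
    return laeq_t_db + 10.0 * math.log10(exposure_hours / 8.0)

# A hypothetical 2.5-hour rehearsal measured at 92 dB(A):
normalized = laeq_8h(92.0, 2.5)  # about 86.9 dB(A) over the 8-h day
```

    Shorter exposures at the same level thus yield a lower daily dose, which is why peak-heavy but intermittent exposure (as for the percussionists) can coexist with a lower continuous dose.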

  18. Acoustic analysis of swallowing sounds: a new technique for assessing dysphagia.

    Science.gov (United States)

    Santamato, Andrea; Panza, Francesco; Solfrizzi, Vincenzo; Russo, Anna; Frisardi, Vincenza; Megna, Marisa; Ranieri, Maurizio; Fiore, Pietro

    2009-07-01

    To perform acoustic analysis of swallowing sounds, using a microphone and a notebook computer system, in healthy subjects and in patients with dysphagia caused by neurological diseases, testing the positive/negative predictive value of a pathological pattern of swallowing sounds for penetration/aspiration. Diagnostic test study, prospective, not blinded, with penetration/aspiration evaluated by fibreoptic endoscopy of swallowing as the criterion standard. Data from a previously recorded database of normal swallowing sounds for 60 healthy subjects, stratified by gender, age, and bolus consistency, were compared with those of 15 patients with dysphagia from a university hospital referral centre who were affected by various neurological diseases. The mean duration of the swallowing sounds and of post-swallowing apnoea were recorded. Penetration/aspiration was verified by fibreoptic endoscopy of swallowing in all patients with dysphagia. The mean duration of swallowing sounds for a liquid bolus of 10 ml of water was significantly different between patients with dysphagia and healthy subjects. We also described patterns of swallowing sounds and tested the negative/positive predictive values of post-swallowing apnoea for penetration/aspiration verified by fibreoptic endoscopy of swallowing (sensitivity 0.67 (95% confidence interval 0.24-0.94); specificity 1.00 (95% confidence interval 0.56-1.00)). The proposed technique for recording and measuring swallowing sounds could be incorporated into the bedside evaluation, but it should not replace more valuable diagnostic measures.
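    Measuring the duration of a swallowing sound from a recording can be sketched with a simple short-time-energy detector. The 10 ms frame, 10% threshold, and synthetic burst below are illustrative assumptions, not the study's measurement protocol:

```python
import numpy as np

def sound_duration(x, fs, frame=0.01, thresh_ratio=0.1):
    """Estimate event duration (s) from a short-time energy envelope.

    Frames whose energy exceeds thresh_ratio * (max frame energy) are
    counted as part of the sound event.
    """
    n = int(frame * fs)
    nframes = len(x) // n
    energy = np.array([np.sum(x[i * n:(i + 1) * n] ** 2) for i in range(nframes)])
    active = energy > thresh_ratio * energy.max()
    return float(np.count_nonzero(active) * frame)

# Synthetic check: a 0.5-s noise burst inside a 2-s silent recording.
fs = 8000
x = np.zeros(2 * fs)
rng = np.random.default_rng(3)
x[int(0.7 * fs):int(1.2 * fs)] = rng.standard_normal(int(0.5 * fs))
dur = sound_duration(x, fs)  # about 0.5 s
```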

  19. Content Analysis Study of E-Learning Literature Based on Scopus Record through 2013: With a Focus on the Place of Iran's Productions

    Science.gov (United States)

    Asadzandi, Shadi; Rakhshani, Tayebeh; Mohammadi, Aeen

    2017-01-01

    Background: In recent years, e-learning and the virtual university have become important applications of information and communication technology worldwide, and leading universities have taken important steps in this area of educational development. Given the importance of learning and development in every community, and to keep…

  20. The Flooding of Long Island Sound

    Science.gov (United States)

    Thomas, E.; Varekamp, J. C.; Lewis, R. S.

    2007-12-01

    Between the Last Glacial Maximum (22-19 ka) and the Holocene (10 ka) regions marginal to the Laurentide Ice Sheet saw complex environmental changes from moraines to lake basins to dry land to estuaries and marginal ocean basins, as a result of the interplay between the topography of moraines formed at the maximum extent and during stages of the retreat of the ice sheet, regional glacial rebound, and global eustatic sea level rise. In New England, the history of deglaciation and relative sea level rise has been studied extensively, and the sequence of events has been documented in detail. The Laurentide Ice Sheet reached its maximum extent (Long Island) at 21.3-20.4 ka according to radiocarbon dating (calibrated ages), 19.0-18.4 ka according to radionuclide dating. Periglacial Lake Connecticut formed behind the moraines in what is now the Long Island Sound Basin. The lake drained through the moraine at its eastern end. Seismic records show that a fluvial system was cut into the exposed lake beds, and a wave-cut unconformity was produced during the marine flooding, which has been inferred to have occurred at about 15.5 ka (Melt Water Pulse 1A) through correlation with dated events on land. Vibracores from eastern Long Island Sound penetrate the unconformity and contain red, varved lake beds overlain by marine grey sands and silts with a dense concentration of oysters in life position above the erosional contact. The marine sediments consist of intertidal to shallow subtidal deposits with oysters, shallow-water foraminifera and littoral diatoms, overlain by somewhat laminated sandy silts, in turn overlain by coarser-grained, sandy to silty sediments with reworked foraminifera and bivalve fragments. The latter may have been deposited in a sand-wave environment as present today at the core locations.
We provide direct age control of the transgression with 30 radiocarbon dates on oysters, and compared the ages with those obtained on macrophytes and bulk organic carbon in